
Showing papers on "Bayesian inference" published in 1998


Journal ArticleDOI
TL;DR: In this paper, a Bayesian analysis of linear regression models that can account for skewed error distributions with fat tails is presented; skewness and tail behavior are controlled by separate scalar parameters, so introducing skewness leaves the tails unaffected.
Abstract: We consider a Bayesian analysis of linear regression models that can account for skewed error distributions with fat tails. The latter two features are often observed characteristics of empirical datasets, and we formally incorporate them in the inferential process. A general procedure for introducing skewness into symmetric distributions is first proposed. Even though this allows for a great deal of flexibility in distributional shape, tail behavior is not affected. Applying this skewness procedure to a Student t distribution, we generate a “skewed Student” distribution, which displays both flexible tails and possible skewness, each entirely controlled by a separate scalar parameter. The linear regression model with a skewed Student error term is the main focus of the article. We first characterize existence of the posterior distribution and its moments, using standard improper priors and allowing for inference on skewness and tail parameters. For posterior inference with this model, we suggest ...
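
As a concrete illustration of this kind of construction, the following is a minimal sketch, assuming the standard inverse-scale-factor skewing mechanism with skewness parameter gamma and Student t tails with nu degrees of freedom (an illustrative stand-in, not the authors' code):

```python
# Skew a symmetric density with a single scalar parameter gamma > 0 by
# rescaling the two half-lines and renormalizing; gamma = 1 recovers
# the symmetric Student t, so skewness and tails remain separate.
import numpy as np
from scipy import stats

def skewed_student_pdf(x, nu, gamma):
    x = np.asarray(x, dtype=float)
    norm = 2.0 / (gamma + 1.0 / gamma)       # renormalizing constant
    left = stats.t.pdf(gamma * x, df=nu)     # used where x < 0
    right = stats.t.pdf(x / gamma, df=nu)    # used where x >= 0
    return norm * np.where(x < 0, left, right)
```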

829 citations


Journal ArticleDOI
TL;DR: The Metropolis algorithm provides a quantum advance in the capability to deal with parameter uncertainty in hydrologic models by using a random walk that adapts to the true probability distribution describing parameter uncertainty.
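
For orientation, a bare-bones random-walk Metropolis sampler looks like the sketch below; the adaptive tuning that the paper highlights is deliberately omitted, and `log_post` stands for any user-supplied function returning an unnormalized log-posterior:

```python
# Generic random-walk Metropolis: propose a Gaussian step, accept with
# probability min(1, posterior ratio), otherwise stay put.
import numpy as np

def metropolis(log_post, theta0, n_iter=5000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Example: sample a standard normal "posterior".
draws = metropolis(lambda t: -0.5 * float(t @ t), theta0=[0.0])
```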

809 citations


Journal ArticleDOI
TL;DR: Planar Cox processes directed by a log Gaussian intensity process are investigated in the univariate and multivariate cases and the appealing properties of such models are demonstrated theoretically as well as through data examples and simulations.
Abstract: Planar Cox processes directed by a log Gaussian intensity process are investigated in the univariate and multivariate cases. The appealing properties of such models are demonstrated theoretically as well as through data examples and simulations. In particular, the first, second and third-order properties are studied and utilized in the statistical analysis of clustered point patterns. Also empirical Bayesian inference for the underlying intensity surface is considered.
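
A minimal sketch of how such a process can be simulated on a discretized domain (a 1-D grid for brevity; the planar case is analogous, and the squared-exponential covariance here is purely illustrative):

```python
# Discretized log Gaussian Cox process: draw a Gaussian field,
# exponentiate it to obtain the random intensity, then draw Poisson
# counts cell by cell.
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = np.arange(n)
cov = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 5.0) ** 2)
field = rng.multivariate_normal(np.full(n, -1.0), cov + 1e-8 * np.eye(n))
intensity = np.exp(field)        # log-Gaussian intensity surface
counts = rng.poisson(intensity)  # Cox process counts per grid cell
```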

787 citations


Proceedings Article
24 Jul 1998
TL;DR: This paper extends Structural EM to deal directly with Bayesian model selection and proves the convergence of the resulting algorithm and shows how to apply it for learning a large class of probabilistic models, including Bayesian networks and some variants thereof.
Abstract: In recent years there has been a flurry of works on learning Bayesian networks from data. One of the hard problems in this area is how to effectively learn the structure of a belief network from incomplete data--that is, in the presence of missing values or hidden variables. In a recent paper, I introduced an algorithm called Structural EM that combines the standard Expectation Maximization (EM) algorithm, which optimizes parameters, with structure search for model selection. That algorithm learns networks based on penalized likelihood scores, which include the BIC/MDL score and various approximations to the Bayesian score. In this paper, I extend Structural EM to deal directly with Bayesian model selection. I prove the convergence of the resulting algorithm and show how to apply it for learning a large class of probabilistic models, including Bayesian networks and some variants thereof.

637 citations


Proceedings ArticleDOI
David McAllester
24 Jul 1998
TL;DR: The PAC-Bayesian theorems given here apply to an arbitrary prior measure on an arbitrary concept space and provide an alternative to the use of VC dimension in proving PAC bounds for parameterized concepts.
Abstract: This paper gives PAC guarantees for “Bayesian” algorithms—algorithms that optimize risk minimization expressions involving a prior probability and a likelihood for the training data. PAC-Bayesian algorithms are motivated by a desire to provide an informative prior encoding information about the expected experimental setting but still having PAC performance guarantees over all IID settings. The PAC-Bayesian theorems given here apply to an arbitrary prior measure on an arbitrary concept space. These theorems provide an alternative to the use of VC dimension in proving PAC bounds for parameterized concepts.
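
For concreteness, a representative single-concept ("Occam"-style) form of a PAC-Bayesian bound — recalled here from the PAC-Bayes literature rather than quoted from the paper — states that with probability at least 1 − δ over an IID sample of size m, simultaneously for every concept c with prior mass P(c) > 0:

```latex
\mathrm{err}(c) \;\le\; \widehat{\mathrm{err}}(c)
  \;+\; \sqrt{\frac{\ln\frac{1}{P(c)} + \ln\frac{1}{\delta}}{2m}}
```

Concepts given more prior mass thus enjoy tighter guarantees, with no reference to the VC dimension of the concept space.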

549 citations


Journal ArticleDOI
David Higdon
TL;DR: A Bayesian approach is taken here which relies on Markov chain Monte Carlo for exploring the posterior distribution of the convolution kernel in a process-convolution approach for space-time modelling.
Abstract: This paper develops a process-convolution approach for space-time modelling. With this approach, a dependent process is constructed by convolving a simple, perhaps independent, process. Since the convolution kernel may evolve over space and time, this approach lends itself to specifying models with non-stationary dependence structure. The model is motivated by an application from oceanography: estimation of the mean temperature field in the North Atlantic Ocean as a function of spatial location and time. The large amount of this data poses some difficulties; hence computational considerations weigh heavily in some modelling aspects. A Bayesian approach is taken here which relies on Markov chain Monte Carlo for exploring the posterior distribution.

450 citations


Journal ArticleDOI
TL;DR: A method of estimating a variety of curves by a sequence of piecewise polynomials, motivated by a Bayesian model and an appropriate summary of the resulting posterior distribution, is proposed, successful in giving good estimates for ‘smooth’ functions.
Abstract: Summary. A method of estimating a variety of curves by a sequence of piecewise polynomials is proposed, motivated by a Bayesian model and an appropriate summary of the resulting posterior distribution. A joint distribution is set up over both the number and the position of the knots defining the piecewise polynomials. Throughout we use reversible jump Markov chain Monte Carlo methods to compute the posteriors. The methodology has been successful in giving good estimates for 'smooth' functions (i.e. continuous and differentiable) as well as functions which are not differentiable, and perhaps not even continuous, at a finite number of points. The methodology is extended to deal with generalized additive models.
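
To make the search space concrete, the sketch below evaluates the fit conditional on one fixed set of knots (a single state of the reversible jump chain) using a truncated power basis; the fully Bayesian treatment in the paper averages over both the number and the positions of the knots. The helper name and test data are illustrative only:

```python
# Least-squares piecewise-polynomial fit for a *given* knot set.
import numpy as np

def truncated_power_basis(x, knots, degree=2):
    """Columns: 1, x, ..., x^degree, then (x - k)_+^degree per knot."""
    cols = [x**d for d in range(degree + 1)]
    cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.1 * np.random.default_rng(1).standard_normal(x.size)
B = truncated_power_basis(x, knots=[0.25, 0.5, 0.75])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
fit = B @ coef
```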

424 citations


Journal ArticleDOI
TL;DR: A stochastic search form of classification and regression tree (CART) analysis is proposed, motivated by a Bayesian model and an approximation to a probability distribution over the space of possible trees is explored.
Abstract: A stochastic search form of classification and regression tree (CART) analysis (Breiman et al., 1984) is proposed, motivated by a Bayesian model. An approximation to a probability distribution over the space of possible trees is explored using reversible jump Markov chain Monte Carlo methods (Green, 1995).

325 citations


Journal ArticleDOI
Brani Vidakovic
TL;DR: A wavelet shrinkage by coherent Bayesian inference in the wavelet domain is proposed and the methods are tested on standard Donoho-Johnstone test functions.
Abstract: Wavelet shrinkage, the method proposed by the seminal work of Donoho and Johnstone, is a disarmingly simple and efficient way of denoising data. Shrinkage of wavelet coefficients has been proposed under several optimality criteria. In this article a wavelet shrinkage by coherent Bayesian inference in the wavelet domain is proposed. The methods are tested on standard Donoho-Johnstone test functions.
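
To give the flavor of shrinkage in the wavelet domain, here is one simple stand-in rule — linear posterior-mean shrinkage under a zero-mean Gaussian prior on each detail coefficient, not the coherent rule developed in the article — assuming the PyWavelets package and illustrative variance values:

```python
# Bayes posterior-mean shrinkage of wavelet detail coefficients under
# d ~ N(theta, sigma2), theta ~ N(0, tau2): multiply each coefficient
# by tau2 / (tau2 + sigma2), then invert the transform.
import numpy as np
import pywt

def bayes_shrink(signal, wavelet="db4", sigma2=0.01, tau2=0.1):
    coeffs = pywt.wavedec(signal, wavelet)
    shrink = tau2 / (tau2 + sigma2)
    shrunk = [coeffs[0]] + [c * shrink for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)
```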

320 citations


Proceedings Article
Christopher M. Bishop
01 Dec 1998
TL;DR: This paper uses a probabilistic reformulation as the basis for a Bayesian treatment of PCA to show that the effective dimensionality of the latent space (equivalent to the number of retained principal components) can be determined automatically as part of the Bayesian inference procedure.
Abstract: The technique of principal component analysis (PCA) has recently been expressed as the maximum likelihood solution for a generative latent variable model. In this paper we use this probabilistic reformulation as the basis for a Bayesian treatment of PCA. Our key result is that the effective dimensionality of the latent space (equivalent to the number of retained principal components) can be determined automatically as part of the Bayesian inference procedure. An important application of this framework is to mixtures of probabilistic PCA models, in which each component can determine its own effective complexity.
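
For context, a sketch of the maximum likelihood probabilistic PCA solution that this Bayesian treatment builds on (the Bayesian version then places a hierarchical prior over the columns of W so that superfluous components are suppressed automatically, which is not shown here):

```python
# ML probabilistic PCA: keep the top-q eigenvectors of the sample
# covariance; the discarded eigenvalues estimate the noise variance.
import numpy as np

def ppca_ml(X, q):
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    vals, vecs = vals[::-1], vecs[:, ::-1]    # sort descending
    sigma2 = vals[q:].mean()                  # average discarded variance
    W = vecs[:, :q] * np.sqrt(np.clip(vals[:q] - sigma2, 0.0, None))
    return W, sigma2
```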

Journal ArticleDOI
TL;DR: In this article, the authors consider inference for a semiparametric stochastic mixed model for longitudinal data and derive maximum penalized likelihood estimators of the regression coefficients and the nonparametric function.
Abstract: We consider inference for a semiparametric stochastic mixed model for longitudinal data. This model uses parametric fixed effects to represent the covariate effects and an arbitrary smooth function to model the time effect and accounts for the within-subject correlation using random effects and a stationary or nonstationary stochastic process. We derive maximum penalized likelihood estimators of the regression coefficients and the nonparametric function. The resulting estimator of the nonparametric function is a smoothing spline. We propose and compare frequentist inference and Bayesian inference on these model components. We use restricted maximum likelihood to estimate the smoothing parameter and the variance components simultaneously. We show that estimation of all model components of interest can proceed by fitting a modified linear mixed model. We illustrate the proposed method by analyzing a hormone dataset and evaluate its performance through simulations.

Proceedings Article
24 Aug 1998
TL;DR: In this paper, the authors consider the problem of determining causal relationships, instead of mere associations, when mining market basket data and identify some problems with the direct application of Bayesian learning ideas to mining large databases, concerning both the scalability of algorithms and the appropriateness of the statistical techniques.
Abstract: Mining for association rules in market basket data has proved a fruitful area of research. Measures such as conditional probability (confidence) and correlation have been used to infer rules of the form “the existence of item A implies the existence of item B.” However, such rules indicate only a statistical relationship between A and B. They do not specify the nature of the relationship: whether the presence of A causes the presence of B, or the converse, or some other attribute or phenomenon causes both to appear together. In applications, knowing such causal relationships is extremely useful for enhancing understanding and effecting change. While distinguishing causality from correlation is a truly difficult problem, recent work in statistics and Bayesian learning provides some avenues of attack. In these fields, the goal has generally been to learn complete causal models, which are essentially impossible to learn in large-scale data mining applications with a large number of variables. In this paper, we consider the problem of determining causal relationships, instead of mere associations, when mining market basket data. We identify some problems with the direct application of Bayesian learning ideas to mining large databases, concerning both the scalability of algorithms and the appropriateness of the statistical techniques, and introduce some initial ideas for dealing with these problems. We present experimental results from applying our algorithms on several large, real-world data sets. The results indicate that the approach proposed here is both computationally feasible and successful in identifying interesting causal structures. An interesting outcome is that it is perhaps easier to infer the lack of causality than to infer causality, information that is useful in preventing erroneous decision making.

Journal ArticleDOI
TL;DR: The current article develops the theoretical framework of variants of the origin-destination flow problem and introduces Bayesian approaches to analysis and inference.
Abstract: We study Bayesian models and methods for analysing network traffic counts in problems of inference about the traffic intensity between directed pairs of origins and destinations in networks. This is a class of problems very recently discussed by Vardi in a 1996 JASA article and is of interest in both communication and transportation network studies. The current article develops the theoretical framework of variants of the origin-destination flow problem and introduces Bayesian approaches to analysis and inference. In the first, the so-called fixed routing problem, traffic or messages pass between nodes in a network, with each message originating at a specific source node, and ultimately moving through the network to a predetermined destination node. All nodes are candidate origin and destination points. The framework assumes no travel time complications, considering only the number of messages passing between pairs of nodes in a specified time interval. The route count, or route flow, problem is ...
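
A minimal sketch of the fixed-routing data structure (the routing matrix and intensities below are invented toy values): the observed link counts are a known 0/1 routing matrix applied to latent Poisson origin-destination counts, and inference targets the OD intensities:

```python
# Fixed routing: y = A x with x_i ~ Poisson(lambda_i); only y and A
# are observed, and the lambda_i are the inferential targets.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1, 1, 0],
              [0, 1, 1]])           # 2 links x 3 OD routes (toy example)
lam = np.array([5.0, 2.0, 7.0])     # OD intensities (unknown in practice)
x = rng.poisson(lam)                # latent OD message counts
y = A @ x                           # observed link counts
```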

Book
28 May 1998
TL;DR: This volume surveys practical nonparametric and semiparametric Bayesian methods, covering Dirichlet and related processes, the modelling of random functions, Lévy and related processes, prior elicitation and asymptotic properties, and a set of case studies.
Abstract: I Dirichlet and Related Processes.- 1 Computing Nonparametric Hierarchical Models.- 1.1 Introduction.- 1.2 Notation and Perspectives.- 1.3 Posterior Sampling in Dirichlet Process Mixtures.- 1.4 An Example with Poisson-Gamma Structure.- 1.5 An Example with Normal Structure.- 2 Computational Methods for Mixture of Dirichlet Process Models.- 2.1 Introduction.- 2.2 The Basic Algorithm.- 2.3 More Efficient Algorithms.- 2.4 Non-Conjugate Models.- 2.5 Discussion.- 3 Nonparametric Bayes Methods Using Predictive Updating.- 3.1 Introduction.- 3.2 On n = 1.- 3.3 A Recursive Algorithm.- 3.4 Interval Censoring.- 3.5 Censoring Example.- 3.6 Mixing Example.- 3.7 On n = 2.- 3.8 Concluding Remarks.- 4 Dynamic Display of Changing Posterior in Bayesian Survival Analysis.- 4.1 Introduction and Summary.- 4.2 A Gibbs Sampler for Censored Data.- 4.3 Proof of Proposition 1.- 4.4 Importance Sampling.- 4.5 The Environment for Dynamic Graphics.- 4.6 Appendix: Completion of the Proof of Proposition 1.- 5 Semiparametric Bayesian Methods for Random Effects Models.- 5.1 Introduction.- 5.2 Normal Linear Random Effects Models.- 5.3 DP priors in the Normal Linear Random Effects Model.- 5.4 Generalized Linear Mixed Models.- 5.5 DP priors in the Generalized Linear Mixed Model.- 5.6 Applications.- 5.7 Discussion.- 6 Nonparametric Bayesian Group Sequential Design.- 6.1 Introduction.- 6.2 The DP Mixing Approach Applied to the Group Sequential Framework.- 6.3 Model Fitting Techniques.- 6.4 Implementation of the Design.- 6.5 Examples.- II Modeling Random Functions.- 7 Wavelet-Based Nonparametric Bayes Methods.- 7.1 Introduction.- 7.2 Discrete Wavelet Transformations.- 7.3 Bayes and Wavelets.- 7.4 Other Problems.- 8 Nonparametric Estimation of Irregular Functions with Independent or Autocorrelated Errors.- 8.1 Introduction.- 8.2 Nonparametric Regression for Independent Errors.- 8.3 Nonparametric Regression for Data with Autocorrelated Errors.- 9 Feedforward Neural Networks for Nonparametric Regression.- 9.1 Introduction.- 9.2 Feed Forward Neural Networks as Nonparametric Regression Models.- 9.3 Variable Architecture FFNNs.- 9.4 Posterior Inference with the FFNN Model.- 9.5 Examples.- 9.6 Discussion.- III Lévy and Related Processes.- 10 Survival Analysis Using Semiparametric Bayesian Methods (D. Sinha and D. Dey).- 10.1 Introduction.- 10.2 Models.- 10.3 Prior Processes.- 10.4 Bayesian Analysis.- 10.5 Further Readings.- 11 Bayesian Nonparametric and Covariate Analysis of Failure Time Data.- 11.1 Introduction.- 11.2 Cox Model with Beta Process Prior.- 11.3 The Computational Model.- 11.4 Illustrative Analysis.- 11.5 Conclusion.- 12 Simulation of Lévy Random Fields.- 12.1 Introduction and Overview.- 12.2 Increasing Independent-Increment Processes: A New Look at an Old Idea.- 12.3 Example: Gamma Variates, Processes, and Fields.- 12.4 Inhomogeneous Lévy Random Fields.- 12.5 Comparisons with Other Methods.- 12.6 Conclusions.- 13 Sampling Methods for Bayesian Nonparametric Inference Involving Stochastic Processes.- 13.1 Introduction.- 13.2 Neutral to the Right Processes.- 13.3 Mixtures of Dirichlet Processes.- 13.4 Conclusions.- 14 Curve and Surface Estimation Using Dynamic Step Functions.- 14.1 Introduction.- 14.2 Some Statistical Problems.- 14.3 Some Spatial Statistics.- 14.4 Prototype Prior.- 14.5 Posterior Inference.- 14.6 Example in Intensity Estimation.- 14.7 Discussion.- IV Prior Elicitation and Asymptotic Properties.- 15 Prior Elicitation for Semiparametric Bayesian Survival Analysis.- 15.1 Introduction.- 15.2 The Method.- 15.3 Sampling from the Joint Posterior Distribution of (β, a0).- 15.4 Applications to Variable Selection.- 15.5 Myeloma Data.- 15.6 Discussion.- 16 Asymptotic Properties of Nonparametric Bayesian Procedures.- 16.1 Introduction.- 16.2 Frequentist or Bayesian Asymptotics?.- 16.3 Consistency.- 16.4 Consistency in Hellinger Distance.- 16.5 Other Asymptotic Properties.- 16.6 The Robins-Ritov Paradox.- 16.7 Conclusion.- V Case Studies.- 17 Modeling Travel Demand in Portland, Oregon.- 17.1 Introduction.- 17.2 The Data.- 17.3 Poisson/Gamma Random Field Models.- 17.4 The Computational Scheme.- 17.5 Posterior Analysis.- 17.6 Discussion.- 18 Semiparametric PK/PD Models.- 18.1 Introduction.- 18.2 A Semiparametric Population Model.- 18.3 Meta-analysis Over Related Studies.- 18.4 Discussion.- 19 A Bayesian Model for Fatigue Crack Growth.- 19.1 Introduction.- 19.2 The Model.- 19.3 A Markov Chain Monte Carlo Method.- 19.4 An Example: Growth of Crack Lengths.- 19.5 Discussion.- 20 A Semiparametric Model for Labor Earnings Dynamics.- 20.1 Introduction.- 20.2 Longitudinal Earnings Data.- 20.3 A Parametric Random Effects Model.- 20.4 A Semiparametric Model.- 20.5 Predictive Distributions.- 20.6 Conclusion.

Book
01 Jan 1998
TL;DR: The author’s views on the development of time series models, information theory and an extension of the maximum likelihood principle, and a Bayesian approach to outlier detection are reviewed.
Abstract: Foreword.- A Conversation with Hirotugu Akaike.- List of Publications of Hirotugu Akaike.- Papers.- 1. Precursors.- 1. On a zero-one process and some of its applications.- 2. On a successive transformation of probability distribution and its application to the analysis of the optimum gradient method.- 2. Frequency Domain Time Series Analysis.- 1. Effect of timing-error on the power spectrum of sampled-data.- 2. On a limiting process which asymptotically produces f^-2 spectral density.- 3. On the statistical estimation of frequency response function.- 3. Time Domain Time Series Analysis.- 1. On the use of a linear model for the identification of feedback systems.- 2. Fitting autoregressive models for prediction.- 3. Statistical predictor identification.- 4. Autoregressive model fitting for control.- 5. Statistical approach to computer control of cement rotary kilns.- 6. Statistical identification for optimal control of supercritical thermal power plants.- 4. AIC and Parametrization.- 1. Information theory and an extension of the maximum likelihood principle.- 2. A new look at the statistical model identification.- 3. Markovian representation of stochastic processes and its application to the analysis of autoregressive moving average processes.- 4. Covariance matrix computation of the state variable of a stationary Gaussian process.- 5. Analysis of cross classified data by AIC.- 6. On linear intensity models for mixed doubly stochastic Poisson and self-exciting point processes.- 5. Bayesian Approach.- 1. A Bayesian analysis of the minimum AIC procedure.- 2. A new look at the Bayes procedure.- 3. On the likelihood of a time series model.- 4. Likelihood and the Bayes procedure.- 5. Seasonal adjustment by a Bayesian modeling.- 6. A quasi Bayesian approach to outlier detection.- 7. On the fallacy of the likelihood principle.- 8. A Bayesian approach to the analysis of earth tides.- 9. Factor analysis and AIC.- 6. General Views on Statistics.- 1. Prediction and entropy.- 2. Experiences on the development of time series models.- 3. Implications of informational point of view on the development of statistical science.

MonographDOI
13 Sep 1998 - Mind
TL;DR: This monograph develops the design inference, drawing on probability theory, complexity theory, specification, and small-probability reasoning.
Abstract: Preface Acknowledgments 1. Introduction 2. Overview of the design inference 3. Probability theory 4. Complexity theory 5. Specification 6. Small probability 7. Epilogue Notes References.

Journal ArticleDOI
TL;DR: Several extensions of the generative topographic mapping model are reported, including an incremental version of the EM algorithm for estimating the model parameters, the use of local subspace models, extensions to mixed discrete and continuous data, semi-linear models which permit the use of high-dimensional manifolds whilst avoiding computational intractability, Bayesian inference applied to hyper-parameters, and an alternative framework for the GTM based on Gaussian processes.

Journal ArticleDOI
TL;DR: In this article, the Gibbs sampler is combined with a unidimensional deterministic integration rule applied to each coordinate of the posterior density, and the full conditional densities are evaluated and inverted numerically to obtain random draws of the joint posterior.
Abstract: Summary This paper explains how the Gibbs sampler can be used to perform Bayesian inference on GARCH models. Although the Gibbs sampler is usually based on the analytical knowledge of the full conditional posterior densities, such knowledge is not available in regression models with GARCH errors. We show that the Gibbs sampler can be combined with a unidimensional deterministic integration rule applied to each coordinate of the posterior density. The full conditional densities are evaluated and inverted numerically to obtain random draws of the joint posterior. The method is shown to be feasible and competitive compared with importance sampling and the Metropolis‐Hastings algorithm. It is applied to estimate an asymmetric Student‐GARCH model for the return on a stock exchange index, and to compute predictive option prices on the index. We prove, moreover, that a flat prior on the degrees of freedom parameter leads to an improper posterior density.
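
A sketch of a single "griddy" coordinate draw as described above: evaluate the full conditional on a grid, integrate deterministically (here with the trapezoid rule) to form the CDF, and invert it numerically. The grid and target below are illustrative placeholders:

```python
import numpy as np

def griddy_draw(log_cond, grid, rng):
    logd = log_cond(grid)
    dens = np.exp(logd - logd.max())                    # stabilized density
    seg = 0.5 * (dens[1:] + dens[:-1]) * np.diff(grid)  # trapezoid areas
    cdf = np.concatenate([[0.0], np.cumsum(seg)])
    cdf /= cdf[-1]
    return np.interp(rng.uniform(), cdf, grid)          # inverse-CDF draw

rng = np.random.default_rng(0)
draw = griddy_draw(lambda x: -0.5 * x**2, np.linspace(-5, 5, 201), rng)
```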

Journal ArticleDOI
TL;DR: This article showed that human performance can be systematically improved or degraded by varying whether a correct solution requires one to compute hit and false-alarm rates over natural units, such as whole objects, as opposed to inseparable aspects, views, and other parsings that violate evolved principles of object construal.
Abstract: Evolutionary approaches to judgment under uncertainty have led to new data showing that untutored subjects reliably produce judgments that conform to many principles of probability theory when (a) they are asked to compute a frequency instead of the probability of a single event, and (b) the relevant information is expressed as frequencies. But are the frequency-computation systems implicated in these experiments better at operating over some kinds of input than others? Principles of object perception and principles of adaptive design led us to propose the individuation hypothesis: that these systems are designed to produce well-calibrated statistical inferences when they operate over representations of “whole” objects, events, and locations. In a series of experiments on Bayesian reasoning, we show that human performance can be systematically improved or degraded by varying whether a correct solution requires one to compute hit and false-alarm rates over “natural” units, such as whole objects, as opposed to inseparable aspects, views, and other parsings that violate evolved principles of object construal. The ability to make well-calibrated probability judgments depends, at a very basic level, on the ability to count. The ability to count depends on the ability to individuate the world: to see it as composed of discrete entities. Research on how people individuate the world is, therefore, relevant to understanding the statistical inference mechanisms that govern how people make judgments under uncertainty. Computational machinery whose architecture is designed to parse the world and make inferences about it is under intensive study in many branches of psychology: perception, psychophysics, cognitive development, cognitive neurosci ...
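
A hypothetical worked example (all counts invented) of why the frequency format helps: when hits and false alarms are tallied over whole individuals, the Bayesian posterior reduces to a simple count ratio:

```python
# Out of 1000 people, 40 have the condition and test positive (hits),
# and 96 of the 960 without it also test positive (false alarms).
hits, false_alarms = 40, 96
posterior = hits / (hits + false_alarms)   # 40 / 136 ≈ 0.294
```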

Proceedings Article
01 Dec 1998
TL;DR: A principled Bayesian model is proposed based on the assumption that the examples are a random sample from the concept to be learned, which gives precise fits to human behavior on this simple task and provides qualitative insights into more complex, realistic cases of concept learning.
Abstract: I consider the problem of learning concepts from small numbers of positive examples, a feat which humans perform routinely but which computers are rarely capable of. Bridging machine learning and cognitive science perspectives, I present both theoretical analysis and an empirical study with human subjects for the simple task of learning concepts corresponding to axis-aligned rectangles in a multidimensional feature space. Existing learning models, when applied to this task, cannot explain how subjects generalize from only a few examples of the concept. I propose a principled Bayesian model based on the assumption that the examples are a random sample from the concept to be learned. The model gives precise fits to human behavior on this simple task and provides qualitative insights into more complex, realistic cases of concept learning.
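
A sketch of the "size principle" that drives a model of this kind: n examples sampled randomly from a rectangle h have likelihood 1/|h|^n, so the tightest hypotheses consistent with the data quickly dominate the posterior (the areas below are invented toy values):

```python
import numpy as np

def likelihood(n_examples, area):
    """P(examples | h) for n IID uniform draws from a region of this area."""
    return area ** (-float(n_examples))

# Two candidate rectangles, both consistent with the same 3 examples:
areas = np.array([1.0, 4.0])
w = likelihood(3, areas)
posterior = w / w.sum()   # ≈ [0.985, 0.015]: the tight rectangle wins
```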

Journal ArticleDOI
TL;DR: In this paper, a new methodological approach for carrying out Bayesian inference about dynamic models for exponential family observations is presented, which is simulation based and involves the use of Markov chain Monte Carlo techniques.
Abstract: SUMMARY This paper presents a new methodological approach for carrying out Bayesian inference about dynamic models for exponential family observations. The approach is simulation-based and involves the use of Markov chain Monte Carlo techniques. A Metropolis-Hastings algorithm is combined with the Gibbs sampler in repeated use of an adjusted version of normal dynamic linear models. Different alternative schemes based on sampling from the system disturbances and state parameters separately and in a block are derived and compared. The approach is fully Bayesian in obtaining posterior samples with state parameters and unknown hyperparameters. Illustrations with real datasets with sparse counts and missing values are presented. Extensions to accommodate more general evolution forms and distributions for observations and disturbances are outlined.

Journal ArticleDOI
TL;DR: The authors present Gibbs-Markov random field models as a powerful and robust descriptor of spatial information in typical remote-sensing image data as well as examples for both synthetic aperture radar (SAR) and optical data.
Abstract: For pt. I, see ibid., pp. 1431-45 (1998). The authors present Gibbs-Markov random field (GMRF) models as a powerful and robust descriptor of spatial information in typical remote-sensing image data. This class of stochastic image models provides an intuitive description of the image data using parameters of an energy function. For the selection among several nested models and the fit of the model, the authors proceed in two steps of Bayesian inference. This procedure yields the most plausible model and its most likely parameters, which together describe the image content in an optimal way. Its additional application at multiple scales of the image enables the authors to capture all structures being present in complex remote-sensing images. The calculation of the evidences of various models applied to the resulting quasicontinuous image pyramid automatically detects such structures. The authors present examples for both synthetic aperture radar (SAR) and optical data.

Book ChapterDOI
01 Jan 1998
TL;DR: The problem of labelling the components of a mixture is discussed, and a simple and general clustering-like tool for dealing with it is proposed.
Abstract: A K-component mixture distribution is invariant to permutations of the labels of the components. As a consequence, in a Bayesian framework, the posterior distribution of the mixture parameters theoretically has K! modes. This multimodality can cause difficulties when interpreting the posterior distribution. In this paper, we discuss the problem of labelling and propose a simple and general clustering-like tool to deal with it.

Journal ArticleDOI
TL;DR: In this article, the authors presented a Bayesian statistical analysis method for updating earthquake ground motion versus damage relationships in the form of fragility curves and for estimating confidence bounds on these fragility curve.
Abstract: This paper presents a Bayesian statistical analysis method for updating earthquake ground motion versus damage relationships in the form of fragility curves and for estimating confidence bounds on these fragility curves. As building damage data from past earthquakes become increasingly available, these data need to be combined with analytical fragility curves to arrive at more robust fragility curves. The Bayesian method provides a technique for updating these relationships. The parameters of analytical fragility curves are used to estimate the prior distributions of damage in the Bayesian analysis. In an earlier paper, Singhal and Kiremidjian presented a general methodology for the development of analytical motion-damage relationships in terms of seismic fra­ gility curves and damage probability matrices. Data on building damage are used to estimate the likelihood functions. The Bayesian method for fragility analysis is illustrated by using the damage data on reinforced concrete (RC) frame buildings from the January 17, 1994 Northridge, Calif. earthquake. The uncertainties in the fragility curves are represented by confidence bounds around the median fragility curves.
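
As a minimal stand-in for the prior-to-posterior mechanics (the paper updates the parameters of lognormal fragility curves; the conjugate beta-binomial model and the counts below are simplifications invented for illustration):

```python
# Beta-binomial updating of the damage probability at one ground-motion
# level: the analytical fragility supplies the prior, observed damage
# counts supply the likelihood.
from scipy import stats

prior = stats.beta(a=2, b=8)                 # prior belief from analysis
damaged, total = 13, 40                      # hypothetical survey counts
posterior = stats.beta(a=2 + damaged, b=8 + (total - damaged))
print(prior.mean(), posterior.mean())        # 0.20 -> 0.30
```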

Journal ArticleDOI
TL;DR: This note compares two choices of basis for models parameterized by probabilities, showing that it is possible to improve on the traditional choice, the probability simplex, by transforming to the 'softmax' basis.
Abstract: Maximum a posteriori optimization of parameters and the Laplace approximation for the marginal likelihood are both basis-dependent methods. This note compares two choices of basis for models parameterized by probabilities, showing that it is possible to improve on the traditional choice, the probability simplex, by transforming to the ‘softmax’ basis.
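
A sketch of the change of basis in question: a probability vector p on the simplex is represented by unconstrained 'softmax' parameters a with p_i = exp(a_i) / Σ_j exp(a_j), defined up to an additive constant:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())   # subtract the max for numerical stability
    return e / e.sum()

p = softmax(np.array([0.0, 1.0, -1.0]))
a_repr = np.log(p)            # one representative point in the softmax basis
```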

Journal ArticleDOI
TL;DR: A Bayesian model is presented, and a decision-theoretic procedure for finding the optimal doses for each of a series of cohorts of subjects is derived, which is flexible and can easily be conducted using standard statistical software.
Abstract: Early-phase clinical trials, conducted to determine the appropriate dose of an experimental drug to take forward to later trials, are considered. The objective is to find the dose associated with some low probability of an adverse event. A Bayesian model is presented, and a decision-theoretic procedure for finding the optimal doses for each of a series of cohorts of subjects is derived. The procedure is flexible and can easily be conducted using standard statistical software. The results of simulations investigating the properties of the procedure are presented.

Journal ArticleDOI
TL;DR: In this article, the authors define a physical event space over which probabilities are defined, and then introduce an identity criterion, which selects those events that correspond to identity between observed objects, and compute the probability that any two objects are the same, given a stream of observations of many objects.

Journal ArticleDOI
TL;DR: In this paper, the authors used reversible jump Markov chain Monte Carlo (RJCMC) to compute the posterior quantities required for fully Bayesian inference of quantitative trait loci (QTLs).
Abstract: The advent of molecular markers has created a great potential for the understanding of quantitative inheritance in plants as well as in animals. Taking the newly available data into account, biometric models have been constructed for the mapping of quantitative trait loci (QTLs). In current approaches, the lack of knowledge on the number and location of the most important QTLs contributing to a trait is a major problem. In this paper, we utilize reversible jump Markov chain Monte Carlo methodology (Green, 1995, Biometrika 82, 711-732) in order to compute the posterior quantities required for fully Bayesian inference. It yields posterior densities not only for the parameters, given the number of QTL, but also for the number of QTL itself. As an example, the algorithm is applied to simulated data according to a standard design in plant breeding.

Journal ArticleDOI
TL;DR: In this article, the authors consider fixed scan Gibbs and block Gibbs samplers for a Bayesian hierarchical random effects model with proper conjugate priors and show that these sampler chains are geometrically ergodic.