
Maximum a posteriori estimation

About: Maximum a posteriori estimation is a research topic. Over its lifetime, 7,486 publications have been published within this topic, receiving 222,291 citations. The topic is also known as maximum a posteriori, MAP, and maximum a posteriori probability.
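For readers new to the topic, a minimal sketch of the idea (mine, not drawn from any paper listed below): the MAP estimate is the mode of the posterior, so it weighs the likelihood against a prior. With a Beta prior on a Bernoulli parameter the mode is available in closed form; all names here are illustrative.

```python
import numpy as np

# MAP estimate of a coin's bias with a Beta(a, b) prior.
# The posterior is Beta(a + heads, b + tails); its mode (the MAP
# estimate) is (a + heads - 1) / (a + b + n - 2), valid for a, b > 1.
def map_bernoulli(heads, tails, a=2.0, b=2.0):
    n = heads + tails
    return (a + heads - 1.0) / (a + b + n - 2.0)

data = np.array([1, 1, 0, 1, 0, 1, 1, 1])      # 6 heads, 2 tails
mle = data.mean()                               # 0.75, maximum likelihood
map_est = map_bernoulli(data.sum(), len(data) - data.sum())
print(mle, map_est)                             # 0.75 vs 0.7: MAP is pulled toward the prior mean 0.5
```

Unlike the maximum likelihood estimate, the MAP estimate never reaches 0 or 1 on small samples, which is the regularizing effect many of the papers below exploit.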


Papers
Journal Article · DOI
TL;DR: A framework for maximum a posteriori (MAP) estimation of hidden Markov models (HMM) is presented, and Bayesian learning is shown to serve as a unified approach for a wide range of speech recognition applications.
Abstract: In this paper, a framework for maximum a posteriori (MAP) estimation of hidden Markov models (HMMs) is presented. Three key issues of MAP estimation, namely, the choice of prior distribution family, the specification of the parameters of prior densities, and the evaluation of the MAP estimates, are addressed. Using HMMs with Gaussian mixture state observation densities as an example, it is assumed that the prior densities for the HMM parameters can be adequately represented as a product of Dirichlet and normal-Wishart densities. The classical maximum likelihood estimation algorithms, namely, the forward-backward algorithm and the segmental k-means algorithm, are expanded, and MAP estimation formulas are developed. Prior density estimation issues are discussed for two classes of applications, parameter smoothing and model adaptation, and some experimental results are given illustrating the practical interest of this approach. Because of its adaptive nature, Bayesian learning is shown to serve as a unified approach for a wide range of speech recognition applications.
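One ingredient of this framework, sketched under simplifying assumptions of mine (a single Gaussian mean with known variance and a conjugate normal prior, rather than the paper's full Dirichlet and normal-Wishart setup): the MAP estimate interpolates between the prior mean and the sample mean, which is what lets the same update serve both smoothing and adaptation.

```python
import numpy as np

# MAP re-estimation of a Gaussian mean under a conjugate normal prior.
# tau acts as a pseudo-count of prior observations: with little data the
# estimate stays near the prior mean mu0; as n grows it approaches the
# ML estimate x.mean().
def map_mean(x, mu0, tau):
    n = len(x)
    return (tau * mu0 + x.sum()) / (tau + n)

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=1.0, size=10)   # small adaptation set
print(x.mean())                               # ML estimate
print(map_mean(x, mu0=0.0, tau=5.0))          # shrunk toward the prior mean 0.0
```

This shrinkage is the adaptive behaviour the abstract refers to: a strong prior gives parameter smoothing, while a prior built from a previously trained model gives model adaptation.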

2,430 citations

Journal Article · DOI
TL;DR: A fast algorithm has been developed that utilizes Taylor's theorem and the separable nature of the basis functions, meaning that most of the nonlinear spatial variability between images can be automatically corrected within a few minutes.
Abstract: We describe a comprehensive framework for performing rapid and automatic nonlabel-based nonlinear spatial normalizations. The approach adopted minimizes the residual squared difference between an image and a template of the same modality. In order to reduce the number of parameters to be fitted, the nonlinear warps are described by a linear combination of low spatial frequency basis functions. The objective is to determine the optimum coefficients for each of the bases by minimizing the sum of squared differences between the image and template, while simultaneously maximizing the smoothness of the transformation using a maximum a posteriori (MAP) approach. Most MAP approaches assume that the variance associated with each voxel is already known and that there is no covariance between neighboring voxels. The approach described here attempts to estimate this variance from the data, and also corrects for the correlations between neighboring voxels. This makes the same approach suitable for the spatial normalization of both high-quality magnetic resonance images and low-resolution noisy positron emission tomography images. A fast algorithm has been developed that utilizes Taylor's theorem and the separable nature of the basis functions, meaning that most of the nonlinear spatial variability between images can be automatically corrected within a few minutes. Hum. Brain Mapping 7:254-266, 1999.
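A one-dimensional caricature of this scheme, with every name and constant mine: the displacement field is a linear combination of a few low-frequency cosine basis functions, and the coefficients are fitted by regularized Gauss-Newton steps that linearize the warped image with Taylor's theorem, mirroring the MAP objective (squared difference plus a smoothness prior) described above.

```python
import numpy as np

n, k, lam = 128, 4, 1e-2
t = np.arange(n, dtype=float)
# DCT-II style low-frequency basis (column 0 is the constant term)
B = np.cos(np.pi * np.outer(t + 0.5, np.arange(k)) / n)

template = np.exp(-0.5 * ((t - 64.0) / 10.0) ** 2)
image = np.exp(-0.5 * ((t - 70.0) / 10.0) ** 2)   # shifted copy to register

c = np.zeros(k)                                    # basis coefficients
for _ in range(10):                                # Gauss-Newton iterations
    d = B @ c                                      # current displacement field
    warped = np.interp(t + d, t, image)            # resample image on warped grid
    g = np.gradient(warped)                        # first-order Taylor term
    J = g[:, None] * B                             # Jacobian w.r.t. coefficients
    r = template - warped                          # residual
    # regularized normal equations: (J'J + lam*I) dc = J'r - lam*c
    c += np.linalg.solve(J.T @ J + lam * np.eye(k), J.T @ r - lam * c)

print(np.abs(template - np.interp(t + B @ c, t, image)).max())  # residual after fit
```

Because the linear system solved at each step is only k-by-k, the cost is dominated by resampling, which is why restricting the warp to a small basis keeps the method fast.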

1,987 citations

Journal Article · DOI
TL;DR: This work exploits the fact that the marginal density can be expressed as the prior times the likelihood function over the posterior density, so that Bayes factors for model comparisons can be routinely computed as a by-product of the simulation.
Abstract: In the context of Bayes estimation via Gibbs sampling, with or without data augmentation, a simple approach is developed for computing the marginal density of the sample data (marginal likelihood) given parameter draws from the posterior distribution. Consequently, Bayes factors for model comparisons can be routinely computed as a by-product of the simulation. Hitherto, this calculation has proved extremely challenging. Our approach exploits the fact that the marginal density can be expressed as the prior times the likelihood function over the posterior density. This simple identity holds for any parameter value. An estimate of the posterior density is shown to be available if all complete conditional densities used in the Gibbs sampler have closed-form expressions. To improve accuracy, the posterior density is estimated at a high density point, and the numerical standard error of the resulting estimate is derived. The ideas are applied to probit regression and finite mixture models.
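The identity the abstract exploits is m(y) = f(y | theta) pi(theta) / pi(theta | y), valid at any point theta. Below is a check of the identity on a conjugate normal model where every ordinate has a closed form (an illustrative setup of mine; the paper's contribution is estimating the posterior ordinate from Gibbs output when no closed form exists).

```python
import numpy as np
from scipy import stats

sigma, mu0, tau0 = 1.0, 0.0, 2.0          # known sd; normal prior on the mean
rng = np.random.default_rng(1)
y = rng.normal(0.5, sigma, size=20)

# closed-form posterior for the mean
n, ybar = len(y), y.mean()
var_n = 1.0 / (1.0 / tau0**2 + n / sigma**2)
mu_n = var_n * (mu0 / tau0**2 + n * ybar / sigma**2)

theta = mu_n                               # any point works; a high-density point is most accurate
log_lik = stats.norm.logpdf(y, theta, sigma).sum()
log_prior = stats.norm.logpdf(theta, mu0, tau0)
log_post = stats.norm.logpdf(theta, mu_n, np.sqrt(var_n))
print(log_lik + log_prior - log_post)      # log m(y) via the identity

# direct check: marginally y ~ N(mu0*1, sigma^2*I + tau0^2*J)
cov = sigma**2 * np.eye(n) + tau0**2 * np.ones((n, n))
print(stats.multivariate_normal.logpdf(y, mean=np.full(n, mu0), cov=cov))
```

The two printed values agree; in the Gibbs-sampling setting only the posterior ordinate changes, being replaced by a density estimate built from the sampler's complete conditionals.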

1,954 citations

Journal Article · DOI
TL;DR: Complete-data maximum likelihood estimation is often relatively simple when conditioned on some function of the parameters being estimated, and EM's convergence is stable, with each iteration increasing the likelihood.
Abstract: Two major reasons for the popularity of the EM algorithm are that its maximization step involves only complete-data maximum likelihood estimation, which is often computationally simple, and that its convergence is stable, with each iteration increasing the likelihood. When the associated complete-data maximum likelihood estimation itself is complicated, EM is less attractive because the M-step is computationally unattractive. In many cases, however, complete-data maximum likelihood estimation is relatively simple when conditional on some function of the parameters being estimated.
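A toy rendering of this idea, with the setup and all names mine: the M-step of EM for a two-component Gaussian mixture is replaced by conditional maximization (CM) steps, one parameter block at a time. For this model a joint M-step would be easy anyway; the split only exhibits the structure, and each CM step still increases the likelihood.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1.0, 200), rng.normal(3, 1.5, 300)])

w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(50):
    # E-step: responsibilities of each component for each point
    dens = w * stats.norm.pdf(x[:, None], mu, sd)
    r = dens / dens.sum(axis=1, keepdims=True)
    nk = r.sum(axis=0)
    w = nk / len(x)
    # CM-step 1: maximize over the means, variances held fixed
    mu = (r * x[:, None]).sum(axis=0) / nk
    # CM-step 2: maximize over the variances, means held at new values
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

loglik = np.log((w * stats.norm.pdf(x[:, None], mu, sd)).sum(axis=1)).sum()
print(mu, sd, loglik)   # log-likelihood is nondecreasing across iterations
```

The appeal is the one the abstract describes: each conditional maximization is a simple complete-data problem, yet the observed-data likelihood still increases monotonically.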

1,816 citations

Journal Article · DOI
TL;DR: The EM algorithm is shown to provide a slow but sure way of obtaining maximum likelihood estimates of the parameters of interest in compiling a patient record.
Abstract: In compiling a patient record many facets are subject to errors of measurement. A model is presented which allows individual error-rates to be estimated for polytomous facets even when the patient's "true" response is not available. The EM algorithm is shown to provide a slow but sure way of obtaining maximum likelihood estimates of the parameters of interest. Some preliminary experience is reported and the limitations of the method are described.
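A compact sketch of this kind of EM iteration (the Dawid-Skene model), with the data layout and all names mine: the E-step forms a posterior over each item's latent true response, and the M-step re-estimates the class prevalences and each rater's error rates (confusion matrix).

```python
import numpy as np

rng = np.random.default_rng(3)
K, n_items, n_raters = 3, 200, 5
truth = rng.integers(K, size=n_items)
# raters answer correctly with prob 0.7, otherwise uniformly at random
labels = np.where(rng.random((n_items, n_raters)) < 0.7,
                  truth[:, None], rng.integers(K, size=(n_items, n_raters)))

pi = np.full(K, 1.0 / K)                      # class prevalences
theta = np.full((n_raters, K, K), 1.0 / K)    # theta[j, t, l] = P(rater j says l | truth t)
for _ in range(30):
    # E-step: posterior over each item's true category
    logp = np.log(pi) + sum(np.log(theta[j, :, labels[:, j]])
                            for j in range(n_raters))
    post = np.exp(logp - logp.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)
    # M-step: re-estimate prevalences and per-rater error rates
    pi = post.mean(axis=0)
    for j in range(n_raters):
        counts = np.stack([post[labels[:, j] == l].sum(axis=0)
                           for l in range(K)], axis=1) + 1e-6  # smooth away zeros
        theta[j] = counts / counts.sum(axis=1, keepdims=True)

print((post.argmax(axis=1) == truth).mean())  # fraction of latent truths recovered
```

As the abstract notes, convergence is slow but reliable; in practice the iteration is usually initialized from majority votes rather than the uniform values used here.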

1,687 citations


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations (86% related)
Deep learning: 79.8K papers, 2.1M citations (85% related)
Convolutional neural network: 74.7K papers, 2M citations (85% related)
Feature extraction: 111.8K papers, 2.1M citations (85% related)
Image processing: 229.9K papers, 3.5M citations (84% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    64
2022    125
2021    211
2020    244
2019    250
2018    236