Topic

Maximum a posteriori estimation

About: Maximum a posteriori estimation is a research topic. Over the lifetime, 7486 publications have been published within this topic, receiving 222291 citations. The topic is also known as: maximum a posteriori, MAP, and maximum a posteriori probability.
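As a quick orientation before the paper listing, the following minimal sketch shows what a MAP estimate is in the simplest setting: the likelihood is combined with a prior, and the posterior mode shrinks the maximum likelihood estimate toward the prior mean. The Gaussian likelihood with known noise level, the Gaussian prior, and all numerical values are illustrative assumptions, not drawn from any paper below.

```python
# Minimal sketch: ML vs MAP estimation of a Gaussian mean.
# Assumed model (for illustration only): x_i ~ N(mu, sigma^2) with known sigma,
# prior mu ~ N(mu0, tau^2).  The MAP estimate maximizes
# log p(x | mu) + log p(mu) and has the closed form
# mu_MAP = (n * tau^2 * x_bar + sigma^2 * mu0) / (n * tau^2 + sigma^2).
import numpy as np

rng = np.random.default_rng(0)
sigma, tau, mu0 = 1.0, 0.5, 0.0           # known noise sd, prior sd, prior mean
x = rng.normal(2.0, sigma, size=10)       # observations
n, x_bar = len(x), x.mean()

mu_ml = x_bar                             # maximum likelihood: ignores the prior
mu_map = (n * tau**2 * x_bar + sigma**2 * mu0) / (n * tau**2 + sigma**2)

# Numerical check: argmax of the log-posterior on a grid agrees with the formula.
grid = np.linspace(-1, 4, 100001)
log_post = (-0.5 * ((x[:, None] - grid) ** 2).sum(0) / sigma**2
            - 0.5 * (grid - mu0) ** 2 / tau**2)
print(mu_ml, mu_map, grid[np.argmax(log_post)])  # MAP shrinks x_bar toward mu0
```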


Papers
Journal Article
TL;DR: In this paper, the authors compare the quality of various types of posterior mode point and interval estimates for the parameters of latent class models with both the classical maximum likelihood estimates and the bootstrap estimates proposed by De Menezes.
Abstract: In maximum likelihood estimation of latent class models, it often occurs that one or more of the parameter estimates are on the boundary of the parameter space; that is, that estimated probabilities equal 0 (or 1) or, equivalently, that logit coefficients equal minus (or plus) infinity. This not only causes numerical problems in the computation of the variance-covariance matrix, but also makes the reported confidence intervals and significance tests for the parameters concerned meaningless. Boundary estimates can, however, easily be prevented by the use of prior distributions for the model parameters, yielding a Bayesian procedure called posterior mode or maximum a posteriori estimation. This approach is implemented in, for example, the Latent GOLD software packages for latent class analysis (Vermunt & Magidson, 2005). Little is, however, known about the quality of posterior mode estimates of the parameters of latent class models, nor about their sensitivity to the choice of the prior distribution. In this paper, we compare the quality of various types of posterior mode point and interval estimates for the parameters of latent class models with both the classical maximum likelihood estimates and the bootstrap estimates proposed by De Menezes (1999). Our simulation study shows that parameter estimates and standard errors obtained by the Bayesian approach are more reliable than the corresponding parameter estimates and standard errors obtained by maximum likelihood and parametric bootstrapping.

70 citations
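The boundary-estimate problem described in the abstract above reduces, in its simplest form, to estimating a single multinomial probability vector. The sketch below shows how a Dirichlet prior with alpha > 1 keeps the posterior mode (MAP estimate) strictly inside the parameter space where the ML estimate is exactly 0; the alpha value and the omission of the latent-class EM loop are illustrative assumptions, and the paper's simulation study is not reproduced.

```python
# With counts n_k, the ML estimate n_k / N is exactly 0 for unobserved categories;
# a symmetric Dirichlet(alpha) prior with alpha > 1 gives the posterior mode
# (n_k + alpha - 1) / (N + K * (alpha - 1)), which stays off the boundary.
import numpy as np

counts = np.array([7, 3, 0])          # category 3 never observed in the sample
alpha = 1.1                           # mild prior strength (assumed illustrative value)
K, N = len(counts), counts.sum()

p_ml = counts / N
p_map = (counts + alpha - 1) / (N + K * (alpha - 1))

print("ML :", p_ml)    # -> [0.7, 0.3, 0.0]   boundary estimate, logit = -inf
print("MAP:", p_map)   # -> all entries strictly between 0 and 1
```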

Journal Article
TL;DR: A generalized gamma family of hyperpriors is proposed which allows the impressed currents to be focal, and a fast and efficient iterative algorithm, the iterative alternating sequential (IAS) algorithm, is advocated for computing maximum a posteriori (MAP) estimates.
Abstract: Bayesian modeling and analysis of the magnetoencephalography and electroencephalography modalities provide a flexible framework for introducing prior information complementary to the measured data. This prior information is often qualitative in nature, making the translation of the available information into a computational model a challenging task. We propose a generalized gamma family of hyperpriors which allows the impressed currents to be focal, and we advocate a fast and efficient iterative algorithm, the iterative alternating sequential (IAS) algorithm, for computing maximum a posteriori (MAP) estimates. Furthermore, we show that for particular choices of the scalar parameters specifying the hyperprior, the algorithm effectively approximates popular regularization strategies such as the minimum current estimate and the minimum support estimate. The connection between priorconditioning and adaptive regularization methods is also pointed out. The posterior densities are explored by means of a Markov chain Monte Carlo strategy suitable for this family of hypermodels. The computed experiments suggest that the known preference of regularization methods for superficial sources over deep sources is a property of the MAP estimators only, and that estimation of the posterior mean in the hierarchical model is better adapted for localizing deep sources.

70 citations
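A rough idea of the alternating MAP computation can be conveyed with a toy conditionally Gaussian model: observations b = Ax + noise, source amplitudes x_j | theta_j ~ N(0, theta_j), and a variance hyperprior. The sketch below uses a plain gamma hyperprior and an IAS-style alternation between a Tikhonov-type x-update and a closed-form theta-update; the update formula, parameter values, and random stand-in "lead field" are assumptions of this toy version and do not reproduce the paper's generalized gamma family or its MEG/EEG setting.

```python
# IAS-style alternation for a toy sparse linear inverse problem (assumed model).
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_src = 20, 60
A = rng.normal(size=(n_obs, n_src))               # stand-in for a lead-field matrix
x_true = np.zeros(n_src)
x_true[[5, 40]] = [3.0, -2.0]                     # two focal sources
sigma = 0.05
b = A @ x_true + sigma * rng.normal(size=n_obs)

beta, theta_star = 1.6, 1e-2                      # gamma hyperprior shape and scale (assumed)
eta = beta - 1.5
theta = np.full(n_src, theta_star)

for _ in range(50):
    # x-update: minimize ||Ax - b||^2 / sigma^2 + sum_j x_j^2 / theta_j,
    # solved as a stacked least-squares problem in the whitened variable w = x / sqrt(theta).
    D = np.sqrt(theta)
    M = np.vstack([A * D, sigma * np.eye(n_src)])
    rhs = np.concatenate([b, np.zeros(n_src)])
    w, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    x = D * w
    # theta-update: componentwise minimizer for the plain gamma hyperprior,
    # theta_j = (theta_star / 2) * (eta + sqrt(eta^2 + 2 x_j^2 / theta_star)).
    theta = 0.5 * theta_star * (eta + np.sqrt(eta**2 + 2 * x**2 / theta_star))

print(np.flatnonzero(np.abs(x) > 0.5))            # recovered support, ideally {5, 40}
```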

01 Jan 2003
TL;DR: In this paper, the authors describe the steps involved in registering images of different subjects into roughly the same co-ordinate system, where the co-ordinate system is defined by a template image (or series of images).
Abstract: This chapter describes the steps involved in registering images of different subjects into roughly the same co-ordinate system, where the co-ordinate system is defined by a template image (or series of images). The method only uses up to a few hundred parameters, so can only model global brain shape. It works by estimating the optimum coefficients for a set of bases, by minimizing the sum of squared differences between the template and source image, while simultaneously maximizing the smoothness of the transformation using a maximum a posteriori (MAP) approach.

69 citations
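The registration objective described above, a sum-of-squared-differences data term plus a smoothness (Gaussian-prior) penalty on a small number of basis coefficients, can be illustrated with a 1-D toy analogue. The cosine basis, penalty weight, and signals below are invented for illustration and are not the chapter's actual basis set, template, or regularizer.

```python
# Toy 1-D MAP registration: deformation u(x) = B @ c, objective = SSD + prior penalty.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0, 1, 200)
template = np.exp(-((x - 0.5) / 0.08) ** 2)            # "template image"
source = np.exp(-((x - 0.58) / 0.08) ** 2)             # shifted "source image"

# A few low-frequency cosine bases parameterize the displacement field.
B = np.stack([np.cos(np.pi * k * x) for k in range(3)], axis=1)
lam = 1e-2                                             # prior precision on coefficients (assumed)

def neg_log_posterior(c):
    warped = np.interp(x + B @ c, x, source)           # resample source at x + u(x)
    ssd = np.sum((template - warped) ** 2)             # data (likelihood) term
    return ssd + lam * np.sum(c ** 2)                  # plus quadratic prior term

res = minimize(neg_log_posterior, np.zeros(B.shape[1]), method="Nelder-Mead")
print(res.x)   # coefficients of the recovered deformation (roughly a constant shift of 0.08)
```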

Journal Article
TL;DR: For a Markovian sequence of encoder-produced symbols and a discrete memoryless channel, the optimal decoder computes expected values based on a discrete hidden Markov model, using the well-known forward/backward (F/B) algorithm.
Abstract: In previous work on source coding over noisy channels it was recognized that when the source has memory, there is typically "residual redundancy" between the discrete symbols produced by the encoder, which can be capitalized upon by the decoder to improve the overall quantizer performance. Sayood and Borkenhagen (1991) and Phamdo and Farvardin (see IEEE Trans. Inform. Theory, vol.40, p.186-93, 1994) proposed "detectors" at the decoder which optimize suitable criteria in order to estimate the sequence of transmitted symbols. Phamdo and Farvardin also proposed an instantaneous approximate minimum mean-squared error (IAMMSE) decoder. These methods provide a performance advantage over conventional systems, but the maximum a posteriori (MAP) structure is suboptimal, while the IAMMSE decoder makes limited use of the redundancy. Alternatively, combining aspects of both approaches, we propose a sequence-based approximate MMSE (SAMMSE) decoder. For a Markovian sequence of encoder-produced symbols and a discrete memoryless channel, we approximate the expected distortion at the decoder under the constraint of fixed decoder complexity. For this simplified cost, the optimal decoder computes expected values based on a discrete hidden Markov model, using the well-known forward/backward (F/B) algorithm. Performance gains for this scheme are demonstrated over previous techniques in quantizing Gauss-Markov sources over a range of noisy channel conditions. Moreover, a constrained delay version is also suggested.

69 citations
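The forward/backward computation at the heart of the decoder can be sketched for a toy hidden Markov model: a first-order Markov chain of quantizer indices sent over a discrete memoryless channel, with the decoder forming posterior symbol probabilities and a posterior-mean (approximate MMSE) reconstruction from the codebook centroids. The transition matrix, channel, and codebook below are made-up values, and the fixed-complexity constraints of the SAMMSE decoder are not modeled.

```python
# Posterior symbol probabilities via the forward/backward algorithm, then
# posterior-mean (MMSE-style) and symbol-wise MAP reconstructions.
import numpy as np

rng = np.random.default_rng(2)
K = 3
centroids = np.array([-1.0, 0.0, 1.0])                 # quantizer reproduction levels
P = np.array([[0.8, 0.1, 0.1],                         # source transition matrix P(s_t = j | s_{t-1} = i)
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
C = np.full((K, K), 0.05) + 0.85 * np.eye(K)           # channel: C[i, j] = P(received j | sent i)
pi0 = np.full(K, 1.0 / K)

# Simulate a short transmission.
T = 12
sent = [rng.choice(K, p=pi0)]
for _ in range(T - 1):
    sent.append(rng.choice(K, p=P[sent[-1]]))
received = [rng.choice(K, p=C[s]) for s in sent]

# Scaled forward/backward recursions on the hidden Markov model.
alpha = np.zeros((T, K))
beta = np.ones((T, K))
alpha[0] = pi0 * C[:, received[0]]
alpha[0] /= alpha[0].sum()
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ P) * C[:, received[t]]
    alpha[t] /= alpha[t].sum()
for t in range(T - 2, -1, -1):
    beta[t] = P @ (C[:, received[t + 1]] * beta[t + 1])
    beta[t] /= beta[t].sum()

gamma = alpha * beta
gamma /= gamma.sum(axis=1, keepdims=True)              # posterior P(s_t = i | received sequence)
mmse_out = gamma @ centroids                           # posterior-mean reconstruction
map_out = centroids[np.argmax(gamma, axis=1)]          # symbol-wise MAP reconstruction
print(sent, mmse_out.round(2), map_out)
```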

Journal Article
TL;DR: The utility of optimal sampling strategy coupled with adaptive study design in the determination of individual patient and population pharmacokinetic parameter values is evaluated, and it is shown that the four optimal points analyzed with the maximum a posteriori probability Bayesian estimator faithfully reproduced both microscopic and hybrid pharmacokinetic parameter values for individual patients.
Abstract: We have evaluated the utility of optimal sampling strategy coupled with adaptive study design in the determination of individual patient and population pharmacokinetic parameter values. In 9 patients with cystic fibrosis receiving a short (1 minute) infusion of ceftazidime, pharmacokinetic parameter values were determined with a nonlinear least-squares estimator analyzing a traditional, geometrically spaced set of 12 postinfusion serum samples drawn over 8 hours. These values were compared with values generated from four-sample subsets of the 12 obtained at optimal times and analyzed by a nonlinear least-squares estimator, as well as a maximum a posteriori probability Bayesian estimator with prior distributions placed on beta and clearance. The four sampling times were determined according to an adaptive design optimization technique that employs sequential updating of population prior distributions on parameter values. Compared with the 12-point determination, the four optimal points analyzed with the maximum a posteriori probability Bayesian estimator faithfully reproduced both microscopic and hybrid pharmacokinetic parameter values for individual patients and, consequently, also produced accurate measures of population central tendency and dispersion. This has important implications for more efficiently deriving target patient population pharmacokinetic information for new drugs. This should also allow generation of better concentration-effect relationships in populations of interest.

69 citations
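The MAP (posterior mode) Bayesian step used with the sparse sampling design can be sketched as a penalized least-squares fit: the few measured concentrations are weighted against lognormal population priors on the pharmacokinetic parameters. The one-compartment bolus model, prior means and variances, residual error model, sampling times, and data below are invented for illustration and are not taken from the study, which used a short infusion and priors on beta and clearance.

```python
# Schematic MAP estimation of individual pharmacokinetic parameters from sparse samples.
import numpy as np
from scipy.optimize import minimize

def conc(t, CL, V, dose=1000.0):
    """One-compartment bolus model: C(t) = (dose / V) * exp(-(CL / V) * t)."""
    return (dose / V) * np.exp(-(CL / V) * t)

# Population priors (lognormal): geometric means and log-scale sds (assumed values).
mu_log = np.log(np.array([6.0, 15.0]))     # CL [L/h], V [L]
sd_log = np.array([0.3, 0.25])
sigma_obs = 0.10                           # proportional residual error (assumed)

t_obs = np.array([0.25, 1.0, 3.0, 6.0])    # four "optimally" chosen sampling times (hypothetical)
y_obs = np.array([62.0, 51.0, 33.0, 18.0]) # measured concentrations (made-up data)

def neg_log_posterior(log_params):
    CL, V = np.exp(log_params)
    pred = conc(t_obs, CL, V)
    resid = (np.log(y_obs) - np.log(pred)) / sigma_obs   # residual error term
    prior = (log_params - mu_log) / sd_log               # lognormal prior penalty
    return 0.5 * (np.sum(resid ** 2) + np.sum(prior ** 2))

res = minimize(neg_log_posterior, mu_log, method="L-BFGS-B")
CL_map, V_map = np.exp(res.x)
print(f"CL_MAP = {CL_map:.2f} L/h, V_MAP = {V_map:.2f} L, "
      f"half-life = {np.log(2) * V_map / CL_map:.2f} h")
```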


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations (86% related)
Deep learning: 79.8K papers, 2.1M citations (85% related)
Convolutional neural network: 74.7K papers, 2M citations (85% related)
Feature extraction: 111.8K papers, 2.1M citations (85% related)
Image processing: 229.9K papers, 3.5M citations (84% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    64
2022    125
2021    211
2020    244
2019    250
2018    236