scispace - formally typeset
Topic

Maximum a posteriori estimation

About: Maximum a posteriori estimation is a research topic. Over its lifetime, 7,486 publications have been published within this topic, receiving 222,291 citations. The topic is also known as: maximum a posteriori, MAP, and maximum a posteriori probability.
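As a minimal illustration of the idea (a generic textbook example, not tied to any paper listed below), the sketch computes the MAP estimate of a Bernoulli success probability under a Beta prior; the function name and the Beta(2, 2) prior are illustrative choices:

```python
def map_bernoulli(successes, trials, a=2.0, b=2.0):
    """MAP estimate of a Bernoulli parameter under a Beta(a, b) prior.

    The posterior is Beta(successes + a, trials - successes + b);
    for a, b > 1 its mode is the MAP estimate.
    """
    return (successes + a - 1.0) / (trials + a + b - 2.0)

# With a Beta(2, 2) prior, 7 heads in 10 flips gives a MAP estimate of
# 8/12 ~ 0.667, pulled toward the prior mode 0.5 relative to the MLE 0.7.
theta_map = map_bernoulli(7, 10)
```

With a uniform Beta(1, 1) prior the MAP estimate coincides with the maximum-likelihood estimate, which is why MAP is often described as regularized ML.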


Papers
Journal ArticleDOI
TL;DR: A computational framework is presented where parameters pertaining to a spectral decomposition of the diffusion tensor are estimated using a Rician noise model, and it is demonstrated how the Fisher-information matrix can be used as a generic tool for designing optimal experiments.

84 citations
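The TL;DR above uses the Fisher-information matrix to design optimal experiments. A toy sketch of the underlying principle, for the much simpler case of estimating a Gaussian mean (not the paper's Rician diffusion-tensor model; names are illustrative), shows how Fisher information bounds estimator variance via the Cramér-Rao inequality, which is what an optimal design maximizes:

```python
def fisher_info_gaussian_mean(n, sigma):
    """Fisher information about the mean mu of n i.i.d. N(mu, sigma^2) samples.

    For this model I(mu) = n / sigma^2: information grows linearly with
    the number of measurements and shrinks with noise variance.
    """
    return n / sigma ** 2

def crlb(n, sigma):
    """Cramer-Rao lower bound on the variance of any unbiased estimator of mu."""
    return 1.0 / fisher_info_gaussian_mean(n, sigma)

# Doubling the number of measurements halves the achievable variance,
# so a design that maximizes Fisher information minimizes this bound.
bound_10 = crlb(10, 2.0)
bound_20 = crlb(20, 2.0)
```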

Journal ArticleDOI
TL;DR: It is proven that the sequence of iterates that is generated by using the expectation maximization algorithm is monotonically increasing in posterior probability, with stable points of the iteration satisfying the necessary maximizer conditions of the maximum a posteriori solution.
Abstract: The three-dimensional image-reconstruction problem solved here for optical-sectioning microscopy is to estimate the fluorescence intensity λ(x), where x ∈ ℝ³, given a series of Poisson counting-process measurements {M_j(dx)}, j = 1, …, J, each with intensity s_j(y) ∫_ℝ³ p_j(y|x) λ(x) dx, with p_j(y|x) being the point spread of the optics focused to the jth plane and s_j(y) the detection probability for detector point y at focal depth j. A maximum a posteriori reconstruction is generated by inducing a prior distribution on the space of images via Good's three-dimensional rotationally invariant roughness penalty ∫_ℝ³ [|∇λ(x)|²/λ(x)] dx. It is proven that the sequence of iterates generated by using the expectation-maximization algorithm is monotonically increasing in posterior probability, with stable points of the iteration satisfying the necessary maximizer conditions of the maximum a posteriori solution. The algorithms were implemented on the DECmpp-SX, a 64 × 64 parallel processor, running at under 2 s per 64³ 3-D iteration. Results are demonstrated from simulated as well as amoeba and Volvox data. We study performance comparisons of the algorithms for the missing-data problems corresponding to fast data collection for rapid-motion studies, in which every other focal plane is removed, and for imaging with limited detector areas and efficiency.

84 citations
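The EM iteration for Poisson deconvolution referenced above is, in its unpenalized form, the classical Richardson-Lucy update. A 1-D sketch under that simplification (the paper's roughness-penalty prior and 3-D optics are omitted; names are illustrative) might look like:

```python
import numpy as np

def richardson_lucy(measured, psf, n_iter=100):
    """EM iterations for Poisson deconvolution (Richardson-Lucy update).

    Each iteration multiplies the current estimate by the back-projected
    ratio of measured to predicted counts; iterates stay non-negative
    and the likelihood is non-decreasing.
    """
    psf = psf / psf.sum()                  # normalize the point spread
    psf_flip = psf[::-1]                   # adjoint of the blur operator
    estimate = np.full_like(measured, measured.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = measured / np.maximum(blurred, 1e-12)  # guard divide-by-zero
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate

# Blur two spikes with a small PSF, then recover them.
true = np.zeros(32)
true[10], true[20] = 5.0, 3.0
psf = np.array([0.25, 0.5, 0.25])
measured = np.convolve(true, psf, mode="same")
restored = richardson_lucy(measured, psf)
```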

Proceedings ArticleDOI
13 Jun 2000
TL;DR: A framework for tracking rigid objects based on an adaptive Bayesian recognition technique that incorporates dependencies between object features and forms a natural feedback loop between the recognition method and the filter that helps to explain robustness.
Abstract: We present a framework for tracking rigid objects based on an adaptive Bayesian recognition technique that incorporates dependencies between object features. At each frame we find a maximum a posteriori (MAP) estimate of the object parameters, which include the positioning and configuration of non-occluded features. This estimate may be rejected based on its quality. Our careful selection of data points in each frame allows temporal fusion via Kalman filtering. Despite the "unimodality" of our tracking scheme, we demonstrate fairly robust results in highly cluttered aerial scenes. Our technique forms a natural feedback loop between the recognition method and the filter that helps to explain such robustness. We study this loop and derive a number of interesting properties. First, the effective threshold for recognition in each frame is adaptive: it depends on the current level of noise in the system. This allows the system to identify partially occluded or distorted objects as long as the predicted locations are accurate, but it requires a very good match if there is uncertainty as to the object location. Second, the search area for the recognition method is automatically pruned based on the current system uncertainty, yielding an efficient overall method.

84 citations
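The adaptive recognition threshold described above can be sketched, in greatly simplified form, as a Kalman filter whose measurement-validation gate uses the innovation covariance: the acceptance region widens automatically when the state is uncertain. This is a generic gated-filter sketch, not the paper's algorithm; all names and the chi-square-style gate value are illustrative:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R, gate=9.0):
    """One predict/update step with an adaptive validation gate.

    A measurement is accepted only if its squared Mahalanobis innovation
    distance y' S^{-1} y is below `gate`; since S grows with the state
    covariance P, the effective threshold adapts to current uncertainty.
    """
    # Predict.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Innovation and its covariance.
    y = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    d2 = float(y.T @ np.linalg.inv(S) @ y)
    if d2 > gate:
        # Reject the measurement and coast on the prediction.
        return x_pred, P_pred, False
    # Standard Kalman update.
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, True

# 1-D constant-position model: a nearby measurement is fused,
# a wild outlier is gated out.
F = H = np.array([[1.0]])
Q, R = np.array([[0.01]]), np.array([[1.0]])
x, P = np.array([[0.0]]), np.array([[1.0]])
x, P, ok_near = kalman_step(x, P, np.array([[0.5]]), F, H, Q, R)
x, P, ok_far = kalman_step(x, P, np.array([[100.0]]), F, H, Q, R)
```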

Book ChapterDOI
05 Sep 2011
TL;DR: The algorithm, based on the alternating direction method of multipliers (ADMM), is guaranteed to converge to the global optimum of the LP relaxation objective and is competitive with other state-of-the-art algorithms for approximate MAP estimation.
Abstract: Maximum a-posteriori (MAP) estimation is an important task in many applications of probabilistic graphical models. Although finding an exact solution is generally intractable, approximations based on linear programming (LP) relaxation often provide good approximate solutions. In this paper we present an algorithm for solving the LP relaxation optimization problem. In order to overcome the lack of strict convexity, we apply an augmented Lagrangian method to the dual LP. The algorithm, based on the alternating direction method of multipliers (ADMM), is guaranteed to converge to the global optimum of the LP relaxation objective. Our experimental results show that this algorithm is competitive with other state-of-the-art algorithms for approximate MAP estimation.

84 citations
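The chapter above solves the MAP LP relaxation with an ADMM-based augmented-Lagrangian method; reproducing that algorithm is beyond a short sketch, but the ADMM splitting itself can be illustrated on a toy ℓ₁ problem (lasso). The solver below is a generic textbook ADMM with an exact x-update and soft-thresholding z-update, not the paper's method; all names are illustrative:

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=300):
    """ADMM for min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1 (toy illustration).

    Splits the objective over x (quadratic) and z (l1), with scaled
    dual variable u enforcing x = z.
    """
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    inv = np.linalg.inv(AtA + rho * np.eye(n))  # factor once, reuse
    for _ in range(n_iter):
        x = inv @ (Atb + rho * (z - u))         # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)    # l1 proximal step
        u = u + x - z                           # dual ascent on x = z
    return z

# With A = I the lasso solution is exactly soft_threshold(b, lam),
# so convergence is easy to verify.
A = np.eye(3)
b = np.array([3.0, 0.05, -2.0])
z_hat = admm_lasso(A, b, lam=0.1)
```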

Journal ArticleDOI
TL;DR: This paper deals with the Bayesian signal denoising problem, assuming a prior based on a sparse representation modeling over a unitary dictionary, and shows the superiority of the MMSE estimator, both on synthetically generated signals and on real-world signals.
Abstract: This paper deals with the Bayesian signal denoising problem, assuming a prior based on a sparse representation modeling over a unitary dictionary. It is well known that the maximum a posteriori probability (MAP) estimator in such a case has a closed-form solution based on a simple shrinkage. The focus in this paper is on the better performing and less familiar minimum-mean-squared-error (MMSE) estimator. We show that this estimator also leads to a simple formula, in the form of a plain recursive expression for evaluating the contribution of every atom in the solution. An extension of the model to real-world signals is also offered, considering heteroscedastic nonzero entries in the representation, and allowing varying probabilities for the chosen atoms and the overall cardinality of the sparse representation. The MAP and MMSE estimators are redeveloped for this extended model, again resulting in closed-form simple algorithms. Finally, the superiority of the MMSE estimator is demonstrated both on synthetically generated signals and on real-world signals (image patches).

84 citations
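The closed-form MAP shrinkage for a unitary dictionary mentioned above can be sketched as analysis, elementwise shrink, synthesis. Hard thresholding is used here as a simplified stand-in for the paper's shrinkage rule, and the function name is illustrative:

```python
import numpy as np

def map_denoise_unitary(y, U, thresh):
    """MAP-style denoising with a sparse prior over a unitary dictionary U.

    Because U is orthonormal, the estimator decouples per coefficient:
    transform, shrink each coefficient independently, transform back.
    """
    coeffs = U.T @ y                        # analysis step (U.T = U^{-1})
    coeffs[np.abs(coeffs) < thresh] = 0.0   # keep only significant atoms
    return U @ coeffs                       # synthesis step

# With the identity as a (trivially unitary) dictionary, small entries
# of the signal are zeroed and large ones pass through unchanged.
y = np.array([5.0, 0.1, -3.0, 0.05])
denoised = map_denoise_unitary(y, np.eye(4), thresh=0.5)
```

An MMSE estimator for the same model averages over atom supports instead of picking one, which is why the paper finds it outperforms this winner-take-all shrinkage.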


Network Information
Related Topics (5):
- Estimator: 97.3K papers, 2.6M citations, 86% related
- Deep learning: 79.8K papers, 2.1M citations, 85% related
- Convolutional neural network: 74.7K papers, 2M citations, 85% related
- Feature extraction: 111.8K papers, 2.1M citations, 85% related
- Image processing: 229.9K papers, 3.5M citations, 84% related
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    64
2022    125
2021    211
2020    244
2019    250
2018    236