scispace - formally typeset
Topic

Maximum a posteriori estimation

About: Maximum a posteriori estimation is a research topic. Over the lifetime, 7486 publications have been published within this topic receiving 222291 citations. The topic is also known as: Maximum a posteriori, MAP & maximum a posteriori probability.
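For a concrete sense of what a MAP estimate is, here is a minimal sketch (not tied to any paper below, all numbers illustrative): with a Gaussian likelihood and a Gaussian prior on an unknown mean, the posterior is Gaussian, so the MAP estimate has a closed form that interpolates between the prior mean and the sample mean.

```python
import numpy as np

def map_gaussian_mean(x, mu0, sigma2_prior, sigma2_lik):
    # MAP estimate of a scalar mean with Gaussian likelihood N(mu, sigma2_lik)
    # and Gaussian prior N(mu0, sigma2_prior). The posterior is Gaussian, so
    # its mode (the MAP estimate) equals its mean:
    #   mu_map = (sigma2_prior * sum(x) + sigma2_lik * mu0)
    #            / (n * sigma2_prior + sigma2_lik)
    x = np.asarray(x, dtype=float)
    n = x.size
    return (sigma2_prior * x.sum() + sigma2_lik * mu0) / (n * sigma2_prior + sigma2_lik)

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=50)
# weak prior: the MAP estimate is pulled only slightly toward mu0 = 0
est = map_gaussian_mean(data, mu0=0.0, sigma2_prior=10.0, sigma2_lik=1.0)
```

As the prior variance grows the estimate approaches the sample mean (the maximum-likelihood solution); as it shrinks, the estimate is pulled toward the prior mean.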


Papers
Journal ArticleDOI
TL;DR: When the expectation-maximization (EM) algorithm is adopted to compute the maximum a posteriori (MAP) estimate corresponding to the model, the model permits remarkably simple, closed-form expressions for the EM update equations.
Abstract: This paper describes a statistical multiscale modeling and analysis framework for linear inverse problems involving Poisson data. The framework itself is founded upon a multiscale analysis associated with recursive partitioning of the underlying intensity, a corresponding multiscale factorization of the likelihood (induced by this analysis), and a choice of prior probability distribution made to match this factorization by modeling the "splits" in the underlying partition. The class of priors used here has the interesting feature that the "noninformative" member yields the traditional maximum-likelihood solution; other choices are made to reflect prior belief as to the smoothness of the unknown intensity. Adopting the expectation-maximization (EM) algorithm for use in computing the maximum a posteriori (MAP) estimate corresponding to our model, we find that our model permits remarkably simple, closed-form expressions for the EM update equations. The behavior of our EM algorithm is examined, and it is shown that convergence to the global MAP estimate can be guaranteed. Applications in emission computed tomography and astronomical energy spectral analysis demonstrate the potential of the new approach.

130 citations
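As the abstract notes, the "noninformative" prior in this family recovers the traditional maximum-likelihood solution. A sketch of that baseline (not the paper's multiscale algorithm) is the classic multiplicative EM iteration for a Poisson linear inverse problem y ~ Poisson(A @ lam); the mixing matrix and intensities below are illustrative.

```python
import numpy as np

def poisson_em(y, A, iters=500):
    # Classic EM (Richardson-Lucy-style) update for Poisson data y = A @ lam:
    #   lam <- lam * (A.T @ (y / (A @ lam))) / A.T @ 1
    lam = np.ones(A.shape[1])
    col = A.sum(axis=0)                          # normalizing column sums
    for _ in range(iters):
        ratio = y / np.maximum(A @ lam, 1e-12)   # guard against division by zero
        lam = lam * (A.T @ ratio) / col          # multiplicative EM update
    return lam

# Tiny noiseless example: use the exact Poisson means as data, so the
# maximum-likelihood solution is the true intensity.
A = np.array([[0.8, 0.2],
              [0.2, 0.8]])
lam_true = np.array([3.0, 7.0])
y = A @ lam_true
lam_hat = poisson_em(y, A)
```

The iteration preserves nonnegativity automatically, which is one reason EM is attractive for intensity estimation.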

Proceedings Article
08 Dec 2008
TL;DR: This work shows how smoother priors can preserve the benefits of these sparse priors while adding stability to the Maximum A-Posteriori estimate, and finds that online optimization of the parameters of the KL-regularized model can significantly improve prediction performance.
Abstract: Prior work has shown that features which appear to be biologically plausible as well as empirically useful can be found by sparse coding with a prior such as a laplacian (L1) that promotes sparsity. We show how smoother priors can preserve the benefits of these sparse priors while adding stability to the Maximum A-Posteriori (MAP) estimate that makes it more useful for prediction problems. Additionally, we show how to calculate the derivative of the MAP estimate efficiently with implicit differentiation. One prior that can be differentiated this way is KL-regularization. We demonstrate its effectiveness on a wide variety of applications, and find that online optimization of the parameters of the KL-regularized model can significantly improve prediction performance.

130 citations
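The instability the abstract refers to comes from the non-smoothness of the Laplacian (L1) prior at zero. A minimal sketch of that MAP estimate (not the paper's KL-regularized method, threshold value illustrative): with a Gaussian likelihood and a Laplacian prior, the per-coefficient MAP solution is soft thresholding, which sets small coefficients exactly to zero.

```python
import numpy as np

def soft_threshold(y, t):
    # Elementwise MAP estimate under a Laplacian prior and Gaussian noise:
    #   argmin_x 0.5 * (x - y)**2 + t * |x|  =  sign(y) * max(|y| - t, 0)
    # (for noise variance sigma2 and prior scale b, t = sigma2 / b)
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

y = np.array([-2.0, -0.3, 0.0, 0.5, 3.0])
x_map = soft_threshold(y, 1.0)
# coefficients below the threshold are zeroed (sparsity); larger ones shrink by t
```

The hard zeroing is exactly what makes the L1 MAP estimate discontinuous in its sensitivity to the data, motivating the smoother priors the paper studies.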

Posted Content
TL;DR: In this paper, a framework for learning/estimating graphs from data is proposed, including the formulation of various graph learning problems, their probabilistic interpretations, and associated algorithms; specialized algorithms are developed by incorporating the graph Laplacian and structural constraints.
Abstract: Graphs are fundamental mathematical structures used in various fields to represent data, signals and processes. In this paper, we propose a novel framework for learning/estimating graphs from data. The proposed framework includes (i) formulation of various graph learning problems, (ii) their probabilistic interpretations and (iii) associated algorithms. Specifically, graph learning problems are posed as estimation of graph Laplacian matrices from some observed data under given structural constraints (e.g., graph connectivity and sparsity level). From a probabilistic perspective, the problems of interest correspond to maximum a posteriori (MAP) parameter estimation of Gaussian-Markov random field (GMRF) models, whose precision (inverse covariance) is a graph Laplacian matrix. For the proposed graph learning problems, specialized algorithms are developed by incorporating the graph Laplacian and structural constraints. The experimental results demonstrate that the proposed algorithms outperform the current state-of-the-art methods in terms of accuracy and computational efficiency.

129 citations
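A sketch of the model underlying this MAP formulation (not the paper's estimation algorithm; the chain graph, edge weights, and diagonal regularizer are illustrative): a GMRF whose precision matrix is a graph Laplacian plus a small diagonal term. Sampling from it and inverting the sample covariance shows that the empirical precision recovers the Laplacian structure.

```python
import numpy as np

# 4-node chain graph: weighted adjacency matrix
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W        # combinatorial graph Laplacian (rows sum to 0)
precision = L + 0.5 * np.eye(4)       # diagonal loading makes it positive definite
cov = np.linalg.inv(precision)

rng = np.random.default_rng(1)
samples = rng.multivariate_normal(np.zeros(4), cov, size=200_000)
emp_precision = np.linalg.inv(np.cov(samples.T))
# as the sample size grows, emp_precision approaches the true Laplacian-based
# precision; the paper's MAP estimators enforce the Laplacian/structural
# constraints directly instead of inverting a sample covariance
```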

Journal ArticleDOI
TL;DR: One strength of the method is its ability to give the best solution for the resolution level required by the user, that is, for the chosen prior distribution.
Abstract: Segmentation of a nonstationary process consists of assuming piecewise stationarity and detecting the instants of change. We consider the case where all the data are available at once and perform a global segmentation instead of a sequential procedure. We build a change process and arbitrarily define its prior distribution. This allows us to propose the MAP estimate, as well as some minimum contrast estimates, as a solution. One strength of the method is its ability to give the best solution for the resolution level required by the user, that is, for the chosen prior distribution. The method can address a wide class of parametric and nonparametric models. Simulations and applications to real data are proposed.

128 citations
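A sketch of global (non-sequential) MAP-style segmentation in the spirit of the abstract (not the paper's exact estimator; the piecewise-constant model and penalty value are illustrative): with a Gaussian likelihood and a fixed penalty beta per changepoint, the global optimum is found by dynamic programming over all data at once.

```python
import numpy as np

def segment(x, beta):
    # Optimal-partitioning DP: minimize sum of per-segment squared errors
    # plus beta per changepoint. Equivalent to a MAP estimate with an
    # i.i.d. Gaussian likelihood and a simple prior on the change process.
    x = np.asarray(x, dtype=float)
    n = x.size
    # prefix sums give O(1) segment cost: SSE of x[i:j] around its mean
    s1 = np.concatenate([[0.0], np.cumsum(x)])
    s2 = np.concatenate([[0.0], np.cumsum(x * x)])

    def cost(i, j):
        m = j - i
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / m

    best = np.full(n + 1, np.inf)    # best[j] = optimal penalized cost of x[:j]
    best[0] = -beta                  # cancels the penalty of the first segment
    last = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + cost(i, j) + beta
            if c < best[j]:
                best[j], last[j] = c, i
    cps, j = [], n                   # backtrack the changepoints
    while j > 0:
        j = last[j]
        if j > 0:
            cps.append(j)
    return sorted(cps)

x = np.concatenate([np.zeros(30), 5.0 * np.ones(30)])
```

Choosing a larger beta corresponds to a prior favoring fewer changes, i.e., a coarser resolution level, which is the user-controlled trade-off the abstract describes.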

Journal Article
TL;DR: This paper illustrates the situations where standard EP fails to converge, reviews different modifications and alternative algorithms for improving convergence, and demonstrates that convergence problems may occur during the type-II maximum a posteriori (MAP) estimation of the hyperparameters.
Abstract: This paper considers the robust and efficient implementation of Gaussian process regression with a Student-t observation model, which has a non-log-concave likelihood. The challenge with the Student-t model is the analytically intractable inference, which is why several approximate methods have been proposed. Expectation propagation (EP) has been found to be a very accurate method in many empirical studies, but the convergence of EP is known to be problematic with models containing non-log-concave site functions. In this paper we illustrate the situations where standard EP fails to converge and review different modifications and alternative algorithms for improving the convergence. We demonstrate that convergence problems may occur during the type-II maximum a posteriori (MAP) estimation of the hyperparameters and show that standard EP may not converge at the MAP values with some difficult data sets. We present a robust implementation which relies primarily on parallel EP updates and uses a moment-matching-based double-loop algorithm with adaptively selected step size in difficult cases. The predictive performance of EP is compared with Laplace, variational Bayes, and Markov chain Monte Carlo approximations.

127 citations
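A sketch of the robustness that motivates the Student-t observation model (not the paper's EP implementation; nu, sigma, and the data are illustrative): the classic EM/IRLS update for a location parameter downweights observations with large residuals, so a gross outlier barely moves the estimate.

```python
import numpy as np

def student_t_location(x, nu=4.0, sigma=1.0, iters=100):
    # EM for the location of a Student-t(nu) likelihood with fixed scale sigma.
    x = np.asarray(x, dtype=float)
    mu = np.median(x)                      # robust starting point
    for _ in range(iters):
        r2 = ((x - mu) / sigma) ** 2
        w = (nu + 1.0) / (nu + r2)         # E-step: expected latent precisions
        mu = np.sum(w * x) / np.sum(w)     # M-step: weighted mean
    return mu

x = np.concatenate([np.full(20, 1.0), [100.0]])  # one gross outlier
mu_robust = student_t_location(x)
# the Student-t estimate stays near 1, while the sample mean is dragged upward
```

The heavy tails come at the price the abstract describes: the likelihood is non-log-concave, which is what makes EP convergence delicate.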


Network Information
Related Topics (5)
Estimator
97.3K papers, 2.6M citations
86% related
Deep learning
79.8K papers, 2.1M citations
85% related
Convolutional neural network
74.7K papers, 2M citations
85% related
Feature extraction
111.8K papers, 2.1M citations
85% related
Image processing
229.9K papers, 3.5M citations
84% related
Performance
Metrics
No. of papers in the topic in previous years
Year	Papers
2023	64
2022	125
2021	211
2020	244
2019	250
2018	236