scispace - formally typeset

Maximum a posteriori estimation

About: Maximum a posteriori estimation is a research topic. Over its lifetime, 7,486 publications have been published within this topic, receiving 222,291 citations. The topic is also known as: maximum a posteriori, MAP, and maximum a posteriori probability.


Papers
Journal Article
TL;DR: The experimentation with several public-domain classification datasets suggests that the evidence approach produces the most accurate predictions in the log-score sense, and that the evidence-based methods are quite robust: they predict surprisingly well even when only a small fraction of the full training set is used.
Abstract: In this paper we are interested in discrete prediction problems for a decision-theoretic setting, where the task is to compute the predictive distribution for a finite set of possible alternatives. This question is first addressed in a general Bayesian framework, where we consider a set of probability distributions defined by some parametric model class. Given a prior distribution on the model parameters and a set of sample data, one possible approach for determining a predictive distribution is to fix the parameters to the instantiation with the maximum a posteriori probability. A more accurate predictive distribution can be obtained by computing the evidence (marginal likelihood), i.e., the integral over all the individual parameter instantiations. As an alternative to these two approaches, we demonstrate how to use Rissanen's new definition of stochastic complexity for determining predictive distributions, and show how the evidence predictive distribution with the Jeffreys prior approaches the new stochastic complexity predictive distribution in the limit with increasing amount of sample data. To compare the alternative approaches in practice, each of the predictive distributions discussed is instantiated in the Bayesian network model family case. In particular, to determine the Jeffreys prior for this model family, we show how to compute the (expected) Fisher information matrix for a fixed but arbitrary Bayesian network structure. In the empirical part of the paper the predictive distributions are compared using the simple tree-structured Naive Bayes model, chosen for computational reasons. The experimentation with several public-domain classification datasets suggests that the evidence approach produces the most accurate predictions in the log-score sense. The evidence-based methods are also quite robust in the sense that they predict surprisingly well even when only a small fraction of the full training set is used.
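The gap between the MAP plug-in predictive and the evidence (marginal-likelihood) predictive that the paper compares is easiest to see in the simplest conjugate case. The sketch below uses a Beta-Bernoulli model as my own illustration, not the paper's Bayesian-network setting; the function name and prior parameters are hypothetical:

```python
def predictive_probs(heads, tails, alpha=1.0, beta=1.0):
    """Two predictive distributions for a Bernoulli model with a
    Beta(alpha, beta) prior, given observed counts of heads/tails."""
    # MAP plug-in: fix theta at the posterior mode (assumes enough
    # data or alpha, beta > 1 so the mode is interior).
    theta_map = (heads + alpha - 1.0) / (heads + tails + alpha + beta - 2.0)
    # Evidence / posterior predictive: integrate theta out analytically,
    # which for Beta-Bernoulli gives the rule of succession below.
    theta_pred = (heads + alpha) / (heads + tails + alpha + beta)
    return theta_map, theta_pred

# With little data the MAP plug-in is overconfident; the two converge
# as the sample grows.
print(predictive_probs(3, 0))        # -> (1.0, 0.8)
print(predictive_probs(300, 100))    # nearly identical values
```

The first call shows the robustness point made in the abstract: after three heads and no tails, the MAP plug-in assigns probability 1.0 to heads, while the evidence-based predictive hedges at 0.8.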

60 citations

Journal Article
TL;DR: Through three empirical applications it is demonstrated that the observed-data DICs (deviance information criteria) have much smaller numerical standard errors than the conditional DICs.

60 citations

Journal Article
TL;DR: Both off-line and online Bayesian signal processing algorithms are developed to estimate the number of competing terminals, and a novel approximate maximum a posteriori (MAP) algorithm for hidden Markov models (HMMs) with an unknown transition matrix is proposed.
Abstract: The performance of the IEEE 802.11 protocol based on the distributed coordination function (DCF) has been shown to be dependent on the number of competing terminals and the backoff parameters. Better performance can be expected if the parameters are adapted to the number of active users. In this paper we develop both off-line and online Bayesian signal processing algorithms to estimate the number of competing terminals. The estimation is based on the observed use of the channel, and the number of competing terminals is modeled as a Markov chain with an unknown transition matrix. The off-line estimator makes use of the Gibbs sampler, whereas the first online estimator is based on the sequential Monte Carlo (SMC) technique. A deterministic variant of the SMC estimator is then developed, which is simpler to implement and offers superior performance. Finally a novel approximate maximum a posteriori (MAP) algorithm for hidden Markov models (HMM) with unknown transition matrix is proposed. Realistic IEEE 802.11 simulations using the ns-2 network simulator are provided to demonstrate the excellent performance of the proposed estimators.
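As a toy illustration of the "unknown transition matrix" ingredient, a MAP point estimate of a Markov chain's transition matrix under independent per-row Dirichlet priors takes only a few lines. The function name, prior choice, and data below are my own illustrative assumptions, far simpler than the paper's Gibbs/SMC estimators:

```python
def map_transition_matrix(chain, n_states, pseudocount=2.0):
    """MAP estimate of a Markov-chain transition matrix, with each row
    given an independent symmetric Dirichlet(pseudocount) prior.
    For Dirichlet(a) with a > 1, the posterior mode of row i is
    (counts + a - 1) / (row_total + n_states * (a - 1))."""
    counts = [[0.0] * n_states for _ in range(n_states)]
    for prev, cur in zip(chain, chain[1:]):
        counts[prev][cur] += 1.0
    mat = []
    for row in counts:
        total = sum(row) + n_states * (pseudocount - 1.0)
        mat.append([(c + pseudocount - 1.0) / total for c in row])
    return mat

# Observed state sequence of a 2-state chain.
P = map_transition_matrix([0, 0, 1, 0, 1, 1, 1, 0], 2)
# Each row of P is a valid probability distribution over next states.
```

The pseudocounts regularize the estimate, so rows remain well-defined even for transitions never observed in the sample.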

60 citations

Journal Article
TL;DR: It is shown that the SGSD algorithm represents an approximation of the gradual deformation method, but its search direction has the desirable properties that are lacking in all gradual deformation methods.
Abstract: The simultaneous perturbation stochastic approximation (SPSA) algorithm is modified to obtain a stochastic Gaussian search direction (SGSD) algorithm for automatic history matching. The search direction in the SGSD algorithm is obtained by simultaneously perturbing the reservoir model with unconditional realizations from a Gaussian distribution. This search direction has two nice properties: (1) it is always downhill in the vicinity of the current iterate and (2) the expectation of the stochastic search direction is an approximate quasi-Newton direction with a prior covariance matrix used as the approximate inverse Hessian matrix. For Gaussian reservoir models, we argue and demonstrate that the SGSD algorithm may generate more geologically realistic reservoir description than is obtained with the original SPSA algorithm. It is shown that the SGSD algorithm represents an approximation of the gradual deformation method but its search direction has the desirable properties that are lacking in all gradual deformation methods. The SGSD algorithm is successfully applied to the well-known PUNQ-S3 test case to generate a maximum a posteriori estimate and for uncertainty quantification of reservoir performance predictions using the randomized maximum likelihood method.
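For readers unfamiliar with the baseline being modified, plain SPSA estimates a descent direction from just two objective evaluations along a random ±1 perturbation, regardless of dimension. A minimal sketch, in which the gain schedules and test objective are my own ad hoc choices rather than the paper's:

```python
import random

def spsa_minimize(f, x0, iters=500, a=1.0, c=0.1, seed=0):
    """Minimal SPSA: two evaluations of f along a random +/-1 direction
    yield a simultaneous-perturbation gradient estimate."""
    rng = random.Random(seed)
    x = list(x0)
    for k in range(1, iters + 1):
        ak = a / (k + 20.0)       # decaying step size (ad hoc schedule)
        ck = c / k ** 0.101       # decaying perturbation size
        delta = [rng.choice((-1.0, 1.0)) for _ in x]
        diff = (f([xi + ck * d for xi, d in zip(x, delta)])
                - f([xi - ck * d for xi, d in zip(x, delta)])) / (2.0 * ck)
        # Per-coordinate gradient estimate is diff / delta_i = diff * delta_i.
        x = [xi - ak * diff * d for xi, d in zip(x, delta)]
    return x

# Quadratic with minimum at (1, -2); SPSA homes in using only f-values.
sol = spsa_minimize(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                    [5.0, 5.0])
```

The SGSD modification described in the abstract replaces the ±1 perturbation with unconditional Gaussian realizations drawn under the prior covariance, which is what makes the expected search direction an approximate quasi-Newton step.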

60 citations

Journal Article
TL;DR: The self-organizing map (SOM) algorithm for finite data is derived as an approximate maximum a posteriori estimation algorithm for a Gaussian mixture model with a Gaussian smoothing prior, which is equivalent to a generalized deformable model (GDM).
Abstract: The self-organizing map (SOM) algorithm for finite data is derived as an approximate maximum a posteriori estimation algorithm for a Gaussian mixture model with a Gaussian smoothing prior, which is equivalent to a generalized deformable model (GDM). For this model, objective criteria for selecting hyperparameters are obtained on the basis of empirical Bayesian estimation and cross-validation, which are representative model selection methods. The properties of these criteria are compared by simulation experiments. These experiments show that the cross-validation methods favor more complex structures than the expected log likelihood supports, which is a measure of compatibility between a model and data distribution. On the other hand, the empirical Bayesian methods have the opposite bias.
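For context, the algorithm being reinterpreted is the classic online SOM update: find the best-matching unit, then pull it and its map neighbors toward the sample with a shrinking neighborhood. The sketch below is that classic update for a 1-D map over scalar data, with learning-rate and neighborhood schedules of my own choosing; it is not the paper's MAP derivation or its hyperparameter-selection criteria:

```python
import math
import random

def som_1d(data, n_units=8, iters=3000, seed=1):
    """Classic online SOM for a 1-D map over scalar data."""
    rng = random.Random(seed)
    lo, hi = min(data), max(data)
    units = sorted(rng.uniform(lo, hi) for _ in range(n_units))
    for t in range(iters):
        x = rng.choice(data)
        frac = t / iters
        lr = 0.5 * (1.0 - frac) + 0.01      # decaying learning rate
        sigma = 2.0 * (1.0 - frac) + 0.3    # shrinking neighborhood width
        # Best-matching unit: the closest reference vector to the sample.
        bmu = min(range(n_units), key=lambda i: (units[i] - x) ** 2)
        # Neighborhood-weighted update pulls nearby units toward x.
        for i in range(n_units):
            h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
            units[i] += lr * h * (x - units[i])
    return units

units = som_1d([i / 10.0 for i in range(11)])
```

In the paper's reading, the Gaussian neighborhood function plays the role of the Gaussian smoothing prior over the mixture components, which is what turns the SOM update into approximate MAP estimation.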

60 citations


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations, 86% related
Deep learning: 79.8K papers, 2.1M citations, 85% related
Convolutional neural network: 74.7K papers, 2M citations, 85% related
Feature extraction: 111.8K papers, 2.1M citations, 85% related
Image processing: 229.9K papers, 3.5M citations, 84% related
Performance Metrics
No. of papers in the topic in previous years:
2023: 64
2022: 125
2021: 211
2020: 244
2019: 250
2018: 236