Topic

Maximum a posteriori estimation

About: Maximum a posteriori estimation is a research topic. Over its lifetime, 7486 publications have been published within this topic, receiving 222291 citations. The topic is also known as: maximum a posteriori, MAP, and maximum a posteriori probability.
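As a concrete illustration of the definition above, here is a minimal sketch (a hypothetical example, not drawn from any of the papers listed below) that computes the MAP estimate of a coin's bias from k heads in n flips under a conjugate Beta prior, both in closed form and by numerically maximizing the log-posterior:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical example: MAP estimate of a coin's bias theta from k heads in
# n flips under a Beta(a, b) prior. The posterior is Beta(k + a, n - k + b),
# whose mode gives the closed-form MAP estimate below.
def map_coin_bias(k, n, a=2.0, b=2.0):
    return (k + a - 1.0) / (n + a + b - 2.0)

# The same estimate found numerically by minimizing the negative log-posterior
# (additive constants dropped), the way MAP is computed when no closed form exists.
def map_coin_bias_numeric(k, n, a=2.0, b=2.0):
    def neg_log_post(theta):
        return -(k + a - 1) * np.log(theta) - (n - k + b - 1) * np.log(1 - theta)
    return minimize_scalar(neg_log_post, bounds=(1e-6, 1 - 1e-6),
                           method="bounded").x

print(map_coin_bias(7, 10))          # 0.666...
print(map_coin_bias_numeric(7, 10))  # ~0.6667
```

With a flat Beta(1, 1) prior the estimate reduces to the maximum likelihood value k/n; the prior acts as a regularizer, which is the usual reading of MAP estimation.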


Papers
Journal ArticleDOI
TL;DR: The singular value decomposition (SVD) is used to provide a series expansion that, in contrast to the method of sampling functions, permits simple identification of vectors in the minimum-norm space that are poorly represented in the sample values.
Abstract: A method for the stable interpolation of a bandlimited function known at sample instants with arbitrary locations in the presence of noise is given. Singular value decomposition is used to provide a series expansion that, in contrast to the method of sampling functions, permits simple identification of vectors in the minimum-norm space poorly represented in the sample values. Three methods, Miller regularization, least squares estimation, and maximum a posteriori estimation, are given for obtaining regularized reconstructions when noise is present. The singular value decomposition (SVD) method is used to interrelate these methods. Examples illustrating the technique are given.
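A rough sketch of the truncated-SVD reconstruction idea in this abstract follows; the expansion grid, test signal, noise level, and truncation threshold are all assumptions chosen for illustration, and the paper's Miller-regularized and MAP variants are not reproduced:

```python
import numpy as np

# Reconstruct a bandlimited signal from noisy, nonuniformly spaced samples:
# expand the signal in sinc functions on a uniform grid and solve the linear
# system with a truncated SVD, discarding singular directions that are poorly
# represented in the samples (their tiny singular values amplify noise).
rng = np.random.default_rng(0)

grid = np.arange(-10, 11)                  # uniform sinc expansion grid

def x(s):
    return np.sinc(0.7 * s)                # example bandlimited signal

t = np.sort(rng.uniform(-8, 8, 25))        # arbitrary sample instants
y = x(t) + 0.01 * rng.normal(size=t.size)  # noisy sample values

A = np.sinc(t[:, None] - grid[None, :])    # A[i, j] = sinc(t_i - g_j)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
keep = s > 1e-3 * s[0]                     # truncation threshold (assumed)
c = Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])

t_fine = np.linspace(-8, 8, 200)
x_hat = np.sinc(t_fine[:, None] - grid[None, :]) @ c
print(np.max(np.abs(x_hat - x(t_fine))))   # reconstruction error on the interval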

48 citations

Journal ArticleDOI
TL;DR: The use of a beta prior in trait estimation was extended to the maximum a posteriori (MAP) method of Bayesian estimation, yielding a new method called essentially unbiased MAP (EU-MAP).
Abstract: The use of a beta prior in trait estimation was extended to the maximum a posteriori (MAP) method of Bayesian estimation. This new method, called essentially unbiased MAP (EU-MAP), was compared with MAP (using a standard normal prior), essentially unbiased expected a posteriori, weighted likelihood, and maximum likelihood estimation methods. Comparisons were made based on the effects that the shape of prior distributions, different item bank characteristics, and practical constraints had on bias, standard error, and root-mean-square error (RMSE). Overall, EU-MAP performed best. This new method significantly reduced bias in fixed-length tests (though with a slight increase in RMSE) and performed reasonably well when a fixed posterior variance termination rule was used. Practical constraints had little effect on the bias of this method.
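For context, here is a minimal sketch of the baseline MAP trait estimator that EU-MAP is compared against, under the two-parameter logistic (2PL) IRT model with a standard normal prior; the item parameters and response pattern are hypothetical, and a simple grid search stands in for the Newton iterations used in practice:

```python
import numpy as np

# MAP estimate of the latent trait theta under the 2PL IRT model with a
# standard normal prior: maximize log-likelihood + log-prior over a grid.
def map_theta(u, a, b, grid=np.linspace(-4, 4, 801)):
    p = 1.0 / (1.0 + np.exp(-a * (grid[:, None] - b)))  # P(correct | theta)
    log_lik = (u * np.log(p) + (1 - u) * np.log(1 - p)).sum(axis=1)
    log_post = log_lik - 0.5 * grid**2                  # add log N(0, 1) prior
    return grid[np.argmax(log_post)]

a = np.array([1.2, 0.8, 1.5, 1.0])   # item discriminations (hypothetical)
b = np.array([-0.5, 0.0, 0.5, 1.0])  # item difficulties (hypothetical)
u = np.array([1, 1, 0, 0])           # examinee's scored responses
print(map_theta(u, a, b))            # MAP estimate of theta
```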

47 citations

Proceedings ArticleDOI
21 Jun 1994
TL;DR: This paper presents a Markov random field (MRF) model for object recognition in high-level vision whose posterior distribution is derived from the theories of MRFs and probability, in contrast to heuristic formulations.
Abstract: This paper presents a Markov random field (MRF) model for object recognition in high-level vision. The labeling state of a scene in terms of a model object is considered as an MRF or coupled MRFs. Within the Bayesian framework, the optimal solution is defined as the maximum a posteriori (MAP) estimate of the MRF. The posterior distribution is derived based on sound mathematical principles from the theories of MRFs and probability, in contrast to heuristic formulations. An experimental result is presented.
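A generic sketch of MAP labeling for an MRF follows; it uses iterated conditional modes (ICM) with a Potts smoothness prior on a toy binary image, a standard local approximation to the MAP estimate rather than the paper's own recognition posterior:

```python
import numpy as np

# Approximate MAP labeling of a grid MRF by iterated conditional modes:
# minimize unary (data) costs plus a Potts pairwise smoothness term by
# greedily updating each site given its neighbors' current labels.
def icm_map(unary, beta=1.0, iters=10):
    """unary: (H, W, L) data costs; returns an (H, W) label map."""
    H, W, L = unary.shape
    labels = unary.argmin(axis=2)               # initialize at the ML labeling
    for _ in range(iters):
        for i in range(H):
            for j in range(W):
                cost = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        # Potts prior: penalize disagreeing with a neighbor.
                        cost += beta * (np.arange(L) != labels[ni, nj])
                labels[i, j] = cost.argmin()
    return labels

rng = np.random.default_rng(1)
noisy = (rng.random((16, 16)) < 0.3).astype(float)  # toy binary image
unary = np.stack([noisy, 1.0 - noisy], axis=2)      # cost of label 0 / label 1
print(icm_map(unary, beta=0.8))                     # smoothed MAP labeling
```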

47 citations

Journal ArticleDOI
TL;DR: The advantages and disadvantages of joint maximum likelihood, marginal maximum likelihood, and Bayesian methods of parameter estimation in item response theory are discussed and compared.
Abstract: Advantages and disadvantages of joint maximum likelihood, marginal maximum likelihood, and Bayesian methods of parameter estimation in item response theory are discussed and compared.

47 citations

Journal ArticleDOI
TL;DR: In an evaluation on continuous speech recognition using decision-tree HMMs, the PIC criterion outperforms the ML and MDL criteria, building a compact tree structure of moderate size with a higher recognition rate.
Abstract: This paper surveys a series of model selection approaches and presents a novel predictive information criterion (PIC) for hidden Markov model (HMM) selection. An approximate Bayesian approach based on the Viterbi algorithm is applied so that PIC selects the HMMs providing the largest prediction information for generalization to future data. When the perturbation of the HMM parameters is expressed by a product of conjugate prior densities, the segmental prediction information is derived at the frame level without Laplacian integral approximation. In particular, a multivariate t distribution is obtained to characterize the prediction information corresponding to the HMM mean vector and precision matrix. For model selection in tree-structured HMMs, we develop a top-down prior/posterior propagation algorithm to estimate the structural hyperparameters. The prediction information is determined so as to choose the best HMM tree model. Unlike the maximum likelihood (ML) and minimum description length (MDL) selection criteria, the parameters of PIC-chosen HMMs are computed via maximum a posteriori estimation. In an evaluation on continuous speech recognition using decision-tree HMMs, the PIC criterion outperforms the ML and MDL criteria, building a compact tree structure of moderate size with a higher recognition rate.
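As background for the MAP step mentioned in this abstract, here is a minimal sketch (with hypothetical counts and prior weight) of the standard conjugate-prior MAP update for a Gaussian mean, the kind of update applied to HMM mean vectors; the paper's PIC selection machinery itself is not reproduced:

```python
import numpy as np

# Conjugate MAP update for a Gaussian mean: with a prior N(mu0, sigma^2/tau)
# on the mean, the MAP estimate is a count-weighted interpolation between the
# prior mean and the sample mean (tau plays the role of a prior sample count).
def map_mean(x, mu0, tau=5.0):
    n = x.shape[0]
    return (tau * mu0 + x.sum(axis=0)) / (tau + n)

rng = np.random.default_rng(2)
x = rng.normal(loc=1.0, scale=1.0, size=(20, 3))  # 20 frames, 3-dim features
mu0 = np.zeros(3)                                 # prior mean (hypothetical)
print(map_mean(x, mu0))  # pulled from mu0 toward the sample mean
```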

47 citations


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations, 86% related
Deep learning: 79.8K papers, 2.1M citations, 85% related
Convolutional neural network: 74.7K papers, 2M citations, 85% related
Feature extraction: 111.8K papers, 2.1M citations, 85% related
Image processing: 229.9K papers, 3.5M citations, 84% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    64
2022    125
2021    211
2020    244
2019    250
2018    236