Topic

Maximum a posteriori estimation

About: Maximum a posteriori estimation is a research topic. Over the lifetime, 7,486 publications have been published within this topic, receiving 222,291 citations. The topic is also known as: maximum a posteriori, MAP, and maximum a posteriori probability.
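As a quick reference for the estimator the topic refers to, a minimal formulation is sketched below, using generic symbols (θ for the unknown quantity, x for the observed data); this is standard notation, not taken from any of the papers listed.

```latex
% MAP estimate: the mode of the posterior distribution over \theta.
\hat{\theta}_{\mathrm{MAP}}
  = \arg\max_{\theta}\; p(\theta \mid x)
  = \arg\max_{\theta}\; p(x \mid \theta)\, p(\theta)
% Under a flat prior p(\theta) \propto 1 this reduces to the
% maximum likelihood estimate.
```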


Papers
Journal Article (DOI)
TL;DR: This paper presents a novel approach to blind equalization (deconvolution) that is based on direct examination of possible input sequences and does not rely on a model of the approximate inverse of the channel dynamics.
Abstract: This paper presents a novel approach to blind equalization (deconvolution), which is based on direct examination of possible input sequences. In contrast to many other approaches, it does not rely on a model of the approximate inverse of the channel dynamics. To start with, the blind equalization identifiability problem for a noise-free finite impulse response channel model is investigated. A necessary, algorithm-independent condition on the input for blind deconvolution is derived; this condition is expressed in terms of an information measure of the input sequence. A sufficient condition for identifiability is also inferred, which imposes a constraint on the true channel dynamics. The analysis motivates a recursive algorithm in which all permissible input sequences are examined. The exact solution is guaranteed to be found as soon as this is possible. An upper bound on the computational complexity of the algorithm is given. The algorithm is then generalized to cope with time-varying infinite impulse response channel models with additive noise. The estimated sequence is an arbitrarily good approximation of the maximum a posteriori estimate. The proposed method is evaluated on a Rayleigh fading communication channel. The simulation results indicate fast convergence properties and good tracking abilities.
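The paper's recursive search is not reproduced here; the sketch below is only a brute-force toy illustrating MAP sequence estimation through a known FIR channel with BPSK inputs and Gaussian noise. The function name, the two-tap channel, and the noise level are assumptions made for the example, not details from the paper.

```python
import itertools
import numpy as np

def map_sequence_estimate(y, h, noise_var, prior=None):
    """Brute-force MAP estimate of a BPSK input sequence sent through a
    known FIR channel h with additive Gaussian noise (toy illustration,
    not the paper's recursive blind algorithm)."""
    n = len(y)
    best_seq, best_logpost = None, -np.inf
    for bits in itertools.product([-1.0, 1.0], repeat=n):
        s = np.asarray(bits)
        y_hat = np.convolve(s, h)[:n]                 # channel output for this candidate
        log_lik = -np.sum((y - y_hat) ** 2) / (2 * noise_var)
        log_prior = 0.0 if prior is None else np.log(prior(s))
        log_post = log_lik + log_prior                # log p(s | y) up to a constant
        if log_post > best_logpost:
            best_seq, best_logpost = s, log_post
    return best_seq

# Example: recover a short sequence sent through an assumed two-tap channel.
rng = np.random.default_rng(0)
h = np.array([1.0, 0.5])
s_true = rng.choice([-1.0, 1.0], size=8)
y = np.convolve(s_true, h)[:8] + 0.1 * rng.standard_normal(8)
print(map_sequence_estimate(y, h, noise_var=0.01))
```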

45 citations

Proceedings Article (DOI)
14 May 2006
TL;DR: Instead of modelling utterance likelihoods and inferring decision boundaries, C-Aug models directly model the posterior probability of class labels conditioned on the utterance, yielding a model that is easy to normalise and can be trained using conditional maximum likelihood estimation.
Abstract: Recently there has been significant interest in developing new acoustic models for speech recognition. One such model, which allows complex dependencies to be represented, is the augmented statistical model. This incorporates additional dependencies by constructing a local exponential expansion of a standard HMM. Unfortunately, the resulting model often has an intractable normalisation term, rendering training difficult for all but binary classification tasks. In this paper, conditional augmented (C-Aug) models are proposed as an attractive alternative. Instead of modelling utterance likelihoods and inferring decision boundaries, C-Aug models directly model the posterior probability of class labels, conditioned on the utterance. The resulting model is easy to normalise and can be trained using conditional maximum likelihood estimation. In addition, since the model is convex, the optimisation converges to a global maximum.
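The C-Aug feature expansion itself (built from a standard HMM) is not shown here; the sketch below only illustrates the training criterion the abstract describes, conditional maximum likelihood for a directly normalised posterior model, using plain multinomial logistic regression as a stand-in. All names, data, and hyperparameters are assumptions for the example.

```python
import numpy as np

def train_conditional_ml(X, y, num_classes, lr=0.1, steps=500):
    """Toy conditional-maximum-likelihood training of a directly normalised
    posterior model p(class | features) (multinomial logistic regression).
    Illustrates the training criterion only, not the C-Aug feature expansion."""
    n, d = X.shape
    W = np.zeros((num_classes, d))
    Y = np.eye(num_classes)[y]                       # one-hot labels
    for _ in range(steps):
        logits = X @ W.T
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)            # posterior p(c | x)
        grad = (Y - P).T @ X / n                     # gradient of the mean log-likelihood
        W += lr * grad                               # ascent on a concave objective
    return W

# Example on assumed random data with two classes.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
W = train_conditional_ml(X, y, num_classes=2)
```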

45 citations

Proceedings Article (DOI)
21 Mar 2010
TL;DR: In this paper, a maximum a posteriori probability (MAP) detection scheme for mitigating nonlinear phase distortion is proposed; it yields a 1 dB improvement in system performance and ∼2 dB higher nonlinearity tolerance after 6,000 km transmission.
Abstract: We implemented a maximum a posteriori probability (MAP) detection scheme for mitigating nonlinear phase distortion. We demonstrated a 1 dB improvement in system performance and ∼2 dB higher nonlinearity tolerance after 6,000 km transmission.
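The abstract does not spell out the detector, so the sketch below is only a generic illustration of MAP symbol detection under an assumed deterministic, power-dependent phase rotation with additive Gaussian noise; the phase model, constellation, and parameter values are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def map_detect(r, constellation, gamma, noise_var, priors=None):
    """Toy MAP symbol detection under an assumed nonlinear phase model:
    each transmitted symbol s is taken to be received as
    s * exp(1j * gamma * |s|**2) plus circular Gaussian noise."""
    constellation = np.asarray(constellation)
    if priors is None:
        priors = np.full(len(constellation), 1.0 / len(constellation))
    # Expected received point for each candidate symbol.
    expected = constellation * np.exp(1j * gamma * np.abs(constellation) ** 2)
    log_lik = -np.abs(r - expected) ** 2 / noise_var   # circular Gaussian log-likelihood
    log_post = log_lik + np.log(priors)                # unnormalised log posterior
    return constellation[np.argmax(log_post)]

# Example: QPSK with a mild assumed nonlinear phase rotation.
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
sent = qpsk[2]
received = sent * np.exp(1j * 0.3 * abs(sent) ** 2) + 0.05 * (1 + 1j)
print(map_detect(received, qpsk, gamma=0.3, noise_var=0.01))
```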

45 citations

Proceedings Article
01 Aug 2008
TL;DR: This paper proposes a new denoising method called TV-LSE, which produces more realistic images by computing the expectation of the posterior distribution, and describes a Markov chain Monte Carlo algorithm based on the Metropolis scheme together with an efficient convergence criterion.
Abstract: Total Variation image denoising, generally formulated in a variational setting, can be seen as a Maximum A Posteriori (MAP) Bayesian estimate relying on a simple explicit image prior. In this formulation, the denoised image is the most likely image under the posterior distribution, which favours regularity and produces staircasing artifacts: in regions where smoothly varying intensities would be expected, constant zones appear, separated by artificial boundaries. In this paper, we propose to use the Least Square Error (LSE) criterion instead of the MAP. This leads to a new denoising method called TV-LSE, which produces more realistic images by computing the expectation of the posterior distribution. We describe a Markov chain Monte Carlo algorithm based on the Metropolis scheme, and provide an efficient convergence criterion. We also discuss the properties of TV-LSE, and show in particular that it does not suffer from the staircasing effect.
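As a rough illustration of the TV-LSE idea, the sketch below runs a single-site Metropolis sampler on a 1-D signal and averages the post-burn-in samples to approximate the posterior mean under a total variation prior. The 1-D setting, fixed proposal width, and all parameter values are assumptions for the example; the paper works in 2-D and provides its own convergence criterion, which is not reproduced here.

```python
import numpy as np

def tv_lse_denoise_1d(y, lam, noise_var, steps=20000, prop_std=0.1, burn_in=5000, seed=0):
    """Toy TV-LSE denoising of a 1-D signal: instead of the TV-MAP estimate,
    approximate the posterior mean E[u | y] with a Metropolis sampler whose
    target is  exp(-(||u - y||^2 / (2 * noise_var) + lam * TV(u)))."""
    rng = np.random.default_rng(seed)
    n = len(y)

    def neg_log_post(v):
        return np.sum((v - y) ** 2) / (2 * noise_var) + lam * np.sum(np.abs(np.diff(v)))

    u = y.copy()
    energy = neg_log_post(u)
    mean_acc = np.zeros(n)
    count = 0
    for t in range(steps):
        i = rng.integers(n)                            # single-site random-walk proposal
        proposal = u.copy()
        proposal[i] += prop_std * rng.standard_normal()
        new_energy = neg_log_post(proposal)
        if np.log(rng.random()) < energy - new_energy:  # Metropolis accept/reject
            u, energy = proposal, new_energy
        if t >= burn_in:
            mean_acc += u
            count += 1
    return mean_acc / count                            # Monte Carlo estimate of the posterior mean

# Example: denoise an assumed noisy step signal.
rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(30), np.ones(30)])
noisy = clean + 0.2 * rng.standard_normal(60)
denoised = tv_lse_denoise_1d(noisy, lam=5.0, noise_var=0.04)
```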

45 citations

Proceedings Article (DOI)
01 Jan 1999
TL;DR: The author shows that interpreting support vector machines as maximum a posteriori solutions to inference problems with Gaussian process priors and appropriate likelihood functions gives a clear intuitive meaning to SVM kernels as covariance functions of GP priors.
Abstract: Support vector machines (SVMs) can be interpreted as maximum a posteriori solutions to inference problems with Gaussian process priors and appropriate likelihood functions. Focusing on the case of classification, the author shows first that such an interpretation gives a clear intuitive meaning to SVM kernels, as covariance functions of GP priors; this can be used to guide the choice of kernel. Next, a probabilistic interpretation allows Bayesian methods to be used for SVMs. Using a local approximation of the posterior around its maximum (the standard SVM solution), he discusses how the evidence for a given kernel and noise parameter can be estimated, and how approximate error bars for the classification of test points can be calculated.
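To make the interpretation concrete, the sketch below treats the SVM kernel matrix as a GP prior covariance and computes the MAP latent function under a hinge-loss "negative log-likelihood" by subgradient descent. This is only an illustration of the viewpoint under assumed data and hyperparameters; it does not reproduce the author's evidence estimation or error bars.

```python
import numpy as np

def gp_map_svm(K, y, C=1.0, lr=0.01, steps=2000):
    """Sketch of the SVM-as-MAP viewpoint: the kernel matrix K plays the role
    of a GP prior covariance, and the MAP latent function f = K @ alpha
    minimises  C * sum_i max(0, 1 - y_i f_i) + 0.5 * f^T K^{-1} f,
    i.e. a hinge-loss data term plus the GP prior term.
    Plain subgradient descent in the dual parameterisation; illustration only."""
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(steps):
        f = K @ alpha                              # latent function values at the data
        subgrad_loss = -C * y * (y * f < 1)        # subgradient of the hinge loss w.r.t. f
        grad_alpha = K @ (subgrad_loss + alpha)    # chain rule through f = K @ alpha
        alpha -= lr * grad_alpha
    return K @ alpha

# Example: an RBF kernel used as the assumed GP prior covariance.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 2))
y = np.sign(X[:, 0] + 0.3 * rng.standard_normal(40))
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq_dists)
f_map = gp_map_svm(K, y)
```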

45 citations


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations, 86% related
Deep learning: 79.8K papers, 2.1M citations, 85% related
Convolutional neural network: 74.7K papers, 2M citations, 85% related
Feature extraction: 111.8K papers, 2.1M citations, 85% related
Image processing: 229.9K papers, 3.5M citations, 84% related
Performance Metrics
Number of papers in the topic in previous years:
2023: 64
2022: 125
2021: 211
2020: 244
2019: 250
2018: 236