Topic

Maximum a posteriori estimation

About: Maximum a posteriori estimation is a research topic. Maximum a posteriori (MAP) estimation chooses the parameter value that maximizes the posterior density, combining the likelihood of the data with a prior over the parameters. Over its lifetime, 7,486 publications have been published on this topic, receiving 222,291 citations. The topic is also known as: maximum a posteriori, MAP, and maximum a posteriori probability.
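In symbols, the MAP estimate is the posterior mode, theta_MAP = argmax_theta p(x | theta) p(theta). As a minimal sketch, the conjugate Gaussian case below has a closed-form MAP estimate of a mean; the function name map_gaussian_mean and all numbers are illustrative only, not drawn from any paper listed here.

```python
import numpy as np

def map_gaussian_mean(x, prior_mean, prior_var, noise_var):
    # Likelihood: x_i ~ N(mu, noise_var); prior: mu ~ N(prior_mean, prior_var).
    # The log-posterior is quadratic in mu, so the MAP estimate is the
    # precision-weighted average of the prior mean and the sample mean.
    n = len(x)
    posterior_precision = 1.0 / prior_var + n / noise_var
    weighted_sum = prior_mean / prior_var + np.sum(x) / noise_var
    return weighted_sum / posterior_precision

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=20)  # synthetic data, true mean 2.0
# With a weak prior the estimate stays close to the sample mean.
print(map_gaussian_mean(data, prior_mean=0.0, prior_var=1.0, noise_var=1.0))
```

As the prior variance grows, the estimate reduces to the maximum-likelihood sample mean, which is the usual way to see MAP estimation as regularized maximum likelihood.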


Papers
Journal Article
TL;DR: A novel cooperative localization algorithm is proposed for the scenario where AUVs (autonomous underwater vehicles) are localized using range measurements from a single mobile surface beacon. The observability and improved localization accuracy of the proposed algorithm are verified by extensive numerical simulations in a customized underwater simulator.

45 citations
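The TL;DR gives no algorithmic detail, but the general shape of MAP-based range localization can be sketched: fuse noisy ranges to known beacon positions with a Gaussian prior from dead reckoning by minimizing the negative log-posterior. This toy example is not the paper's algorithm; the beacon track, noise levels, and prior below are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Invented geometry: three successive positions of one mobile surface beacon.
beacons = np.array([[0.0, 0.0], [50.0, 10.0], [90.0, -20.0]])
true_pos = np.array([40.0, -30.0])
rng = np.random.default_rng(1)
ranges = np.linalg.norm(beacons - true_pos, axis=1) + rng.normal(0.0, 1.0, 3)

prior_mean = np.array([35.0, -25.0])  # dead-reckoned position (the prior)
prior_var, range_var = 25.0, 1.0      # invented noise levels

def neg_log_posterior(p):
    # Gaussian range likelihood plus Gaussian dead-reckoning prior.
    predicted = np.linalg.norm(beacons - p, axis=1)
    likelihood_term = np.sum((ranges - predicted) ** 2) / (2.0 * range_var)
    prior_term = np.sum((p - prior_mean) ** 2) / (2.0 * prior_var)
    return likelihood_term + prior_term

map_estimate = minimize(neg_log_posterior, prior_mean).x
print(map_estimate)  # should land near true_pos
```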

Proceedings Article
25 Jul 2015
TL;DR: This paper proposes two myopic query strategies for choosing where to evaluate the likelihood, implements them using Gaussian processes, and demonstrates that the approach is significantly more query-efficient than existing techniques and other heuristics for posterior estimation.
Abstract: This paper studies active posterior estimation in a Bayesian setting when the likelihood is expensive to evaluate. Existing techniques for posterior estimation are based on generating samples representative of the posterior. Such methods do not consider efficiency in terms of likelihood evaluations. In order to be query efficient we treat posterior estimation in an active regression framework. We propose two myopic query strategies to choose where to evaluate the likelihood and implement them using Gaussian processes. Via experiments on a series of synthetic and real examples we demonstrate that our approach is significantly more query efficient than existing techniques and other heuristics for posterior estimation.

45 citations
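One simple way to realize such an active scheme, not necessarily the paper's own acquisition rules, is uncertainty sampling: fit a Gaussian process to the log-likelihood evaluations gathered so far and query wherever the GP is least certain. The sketch below assumes a flat prior (so the posterior is proportional to the likelihood) and uses scikit-learn's GaussianProcessRegressor; the quadratic expensive_log_likelihood is a stand-in for a costly model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_log_likelihood(theta):
    # Stand-in for a likelihood that is expensive to evaluate.
    return -0.5 * (theta - 1.5) ** 2 / 0.25

grid = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
X = np.array([[-2.0], [0.0], [2.0]])          # initial design points
y = expensive_log_likelihood(X).ravel()

for _ in range(10):
    # alpha adds jitter for numerical stability of the kernel matrix.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6).fit(X, y)
    _, sd = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(sd)]              # query the most uncertain point
    X = np.vstack([X, [x_next]])
    y = np.append(y, expensive_log_likelihood(x_next))

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6).fit(X, y)
mu, _ = gp.predict(grid, return_std=True)
posterior = np.exp(mu - mu.max())             # unnormalized posterior estimate
print(grid[np.argmax(posterior)])             # approximate MAP location
```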

Journal Article
TL;DR: This paper proposes a maximum a posteriori estimation of the pattern of lost macroblocks that assumes knowledge of the decoded pixels only; the estimate feeds a no-reference quality monitoring system that produces an accurate estimate of the mean-square-error (MSE) distortion introduced by channel errors.
Abstract: Video transmitted over an error-prone network may be received at the decoder with degradations due to packet losses. No-reference quality monitoring algorithms are the most practical way to measure the quality of the received video, since they do not impose any change with respect to the network architecture. Conventionally, these methods assume the availability of the corrupted bitstream. In some situations this is not possible, e.g., because the bitstream is encrypted or processed by third-party decoders, and only the decoded pixel values can be used. The major issue in this scenario is the lack of knowledge about which regions of the video have been actually lost, which is a fundamental ingredient for estimating channel-induced distortion. In this paper, we propose a maximum a posteriori estimation of the pattern of lost macroblocks, which assumes the knowledge of the decoded pixels only. This information can be used as input to a no-reference quality monitoring system, which produces an accurate estimate of the mean-square-error (MSE) distortion introduced by channel errors. The results of the proposed method are well correlated with the MSE distortion computed in full-reference mode, with a linear correlation coefficient equal to 0.9 at frame level and 0.98 at sequence level.

45 citations
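The per-macroblock decision behind such an estimator can be illustrated with a toy binary MAP classifier: for each block, compare log-likelihood plus log-prior under the "lost" and "received" hypotheses, given a scalar feature computed from decoded pixels. The densities and prior below are invented; the paper's actual likelihood model and use of spatial context are more elaborate.

```python
import numpy as np
from scipy.stats import norm

# Invented model parameters; the paper's model is richer.
p_loss = 0.05                        # prior probability a macroblock is lost
received = norm(loc=1.0, scale=0.5)  # feature density given 'received'
lost = norm(loc=3.0, scale=1.0)      # feature density given 'lost'

def map_is_lost(d):
    # MAP rule: pick the hypothesis maximizing log-likelihood + log-prior.
    log_post_lost = lost.logpdf(d) + np.log(p_loss)
    log_post_received = received.logpdf(d) + np.log(1.0 - p_loss)
    return log_post_lost > log_post_received

features = np.array([0.8, 1.2, 3.5, 2.9, 1.1])  # one feature per macroblock
print([bool(map_is_lost(d)) for d in features])
```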

Proceedings Article
04 Dec 2000
TL;DR: To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, this work develops algorithms that iterate between a representative set of sparse representations, found by variants of FOCUSS (an affine scaling transformation (AST)-like sparse signal representation algorithm recently developed at UCSD), and an update of the dictionary using these sparse representations.
Abstract: Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave negative log-priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen dictionary. The elements of the dictionary can be interpreted as 'concepts', features, or 'words' capable of succinct expression of events encountered in the environment. This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries, but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations, found by variants of FOCUSS, an affine scaling transformation (AST)-like sparse signal representation algorithm recently developed at UCSD, and an update of the dictionary using these sparse representations.

45 citations
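The iterate-between-two-steps structure described in the abstract can be sketched as follows, with orthogonal matching pursuit standing in for FOCUSS in the sparse-coding step and a MOD-style least-squares solve for the dictionary update; the alternation is the point here, not the particular solvers, and all sizes below are invented.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
Y = rng.normal(size=(20, 100))      # 100 training signals of dimension 20
D = rng.normal(size=(20, 40))       # initial overcomplete dictionary, 40 atoms
D /= np.linalg.norm(D, axis=0)

for _ in range(10):
    # Sparse-coding step (OMP standing in for FOCUSS): 3 atoms per signal.
    X = orthogonal_mp(D, Y, n_nonzero_coefs=3)
    # Dictionary-update step (MOD-style least squares): D = Y X^T (X X^T)^+.
    D = Y @ X.T @ np.linalg.pinv(X @ X.T)
    D /= np.linalg.norm(D, axis=0)  # keep atoms unit-norm

# Residual uses the codes from the last sparse-coding step.
print(np.linalg.norm(Y - D @ X))
```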

Proceedings Article
09 May 1995
TL;DR: A novel speech adaptation algorithm is proposed that enables adaptation even with a small amount of speech data; higher phoneme recognition performance was obtained with this algorithm than with the individual methods, showing its superiority.
Abstract: The paper proposes a novel speech adaptation algorithm that enables adaptation even with a small amount of speech data. It is a unified algorithm of two efficient conventional speaker adaptation techniques: maximum a posteriori (MAP) estimation and transfer vector field smoothing (VFS). The algorithm is designed to avoid the weaknesses of both MAP and VFS. Higher phoneme recognition performance was obtained by using this algorithm than with the individual methods, showing the superiority of the proposed algorithm. The phoneme recognition error rate was reduced from 22.0% to 19.1% using this algorithm for a speaker-independent model with seven adaptation phrases. Furthermore, a priori knowledge concerning speaker characteristics was obtained for this algorithm by generating an initial HMM with the speech of a selected speaker cluster based on speaker similarity. The adaptation using this initial model reduced the phoneme recognition error rate from 22.0% to 17.7%.

45 citations
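The MAP half of such a unified algorithm is the classic conjugate-prior update for Gaussian means: the adapted mean interpolates between the speaker-independent prior mean and the mean of the adaptation frames, weighted by a prior count tau. A minimal sketch with invented values follows (the VFS smoothing of the resulting transfer vectors is not shown):

```python
import numpy as np

def map_adapt_mean(prior_mean, frames, tau=10.0):
    # Conjugate MAP update: interpolate the speaker-independent prior mean
    # with the sample mean of the adaptation frames; tau is the prior weight.
    n = len(frames)
    sample_mean = np.mean(frames, axis=0)
    return (tau * prior_mean + n * sample_mean) / (tau + n)

prior = np.array([0.0, 0.0])                             # speaker-independent mean
frames = np.array([[1.0, 0.5], [0.8, 0.7], [1.2, 0.4]])  # adaptation data
print(map_adapt_mean(prior, frames))                     # shifted toward the new speaker
```

With few adaptation frames the estimate stays near the prior mean, which is exactly why MAP adaptation remains stable with small amounts of speech data.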


Network Information

Related Topics (5)

Topic                          Papers     Citations   Related
Estimator                      97.3K      2.6M        86%
Deep learning                  79.8K      2.1M        85%
Convolutional neural network   74.7K      2M          85%
Feature extraction             111.8K     2.1M        85%
Image processing               229.9K     3.5M        84%
Performance Metrics

No. of papers in the topic in previous years:

Year   Papers
2023   64
2022   125
2021   211
2020   244
2019   250
2018   236