Topic

Maximum a posteriori estimation

About: Maximum a posteriori estimation is a research topic. Over the lifetime, 7486 publications have been published within this topic, receiving 222291 citations. The topic is also known as: maximum a posteriori, MAP, and maximum a posteriori probability.
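As a quick orientation, the sketch below shows the idea in its simplest form under assumed distributions: a Bernoulli likelihood with a Beta prior, where the MAP estimate is the posterior mode in closed form and reduces to the maximum likelihood estimate under a flat prior. The prior parameters and function names are illustrative only.

```python
# Minimal MAP sketch (illustrative assumptions): Bernoulli likelihood with a
# Beta(a, b) prior. The posterior is Beta(a + k, b + n - k), whose mode is the
# MAP estimate (k + a - 1) / (n + a + b - 2) for a, b > 1.
import numpy as np

def map_bernoulli(data, a=2.0, b=2.0):
    """MAP estimate of the success probability under a Beta(a, b) prior."""
    n, k = len(data), int(np.sum(data))
    return (k + a - 1.0) / (n + a + b - 2.0)

def mle_bernoulli(data):
    """Maximum likelihood estimate, i.e. MAP with a flat Beta(1, 1) prior."""
    return float(np.mean(data))

flips = np.array([1, 0, 1, 1, 0, 1, 1, 1])       # 6 successes in 8 trials
print("MLE:", mle_bernoulli(flips))               # 0.75
print("MAP:", map_bernoulli(flips, a=2, b=2))     # (6+1)/(8+2) = 0.7, pulled toward the prior
```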


Papers
Journal ArticleDOI
TL;DR: This paper presents a training-based CIR estimation framework for MC systems, which aims at estimating the CIR based on the observed number of molecules at the receiver due to emission of a sequence of known numbers of molecules by the transmitter.
Abstract: In molecular communication (MC) systems, the expected number of molecules observed at the receiver over time after the instantaneous release of molecules by the transmitter is referred to as the channel impulse response (CIR). Knowledge of the CIR is needed for the design of detection and equalization schemes. In this paper, we present a training-based CIR estimation framework for MC systems, which aims at estimating the CIR based on the observed number of molecules at the receiver due to the emission of a sequence of known numbers of molecules by the transmitter. Thereby, we distinguish two scenarios depending on whether or not statistical channel knowledge is available. In particular, we derive maximum likelihood and least sum of square errors estimators, which do not require any knowledge of the channel statistics. For the case when statistical channel knowledge is available, the corresponding maximum a posteriori and linear minimum mean square error estimators are provided. As performance bounds, we derive the classical Cramer-Rao (CR) lower bound, valid for any unbiased estimator that does not exploit statistical channel knowledge, and the Bayesian CR lower bound, valid for any unbiased estimator that exploits statistical channel knowledge. Finally, we propose optimal and suboptimal training sequence designs for the considered MC system. Simulation results confirm the analysis and compare the performance of the proposed estimation techniques with the respective CR lower bounds.

108 citations
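The estimators above are derived for the paper's specific molecular channel model; as a generic, simplified illustration of training-based CIR estimation, the sketch below assumes a linear observation model with Gaussian noise and a Gaussian channel prior, and computes a least-squares estimate (no channel statistics) and a linear MMSE estimate (with channel statistics). All distributions, dimensions, and variable names are assumptions for illustration, not the paper's estimators.

```python
# Hedged sketch of training-based channel estimation under a generic linear
# model r = S c + noise, where S is built from a known training sequence and
# c holds the channel impulse response (CIR) taps. The Gaussian noise and the
# Gaussian prior on c are simplifying assumptions, not the paper's MC model.
import numpy as np

rng = np.random.default_rng(0)
L, N = 4, 32                                  # CIR length, training length
s = rng.integers(1, 5, size=N).astype(float)  # known training sequence
c_true = np.array([1.0, 0.6, 0.3, 0.1])       # "true" CIR taps (illustrative)

# Convolution (Toeplitz-like) matrix: row n contains s[n], s[n-1], ..., s[n-L+1]
S = np.zeros((N, L))
for n in range(N):
    for k in range(L):
        if n - k >= 0:
            S[n, k] = s[n - k]

r = S @ c_true + 0.2 * rng.standard_normal(N)  # noisy received samples

# Least-squares estimate (no channel statistics needed)
c_ls, *_ = np.linalg.lstsq(S, r, rcond=None)

# Linear MMSE estimate, assuming prior mean m_c, prior covariance C_c and
# noise variance sigma2 are known (the "statistical channel knowledge" case)
m_c, C_c, sigma2 = np.zeros(L), 0.5 * np.eye(L), 0.04
c_lmmse = m_c + C_c @ S.T @ np.linalg.solve(S @ C_c @ S.T + sigma2 * np.eye(N),
                                            r - S @ m_c)
print("LS estimate:   ", c_ls.round(3))
print("LMMSE estimate:", c_lmmse.round(3))
```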

Journal ArticleDOI
TL;DR: It is shown that the minimization of the Gibbs free energy, corresponding to a Gaussian approximation to the posterior marginalized over the power spectrum, is equivalent to the empirical Bayes ansatz, in which the power spectrum is fixed to its maximum a posteriori value.
Abstract: We develop a method to infer log-normal random fields from measurement data affected by Gaussian noise. The log-normal model is well suited to describe strictly positive signals with fluctuations whose amplitude varies over several orders of magnitude. We use the formalism of minimum Gibbs free energy to derive an algorithm that uses the signal's correlation structure to regularize the reconstruction. The correlation structure, described by the signal's power spectrum, is thereby reconstructed from the same data set. We show that the minimization of the Gibbs free energy, corresponding to a Gaussian approximation to the posterior marginalized over the power spectrum, is equivalent to the empirical Bayes ansatz, in which the power spectrum is fixed to its maximum a posteriori value. We further introduce a prior for the power spectrum that enforces spectral smoothness. The appropriateness of this prior in different scenarios is discussed and its effects on the reconstruction's results are demonstrated. We validate the performance of our reconstruction algorithm in a series of one- and two-dimensional test cases with varying degrees of non-linearity and different noise levels.

107 citations
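The empirical Bayes ansatz mentioned in the abstract can be illustrated in a heavily simplified one-dimensional Gaussian setting: a single signal variance stands in for the power spectrum, it is first fixed to a value estimated from the data, and the signal is then reconstructed with the resulting Wiener filter. The Gaussian signal model and the moment-matching variance estimate are assumptions for illustration; the paper treats log-normal fields with a full spectral-smoothness prior.

```python
# Toy empirical-Bayes reconstruction (illustrative assumptions): data d = s + n
# with Gaussian signal s ~ N(0, tau) and noise n ~ N(0, sigma2). The signal
# variance tau plays the role of the power spectrum: it is first fixed to a
# value estimated from the data, then used in the Wiener-filter reconstruction.
import numpy as np

rng = np.random.default_rng(1)
npix, tau_true, sigma2 = 512, 2.0, 0.5
s = rng.normal(0.0, np.sqrt(tau_true), npix)    # signal realization
d = s + rng.normal(0.0, np.sqrt(sigma2), npix)  # noisy data

# "Empirical Bayes" step: fix the hyperparameter to a value estimated from the
# data (moment matching) instead of marginalizing over it.
tau_hat = max(np.var(d) - sigma2, 1e-6)

# Gaussian (Wiener) reconstruction with the fixed hyperparameter:
# the posterior mean of s given d is tau / (tau + sigma2) * d per pixel.
s_map = tau_hat / (tau_hat + sigma2) * d

print(f"estimated signal variance: {tau_hat:.3f} (true {tau_true})")
print(f"reconstruction error: {np.mean((s_map - s)**2):.3f} "
      f"vs raw data error: {np.mean((d - s)**2):.3f}")
```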

Journal ArticleDOI
TL;DR: This paper develops a family of super-linearly convergent algorithms for solving linear programming (LP) relaxations of the MAP problem, based on proximal minimization schemes using Bregman divergences, and proposes graph-structured randomized rounding schemes applicable to iterative LP-solving algorithms in general.
Abstract: The problem of computing a maximum a posteriori (MAP) configuration is a central computational challenge associated with Markov random fields. There has been some focus on "tree-based" linear programming (LP) relaxations for the MAP problem. This paper develops a family of super-linearly convergent algorithms for solving these LPs, based on proximal minimization schemes using Bregman divergences. As with standard message-passing on graphs, the algorithms are distributed and exploit the underlying graphical structure, and so scale well to large problems. Our algorithms have a double-loop character, with the outer loop corresponding to the proximal sequence, and an inner loop of cyclic Bregman projections used to compute each proximal update. We establish convergence guarantees for our algorithms, and illustrate their performance via some simulations. We also develop two classes of rounding schemes, deterministic and randomized, for obtaining integral configurations from the LP solutions. Our deterministic rounding schemes use a "re-parameterization" property of our algorithms so that when the LP solution is integral, the MAP solution can be obtained even before the LP-solver converges to the optimum. We also propose graph-structured randomized rounding schemes applicable to iterative LP-solving algorithms in general. We analyze the performance of and report simulations comparing these rounding schemes.

107 citations
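The object of the algorithms above, the tree-based LP relaxation of the MAP problem, can be written down explicitly for a toy model. The sketch below sets up the local-polytope relaxation for a two-node, two-label pairwise MRF and hands it to a generic LP solver; it is not the paper's proximal Bregman scheme, and the potentials are illustrative. For a tree-structured model like this one the relaxation is tight, so rounding the solution recovers the exact MAP labeling.

```python
# Toy illustration of the local-polytope LP relaxation for MAP inference in a
# two-node, two-label pairwise MRF, solved with a generic LP solver. This is
# NOT the paper's proximal Bregman scheme, only the relaxation it targets;
# the potentials (theta_unary, theta_pair) are illustrative.
import numpy as np
from scipy.optimize import linprog

theta_unary = np.array([[0.2, 1.0],     # node 1: scores for labels 0, 1
                        [0.7, 0.1]])    # node 2
theta_pair = np.array([[0.0, 0.9],      # theta_12(x1, x2)
                       [0.9, 0.0]])

# LP variables: [mu1(0), mu1(1), mu2(0), mu2(1),
#                mu12(00), mu12(01), mu12(10), mu12(11)]
c = -np.concatenate([theta_unary.ravel(), theta_pair.ravel()])  # maximize => negate

A_eq, b_eq = [], []
# Normalization: each node's pseudo-marginal sums to 1.
A_eq.append([1, 1, 0, 0, 0, 0, 0, 0]); b_eq.append(1)
A_eq.append([0, 0, 1, 1, 0, 0, 0, 0]); b_eq.append(1)
# Local consistency: the edge marginal must sum to the node marginals.
A_eq.append([-1, 0, 0, 0, 1, 1, 0, 0]); b_eq.append(0)  # sum_x2 mu12(0,x2) = mu1(0)
A_eq.append([0, -1, 0, 0, 0, 0, 1, 1]); b_eq.append(0)  # sum_x2 mu12(1,x2) = mu1(1)
A_eq.append([0, 0, -1, 0, 1, 0, 1, 0]); b_eq.append(0)  # sum_x1 mu12(x1,0) = mu2(0)
A_eq.append([0, 0, 0, -1, 0, 1, 0, 1]); b_eq.append(0)  # sum_x1 mu12(x1,1) = mu2(1)

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, 1))
mu = res.x
print("node pseudo-marginals:", mu[:4].round(3))
print("MAP labels (rounded):", int(mu[1] > mu[0]), int(mu[3] > mu[2]))
```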

Proceedings ArticleDOI
23 Jun 2013
TL;DR: This paper makes full use of the stereo confidence cues by propagating all confidence values along with the measured disparities in a Bayesian manner and shows that using stereo confidence cues reduces the number of false object detections by a factor of six while keeping the detection rate at a near-constant level.
Abstract: Applications based on stereo vision are becoming increasingly common, ranging from gaming and robotics to driver assistance. While stereo algorithms have been investigated heavily both on the pixel and the application level, far less attention has been dedicated to the use of stereo confidence cues. Mostly, a threshold is applied to the confidence values before further processing, which essentially yields a sparsified disparity map. This is straightforward, but it does not take full advantage of the available information. In this paper, we make full use of the stereo confidence cues by propagating all confidence values along with the measured disparities in a Bayesian manner. Before using this information, a mapping from confidence values to disparity outlier probability is performed based on disparity statistics gathered from labeled video data. We present an extension of the so-called Stixel World, a generic 3D intermediate representation that can serve as input for many of the applications mentioned above. This scheme is modified to directly exploit stereo confidence cues in the underlying sensor model during a maximum a posteriori estimation process. The effectiveness of this step is verified in an in-depth evaluation on a large real-world traffic database, parts of which are made publicly available. We show that using stereo confidence cues reduces the number of false object detections by a factor of six while keeping the detection rate at a near-constant level.

107 citations
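The key modeling step, weighting each disparity measurement by a confidence-derived outlier probability inside the sensor model, can be sketched in isolation. The snippet below scores candidate disparities under a per-measurement mixture of a Gaussian inlier term and a uniform outlier term; the linear confidence-to-outlier mapping, the parameter values, and all names are illustrative assumptions, not the Stixel World formulation.

```python
# Hedged sketch of a confidence-weighted sensor model for MAP estimation:
# each disparity measurement d_i with confidence c_i is modeled as a mixture
# of a Gaussian inlier term around the candidate disparity and a uniform
# outlier term. The linear confidence-to-outlier mapping is assumed here.
import numpy as np

def outlier_prob(confidence, p_min=0.02, p_max=0.8):
    """Map a confidence in [0, 1] to an outlier probability (assumed linear)."""
    return p_max - confidence * (p_max - p_min)

def log_likelihood(d_cand, measurements, confidences, sigma=1.0, d_range=128.0):
    """Sum of per-measurement log mixture likelihoods for a candidate disparity."""
    p_out = outlier_prob(np.asarray(confidences))
    gauss = np.exp(-0.5 * ((np.asarray(measurements) - d_cand) / sigma) ** 2) \
            / (np.sqrt(2 * np.pi) * sigma)
    return np.sum(np.log((1 - p_out) * gauss + p_out / d_range))

# Three measurements of the same surface: two confident inliers and one
# low-confidence outlier, which barely moves the MAP disparity estimate.
meas, conf = [42.1, 41.8, 55.0], [0.9, 0.85, 0.2]
candidates = np.arange(0.0, 128.0, 0.25)
scores = [log_likelihood(d, meas, conf) for d in candidates]
d_map = candidates[int(np.argmax(scores))]   # flat prior => MAP equals ML here
print("MAP disparity estimate:", d_map)
```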

Journal ArticleDOI
TL;DR: A model-based algorithm for the automatic reconstruction of building areas from single-observation meter-resolution SAR intensity data is introduced, based on the maximum a posteriori estimation by Monte Carlo methods of an optimal scene that is modeled as a set of mutually interacting Poisson-distributed marked points describing parametric building objects.
Abstract: To investigate the limits and merits of information extraction from a single high-resolution synthetic aperture radar (SAR) backscatter image, we introduce a model-based algorithm for the automatic reconstruction of building areas from single-observation meter-resolution SAR intensity data. The reconstruction is based on maximum a posteriori estimation, by Monte Carlo methods, of an optimal scene that is modeled as a set of mutually interacting Poisson-distributed marked points describing parametric building objects. Each of the objects can be hierarchically decomposed into a collection of radiometrically and geometrically specified object facets, which are in turn mapped into data features by ground-to-range projection and inverse Gaussian statistics. The detection of the facets is based on a likelihood ratio. Results are presented for airborne data with resolutions in the range of 0.5-2 m on urban scenes covering agglomerations of buildings. To achieve robust results for building reconstruction, integration with data from other sources is needed.

107 citations
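"Maximum a posteriori estimation by Monte Carlo methods" can be illustrated with a far simpler sampler than the marked point process used in the paper: a Metropolis random walk over a single parameter that records the highest-posterior state it visits. The toy posterior, proposal width, and variable names below are illustrative assumptions, not the paper's scheme over building configurations.

```python
# Hedged sketch: approximate a MAP point by Monte Carlo search. A Metropolis
# random walk explores an (unnormalized) posterior and the best state visited
# is reported as the MAP approximation. The two-mode toy posterior and the
# proposal width are illustrative assumptions, far simpler than the paper's
# marked point process over building objects.
import numpy as np

def log_posterior(x):
    """Unnormalized log posterior: a two-mode toy density with its MAP near x = 1."""
    return np.logaddexp(-0.5 * ((x - 1.0) / 0.3) ** 2,
                        np.log(0.4) - 0.5 * ((x + 2.0) / 0.5) ** 2)

rng = np.random.default_rng(7)
x, logp = 0.0, log_posterior(0.0)
best_x, best_logp = x, logp

for _ in range(20000):
    x_prop = x + 0.5 * rng.standard_normal()      # random-walk proposal
    logp_prop = log_posterior(x_prop)
    if np.log(rng.random()) < logp_prop - logp:   # Metropolis accept/reject
        x, logp = x_prop, logp_prop
    if logp > best_logp:                          # track highest-posterior state
        best_x, best_logp = x, logp

print(f"Monte Carlo MAP approximation: x = {best_x:.3f} (expected near 1.0)")
```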


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations, 86% related
Deep learning: 79.8K papers, 2.1M citations, 85% related
Convolutional neural network: 74.7K papers, 2M citations, 85% related
Feature extraction: 111.8K papers, 2.1M citations, 85% related
Image processing: 229.9K papers, 3.5M citations, 84% related
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    64
2022    125
2021    211
2020    244
2019    250
2018    236