scispace - formally typeset
Topic

Maximum a posteriori estimation

About: Maximum a posteriori estimation is a research topic. Over its lifetime, 7,486 publications have been published within this topic, receiving 222,291 citations. The topic is also known as: maximum a posteriori, MAP, and maximum a posteriori probability.
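As a minimal illustration of the topic (not drawn from any of the papers below): the MAP estimate combines a likelihood with a prior, and for a Gaussian likelihood with known variance plus a Gaussian prior on the mean, the posterior mode has a closed form that shrinks the maximum-likelihood estimate toward the prior mean. All numbers here are illustrative.

```python
import numpy as np

np.random.seed(0)
sigma = 1.0          # known likelihood standard deviation
mu0, tau = 0.0, 2.0  # prior mean and prior standard deviation
x = np.random.normal(3.0, sigma, size=100)  # observed data
n = len(x)

# Conjugate Gaussian case: the posterior is Gaussian, so the MAP
# estimate equals the posterior mean, a precision-weighted average
# of the prior mean and the sample mean.
map_est = (mu0 / tau**2 + x.sum() / sigma**2) / (1 / tau**2 + n / sigma**2)
mle = x.mean()
print(map_est, mle)  # MAP is pulled slightly toward the prior mean
```

With 100 observations the likelihood dominates, so the MAP and ML estimates nearly coincide; with fewer data the prior's pull grows.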


Papers
Journal ArticleDOI
TL;DR: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements and has advantages in noise robustness and reconstruction accuracy.
Abstract: Purpose: Dual-energy computed tomography (DECT) makes it possible to obtain two basis-material fractions without segmentation: a soft-tissue-equivalent water fraction and a hard-matter-equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). Existing BHA correction approaches either require calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water-bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for producing accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. The approach is based on a Gaussian noise model with unknown variance assigned directly to the projections, without taking the negative log. Following Bayesian inference, the decomposition fractions and observation variance are estimated with the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem: the joint MAP estimation problem becomes a minimization problem with a nonquadratic cost function. To solve it, a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and to the materials involved. Accurate spectrum information about the source-detector system is, however, required; when dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For materials between water and bone, separation errors of less than 5% are observed on the estimated decomposition fractions. Conclusions: The proposed approach is a statistical reconstruction approach based on a nonlinear forward model that accounts for the full beam polychromaticity and is applied directly to the projections without taking the negative log. Compared to approaches based on linear forward models and to BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
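The paper's full-spectral polychromatic model is not reproduced here, but the basic pattern it builds on (a Gaussian-noise data term plus a prior, minimized with conjugate gradients) can be sketched for a plain linear forward model with a quadratic prior, where the MAP cost reduces to a linear system. `H`, `y`, and the prior weight below are illustrative stand-ins, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)
H = rng.normal(size=(30, 10))      # linear forward-model stand-in
f_true = rng.normal(size=10)
y = H @ f_true + 0.01 * rng.normal(size=30)
lam = 0.1                          # prior weight (illustrative)

# MAP cost (1/2)||y - Hf||^2 + (lam/2)||f||^2 has gradient
# (H'H + lam*I) f - H'y, so the minimizer solves a linear SPD system,
# which conjugate gradients handles without forming an inverse.
A = H.T @ H + lam * np.eye(10)
b = H.T @ y
f = np.zeros(10)
r = b - A @ f
p = r.copy()
for _ in range(10):                # CG: at most dim(f) steps in exact arithmetic
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    f += alpha * p
    r_new = r - alpha * Ap
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
print(np.linalg.norm(A @ f - b))   # residual of the normal equations
```

The paper's cost is nonquadratic (nonlinear forward model, adaptive variance prior), so its monotone CG uses line searches rather than the closed-form steps above.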

55 citations

Journal ArticleDOI
TL;DR: In this paper, a two-dimensional equalizer and four eight-state parallel Bahl-Cocke-Jelinek-Raviv (BCJR) detectors are used for TFP transmission in fiber-optic systems.
Abstract: Time–frequency packing (TFP) transmission provides the highest achievable spectral efficiency with a constrained symbol alphabet and detector complexity. In this paper, the application of the TFP technique to fiber-optic systems is investigated and experimentally demonstrated. The main theoretical aspects, design guidelines, and implementation issues are discussed, focusing on those aspects which are peculiar to TFP systems. In particular, adaptive compensation of propagation impairments, matched filtering, and maximum a posteriori probability detection are obtained by a combination of a two-dimensional equalizer and four eight-state parallel Bahl–Cocke–Jelinek–Raviv (BCJR) detectors. A novel algorithm that ensures adaptive equalization, channel estimation, and a proper distribution of tasks between the equalizer and BCJR detectors is proposed. A set of irregular low-density parity-check codes with different rates is designed to operate at low error rates and approach the spectral efficiency limit achievable by TFP at different signal-to-noise ratios. An experimental demonstration of the designed system is finally provided with five dual-polarization QPSK-modulated optical carriers, densely packed in a 100-GHz bandwidth, employing a recirculating loop to test the performance of the system at different transmission distances.
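The BCJR detectors mentioned above compute symbol-wise MAP decisions via forward-backward recursions over a trellis. A toy version on a two-state Markov chain with Gaussian observations (far simpler than the paper's four eight-state detectors; all parameters are illustrative) looks like:

```python
import numpy as np

rng = np.random.default_rng(7)
T = np.array([[0.9, 0.1], [0.1, 0.9]])  # state transition probabilities
means = np.array([-1.0, 1.0])           # per-state observation mean
states = [0]
for _ in range(199):
    states.append(rng.choice(2, p=T[states[-1]]))
states = np.array(states)
y = means[states] + 0.5 * rng.normal(size=200)  # noisy observations

# Gaussian likelihood of each observation under each state (var = 0.25)
like = np.exp(-(y[:, None] - means) ** 2 / (2 * 0.25))

alpha = np.zeros((200, 2))
beta = np.ones((200, 2))
alpha[0] = 0.5 * like[0]
alpha[0] /= alpha[0].sum()
for t in range(1, 200):                 # forward pass
    alpha[t] = like[t] * (alpha[t - 1] @ T)
    alpha[t] /= alpha[t].sum()
for t in range(198, -1, -1):            # backward pass
    beta[t] = T @ (like[t + 1] * beta[t + 1])
    beta[t] /= beta[t].sum()

post = alpha * beta                     # symbol-wise posteriors
post /= post.sum(axis=1, keepdims=True)
print((post.argmax(axis=1) == states).mean())  # symbol-wise MAP accuracy
```

In the paper these posteriors would feed the LDPC decoder as soft information rather than being thresholded directly.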

55 citations

Journal ArticleDOI
TL;DR: The maximum a posteriori (MAP) Bayesian iterative algorithm using priors that are gamma distributed, due to Lange, Bahn and Little, is extended to include parameter choices that fall outside the gamma distribution model.
Abstract: The maximum a posteriori (MAP) Bayesian iterative algorithm using priors that are gamma distributed, due to Lange, Bahn and Little, is extended to include parameter choices that fall outside the gamma distribution model. Special cases of the resulting iterative method include the expectation maximization maximum likelihood (EMML) method based on the Poisson model in emission tomography, as well as algorithms obtained by Parra and Barrett and by Huesman et al. that converge to maximum likelihood and maximum conditional likelihood estimates of radionuclide intensities for list-mode emission tomography. The approach taken here is optimization-theoretic and does not rely on the usual expectation maximization (EM) formalism. Block-iterative variants of the algorithms are presented. A self-contained, elementary proof of convergence of the algorithm is included.
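The EMML special case mentioned above has a simple multiplicative form for Poisson emission data. A toy sketch on a hypothetical system matrix (noiseless data for simplicity; the matrix, intensities, and iteration count are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(8, 4))   # system matrix (detectors x voxels)
x_true = np.array([1.0, 2.0, 0.5, 1.5])  # true radionuclide intensities
y = A @ x_true                            # noiseless expected counts

x = np.ones(4)       # positive initial estimate (positivity is preserved)
s = A.sum(axis=0)    # sensitivity: column sums of A
for _ in range(2000):
    # multiplicative EMML update: x_j <- x_j * [A'(y / Ax)]_j / s_j
    x *= (A.T @ (y / (A @ x))) / s
print(np.round(x, 3))
```

Block-iterative variants, as discussed in the paper, apply the same update over subsets of the rows of `A` per sub-iteration to speed convergence.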

55 citations

Proceedings ArticleDOI
23 Jun 2008
TL;DR: A new hierarchical Bayesian model is proposed for image segmentation based on Gaussian mixture models (GMM) with a prior enforcing spatial smoothness, under which the local differences of the contextual mixing proportions are Student's t-distributed; this allows the prior to impose smoothness and simultaneously model the edges between the segments of the image.
Abstract: A new hierarchical Bayesian model is proposed for image segmentation based on Gaussian mixture models (GMM) with a prior enforcing spatial smoothness. According to this prior, the local differences of the contextual mixing proportions (i.e. the probabilities of class labels) are Student's t-distributed. The generative properties of the Student's t-pdf allow this prior to impose smoothness and simultaneously model the edges between the segments of the image. A maximum a posteriori (MAP) expectation-maximization (EM) based algorithm is used for Bayesian inference. An important feature of this algorithm is that all the parameters are automatically estimated from the data in closed form. Numerical experiments are presented that demonstrate the superiority of the proposed model for image segmentation as compared to standard GMM-based approaches and to GMM segmentation techniques with "standard" spatial smoothness constraints.
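For contrast with the spatially smoothed model, the plain GMM baseline fitted by EM has fully closed-form updates. A minimal one-dimensional, two-component sketch with illustrative data (no spatial prior):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated clusters of illustrative 1-D "pixel intensities"
x = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(2, 0.5, 200)])

mu = np.array([-1.0, 1.0])     # initial means
var = np.array([1.0, 1.0])     # initial variances
pi = np.array([0.5, 0.5])      # initial mixing proportions
for _ in range(50):
    # E-step: responsibilities (posterior class-label probabilities)
    dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: closed-form parameter updates
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    pi = nk / len(x)
print(np.round(np.sort(mu), 2))
```

The paper's contribution replaces the global mixing proportions `pi` with spatially varying, smoothness-constrained ones, which this baseline lacks.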

55 citations

Journal ArticleDOI
Adam Lenart
TL;DR: In this article, exact formulas for the moment-generating function and central moments of the Gompertz distribution, which is widely used to describe the distribution of adult deaths, are derived using the generalised integro-exponential function, and higher-accuracy approximations are defined based on the exact central moments.
Abstract: The Gompertz distribution is widely used to describe the distribution of adult deaths. Previous works concentrated on formulating approximate relationships to characterise it. However, using the generalised integro-exponential function, exact formulas can be derived for its moment-generating function and central moments. Based on the exact central moments, higher accuracy approximations can be defined for them. In demographic or actuarial applications, maximum likelihood estimation is often used to determine the parameters of the Gompertz distribution. By solving the maximum likelihood estimates analytically, the dimension of the optimisation problem can be reduced to one both in the case of discrete and continuous data. Monte Carlo experiments show that by ML estimation, higher accuracy estimates can be acquired than by the method of moments.
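The reduction of the Gompertz MLE to a one-dimensional search can be sketched as follows: for hazard a*exp(b*x), fixing the shape b gives the level parameter in closed form, a(b) = n*b / sum(exp(b*x_i) - 1), leaving only b to optimize numerically. The sample, true parameters, and grid below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
a_true, b_true = 0.05, 0.4
# Inverse-CDF sampling: survival S(x) = exp(-(a/b)(e^{bx} - 1))
u = rng.uniform(size=5000)
x = np.log(1 - (b_true / a_true) * np.log(1 - u)) / b_true

def profile_loglik(b):
    # Closed-form MLE of the level a given the shape b
    a = len(x) * b / np.sum(np.expm1(b * x))
    # Gompertz log-density: log a + b*x - (a/b)(e^{bx} - 1)
    return np.sum(np.log(a) + b * x - (a / b) * np.expm1(b * x))

# One-dimensional search over b only (grid search for simplicity)
bs = np.linspace(0.2, 0.6, 401)
b_hat = bs[np.argmax([profile_loglik(b) for b in bs])]
print(round(float(b_hat), 3))
```

A Newton or golden-section search would replace the grid in practice; the point is that the optimization is one-dimensional, as the abstract states.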

55 citations


Network Information
Related Topics (5)
Estimator
97.3K papers, 2.6M citations
86% related
Deep learning
79.8K papers, 2.1M citations
85% related
Convolutional neural network
74.7K papers, 2M citations
85% related
Feature extraction
111.8K papers, 2.1M citations
85% related
Image processing
229.9K papers, 3.5M citations
84% related
Performance Metrics
No. of papers in the topic in previous years
Year	Papers
2023	64
2022	125
2021	211
2020	244
2019	250
2018	236