scispace - formally typeset
Topic

Maximum a posteriori estimation

About: Maximum a posteriori estimation is a research topic. Over the lifetime of the topic, 7,486 publications have been published within it, receiving 222,291 citations. The topic is also known as: maximum a posteriori, MAP, and maximum a posteriori probability.
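The definition above can be made concrete with a minimal sketch: for a Bernoulli likelihood with a conjugate Beta prior, the posterior mode (the MAP estimate) has a closed form. The function name and the Beta(2, 2) default are illustrative, not from any paper listed below.

```python
def map_bernoulli(heads, tails, alpha=2.0, beta=2.0):
    """MAP estimate of a coin's success probability p.

    With a Beta(alpha, beta) prior, the posterior is
    Beta(alpha + heads, beta + tails); the MAP estimate is its mode.
    """
    a = alpha + heads
    b = beta + tails
    return (a - 1.0) / (a + b - 2.0)  # mode of Beta(a, b), valid for a, b > 1

# A mild Beta(2, 2) prior pulls 7 heads / 3 tails toward 1/2:
print(map_bernoulli(7, 3))  # 8/12 ≈ 0.667, vs. the maximum-likelihood 0.7
```

Unlike the maximum-likelihood estimate, the prior's pseudo-counts keep the estimate away from extreme values on small samples.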


Papers
Journal ArticleDOI
TL;DR: This paper presents random field models for noisy and textured image data based upon a hierarchy of Gibbs distributions, and presents dynamic programming based segmentation algorithms for noisy and textured images under a statistical maximum a posteriori (MAP) criterion.
Abstract: This paper presents a new approach to the use of Gibbs distributions (GD) for modeling and segmentation of noisy and textured images. Specifically, the paper presents random field models for noisy and textured image data based upon a hierarchy of GD. It then presents dynamic programming based segmentation algorithms for noisy and textured images, considering a statistical maximum a posteriori (MAP) criterion. Due to computational concerns, however, sub-optimal versions of the algorithms are devised through simplifying approximations in the model. Since model parameters are needed for the segmentation algorithms, a new parameter estimation technique is developed for estimating the parameters in a GD. Finally, a number of examples are presented which show the usefulness of the Gibbsian model and the effectiveness of the segmentation algorithms and the parameter estimation procedures.
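The MAP criterion with a Gibbs prior can be sketched in a few lines. The snippet below uses a Potts prior over a 4-neighborhood and iterated conditional modes (ICM) as a deliberately simple, sub-optimal minimizer; the paper itself uses dynamic-programming algorithms, and the class means, neighborhood, and parameter values here are assumptions for illustration only.

```python
import numpy as np

def icm_segment(image, labels, n_labels=2, beta=1.0, sigma=1.0, n_iter=5):
    """Approximate MAP segmentation under a Potts (Gibbs) prior via ICM.

    Per-pixel energy: (image - class_mean)^2 / (2*sigma^2)
                      + beta * (number of disagreeing 4-neighbors).
    """
    h, w = image.shape
    means = np.arange(n_labels, dtype=float)  # assumed class means 0, 1, ...
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                best_k, best_e = labels[i, j], np.inf
                for k in range(n_labels):
                    e = (image[i, j] - means[k]) ** 2 / (2.0 * sigma ** 2)
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and labels[ni, nj] != k:
                            e += beta  # Potts penalty for disagreeing neighbors
                    if e < best_e:
                        best_k, best_e = k, e
                labels[i, j] = best_k
    return labels
```

The smoothness weight `beta` trades data fidelity against label agreement with neighbors: a single mislabeled pixel surrounded by the other class is flipped back, which is exactly the regularizing effect of the Gibbs prior.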

1,092 citations

Journal ArticleDOI
TL;DR: This work explores a number of low-complexity soft-input/soft-output (SISO) equalization algorithms based on the minimum mean square error (MMSE) criterion and shows that for the turbo equalization application, the MMSE-based SISO equalizers perform well compared with a MAP equalizer while providing a tremendous complexity reduction.
Abstract: A number of important advances have been made in the area of joint equalization and decoding of data transmitted over intersymbol interference (ISI) channels. Turbo equalization is an iterative approach to this problem, in which a maximum a posteriori probability (MAP) equalizer and a MAP decoder exchange soft information in the form of prior probabilities over the transmitted symbols. A number of reduced-complexity methods for turbo equalization have been introduced in which MAP equalization is replaced with suboptimal, low-complexity approaches. We explore a number of low-complexity soft-input/soft-output (SISO) equalization algorithms based on the minimum mean square error (MMSE) criterion. This includes the extension of existing approaches to general signal constellations and the derivation of a novel approach requiring less complexity than the MMSE-optimal solution. All approaches are qualitatively analyzed by observing the mean-square error averaged over a sequence of equalized data. We show that for the turbo equalization application, the MMSE-based SISO equalizers perform well compared with a MAP equalizer while providing a tremendous complexity reduction.
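The MAP-versus-MMSE distinction at the heart of this line of work can be shown on a toy memoryless AWGN channel (a real equalizer works on an ISI channel; this drastic simplification is only to contrast the two estimates). All names are illustrative.

```python
import math

def posterior_plus(y, prior_plus=0.5, sigma=1.0):
    """P(x = +1 | y) for a BPSK symbol x in {-1, +1} observed in AWGN."""
    lp = prior_plus * math.exp(-(y - 1.0) ** 2 / (2.0 * sigma ** 2))
    lm = (1.0 - prior_plus) * math.exp(-(y + 1.0) ** 2 / (2.0 * sigma ** 2))
    return lp / (lp + lm)

def map_symbol(y, prior_plus=0.5, sigma=1.0):
    """Hard MAP decision: pick the a-posteriori more probable symbol."""
    return 1 if posterior_plus(y, prior_plus, sigma) >= 0.5 else -1

def mmse_symbol(y, prior_plus=0.5, sigma=1.0):
    """Soft MMSE estimate E[x | y] -- the kind of soft information a SISO
    equalizer exchanges with the decoder in turbo equalization."""
    return 2.0 * posterior_plus(y, prior_plus, sigma) - 1.0
```

Note how the prior enters both estimates: a strong enough prior on +1 makes the MAP decision +1 even for a slightly negative observation, which is the mechanism by which decoder feedback improves equalization across turbo iterations.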

985 citations

Journal ArticleDOI
TL;DR: A maximum a posteriori (MAP) framework for jointly estimating image registration parameters and the high-resolution image is presented and experimental results are provided to illustrate the performance of the proposed MAP algorithm using both visible and infrared images.
Abstract: In many imaging systems, the detector array is not sufficiently dense to adequately sample the scene with the desired field of view. This is particularly true for many infrared focal plane arrays. Thus, the resulting images may be severely aliased. This paper examines a technique for estimating a high-resolution image, with reduced aliasing, from a sequence of undersampled frames. Several approaches to this problem have been investigated previously. However, in this paper a maximum a posteriori (MAP) framework for jointly estimating image registration parameters and the high-resolution image is presented. Several previous approaches have relied on knowing the registration parameters a priori or have utilized registration techniques not specifically designed to treat severely aliased images. In the proposed method, the registration parameters are iteratively updated along with the high-resolution image in a cyclic coordinate-descent optimization procedure. Experimental results are provided to illustrate the performance of the proposed MAP algorithm using both visible and infrared images. Quantitative error analysis is provided and several images are shown for subjective evaluation.
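The cyclic coordinate-descent idea of alternating between the registration parameters and the image estimate can be sketched on a 1-D toy: noiseless cyclically shifted copies of a signal, integer shifts, no downsampling, and a flat prior. This is a toy analogue of the paper's scheme, not its actual algorithm.

```python
import numpy as np

def joint_map_estimate(frames, max_shift=3, n_iter=5):
    """Jointly estimate integer frame shifts and the underlying signal by
    cyclic coordinate descent: alternate a signal update (given shifts)
    with a registration update (given the signal)."""
    shifts = np.zeros(len(frames), dtype=int)
    x = frames[0].copy()
    for _ in range(n_iter):
        # Signal update: average the frames after undoing the current shifts.
        x = np.mean([np.roll(f, -s) for f, s in zip(frames, shifts)], axis=0)
        # Registration update: grid-search each frame's shift against x.
        for k, f in enumerate(frames):
            errs = [np.sum((np.roll(x, s) - f) ** 2)
                    for s in range(-max_shift, max_shift + 1)]
            shifts[k] = int(np.argmin(errs)) - max_shift
    return x, shifts
```

Each half-step can only decrease the least-squares objective, which is why alternating the two updates converges; the real algorithm does the same with sub-pixel registration parameters and a regularized high-resolution image.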

936 citations

Proceedings ArticleDOI
17 Oct 2005
TL;DR: The human detection problem is formulated as maximum a posteriori (MAP) estimation, and edgelet features are introduced, which are a new type of silhouette oriented features that are learned by a boosting method.
Abstract: This paper proposes a method for human detection in crowded scenes from static images. An individual human is modeled as an assembly of natural body parts. We introduce edgelet features, which are a new type of silhouette oriented features. Part detectors, based on these features, are learned by a boosting method. Responses of part detectors are combined to form a joint likelihood model that includes cases of multiple, possibly inter-occluded humans. The human detection problem is formulated as maximum a posteriori (MAP) estimation. We show results on a commonly used dataset as well as new datasets that could not be processed by earlier methods.
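The MAP formulation here amounts to picking the hypothesis with the highest posterior given the part-detector responses. The sketch below is a drastic single-human simplification of the paper's joint likelihood over multiple, possibly occluded humans; the weight and prior are invented for illustration.

```python
import math

def map_decision(part_scores, part_llr_weight=1.0, prior_person=0.1):
    """Toy MAP decision between 'person present' and 'no person'.

    part_scores: per-part log-likelihood ratios (e.g. from boosted
    edgelet detectors); a prior over the hypotheses shifts the decision.
    """
    log_odds = math.log(prior_person / (1.0 - prior_person))  # log prior odds
    log_odds += part_llr_weight * sum(part_scores)            # log likelihood ratios
    return log_odds > 0  # MAP: choose the a-posteriori more probable hypothesis
```

Because detections are rare, the prior odds start negative; only sufficiently confident part responses tip the posterior toward the "person present" hypothesis.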

903 citations

Journal ArticleDOI
TL;DR: The authors apply flexible constraints, in the form of a probabilistic deformable model, to the problem of segmenting natural 2-D objects whose diversity and irregularity of shape make them poorly represented in terms of fixed features or form.
Abstract: Segmentation using boundary finding is enhanced both by considering the boundary as a whole and by using model-based global shape information. The authors apply flexible constraints, in the form of a probabilistic deformable model, to the problem of segmenting natural 2-D objects whose diversity and irregularity of shape make them poorly represented in terms of fixed features or form. The parametric model is based on the elliptic Fourier decomposition of the boundary. Probability distributions on the parameters of the representation bias the model to a particular overall shape while allowing for deformations. Boundary finding is formulated as an optimization problem using a maximum a posteriori objective function. Results of the method applied to real and synthetic images are presented, including an evaluation of the dependence of the method on prior information and image quality.
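The way a shape prior "biases the model to a particular overall shape while allowing for deformations" can be sketched with complex Fourier descriptors (a simpler cousin of the paper's elliptic Fourier decomposition): Gaussian likelihood plus Gaussian prior gives per-coefficient shrinkage, and truncating to low order keeps the boundary smooth. Parameter values and names are assumptions for illustration.

```python
import numpy as np

def map_shape(observed, prior_mean, sigma2=0.1, tau2=1.0, order=8):
    """MAP boundary estimate using complex Fourier descriptors.

    observed, prior_mean: (N, 2) arrays of boundary points.
    A Gaussian likelihood (variance sigma2) around the observed
    descriptors and a Gaussian prior (variance tau2) around a mean
    shape give per-coefficient shrinkage toward the prior shape.
    """
    z_obs = np.fft.fft(observed[:, 0] + 1j * observed[:, 1])
    z_pri = np.fft.fft(prior_mean[:, 0] + 1j * prior_mean[:, 1])
    w = (1.0 / sigma2) / (1.0 / sigma2 + 1.0 / tau2)  # weight on the data term
    z_map = w * z_obs + (1.0 - w) * z_pri
    n = len(z_map)
    keep = np.zeros(n, dtype=bool)
    keep[: order + 1] = True   # DC and low positive frequencies
    keep[-order:] = True       # low negative frequencies
    pts = np.fft.ifft(np.where(keep, z_map, 0.0))
    return np.column_stack([pts.real, pts.imag])
```

A small `sigma2` (trusted image evidence) keeps the boundary near the data; a small `tau2` (strong prior) pulls it toward the mean shape, which is the bias/deformation trade-off the abstract describes.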

888 citations


Network Information
Related Topics (5)

Topic                           Papers    Citations   Relatedness
Estimator                       97.3K     2.6M        86%
Deep learning                   79.8K     2.1M        85%
Convolutional neural network    74.7K     2M          85%
Feature extraction              111.8K    2.1M        85%
Image processing                229.9K    3.5M        84%
Performance Metrics
No. of papers in the topic in previous years
Year    Papers
2023    64
2022    125
2021    211
2020    244
2019    250
2018    236