
Showing papers on "Maximum a posteriori estimation published in 1992"


Journal ArticleDOI
TL;DR: The authors apply flexible constraints, in the form of a probabilistic deformable model, to the problem of segmenting natural 2-D objects whose diversity and irregularity of shape make them poorly represented in terms of fixed features or form.
Abstract: Segmentation using boundary finding is enhanced both by considering the boundary as a whole and by using model-based global shape information. The authors apply flexible constraints, in the form of a probabilistic deformable model, to the problem of segmenting natural 2-D objects whose diversity and irregularity of shape make them poorly represented in terms of fixed features or form. The parametric model is based on the elliptic Fourier decomposition of the boundary. Probability distributions on the parameters of the representation bias the model to a particular overall shape while allowing for deformations. Boundary finding is formulated as an optimization problem using a maximum a posteriori objective function. Results of the method applied to real and synthetic images are presented, including an evaluation of the dependence of the method on prior information and image quality. >
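As a rough illustration of this kind of formulation, the sketch below builds a boundary from elliptic Fourier coefficients and scores it with a MAP-style objective: an image term rewarding boundaries that sit on strong edges, plus a Gaussian prior on the coefficients. The `edge_strength` function and the diagonal Gaussian prior are hypothetical placeholders, not the paper's exact terms.

```python
import numpy as np

def elliptic_fourier_boundary(coeffs, n_points=100):
    """Reconstruct a closed 2-D boundary from elliptic Fourier coefficients.
    coeffs: array of shape (K, 4) holding (a_k, b_k, c_k, d_k) per harmonic."""
    t = np.linspace(0.0, 2 * np.pi, n_points, endpoint=False)
    x = np.zeros(n_points)
    y = np.zeros(n_points)
    for k, (a, b, c, d) in enumerate(coeffs, start=1):
        x += a * np.cos(k * t) + b * np.sin(k * t)
        y += c * np.cos(k * t) + d * np.sin(k * t)
    return x, y

def map_objective(coeffs, edge_strength, prior_mean, prior_var):
    """Negative log-posterior: the boundary should lie on strong edges
    (likelihood term) while the coefficients stay near the prior mean shape."""
    x, y = elliptic_fourier_boundary(coeffs)
    likelihood = np.sum(edge_strength(x, y))            # image term
    prior = -0.5 * np.sum((coeffs - prior_mean) ** 2 / prior_var)
    return -(likelihood + prior)                        # minimize this
```

Minimizing this objective over the coefficients (e.g. with a gradient-free optimizer) yields the MAP boundary estimate.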

888 citations


Journal ArticleDOI
TL;DR: A stochastic approach to the estimation of 2D motion vector fields from time-varying images is presented and the maximum a posteriori probability (MAP) estimation is incorporated into a hierarchical environment to deal efficiently with large displacements.
Abstract: A stochastic approach to the estimation of 2D motion vector fields from time-varying images is presented. The formulation involves the specification of a deterministic structural model along with stochastic observation and motion field models. Two motion models are proposed: a globally smooth model based on vector Markov random fields and a piecewise smooth model derived from coupled vector-binary Markov random fields. Two estimation criteria are studied. In the maximum a posteriori probability (MAP) estimation, the a posteriori probability of motion given data is maximized, whereas in the minimum expected cost (MEC) estimation, the expectation of a certain cost function is minimized. Both algorithms generate sample fields by means of stochastic relaxation implemented via the Gibbs sampler. Two versions are developed: one for a discrete state space and the other for a continuous state space. The MAP estimation is incorporated into a hierarchical environment to deal efficiently with large displacements. >
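A toy version of MAP estimation via stochastic relaxation can be sketched on a binary label field (standing in for the discrete motion state space): the Gibbs sampler draws each site from its local conditional posterior, combining a data term with an MRF smoothness term, while an annealing schedule lowers the temperature so the samples concentrate on the posterior mode. All parameter values below are illustrative assumptions, not from the paper.

```python
import numpy as np

def gibbs_map_labels(data, beta=1.0, noise_sigma=0.5, n_sweeps=50, seed=0):
    """Approximate MAP estimation of a binary label field by stochastic
    relaxation (Gibbs sampler) with a simple annealing schedule."""
    rng = np.random.default_rng(seed)
    H, W = data.shape
    labels = (data > 0.5).astype(float)          # crude initial field
    for sweep in range(n_sweeps):
        T = max(0.1, 0.5 * 0.9 ** sweep)         # annealing schedule (assumed)
        for i in range(H):
            for j in range(W):
                energies = []
                for v in (0.0, 1.0):
                    data_term = (data[i, j] - v) ** 2 / (2 * noise_sigma ** 2)
                    nbrs = [labels[x, y] for x, y in
                            ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                            if 0 <= x < H and 0 <= y < W]
                    smooth = beta * sum(v != n for n in nbrs)  # MRF clique term
                    energies.append(data_term + smooth)
                e = np.array(energies) / T
                p = np.exp(-(e - e.min()))
                p /= p.sum()
                labels[i, j] = rng.choice([0.0, 1.0], p=p)
    return labels
```

The same loop structure carries over to vector-valued motion fields; only the state space and the clique potentials change.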

345 citations


Journal ArticleDOI
Yariv Ephraim1
TL;DR: A Bayesian estimation approach for enhancing speech signals which have been degraded by statistically independent additive noise is motivated and developed, and minimum mean square error (MMSE) and maximum a posteriori (MAP) signal estimators are developed using hidden Markov models for the clean signal and the noise process.
Abstract: A Bayesian estimation approach for enhancing speech signals which have been degraded by statistically independent additive noise is motivated and developed. In particular, minimum mean square error (MMSE) and maximum a posteriori (MAP) signal estimators are developed using hidden Markov models (HMMs) for the clean signal and the noise process. It is shown that the MMSE estimator comprises a weighted sum of conditional mean estimators for the composite states of the noisy signal, where the weights equal the posterior probabilities of the composite states given the noisy signal. The estimation of several spectral functionals of the clean signal such as the sample spectrum and the complex exponential of the phase is also considered. A gain-adapted MAP estimator is developed using the expectation-maximization algorithm. The theoretical performance of the MMSE estimator is discussed, and convergence of the MAP estimator is proved. Both the MMSE and MAP estimators are tested in enhancing speech signals degraded by white Gaussian noise at input signal-to-noise ratios of from 5 to 20 dB. >
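The weighted-sum structure of the MMSE estimator can be illustrated with a scalar mixture-of-Gaussians stand-in for the composite HMM states: each state contributes a Wiener-style conditional mean, weighted by its posterior probability given the noisy observation. A minimal sketch, not the paper's full HMM machinery:

```python
import numpy as np

def hmm_mmse_estimate(y, state_means, state_vars, state_priors, noise_var):
    """MMSE estimate of a clean sample from noisy observation y: a weighted
    sum of per-state conditional-mean estimators, weighted by the posterior
    probability of each composite state given y."""
    state_means = np.asarray(state_means, float)
    state_vars = np.asarray(state_vars, float)
    priors = np.asarray(state_priors, float)
    total_var = state_vars + noise_var        # clean + independent noise
    lik = priors * np.exp(-0.5 * (y - state_means) ** 2 / total_var) \
        / np.sqrt(2 * np.pi * total_var)
    post = lik / lik.sum()                    # posterior state probabilities
    # per-state conditional mean: Wiener gain pulling y toward the state mean
    cond_mean = state_means + state_vars / total_var * (y - state_means)
    return np.dot(post, cond_mean)
```

With a single state this reduces to the classical Wiener estimator; with many states the posterior weights perform a soft selection among the per-state gains.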

214 citations


Journal ArticleDOI
TL;DR: The design problem for the linear Bayes estimator: characterization of optimal designs, and construction of optimal continuous and exact optimal designs.
Abstract: Estimation and Design as a Bayesian Decision Problem; Choice of a Prior Distribution; Conjugate Prior Distributions; Bayes Estimation of the Regression Parameter; Optimality and Robustness of the Bayes Estimator; Bayesian Interpretation of Estimators Using Non-Bayesian Prior Knowledge; Bayes Estimation in Case of Prior Ignorance; Further Problems; The Design Problem for the Linear Bayes Estimator; Characterization of Optimal Designs; Construction of Optimal Continuous Designs; Construction of Exact Optimal Designs.

177 citations


Journal ArticleDOI
TL;DR: This approach provides a systematic method for organizing and representing domain knowledge through appropriate design of the clique functions describing the Gibbs distribution representing the pdf of the underlying MRF.
Abstract: An image is segmented into a collection of disjoint regions that form the nodes of an adjacency graph, and image interpretation is achieved through assigning object labels (or interpretations) to the segmented regions (or nodes) using domain knowledge, extracted feature measurements, and spatial relationships between the various regions. The interpretation labels are modeled as a Markov random field (MRF) on the corresponding adjacency graph, and the image interpretation problem is then formulated as a maximum a posteriori (MAP) estimation rule, given domain knowledge and region-based measurements. Simulated annealing is used to find this best realization or optimal MAP interpretation. This approach also provides a systematic method for organizing and representing domain knowledge through appropriate design of the clique functions describing the Gibbs distribution representing the pdf of the underlying MRF. A general methodology is provided for the design of the clique functions. Results of image interpretation experiments on synthetic and real-world images are described. >
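The Gibbs-energy view of image interpretation can be sketched directly: single-node clique functions score feature evidence for each region label, and pairwise clique functions on the adjacency graph score spatial relationships. The search below is exhaustive, which is feasible only for tiny graphs; the paper uses simulated annealing at realistic sizes. The clique values used here are illustrative assumptions.

```python
import numpy as np
from itertools import product

def interpretation_energy(labels, unary, edges, pairwise):
    """Gibbs energy of a region labeling: unary cliques encode feature
    evidence, pairwise cliques on the adjacency graph encode spatial
    relationship knowledge."""
    e = sum(unary[r][labels[r]] for r in range(len(labels)))
    e += sum(pairwise[labels[r], labels[s]] for r, s in edges)
    return e

def map_interpretation(unary, edges, pairwise, n_labels):
    """MAP labeling by exhaustive enumeration (a stand-in for simulated
    annealing on realistic adjacency graphs)."""
    best = None
    for labels in product(range(n_labels), repeat=len(unary)):
        e = interpretation_energy(labels, unary, edges, pairwise)
        if best is None or e < best[1]:
            best = (labels, e)
    return best[0]
```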

137 citations


Journal ArticleDOI
TL;DR: It is shown that dual polarization SAR data can yield segmentation results similar to those obtained with fully polarimetric SAR data, and the performance of the MAP segmentation technique is evaluated.
Abstract: A statistical image model is proposed for segmenting polarimetric synthetic aperture radar (SAR) data into regions of homogeneous and similar polarimetric backscatter characteristics. A model for the conditional distribution of the polarimetric complex data is combined with a Markov random field representation for the distribution of the region labels to obtain the posterior distribution. Optimal region labeling of the data is then defined as maximizing the posterior distribution of the region labels given the polarimetric SAR complex data (maximum a posteriori (MAP) estimate). Two procedures for selecting the characteristics of the regions are then discussed. Results using real multilook polarimetric SAR complex data are given to illustrate the potential of the two selection procedures and evaluate the performance of the MAP segmentation technique. It is also shown that dual polarization SAR data can yield segmentation results similar to those obtained with fully polarimetric SAR data. >

135 citations


Journal ArticleDOI
TL;DR: It is found that the choice of the shape of the prior distribution affects the noise characteristics and edge sharpness in the final estimate and can generate reconstructions closer to the actual solution than maximum likelihood (ML).
Abstract: The effects of several Gibbs prior distributions in terms of noise characteristics, edge sharpness, and overall quantitative accuracy of the final estimates obtained from an iterative maximum a posteriori (MAP) procedure applied to data from a realistic chest phantom are demonstrated. The effects of the adjustable parameters built into the prior distribution on these properties are examined. It is found that these parameter values influence the noise and edge characteristics of the final estimate and can generate reconstructions closer to the actual solution than maximum likelihood (ML). In addition, it is found that the choice of the shape of the prior distribution affects the noise characteristics and edge sharpness in the final estimate. >

112 citations


Journal ArticleDOI
01 Aug 1992
TL;DR: An algorithm which circumvents this problem by updating connected groups of pixels formed in an intermediate segmentation step is proposed; it substantially increased the rate of convergence and the quality of the reconstruction.
Abstract: The authors present a method for nondifferentiable optimization in maximum a posteriori estimation of computed transmission tomograms. This problem arises in the application of a Markov random field image model with absolute value potential functions. Even though the required optimization is on a convex function, local optimization methods, which iteratively update pixel values, become trapped on the nondifferentiable edges of the function. An algorithm which circumvents this problem by updating connected groups of pixels formed in an intermediate segmentation step is proposed. Experimental results showed that this approach substantially increased the rate of convergence and the quality of the reconstruction. >

95 citations


Journal ArticleDOI
TL;DR: The authors propose a method of direction of arrival (DOA) estimation of signals in the presence of noise whose covariance matrix is unknown and arbitrary, other than being positive definite.
Abstract: The authors propose a method of direction of arrival (DOA) estimation of signals in the presence of noise whose covariance matrix is unknown and arbitrary, other than being positive definite. They examine the projection of the data onto the noise subspace. The conditional probability density function (PDF) of the projected data given the signal parameters and the unknown projected noise covariance matrix is first formed. The a posteriori PDF of the signal parameters alone is then obtained by assigning a noninformative a priori PDF to the unknown noise covariance matrix and integrating out this quantity. A simple criterion for the maximum a posteriori (MAP) estimate of the DOAs of the signals is established. Some properties of this criterion are discussed, and an efficient numerical algorithm for the implementation of this criterion is developed. The advantage of this method is that the noise covariance matrix does not have to be known, nor must it be estimated. >

81 citations


Proceedings ArticleDOI
23 Feb 1992
TL;DR: Because of its adaptive nature, Bayesian learning serves as a unified approach for the following four speech recognition applications, namely parameter smoothing, speaker adaptation, speaker group modeling and corrective training.
Abstract: We discuss maximum a posteriori estimation of continuous density hidden Markov models (CDHMM). The classical MLE reestimation algorithms, namely the forward-backward algorithm and the segmental k-means algorithm, are expanded and reestimation formulas are given for HMM with Gaussian mixture observation densities. Because of its adaptive nature, Bayesian learning serves as a unified approach for the following four speech recognition applications, namely parameter smoothing, speaker adaptation, speaker group modeling and corrective training. New experimental results on all four applications are provided to show the effectiveness of the MAP estimation approach.

80 citations


Journal ArticleDOI
Yariv Ephraim1
TL;DR: A unified approach is developed for gain adaptation in recognition of clean and noisy signals using hidden Markov models for gain-normalized clean signals designed using maximum-likelihood estimates of the gain contours of the clean training sequences.
Abstract: In applying hidden Markov modeling for recognition of speech signals, the matching of the energy contour of the signal to the energy contour of the model for that signal is normally achieved by appropriate normalization of each vector of the signal prior to both training and recognition. This approach, however, is not applicable when only noisy signals are available for recognition. A unified approach is developed for gain adaptation in recognition of clean and noisy signals. In this approach, hidden Markov models (HMMs) for gain-normalized clean signals are designed using maximum-likelihood (ML) estimates of the gain contours of the clean training sequences. The models are combined with ML estimates of the gain contours of the clean test signals, obtained from the given clean or noisy signals, in performing recognition using the maximum a posteriori decision rule. The gain-adapted training and recognition algorithms are developed for HMMs with Gaussian subsources using the expectation-maximization (EM) approach. >

Journal ArticleDOI
TL;DR: Experimental results verify the superiority of the proposed ML-estimator and the L-estimator over the straightforward choice of an arithmetic mean for speckle filtering in simulated tissue-mimicking phantom ultrasound B-mode images.

Journal ArticleDOI
TL;DR: The singular value decomposition (SVD) method is used to provide a series expansion that, in contrast to the method of sampling functions, permits simple identification of vectors in the minimum-norm space poorly represented in the sample values.
Abstract: A method for the stable interpolation of a bandlimited function known at sample instants with arbitrary locations in the presence of noise is given. Singular value decomposition is used to provide a series expansion that, in contrast to the method of sampling functions, permits simple identification of vectors in the minimum-norm space poorly represented in the sample values. Three methods, Miller regularization, least squares estimation, and maximum a posteriori estimation, are given for obtaining regularized reconstructions when noise is present. The singular value decomposition (SVD) method is used to interrelate these methods. Examples illustrating the technique are given. >
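The role of the SVD in regularized reconstruction can be sketched with Tikhonov-style filter factors, which damp exactly the singular components that are poorly represented in the samples. This is a generic sketch of the idea, not the paper's specific minimum-norm construction:

```python
import numpy as np

def svd_regularized_solve(A, b, alpha=1e-2):
    """Regularized least-squares reconstruction via the SVD: each singular
    component is weighted by the filter factor s/(s^2 + alpha), so
    directions with small singular values (poorly sampled) are suppressed
    instead of amplifying the noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s ** 2 + alpha)          # Tikhonov filter factors
    return Vt.T @ (filt * (U.T @ b))
```

As alpha goes to zero this approaches the pseudoinverse solution; larger alpha trades fidelity for stability, playing the same role as the noise-dependent regularization terms in the three methods compared by the paper.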

Journal ArticleDOI
TL;DR: This paper presents an algorithm which is as fast as the second of these previously proposed algorithms, but it shares with the first the desirable property that it is guaranteed to converge to the reconstruction which is optimal according to the MAP criterion.

Proceedings ArticleDOI
23 Mar 1992
TL;DR: A new approach to Bayesian image segmentation based on a novel multiscale random field and a new estimation approach called sequential maximum a posteriori estimation are presented, resulting in a segmentation algorithm which is not iterative and can be computed in time proportional to MN.
Abstract: A new approach to Bayesian image segmentation based on a novel multiscale random field (MSRF) and a new estimation approach called sequential maximum a posteriori estimation are presented. Together, the proposed estimator and model result in a segmentation algorithm which is not iterative and can be computed in time proportional to MN where M is the number of classes and N is the number of pixels. A method for estimating the parameters of the multiscale model directly from the image during the segmentation process is developed. >

Journal ArticleDOI
TL;DR: The edge preserving restoration of piecewise smooth images is formulated in terms of a probabilistic approach, and a MAP estimate algorithm is proposed which could be implemented on a hybrid neural network.

Journal ArticleDOI
TL;DR: A new blind equalization algorithm is presented that incorporates a Bayesian channel estimator and a decision-feedback (DF) adaptive filter; it is more robust to catastrophic error propagation, with only a modest increase in computational complexity.
Abstract: A new blind equalization algorithm is presented that incorporates a Bayesian channel estimator and a decision-feedback (DF) adaptive filter. The Bayesian algorithm operates as a preprocessor on the received signal to provide an initial estimate of the channel coefficients. It is an approximate maximum a posteriori (MAP) sequence estimator that generates reliable estimates of the transmitted symbols. These decisions are then filtered by an adaptive decision-feedback algorithm to further reduce the intersymbol interference. The new algorithm is more robust to catastrophic error propagation than the standard decision-feedback equalizer (DFE), with only a modest increase in the computational complexity.

Journal ArticleDOI
TL;DR: The method of maximum entropy is applied to the regularization of inverse synthetic aperture radar (ISAR) image reconstructions and allows for a more general relationship between the image and its 'configuration entropy' than is usually employed.
Abstract: The method of maximum entropy is applied to the regularization of inverse synthetic aperture radar (ISAR) image reconstructions. This is accomplished by considering an ensemble of images with associated 'allowed' probability density functions. Instead of directly considering the 'solution' to be an image, the author takes it to be the a posteriori probability density found by minimizing a regularization functional composed of the usual 'least squares' term and a Kullback (cross-entropy) information difference term. The desired image is then found as the expectation of this density. The basic model of this approach is similar to that used in usual maximum a posteriori analysis and allows for a more general relationship between the image and its 'configuration entropy' than is usually employed. In addition, it eliminates the need for inappropriate nonnegativity constraints on the (generally complex-valued) image. >

Proceedings ArticleDOI
23 Mar 1992
TL;DR: The maximum a posteriori (MAP) estimation technique that is proposed results in the constrained optimization of a convex functional and produces an expanded image with improved definition.
Abstract: A method for nonlinear image expansion is introduced. The method preserves the discontinuities of the original image, therefore producing an expanded image with improved definition. The maximum a posteriori (MAP) estimation technique that is proposed results in the constrained optimization of a convex functional. The expanded image produced from this method will be shown to be aesthetically and quantitatively superior to images expanded by the standard methods of replication, linear interpolation, and cubic B-spline expansion. >

Proceedings ArticleDOI
23 Mar 1992
TL;DR: Two neural algorithms for image restoration are proposed which implement iterative minimization algorithms which are proved to converge and the robustness of these algorithms with respect to finite numerical precision is studied.
Abstract: Two neural algorithms for image restoration are proposed. The image is considered degraded by linear blur and additive white Gaussian noise. Maximum a posteriori estimation and regularization theory applied to this problem lead to the same high dimension optimization problem. The developed schemes, one having a sequential updating schedule and the other being fully parallel, implement iterative minimization algorithms which are proved to converge. The robustness of these algorithms with respect to finite numerical precision is studied. Examples with real images are presented. >

Journal ArticleDOI
TL;DR: The asymptotic behavior and the statistical performance of the maximum a posteriori (MAP) estimate of the directions of arrival (DOAs) of signals applicable in an environment where the sensor noise is unknown and correlated is presented.
Abstract: For pt.I see ibid., vol.40, no.8, p.2007-17 (1992). The asymptotic behavior and the statistical performance of the maximum a posteriori (MAP) estimate of the directions of arrival (DOAs) of signals, applicable in an environment where the sensor noise is unknown and correlated, are presented. Computer simulations were performed to confirm the correctness of the performance analysis, and a comparison of the performance of the MAP estimate with other methods is also presented. >

Journal ArticleDOI
TL;DR: The authors compare the filtered backprojection algorithm, the expectation-maximization maximum likelihood algorithm, and the generalized expectation maximization (GEM) maximum a posteriori (MAP) algorithm, a Bayesian algorithm which uses a Markov random field prior.
Abstract: In single photon emission computed tomography (SPECT), every reconstruction algorithm must use some model for the response of the gamma camera to emitted gamma-rays. The true camera response is both spatially variant and object dependent. These two properties result from the effects of scatter, septal penetration, and attenuation, and they forestall determination of the true response with any precision. This motivates the investigation of the performance of reconstruction algorithms when there are errors between the camera response used in the reconstruction algorithm and the true response of the gamma camera. In this regard, the authors compare the filtered backprojection algorithm, the expectation-maximization maximum likelihood algorithm, and the generalized expectation maximization (GEM) maximum a posteriori (MAP) algorithm, a Bayesian algorithm which uses a Markov random field prior. >

Proceedings Article
12 Jul 1992
TL;DR: An approach to learning from noisy data that views the problem as one of reasoning under uncertainty, where prior knowledge of the noise process is applied to compute a posteriori probabilities over the hypothesis space is presented.
Abstract: This paper presents an approach to learning from noisy data that views the problem as one of reasoning under uncertainty, where prior knowledge of the noise process is applied to compute a posteriori probabilities over the hypothesis space. In preliminary experiments this maximum a posteriori (MAP) approach exhibits a learning rate advantage over the C4.5 algorithm that is statistically significant.

Proceedings ArticleDOI
Yariv Ephraim1
23 Mar 1992
TL;DR: A time-varying linear dynamical system model for speech signals that generalizes the standard hidden Markov model in the sense that vectors generated from a given sequence of states are assumed a first order Markov process rather than a sequence of statistically independent vectors.
Abstract: A time-varying linear dynamical system model for speech signals is proposed. The model generalizes the standard hidden Markov model (HMM) in the sense that vectors generated from a given sequence of states are assumed a first order Markov process rather than a sequence of statistically independent vectors. The reestimation formulas for the model parameters are developed using the Baum algorithm. The forward formula for evaluating the likelihood of a given sequence of signal vectors in speech recognition applications is also developed. The dynamical system model is used in developing minimum mean square error (MMSE) and maximum a posteriori (MAP) signal estimators given noisy signals. Both estimators are shown to be significantly more complicated than similar estimators developed earlier using the standard HMM. A feasible approximate MAP estimation approach in which the states of the signal and the signal itself are alternatively estimated using Viterbi decoding and Kalman filtering is also presented. >

Journal ArticleDOI
TL;DR: Results of the algorithm on various synthetic and natural textures clearly indicate the effectiveness of the approach to texture segmentation.
Abstract: The problem of region segmentation is examined and a new algorithm for maximum a posteriori (MAP) segmentation is introduced. The observed image is modeled as a composite of two processes: a high-level process that describes the various regions in the images and a low-level process that describes each particular region. A Gibbs-Markov random field model is used to describe the high-level process and a simultaneous autoregressive random field model is used to describe the low-level process. The MAP segmentation algorithm is formulated from the two models and a recursive implementation for the algorithm is presented. Results of the algorithm on various synthetic and natural textures clearly indicate the effectiveness of the approach to texture segmentation.

Proceedings ArticleDOI
23 Mar 1992
TL;DR: The a posteriori probability for the location of bursts of noise additively superimposed on a Gaussian autoregressive (AR) process is derived and the maximum a posteriori (MAP) solution for noise burst position is obtained by using a simple search procedure, yielding the noise burst location corresponding to minimum probability of error.
Abstract: The a posteriori probability for the location of bursts of noise additively superimposed on a Gaussian autoregressive (AR) process is derived. The maximum a posteriori (MAP) solution for noise burst position is obtained by using a simple search procedure, yielding the noise burst location corresponding to minimum probability of error. This procedure finds application in digital audio processing, where clicks and scratches may be modeled as additive bursts of noise. The method permits accurate detection of these degradations and their subsequent replacement (interpolation). Experiments were carried out on both real audio data and synthetic AR processes, and comparisons are made with previous techniques. >
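A much-simplified stand-in for this burst-location search can be sketched by scanning AR prediction errors: the window with the largest summed squared residual marks the most likely burst position. This replaces the paper's full MAP posterior with a plain residual-energy score, and the AR(1) setup and parameter values are illustrative assumptions:

```python
import numpy as np

def detect_burst(x, ar_coeff, burst_len):
    """Locate the most likely start of an additive burst in an AR(1) signal
    by scanning all candidate windows of the one-step prediction errors and
    picking the one with the largest summed squared residual (a simplified
    surrogate for the MAP posterior over burst location)."""
    resid = x[1:] - ar_coeff * x[:-1]        # one-step AR prediction errors
    err2 = resid ** 2
    scores = [err2[i:i + burst_len].sum()
              for i in range(len(err2) - burst_len + 1)]
    return int(np.argmax(scores)) + 1        # index into x of burst start
```

In the click-removal application, the detected span would then be replaced by interpolation from the surrounding clean samples.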

Proceedings ArticleDOI
A. Lopes1, E. Nezry, S. Goze, Ridha Touzi, G.A. Solaas 
26 May 1992
TL;DR: In this paper, a complex Gaussian-Gamma MAP (Maximum A Posteriori) filter was developed for complex multiplicative speckle noise and for K-distributed data in the case of L separate correlated complex looks.
Abstract: As the output of the SAR data processor is a complex number, speckle reduction should take advantage of the additional information contained in the phase and in the complex correlation between the different images. In order to obtain a high equivalent number of looks, we develop the Complex Gaussian-Gamma MAP (Maximum A Posteriori) filter for complex multiplicative speckle noise and for K-distributed data in the case of L separate correlated complex looks. This filter is a combination of the usual incoherent multilook processing (Doppler spectral domain) and of an adaptive spatial averaging (image spatial domain). The solution of the filter equation is very easy to handle and is the rigorous solution of the correlated-looks MAP filter.


Journal ArticleDOI
TL;DR: The maximum a posteriori (MAP) estimation technique is applied to the problem of restoring images distorted by noisy point spread functions and additive noise and the resulting MAP estimator is nonlinear and is obtained by numerically maximizing a conditional probability density function.
Abstract: The maximum a posteriori (MAP) estimation technique is applied to the problem of restoring images distorted by noisy point spread functions and additive noise. The resulting MAP estimator is nonlinear and is obtained by numerically maximizing a conditional probability density function. The energy nonnegativity constraint is incorporated in the optimization process. Although the deblurring results are slightly inferior to those obtained by applying the Wiener criterion, the advantage of the MAP estimator lies in its significant suppression of noise. >

Proceedings ArticleDOI
25 Oct 1992
TL;DR: In this paper, an algorithm for approximating the maximum a posteriori (MAP) estimate of transmission images is proposed, where attenuation correction factors can be formed from these images by reprojecting, sign-inverting and exponentiating them.
Abstract: An algorithm for approximating the maximum a posteriori (MAP) estimate of transmission images is proposed. Attenuation correction factors can be formed from these images by reprojecting, sign-inverting and exponentiating them. The authors evaluate the quality of emission images of the myocardium as quantified by the average bias and variance for attenuation correction factors formed from unprocessed transmission scans, smoothed transmission scans, MAP images, and maximum-likelihood (ML) images. The MAP and ML methods yield emission images with significantly lower bias and variance than the other two methods. Moreover, they are relatively insensitive to the correction for accidental coincidences. >