Showing papers on "Maximum a posteriori estimation published in 1989"



Journal ArticleDOI
TL;DR: It is proven that the parameter estimates and the segmentations converge in distribution to the ML estimate of the parameters and the MAP segmentation with those parameter estimates, respectively.
Abstract: An adaptive segmentation algorithm is developed which simultaneously estimates the parameters of the underlying Gibbs random field (GRF) and segments the noisy image corrupted by additive independent Gaussian noise. The algorithm, which aims at obtaining the maximum a posteriori (MAP) segmentation, is a simulated annealing algorithm that is interrupted at regular intervals for estimating the GRF parameters. Maximum-likelihood (ML) estimates of the parameters based on the current segmentation are used to obtain the next segmentation. It is proven that the parameter estimates and the segmentations converge in distribution to the ML estimate of the parameters and the MAP segmentation with those parameter estimates, respectively. Due to computational difficulties, however, only an approximate version of the algorithm is implemented. The approximate algorithm is applied to several two- and four-region images with different noise levels and with first-order and second-order neighborhoods.

400 citations
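
The alternating structure described above (annealing toward the MAP segmentation, interrupted at intervals for ML parameter estimation) can be sketched in a few lines. The toy Python version below substitutes a two-class Gaussian model with a fixed smoothness weight for the paper's full GRF parameter estimation; all names and constants are illustrative:

```python
import numpy as np

def adaptive_map_segment(y, n_sweeps=50, beta=1.0, sigma=1.0, t0=4.0):
    """Toy version of the interrupted-annealing idea: anneal toward the
    MAP two-region segmentation of a noisy image y, pausing after every
    sweep to re-fit the class means by maximum likelihood."""
    rng = np.random.default_rng(0)
    labels = (y > y.mean()).astype(int)            # crude initial segmentation
    means = np.array([y[labels == k].mean() for k in (0, 1)])
    H, W = y.shape
    for sweep in range(n_sweeps):
        T = t0 / np.log(2.0 + sweep)               # logarithmic cooling schedule
        for i in range(H):
            for j in range(W):
                nb = [labels[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                      if 0 <= a < H and 0 <= b < W]
                # posterior energy of each candidate label at this site
                e = np.array([(y[i, j] - means[k])**2 / (2 * sigma**2)
                              - beta * sum(n == k for n in nb) for k in (0, 1)])
                p = np.exp(-e / T)
                labels[i, j] = rng.choice(2, p=p / p.sum())
        # "interrupt" the annealing: ML estimate of the means given the labels
        for k in (0, 1):
            if np.any(labels == k):
                means[k] = y[labels == k].mean()
    return labels, means
```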


Journal ArticleDOI
TL;DR: The Confidence Profile Method is a Bayesian method for adjusting and combining pieces of evidence to estimate parameters, such as the effect of health technologies on health outcomes, illustrated with an analysis of the effect of a thrombolytic agent on the difference in probability of 1-year survival after a heart attack.
Abstract: The Confidence Profile Method is a Bayesian method for adjusting and combining pieces of evidence to estimate parameters, such as the effect of health technologies on health outcomes. The information in each piece of evidence is captured in a likelihood function that gives the likelihood of the observed results of the evidence as a function of possible values of the parameter. A posterior distribution is calculated from Bayes' formula as the product of the likelihood function and a prior distribution. Multiple pieces of evidence are incorporated by successive applications of Bayes' formula. Pieces of evidence are adjusted for biases to internal or external validity by modeling the biases and deriving “adjusted” likelihood functions that incorporate the models. Likelihood functions have been derived for one-, two- and multi-arm prospective studies; 2 × 2, 2 × n and matched case-control studies; and cross-sectional studies. Biases that can be incorporated in likelihood functions include crossover in controll...

100 citations
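
As a worked miniature of the evidence-combining step, the sketch below discretizes a single parameter (a survival-probability difference), applies Bayes' formula once per piece of evidence, and reads off the MAP and posterior-mean estimates. The two "studies" and all their numbers are invented for illustration:

```python
import numpy as np

# Parameter of interest: difference in 1-year survival probability.
delta = np.linspace(-0.2, 0.4, 601)
posterior = np.ones_like(delta)                 # flat prior

# Each piece of evidence contributes a likelihood L(data | delta); here,
# two hypothetical two-arm trials summarized by Gaussian approximations.
for observed_diff, std_err in [(0.08, 0.04), (0.05, 0.03)]:
    likelihood = np.exp(-0.5 * ((delta - observed_diff) / std_err) ** 2)
    posterior *= likelihood                     # successive Bayes updates
    posterior /= np.trapz(posterior, delta)     # renormalize

print("MAP:", delta[np.argmax(posterior)])
print("posterior mean:", np.trapz(delta * posterior, delta))
```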


Journal ArticleDOI
TL;DR: Comparisons of the a priori uniform and nonuniform Bayesian algorithms to the maximum-likelihood algorithm are carried out using computer-generated noise-free and Poisson randomized projections.
Abstract: A method that incorporates a priori uniform or nonuniform source distribution probabilistic information and data fluctuations of a Poisson nature is presented. The source distributions are modeled in terms of a priori source probability density functions. Maximum a posteriori probability solutions, as determined by a system of equations, are given. Iterative Bayesian imaging algorithms for the solutions are derived using an expectation-maximization technique. Comparisons of the a priori uniform and nonuniform Bayesian algorithms to the maximum-likelihood algorithm are carried out using computer-generated noise-free and Poisson randomized projections. Improvement in image reconstruction from projections with the Bayesian algorithm is demonstrated. Superior results are obtained using the a priori nonuniform source distribution.

91 citations
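
A minimal sketch of the EM-type iteration for Poisson projection data follows. With beta = 0 it reduces to the classical ML-EM multiplicative update; the prior blend shown is an ad hoc stand-in for the paper's a priori source-distribution term, not its actual algorithm:

```python
import numpy as np

def em_reconstruct(A, y, prior_mean=None, beta=0.0, n_iters=50):
    """EM reconstruction for Poisson projection data y ~ Poisson(A @ x).
    beta = 0 gives the classical ML-EM update; beta > 0 pulls the image
    toward a prior source distribution (illustrative, not the paper's form)."""
    x = np.full(A.shape[1], y.sum() / A.shape[1])   # uniform start
    sens = A.sum(axis=0)                            # sensitivity image A^T 1
    for _ in range(n_iters):
        proj = A @ x
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / sens                   # multiplicative EM step
        if prior_mean is not None and beta > 0:
            x = (x + beta * prior_mean) / (1.0 + beta)  # ad hoc prior blend
    return x
```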


Journal ArticleDOI
TL;DR: An estimation approach is described for three-dimensional reconstruction from line integral projections using incomplete and very noisy data and a suboptimal hierarchical algorithm is described whose individual steps are locally optimal and are combined to satisfy a global optimality criterion.
Abstract: An estimation approach is described for three-dimensional reconstruction from line integral projections using incomplete and very noisy data. Generalized cylinders parameterized by stochastic dynamic models are used to represent prior knowledge about the properties of objects of interest in the probed domain. The object models, a statistical measurement model, and the maximum a posteriori probability performance criterion are combined to reformulate the reconstruction problem as a computationally challenging nonlinear estimation problem. For computational feasibility, a suboptimal hierarchical algorithm is described whose individual steps are locally optimal and are combined to satisfy a global optimality criterion. The formulation and algorithm are restricted to objects whose center axis is a single-valued function of a fixed spatial coordinate. Simulation examples demonstrate accurate reconstructions with as few as four views in a 135 degrees sector, at an average signal-to-noise ratio of 3.3.

62 citations


Proceedings Article
01 Jan 1989
TL;DR: It is shown here that word recognition performance for a simple discrete density HMM system appears to be somewhat better when MLP methods are used to estimate the emission probabilities.
Abstract: We are developing a phoneme-based, speaker-dependent continuous speech recognition system embedding a Multilayer Perceptron (MLP) (i.e., a feedforward Artificial Neural Network) into a Hidden Markov Model (HMM) approach. In [Bourlard & Wellekens], it was shown that MLPs approximate Maximum a Posteriori (MAP) probabilities and could thus be embedded as emission probability estimators in HMMs. By using contextual information from a sliding window on the input frames, we have been able to improve frame or phoneme classification performance over the corresponding performance for simple Maximum Likelihood (ML) or even MAP probabilities that are estimated without the benefit of context. However, recognition of words in continuous speech was not so simply improved by the use of an MLP, and several modifications of the original scheme were necessary to achieve acceptable performance. It is shown here that word recognition performance for a simple discrete density HMM system appears to be somewhat better when MLP methods are used to estimate the emission probabilities.

61 citations
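
The now-standard way to embed MLP outputs in an HMM, dividing the estimated posteriors by the state priors to obtain scaled likelihoods for decoding, can be written compactly. The shapes and the Dirichlet stand-in below are invented for illustration:

```python
import numpy as np

def scaled_likelihoods(mlp_posteriors, state_priors, eps=1e-8):
    """Convert per-frame MLP outputs P(state | frame) into scaled
    likelihoods p(frame | state) / p(frame) by dividing out the priors;
    these can then serve as HMM emission probabilities in decoding."""
    return mlp_posteriors / np.maximum(state_priors, eps)

# Hypothetical shapes: T frames, K phoneme states.
T, K = 100, 40
posteriors = np.random.dirichlet(np.ones(K), size=T)  # stand-in MLP outputs
priors = posteriors.mean(axis=0)                      # relative state frequencies
emissions = scaled_likelihoods(posteriors, priors)    # feed these to Viterbi
```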


Proceedings ArticleDOI
01 Nov 1989
TL;DR: A Markov random field model-based approach to automated image interpretation is described and demonstrated as a region-based scheme and provides a systematic method for organizing and representing domain knowledge through the clique functions of the pdf of the underlying MRF.
Abstract: In this paper, a Markov random field (MRF) model-based approach to automated image interpretation is described and demonstrated as a region-based scheme. In this approach, an image is first segmented into a collection of disjoint regions which form the nodes of an adjacency graph. Image interpretation is then achieved through assigning object labels, or interpretations, to the segmented regions, or nodes, using domain knowledge, extracted feature measurements, and spatial relationships between the various regions. The interpretation labels are modeled as an MRF on the corresponding adjacency graph, and the image interpretation problem is formulated as a maximum a posteriori (MAP) estimation problem. Simulated annealing is used to find the best realization, or optimal MAP interpretation. Through the MRF model, this approach also provides a systematic method for organizing and representing domain knowledge through the clique functions of the pdf of the underlying MRF. Results of image interpretation experiments performed on synthetic and real-world images using this approach are described and appear promising.

45 citations
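
The energy that the annealing step minimizes has the generic form sketched below: one clique function per region node plus one per adjacent pair on the region adjacency graph. The unary and pairwise callables are hypothetical placeholders for the paper's domain-knowledge clique functions:

```python
def labeling_energy(labels, features, adjacency, unary, pairwise):
    """Gibbs energy of an interpretation: a clique function per region
    (how well a label explains that region's feature measurements) plus
    one per adjacent region pair (spatial-relationship knowledge).
    Minimizing this energy maximizes the a posteriori probability."""
    e = sum(unary(labels[r], features[r]) for r in range(len(labels)))
    e += sum(pairwise(labels[a], labels[b]) for a, b in adjacency)
    return e
```

Simulated annealing then proposes a label change at a single node and accepts it with probability min(1, exp(-dE/T)) under a decreasing temperature T.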


Journal ArticleDOI
01 Nov 1989
TL;DR: The authors present a parallel pyramid implementation of the line search conjugate gradient algorithm for minimizing the cost function in low-level vision problems and derive a deterministic algorithm based on the GNC formulation to obtain a near-optimal maximum a posteriori probability estimate of images corrupted by additive Gaussian noise.
Abstract: The authors present a parallel pyramid implementation of the line search conjugate gradient algorithm for minimizing the cost function in low-level vision problems. By viewing the global cost function as a Gibbs energy function, it is possible to compute the gradients, inner products, and optimal step size efficiently using the pyramid. Implementation of this algorithm for shape-from-shading results in a multiresolution conjugate gradient algorithm. The robustness and efficiency of the algorithm are demonstrated for edge detection using the graduated nonconvexity (GNC) algorithm. This formulation is also applied to image estimation based on Markov models. A compound model for the original image is defined that consists of a 2D noncausal Gauss-Markov random field to represent the homogeneous regions and a line process to represent the discontinuities. A deterministic algorithm based on the GNC formulation is derived to obtain a near-optimal maximum a posteriori probability estimate of images corrupted by additive Gaussian noise.

29 citations
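
For reference, the serial form of the line-search conjugate gradient iteration on a quadratic Gibbs energy E(x) = 0.5 x^T Q x - b^T x is sketched below; the paper's point is that the matrix-vector product, the inner products, and the optimal step size can all be computed in parallel on a pyramid, which this sketch does not attempt:

```python
import numpy as np

def conjugate_gradient(apply_Q, b, x0, n_iters=50, tol=1e-8):
    """Line-search conjugate gradient for E(x) = 0.5 x^T Q x - b^T x,
    where apply_Q computes the product Q @ x."""
    x = x0.copy()
    r = b - apply_Q(x)             # residual = negative gradient
    d = r.copy()
    rs = r @ r
    for _ in range(n_iters):
        Qd = apply_Q(d)
        alpha = rs / (d @ Qd)      # closed-form optimal step along d
        x += alpha * d
        r -= alpha * Qd
        rs_new = r @ r
        if rs_new < tol:
            break
        d = r + (rs_new / rs) * d  # new conjugate search direction
        rs = rs_new
    return x
```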


Proceedings ArticleDOI
10 Jul 1989
TL;DR: In this paper, a method for finding curves in digital images with speckle noise is described, which differs from standard linear convolutions followed by thresholds in that it explicitly allows curvature in the features.
Abstract: A method for finding curves in digital images with speckle noise is described. The solution method differs from standard linear convolutions followed by thresholds in that it explicitly allows curvature in the features. Maximum a posteriori (MAP) estimation is used, together with statistical models for the speckle noise and for the curve-generation process, to find the most probable estimate of the feature, given the image data. The estimation process is first described in general terms. Then, incorporation of the specific neighborhood system and a multiplicative noise model for speckle allows derivation, via dynamic programming, of the solution to the estimation problem. The detection of curvilinear features is considered separately. The detection results allow the determination of the minimal size of detectable feature. Finally, the estimation of linear features, followed by a detection step, is shown for computer-simulated images and for a SAR image of sea ice.

25 citations
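
The dynamic-programming core (one curve row per image column, a per-pixel data cost, and a penalty on row changes standing in for the curve-generation prior) can be sketched as follows; the cost array and smoothness weight are placeholders for the paper's speckle and curve models:

```python
import numpy as np

def map_curve(cost, smooth=1.0, max_step=1):
    """Dynamic-programming search for the MAP curve through an image:
    cost[r, c] is the data term at row r, column c (e.g. a negative
    log-likelihood under a multiplicative speckle model); row changes
    between columns are penalized. Returns the optimal row per column."""
    H, W = cost.shape
    dp = cost[:, 0].copy()
    back = np.zeros((H, W), dtype=int)
    for c in range(1, W):
        new = np.full(H, np.inf)
        for r in range(H):
            for dr in range(-max_step, max_step + 1):
                rp = r + dr
                if 0 <= rp < H:
                    v = dp[rp] + smooth * abs(dr) + cost[r, c]
                    if v < new[r]:
                        new[r], back[r, c] = v, rp
        dp = new
    path = [int(np.argmin(dp))]           # best endpoint in the last column
    for c in range(W - 1, 0, -1):
        path.append(back[path[-1], c])    # trace back predecessors
    return path[::-1]
```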


Proceedings ArticleDOI
23 May 1989
TL;DR: The authors describe the development of a deterministic algorithm for obtaining the global maximum a posteriori probability (MAP) estimate from an image corrupted by additive Gaussian noise that finds the global MAP estimate in a small number of iterations.
Abstract: The authors describe the development of a deterministic algorithm for obtaining the global maximum a posteriori probability (MAP) estimate from an image corrupted by additive Gaussian noise. The MAP algorithm requires the probability density function of the original undegraded image and the corrupting noise. It is assumed that the original image is represented by a compound model consisting of a 2-D noncausal Gaussian-Markov random field (GMRF) to represent the homogeneous regions and a line process model to represent the discontinuities. The MAP algorithm is written in terms of the compound GMRF model parameters. The solution to the MAP equations is realized by a deterministic relaxation algorithm that is an extension of the graduated nonconvexity (GNC) algorithm and finds the global MAP estimate in a small number of iterations. As a byproduct, the line process configuration determined by the MAP estimate produces an accurate edge map without any additional cost. Experimental results are given to illustrate the usefulness of the method.

24 citations
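
The GNC idea this paper extends replaces the non-convex truncated-quadratic line-process penalty with a one-parameter family that starts convex and is gradually deformed back to the true penalty. Below is a sketch of that family, following Blake and Zisserman's construction with an illustrative choice of constants:

```python
import numpy as np

def gnc_penalty(t, lam, alpha, p):
    """One member of the GNC family approximating min(lam * t**2, alpha).
    The concave curvature c grows as the continuation parameter p goes
    from 1 toward 0, collapsing the middle section so the family tends
    to the true truncated quadratic (constant 2.0 chosen for illustration)."""
    c = 2.0 / p
    r = np.sqrt(alpha * (2.0 / c + 1.0 / lam))   # outer breakpoint
    q = alpha / (lam * r)                        # inner breakpoint
    t = np.abs(t)
    return np.where(t < q, lam * t**2,
           np.where(t < r, alpha - 0.5 * c * (t - r)**2, alpha))
```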


Proceedings ArticleDOI
23 May 1989
TL;DR: The authors address the problem of motion detection in an image sequence from the variations in time of the intensity distribution by the joint treatment of the detection of temporal changes and the reconstruction of mobile object masks according to a probabilistic formulation.
Abstract: The authors address the problem of motion detection in an image sequence from the variations in time of the intensity distribution. The problem is not limited to change detection but encompasses the recovery of the projections of moving areas in the image. The approach is characterized by the joint treatment of the detection of temporal changes and the reconstruction of mobile object masks according to a probabilistic formulation. More formally, spatio-temporal contextual information is introduced through Markovian models, using Gibbs distributions defined on a spatio-temporal neighborhood system. Then the problem at hand is stated as a statistical labeling one. To decide whether or not a point belongs to a moving area is equivalent to assigning to it a given label. A solution to this labeling problem is formulated according to the maximum a posteriori (MAP) criterion. Experiments with a real image sequence have been carried out.

Journal ArticleDOI
TL;DR: The authors deal with the problem of deconvolution of Bernoulli-Gaussian random processes observed through linear systems, which corresponds to situations frequently encountered in areas such as geophysics, ultrasonic imaging, and nondestructive evaluation.
Abstract: The authors deal with the problem of deconvolution of Bernoulli-Gaussian random processes observed through linear systems. This corresponds to situations frequently encountered in areas such as geophysics, ultrasonic imaging, and nondestructive evaluation. Deconvolution of such signals is a detection-estimation problem that does not allow purely linear data processing, and the nature of the difficulties greatly depends on the type of representation chosen for the linear system. A degenerate moving-average (MA) state-space representation is used; it presents interesting algorithmic properties and simplifies implementation. To obtain a globally recursive procedure, a detection step is inserted in an estimation loop by Kalman filtering. Two recursive detectors based on maximum a posteriori and maximum-likelihood criteria, respectively, are derived and compared.

Journal ArticleDOI
TL;DR: A three-state image model whose elements are named object, background, and new-scene after their representation of the corresponding structural situation is proposed and utilized in a background luminance updating algorithm based on a maximum a posteriori probability.
Abstract: The author proposes a three-state image model whose elements are named object, background, and new-scene after their representation of the corresponding structural situation. An interframe state transition diagram controlled by luminance differences is introduced. The model is then utilized in a background luminance updating algorithm based on a maximum a posteriori probability. Applications of the model and the updating algorithm in predictive video coding at low bit rate (64 kb/s and less) are suggested.

Proceedings ArticleDOI
23 May 1989
TL;DR: An algorithm is proposed that estimates displacement vectors and segments each frame into three regions: unchanged, predictable, and unpredictable; the segmentation result is used for the adaptation of the causal window and the initial estimate, both essential pieces of estimation information.
Abstract: An algorithm is proposed that estimates displacement vectors and segments each frame into three regions: unchanged, predictable, and unpredictable. The segmentation result is used for the adaptation of the causal window and the initial estimate, both essential pieces of estimation information. The algorithm consists of a two-stage detector where the stages are separated by a Wiener-based displacement estimator. The detectors are efficient, suboptimal, maximum a posteriori estimators of special first-order 2-D Markov random-field models. The detectors are shown to have a substantially improved performance compared with detectors previously used, and the prediction error is reduced compared with existing schemes. Initial experiments indicate good coding efficiency.

Proceedings ArticleDOI
23 May 1989
TL;DR: A maximum a posteriori approach for enhancing speech signals which have been degraded by statistically independent additive noise is proposed, based upon statistical modeling of the clean speech signal and the noise process using long training sequences from the two processes.
Abstract: A maximum a posteriori approach for enhancing speech signals which have been degraded by statistically independent additive noise is proposed. The approach is based upon statistical modeling of the clean speech signal and the noise process using long training sequences from the two processes. Hidden Markov models (HMMs) with mixtures of Gaussian autoregressive (AR) output probability distributions are used to model the clean speech signal. A low-order Gaussian AR model is used for the wideband Gaussian noise considered here. The parameter set of the HMM is estimated using the Baum or the EM (expectation-maximization) algorithm. The enhancement of the noisy speech is done by means of reestimation of the clean speech waveform using the EM algorithm. An approximate improvement of 4.0-6.0 dB in signal-to-noise ratio (SNR) is achieved at 10 dB input SNR.

Proceedings ArticleDOI
08 May 1989
TL;DR: The author discusses maximum a posteriori data-detection techniques for signals corrupted by one- and two-dimensional intersymbol interference and additive noise, and dynamic programming in the form of a Viterbi algorithm is applied to the problem of optimal data retrieval.
Abstract: At increased track densities, or in high-areal-density optical storage media, crosstalk is observed between tracks, with two-dimensional dispersion. The author discusses maximum a posteriori data-detection techniques for signals corrupted by one- and two-dimensional intersymbol interference and additive noise. The distortions are modeled by Markov processes, and dynamic programming in the form of a Viterbi algorithm is applied to the problem of optimal data retrieval. Aspects such as detection performance and implementation considerations are discussed.
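
A one-dimensional sketch of the Viterbi detection step is given below: trellis states are the most recent len(h) - 1 symbols, and the branch metric is the squared error against the noiseless channel output. The binary +/-1 alphabet and the all -1 starting state are illustrative assumptions:

```python
import numpy as np

def viterbi_detect(received, h):
    """MAP sequence detection of +/-1 symbols observed through a 1-D
    intersymbol-interference channel with taps h plus Gaussian noise
    (a toy 1-D stand-in for the paper's one- and two-dimensional cases)."""
    L = len(h)                         # channel memory is L - 1 symbols
    n_states = 1 << (L - 1)
    T = len(received)
    cost = np.full(n_states, np.inf)
    cost[0] = 0.0                      # assume the channel starts filled with -1
    back = np.zeros((T, n_states), dtype=int)
    for t in range(T):
        new = np.full(n_states, np.inf)
        for s in range(n_states):
            if not np.isfinite(cost[s]):
                continue
            past = [2 * ((s >> i) & 1) - 1 for i in range(L - 1)]
            for b in (0, 1):           # candidate new symbol is 2b - 1
                y_hat = h[0] * (2 * b - 1) + sum(h[i + 1] * past[i]
                                                 for i in range(L - 1))
                ns = ((s << 1) | b) & (n_states - 1)
                m = cost[s] + (received[t] - y_hat) ** 2
                if m < new[ns]:
                    new[ns], back[t, ns] = m, s
        cost = new
    s = int(np.argmin(cost))           # trace back the best path
    bits = []
    for t in range(T - 1, -1, -1):
        bits.append(s & 1)
        s = back[t, s]
    return np.array(bits[::-1]) * 2 - 1
```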

Journal ArticleDOI
I. Hoeschele
TL;DR: Analytical proofs and numerical results are presented showing that the likelihood or posterior density function is always unimodal for REML and a Bayesian method but sometimes bimodal over the permissible parameter space for ML and another Bayesian method.
Abstract: Maximum Likelihood (ML), Restricted Maximum Likelihood (REML) and Bayesian methods are often preferred over other methods for estimating variance components in animal breeding. Iterative computing strategies are required for obtaining estimates with unbalanced data and models with at least two variance components. If iteration converges and the converged value is within the parameter space, it is commonly accepted as the “estimate”. However, if the likelihood or posterior density function is not unimodal, the converged value may correspond to a local but not global maximum, and not be the desired estimate. Bimodal likelihood functions have been found in ML. Analytical proofs and numerical results are presented for two-variance-component models showing that the likelihood or posterior density function is always unimodal for REML and a Bayesian method but sometimes bimodal over the permissible parameter space for ML and another Bayesian method.

Book ChapterDOI
01 Jan 1989
TL;DR: In this paper, a Bayesian approach with maximum entropy priors is proposed to solve an integral equation which arises in various image restoration and reconstruction problems, and the solution may be obtained by minimizing a criterion in which the structural entropy of the image is used as a particular choice of a regularization functional.
Abstract: In this paper we propose a Bayesian approach with Maximum Entropy (ME) priors to solve an integral equation which arises in various image restoration and reconstruction problems. Our contributions in this paper are the following: i) We discuss the a priori probability distributions that are deduced from different a priori constraints when the principle of ME is used. ii) When the a priori knowledge is only the noise covariance matrix and the image total intensity, and when the maximum a posteriori (MAP) criterion is chosen as the decision rule to determine the values of the image pixels, we show that the solution may be obtained by minimizing a criterion in which the structural entropy of the image is used as a particular choice of regularization functional. The discussion is illustrated with some simulated results.
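
A gradient-descent caricature of the resulting criterion (a covariance-weighted data misfit plus an entropy term as the regularization functional, with the total intensity enforced) might look as follows; the names, step size, and projection are illustrative, not the authors' algorithm:

```python
import numpy as np

def map_entropy_restore(A, y, noise_cov_inv, total, lam=1.0,
                        n_iters=500, lr=1e-3):
    """Minimize the misfit (y - A x)^T C^{-1} (y - A x) plus an entropy
    term lam * sum(x * log x), keeping pixels positive and the total
    image intensity fixed (a crude sketch of entropy-regularized MAP)."""
    x = np.full(A.shape[1], total / A.shape[1])
    for _ in range(n_iters):
        resid = A @ x - y
        grad = A.T @ (noise_cov_inv @ resid) + lam * (np.log(x) + 1.0)
        x = np.clip(x - lr * grad, 1e-12, None)   # keep pixels positive
        x *= total / x.sum()                      # enforce total intensity
    return x
```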

Proceedings ArticleDOI
13 Dec 1989
TL;DR: A computational algorithm is presented for obtaining the maximum a posteriori estimates of the states of a stochastic dynamical system by programming a neural network, and appears to be useful for handling state estimation problems arising in real-world applications.
Abstract: A computational algorithm is presented for obtaining the maximum a posteriori estimates of the states of a stochastic dynamical system by programming a neural network. It is well known that for real-time control implementations, especially in such applications as multitarget tracking and vision-guided robots, the computational requirements for solving such state estimation problems attain particular significance, and parallel processing techniques are highly useful. The performance of the algorithm has been investigated by conducting several numerical experiments. It appears to be useful for handling state estimation problems arising in real-world applications.

Proceedings ArticleDOI
23 May 1989
TL;DR: A hierarchical reconstruction algorithm for use in noisy and limited-angle or sparse-angle tomography, which estimates the object's mass, center of mass, and convex hull from the available projections, and uses this information, along with fundamental mathematical constraints, to estimate a full set of smoothed projections.
Abstract: The authors describe and demonstrate a hierarchical reconstruction algorithm for use in noisy and limited-angle or sparse-angle tomography. The algorithm estimates the object's mass, center of mass, and convex hull from the available projections, and uses this information, along with fundamental mathematical constraints, to estimate a full set of smoothed projections. The mass and center of mass are estimated using a maximum-likelihood (ML) estimator derived from the principles of consistency of the Radon transform. The convex hull estimate is produced by first estimating the positions of support lines of the object from each available projection and then estimating the overall convex hull using ML or maximum a posteriori (MAP) techniques. The position of two support lines from a single projection is estimated using either a generalized likelihood ratio technique for estimating jumps in linear systems or a support-width penalty method that uses Akaike's model-order estimation technique.

Proceedings ArticleDOI
03 Apr 1989
TL;DR: In this article, the original image and the additive noise are assumed to be zero-mean Gaussian random processes, and maximum likelihood estimation is used to find the unknown parameters.
Abstract: This paper deals with simultaneous identification and restoration of images. By identification, we mean the estimation of the parameters characterizing the degradation mechanisms. The original image and the additive noise are assumed to be zero-mean Gaussian random processes. Their autocovariance matrices are unknown parameters. Blurring is part of the degradation. It is specified by its point spread function, which is also an unknown parameter to be estimated. Maximum likelihood estimation is used to find those unknown parameters. In turn, the EM algorithm is used to find the maximum likelihood estimates. In applying the EM algorithm, the observed image is treated as the incomplete data, which turns out to be a linear transformation of the complete data. Different choices of complete data are investigated. Under the assumption that the image covariance and distortion matrices are circulant, the estimation of the unknown parameters becomes feasible. Explicit iterative expressions are derived for the estimation. The restored image is computed in the E-step of the EM algorithm.

Journal ArticleDOI
TL;DR: In this article, a population that can be divided into two sub-populations, each representing a different cause of failure, is considered, and a comparison of Bayes estimation and maximum likelihood estimation is made through Monte Carlo simulation.

Journal ArticleDOI
TL;DR: It is shown how the dynamic programming methodology can be extended to estimate both the nuisance parameters and the Markov sequence, using a combined maximum-likelihood and MAP framework, and is efficient relative to other possible solutions.
Abstract: The dynamic programming approach for maximum a posteriori (MAP) estimation of Markov sequences is frequently proposed for problems in control theory, communications, and signal processing. It is usually assumed that the observation sequence is a perfectly known function of the Markov sequence of interest, except for some additive noise with known statistics. However, often the observation is not only a function of the Markov sequence but also of a vector of unknown nuisance parameters. It is shown how the dynamic programming methodology can be extended to estimate both the nuisance parameters and the Markov sequence, using a combined maximum-likelihood and MAP framework. The technique is efficient relative to other possible solutions. The problem of detecting and tracking moving targets observed by imaging sensors is used to demonstrate the efficiency of the procedure.

Proceedings ArticleDOI
23 May 1989
TL;DR: A compound Gauss-Markov random field (CGMRF) that models nonstationarity in images is proposed for the segmentation and restoration of blurred and noisy images and results on segmenting and restoring a noisy image are presented.
Abstract: A compound Gauss-Markov random field (CGMRF) that models nonstationarity in images is proposed for the segmentation and restoration of blurred and noisy images. At the top level of the CGMRF, a label process, which segments the image into K regions, is modeled by a Gibbs random field (GRF). At the bottom level, pixel intensities in each region are modeled by a stationary, noncausal GMRF. Maximum a posteriori (MAP) estimates of segmented and restored images are obtained by maximizing their joint a posteriori distribution, using model parameters identified from the noisy image. A stochastic relaxation method is used for optimization. For faster convergence, deterministic relaxation is implemented. Experimental results on segmenting and restoring a noisy image are presented.

Proceedings ArticleDOI
23 May 1989
TL;DR: The authors present a single formulation for constrained imaging that fuses the problem of joint estimation of the continuous parameters using MAP (maximum a posteriori) and conditional-mean estimators with that of performing generalized Bayes hypothesis testing for the symbolic imaging variables.
Abstract: The authors present a single formulation for constrained imaging that fuses the problem of joint estimation of the continuous parameters using MAP (maximum a posteriori) and conditional-mean estimators with that of performing generalized Bayes hypothesis testing for the symbolic imaging variables. Coupling this with recent results on representing regular grammars via Gibbs distributions makes it possible to incorporate into a single hierarchical framework the stochastic constraints relevant to continuous-valued parameters as well as language-theoretic constraints on the symbolic variables. The authors also present a method for performing the required computations on a massively parallel architecture, which makes it possible to update every variable at every level in the hierarchy in parallel. The conclusions obtained are supported with results for a Poisson imaging problem computed on a DAP-500 massively parallel processor with 1024 processing elements.

Journal ArticleDOI
TL;DR: A separation principle is developed, which indicates that one can estimate multichannel Gaussian amplitudes and Bernoulli events separately, and approaches for estimation of these quantities are discussed.
Abstract: General problems and solutions are described for maximum a posteriori estimation of multichannel Bernoulli-Gaussian sequences, which are inputs to a linear discrete-time multivariable system. The authors first develop a separation principle, which indicates that one can estimate multichannel Gaussian amplitudes and Bernoulli events separately. They then discuss approaches for estimation of these quantities.

Journal ArticleDOI
TL;DR: In this article, a Bayesian approach is used to estimate the number of trials n from a random sample of size k drawn from a binomial distribution with parameters n and θ (the success probability), where both θ and n are unknown.
Abstract: Let S1, …, Sk be a random sample of size k from a binomial distribution with parameters n and θ, the success probability. We are interested in estimating n when both θ and n are unknown. We use a Bayesian approach to estimating n. Even though, from the Bayesian viewpoint, the number of trials is a discrete random variable N, we take a continuous prior distribution for N. A justification for using this continuous prior distribution is given. Assuming the quadratic loss function, the mean of the posterior distribution of N is the Bayes estimator of n. The Bayes estimator does not possess a closed form. It is evaluated by using the Laguerre-Gauss quadrature. A Monte Carlo study of the Bayes estimator, the stable versions of the method-of-moments and maximum likelihood estimators (Olkin, Petkau and Zidek, 1981), and the Carroll-Lombard (1985) estimator is made. The numerical work indicates that the Bayes estimator is a stable estimator and, in some cases, is superior to other n estimators in terms of the mean squared error a...
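
A toy version of the quadrature step might look as follows: treat n as continuous by writing the binomial coefficient with gamma functions, place a simple shifted exponential prior on n, and compute the posterior mean with Gauss-Laguerre nodes (via numpy and scipy.special.gammaln). Unlike the paper, θ is held fixed here, and the prior is chosen purely for illustration:

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss
from scipy.special import gammaln

def bayes_n_estimate(s, theta=0.5, deg=64):
    """Posterior mean of a continuous n given binomial counts s, with a
    shifted Exp(1) prior on n starting at max(s), integrated by
    Gauss-Laguerre quadrature (a simplified sketch, theta assumed known)."""
    s = np.asarray(s, dtype=float)
    nodes, weights = laggauss(deg)      # approximates integral of e^(-t) f(t)

    def log_lik(n):
        # continuous-n binomial log-likelihood, valid for n >= max(s)
        return sum(gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
                   + k * np.log(theta) + (n - k) * np.log1p(-theta)
                   for k in s)

    shift = s.max()                     # likelihood support starts at max(s)
    f = np.array([np.exp(log_lik(shift + t)) for t in nodes])
    return np.sum(weights * (shift + nodes) * f) / np.sum(weights * f)
```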

Journal ArticleDOI
TL;DR: In this article, the authors consider a system of components whose lifetimes are independent and identically distributed exponential random variables. They derive the Bayes estimator of the system reliability via Monte Carlo simulation and compare the estimates for parallel and series systems under specific choices of the lifetime distributions and the environment.

Journal ArticleDOI
TL;DR: In this article, a Monte Carlo simulation is performed to compare the Bayes and maximum likelihood estimators of the parameters and the reliability function of an item; the Bayes estimators are shown to be better than the maximum likelihood estimators in the sense of smaller root mean squared errors.