
Showing papers on "Maximum a posteriori estimation published in 1995"


Journal ArticleDOI
TL;DR: This work exploits the fact that the marginal density can be expressed as the prior times the likelihood function over the posterior density, so that Bayes factors for model comparisons can be routinely computed as a by-product of the simulation.
Abstract: In the context of Bayes estimation via Gibbs sampling, with or without data augmentation, a simple approach is developed for computing the marginal density of the sample data (marginal likelihood) given parameter draws from the posterior distribution. Consequently, Bayes factors for model comparisons can be routinely computed as a by-product of the simulation. Hitherto, this calculation has proved extremely challenging. Our approach exploits the fact that the marginal density can be expressed as the prior times the likelihood function over the posterior density. This simple identity holds for any parameter value. An estimate of the posterior density is shown to be available if all complete conditional densities used in the Gibbs sampler have closed-form expressions. To improve accuracy, the posterior density is estimated at a high-density point, and the numerical standard error of the resulting estimate is derived. The ideas are applied to probit regression and finite mixture models.
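To make the identity concrete, here is a minimal Python sketch (not the paper's implementation; the helper names are hypothetical, and a toy conjugate normal model is used so the exact answer is available for comparison):

```python
# Marginal likelihood from the identity m(y) = f(y|t) * pi(t) / pi(t|y),
# evaluated at any point t, preferably one of high posterior density.
import numpy as np
from scipy import stats

def log_marginal_likelihood(log_lik, log_prior, log_post, theta_star):
    """log m(y) = log f(y|t*) + log pi(t*) - log pi(t*|y)."""
    return log_lik(theta_star) + log_prior(theta_star) - log_post(theta_star)

# Toy conjugate model: y ~ N(theta, 1), theta ~ N(0, 1),
# so the posterior is N(y/2, 1/2) and the true marginal is N(0, 2).
y = 1.3
log_lik = lambda t: stats.norm.logpdf(y, loc=t, scale=1.0)
log_prior = lambda t: stats.norm.logpdf(t, loc=0.0, scale=1.0)
log_post = lambda t: stats.norm.logpdf(t, loc=y / 2, scale=np.sqrt(0.5))

print(log_marginal_likelihood(log_lik, log_prior, log_post, theta_star=y / 2))
print(stats.norm.logpdf(y, loc=0.0, scale=np.sqrt(2.0)))  # exact, for comparison
```

In a real Gibbs-sampling application, log_post would be replaced by the posterior density estimate built from the complete conditionals, and Bayes factors follow by differencing two such log marginal likelihoods.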

1,954 citations


Journal ArticleDOI
TL;DR: Preliminary numerical testing of the algorithms on simulated data suggests that the convex algorithm and the ad hoc gradient algorithm are computationally superior to the EM algorithm.
Abstract: This paper reviews and compares three maximum likelihood algorithms for transmission tomography. One of these algorithms is the EM algorithm, one is based on a convexity argument devised by De Pierro (see IEEE Trans. Med. Imaging, vol.12, p.328-333, 1993) in the context of emission tomography, and one is an ad hoc gradient algorithm. The algorithms enjoy desirable local and global convergence properties and combine gracefully with Bayesian smoothing priors. Preliminary numerical testing of the algorithms on simulated data suggests that the convex algorithm and the ad hoc gradient algorithm are computationally superior to the EM algorithm. This superiority stems from the larger number of exponentiations required by the EM algorithm. The convex and gradient algorithms are well adapted to parallel computing.

368 citations


15 Feb 1995
TL;DR: This study is a probabilistic approach to the measurement-to-track assignment problem, where measurements are not assigned to tracks as in traditional multi-hypothesis tracking algorithms; instead, the probability that each measurement belongs to each track is estimated using a maximum a posteriori (MAP) method.
Abstract: In a multitarget, multimeasurement environment, knowledge of the measurement-to-track assignments is typically unavailable to the tracking algorithm. This study is a probabilistic approach to the measurement-to-track assignment problem. Measurements are not assigned to tracks as in traditional multi-hypothesis tracking (MHT) algorithms; instead, the probability that each measurement belongs to each track is estimated using a maximum a posteriori (MAP) method. These measurement-to-track probability estimates are intrinsic to the multitarget tracker called the probabilistic multi-hypothesis tracking (PMHT) algorithm. The PMHT algorithm is computationally practical because it requires neither enumeration of measurement-to-track assignments nor pruning. The PMHT algorithm is an optimal MAP multitarget tracking algorithm.
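The core of the PMHT idea, soft rather than hard measurement-to-track assignment, can be sketched in a few lines. This is a hedged illustration assuming a linear-Gaussian measurement model; all names and numbers are illustrative, not from the report:

```python
# Posterior probability that measurement z_m belongs to track t:
# proportional to the track's prior weight times the measurement likelihood.
import numpy as np
from scipy.stats import multivariate_normal

def assignment_probabilities(measurements, track_means, track_cov, priors):
    """Return an (M, T) matrix of soft assignments; each row sums to 1."""
    M, T = len(measurements), len(track_means)
    w = np.zeros((M, T))
    for t in range(T):
        w[:, t] = priors[t] * multivariate_normal.pdf(
            measurements, mean=track_means[t], cov=track_cov)
    return w / w.sum(axis=1, keepdims=True)

z = np.array([[0.1, 0.2], [4.9, 5.2], [2.4, 2.6]])   # three measurements
mu = [np.zeros(2), np.array([5.0, 5.0])]             # two predicted tracks
print(assignment_probabilities(z, mu, np.eye(2), priors=[0.5, 0.5]))
```

Because every measurement contributes fractionally to every track, no enumeration or pruning of assignment hypotheses is needed, which is what makes the approach computationally practical.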

284 citations


Journal ArticleDOI
TL;DR: A theoretical framework for Bayesian adaptive training of the parameters of a discrete hidden Markov model and a semi-continuous HMM with Gaussian mixture state observation densities is presented and the proposed MAP algorithms are shown to be effective especially in the cases in which the training or adaptation data are limited.
Abstract: A theoretical framework for Bayesian adaptive training of the parameters of a discrete hidden Markov model (DHMM) and of a semi-continuous HMM (SCHMM) with Gaussian mixture state observation densities is presented. In addition to formulating the forward-backward MAP (maximum a posteriori) and the segmental MAP algorithms for estimating the above HMM parameters, a computationally efficient segmental quasi-Bayes algorithm for estimating the state-specific mixture coefficients in SCHMM is developed. For estimating the parameters of the prior densities, a new empirical Bayes method based on the moment estimates is also proposed. The MAP algorithms and the prior parameter specification are directly applicable to training speaker adaptive HMMs. Practical issues related to the use of the proposed techniques for HMM-based speaker adaptation are studied. The proposed MAP algorithms are shown to be effective especially in the cases in which the training or adaptation data are limited.
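The basic MAP update behind such adaptation schemes is short enough to show directly. The following is a hedged sketch of the conjugate MAP re-estimate of a Gaussian mean, with tau the prior weight and gamma the state-occupancy weights (e.g., forward-backward posteriors); names and numbers are illustrative:

```python
import numpy as np

def map_mean_update(mu_prior, frames, gamma, tau=10.0):
    """mu_MAP = (tau*mu_prior + sum_t gamma_t*x_t) / (tau + sum_t gamma_t)."""
    gamma = np.asarray(gamma, dtype=float)[:, None]
    return (tau * mu_prior + (gamma * frames).sum(axis=0)) / (tau + gamma.sum())

mu0 = np.zeros(3)                       # speaker-independent prior mean
x = np.random.randn(20, 3) + 1.0        # a small amount of adaptation data
print(map_mean_update(mu0, x, gamma=np.ones(20)))
```

With little data the estimate stays close to the prior mean, and it approaches the sample mean as data accumulates, which is exactly why MAP training behaves well when adaptation data are limited.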

93 citations


Journal ArticleDOI
TL;DR: This technique is multiresolution based and relies on converting speckle images with Rayleigh statistics into subsampled images with Gaussian statistics, which reduces computation time and allows accurate parameter estimation for a probabilistic segmentation algorithm.

85 citations


Proceedings ArticleDOI
23 Oct 1995
TL;DR: An approximate ML estimator for the hyperparameters of a Gibbs prior which can be computed simultaneously with a maximum a posteriori (MAP) image estimate is described.
Abstract: We describe an approximate ML estimator for the hyperparameters of a Gibbs prior which can be computed simultaneously with a maximum a posteriori (MAP) image estimate. The algorithm is based on a mean field approximation technique through which multidimensional Gibbs distributions are approximated by a separable function equal to a product of one-dimensional densities. We show how this approach can be used to simplify the ML estimation problem. We also show how the Gibbs-Bogoliubov-Feynman bound can be used to optimize the approximation for a restricted class of problems.

67 citations


Patent
28 Feb 1995
TL;DR: In this paper, the received L1 signal is correlated with a locally generated replica of the P-code, and passed through a bandpass filter having a bandwidth approximating the bandwidth of the unknown modulation code, and the decorrelated signals are then latched in such a way as to account for the differential ionospheric refraction of the L1 and L2 signals.
Abstract: A Global Positioning System (GPS) commercial receiver including a digital processor disposed to utilize the energy of both the L1 and L2 GPS satellite signals in order to derive an estimate of an unknown security code used to modulate the signals. In processing the signal energy from the received L1 and L2 signals in accordance with statistical Maximum A Posteriori (MAP) estimation theory, the received L1 signal is correlated with a locally generated replica of the P-code, and passed through a bandpass filter having a bandwidth approximating the bandwidth of the unknown modulation code. The received L2 signal is similarly correlated and filtered, and the decorrelated signals are then latched in such a way as to account for the differential ionospheric refraction of the L1 and L2 signals. The bandlimited L2 signal is used to produce quadrature error signals related to phase difference between the L2 signal and a locally generated L2 replica. The error signals are integrated over an integration period which approximates the bit period of the unknown code, with the resulting estimates of the bits of the unknown code being combined with corresponding L1 channel code bit estimates weighted by a factor proportional to the difference in received L1 and L2 signal power. The hyperbolic tangent of each combined W-code bit estimate is then computed, with the result being multiplied by one of the integrated error signals. The resulting control voltage is then used to adjust the locally generated L2 carrier phase.
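The hyperbolic-tangent step has a simple statistical reading: for a ±1 bit observed in Gaussian noise on two channels, the conditional-mean (soft) estimate of the bit is the tanh of the SNR-weighted sum of the observations. A hedged toy sketch, with weights and noise levels that are illustrative rather than the patent's values:

```python
import numpy as np

def soft_bit_estimate(y_l1, y_l2, w_l1, w_l2):
    """E[bit | y_l1, y_l2] for a +/-1 bit in Gaussian noise; w = 1/sigma^2."""
    return np.tanh(w_l1 * y_l1 + w_l2 * y_l2)

bits = np.sign(np.random.randn(8))          # unknown +/-1 code bits
y1 = bits + 0.8 * np.random.randn(8)        # integrated L1 channel estimates
y2 = bits + 1.5 * np.random.randn(8)        # noisier L2 channel estimates
print(soft_bit_estimate(y1, y2, w_l1=1 / 0.8**2, w_l2=1 / 1.5**2))
```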

63 citations


Journal ArticleDOI
TL;DR: It is proposed that texture segmentation is part of the early visual system's overall strategy to infer the surfaces of objects in a visual scene, and the Bayesian inference paradigm is used to formulate the texture segmentation problem as a maximum a posteriori surface inference problem.

56 citations


Journal ArticleDOI
TL;DR: A recursive model-based maximum a posteriori (MAP) estimator that simultaneously estimates the displacement vector field (DVF) and the intensity field from a noisy-blurred image sequence and establishes a link between the two estimators.
Abstract: We develop a recursive model-based maximum a posteriori (MAP) estimator that simultaneously estimates the displacement vector field (DVF) and the intensity field from a noisy-blurred image sequence. Current motion-compensated spatio-temporal noise filters treat the estimation of the DVF as a preprocessing step. Generally, no attempt is made to verify the accuracy of these estimates prior to their use in the filter. By simultaneously estimating these two fields, we establish a link between the two estimators. It is through this link that the DVF estimate and its corresponding accuracy information are shared with the intensity estimator, and vice versa. To model the DVF and the intensity field, we use coupled Gauss-Markov (CGM) models. A CGM model consists of two levels: an upper level, which is made up of several submodels with various characteristics, and a lower level or line field, which governs the transitions between the submodels. The CGM models are well suited for estimating the displacement and intensity fields since the resulting estimates preserve the boundaries between the stationary areas present in both fields. Detailed line fields are proposed for the modeling of these boundaries, which also take into account the correlations that exist between the two fields. A Kalman-type estimator results, followed by a decision criterion for choosing the appropriate set of line fields. Several experiments using noisy and noisy-blurred image sequences demonstrate the superior performance of the proposed algorithm with respect to prediction error and mean-square error.

53 citations


Proceedings ArticleDOI
09 May 1995
TL;DR: The paper presents a fast and incremental speaker adaptation method called MAP/VFS, which combines maximum a posteriori (MAP) estimation, or in other words Bayesian learning, with vector field smoothing (VFS).
Abstract: The paper presents a fast and incremental speaker adaptation method called MAP/VFS, which combines maximum a posteriori (MAP) estimation, or in other words Bayesian learning, with vector field smoothing (VFS). The point is that MAP is an intra-class training scheme while VFS is an inter-class smoothing technique. This is a basic technique for on-line adaptation, which will be important in constructing a practical speech recognition system. The speaker adaptation speed of incremental MAP is experimentally shown to be significantly accelerated by the use of VFS in word-by-word adaptation. The recognition performance of MAP is consistently improved and stabilized by VFS. The word error reduction rate achieved by incrementally adapting on a few words of sample data is about 22%.

45 citations


Journal ArticleDOI
TL;DR: The Cramer-Rao bound for unbiased dipole location estimation is derived under the assumption of a general head model parameterized by deterministic and stochastic parameters, and random variations in both the multiple sphere radii and the layer conductivities are shown to have the most impact on localization performance in high SNR regions.
Abstract: The Cramer-Rao bound for unbiased dipole location estimation is derived under the assumption of a general head model parameterized by deterministic and stochastic parameters. The expression thus characterizes fundamental limits on EEG dipole localization performance due to the effects of both model uncertainty and statistical measurement noise. Expressions are derived for the cases of multivariate Gaussian and gamma distribution priors, and examples are given to illustrate the derived bounds when the radii and conductivities of a four-concentric sphere head model are allowed to be random. The joint MAP estimate of location/model parameters is then examined as a means of achieving robustness to deviations from an ideal head model. Random variations in both the multiple sphere radii and the layer conductivities are shown, via the stochastic Cramer-Rao bounds and Monte Carlo simulation of the MAP estimator, to have the most impact on localization performance in high SNR regions, where finite sample effects are not the limiting factors. This corresponds most often to spatial regions that are close to the scalp electrodes.
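The bound computation itself reduces to a small amount of linear algebra. A hedged sketch for the Gaussian-noise case, where the Jacobian of the forward model plays the central role and a Gaussian prior on random model parameters simply adds its inverse covariance to the information matrix (the Jacobian below is a toy stand-in, not an EEG forward model):

```python
import numpy as np

def crb(jacobian, noise_var, prior_cov=None):
    """Cramer-Rao bound: inverse of the (possibly hybrid) information matrix."""
    info = jacobian.T @ jacobian / noise_var
    if prior_cov is not None:               # contribution of random parameters
        info = info + np.linalg.inv(prior_cov)
    return np.linalg.inv(info)

J = np.random.randn(32, 3)                  # 32 sensors, 3 location parameters
print(np.sqrt(np.diag(crb(J, noise_var=0.01))))   # per-parameter RMS bounds
```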

Journal ArticleDOI
TL;DR: This paper presents a novel approach to blind equalization (deconvolution), which is based on direct examination of possible input sequences, and does not rely on a model of the approximative inverse of the channel dynamics.
Abstract: This paper presents a novel approach to blind equalization (deconvolution), which is based on direct examination of possible input sequences. In contrast to many other approaches, it does not rely on a model of the approximate inverse of the channel dynamics. To start with, the blind equalization identifiability problem for a noise-free finite impulse response channel model is investigated. An algorithm-independent necessary condition on the input for blind deconvolution is derived. This condition is expressed as an information measure of the input sequence. A sufficient condition for identifiability is also inferred, which imposes a constraint on the true channel dynamics. The analysis motivates a recursive algorithm in which all permissible input sequences are examined. The exact solution is guaranteed to be found as soon as this is possible. An upper bound on the computational complexity of the algorithm is given. This algorithm is then generalized to cope with time-varying infinite impulse response channel models with additive noise. The estimated sequence is an arbitrarily good approximation of the maximum a posteriori estimate. The proposed method is evaluated on a Rayleigh fading communication channel. The simulation results indicate fast convergence properties and good tracking abilities.
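A toy version of the enumeration idea is easy to write down for the noise-free FIR case: for each permissible ±1 input sequence, fit the best channel by least squares and keep the inputs that reproduce the received data exactly. This is a hedged illustration, not the paper's recursive algorithm; note the inherent sign ambiguity (u, h) versus (-u, -h) that identifiability conditions must address:

```python
import itertools
import numpy as np

def consistent_inputs(received, n, L):
    """Return (input, channel) pairs that exactly explain `received`."""
    hits = []
    for bits in itertools.product([-1, 1], repeat=n):
        # Convolution matrix U so that received ~= U @ h for a length-L channel
        U = np.array([[bits[i - j] if 0 <= i - j < n else 0.0
                       for j in range(L)] for i in range(n + L - 1)])
        h = np.linalg.lstsq(U, received, rcond=None)[0]
        if np.allclose(U @ h, received):
            hits.append((bits, np.round(h, 3)))
    return hits

true_u, true_h = np.array([1, -1, -1, 1, 1]), np.array([1.0, 0.4])
print(consistent_inputs(np.convolve(true_u, true_h), n=5, L=2))
```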

Proceedings ArticleDOI
09 May 1995
TL;DR: A novel speech adaptation algorithm that enables adaptation even with a small amount of speech data and a higher phoneme recognition performance was obtained by using this algorithm than with individual methods, showing the superiority of the proposed algorithm.
Abstract: The paper proposes a novel speech adaptation algorithm that enables adaptation even with a small amount of speech data. It is a unified algorithm of two efficient conventional speaker adaptation techniques: maximum a posteriori (MAP) estimation and transfer vector field smoothing (VFS). The algorithm is designed to avoid the weaknesses of both MAP and VFS. Higher phoneme recognition performance was obtained with this algorithm than with either method individually, showing the superiority of the proposed algorithm. The phoneme recognition error rate was reduced from 22.0% to 19.1% using this algorithm for a speaker-independent model with seven adaptation phrases. Furthermore, a priori knowledge concerning speaker characteristics was obtained for this algorithm by generating an initial HMM with the speech of a selected speaker cluster based on speaker similarity. Adaptation using this initial model reduced the phoneme recognition error rate from 22.0% to 17.7%.

Patent
05 Jul 1995
TL;DR: In this article, the mean vector of the corresponding reference phoneme model is estimated by a maximum a posteriori estimation method. And the adapted model is further smoothed by the vector field smoothing method.
Abstract: Training data is LPC analyzed to obtain a feature parameter vector sequence, which is subjected to Viterbi segmentation using reference phoneme models to separate phonemes. Each piece of phoneme data is used to estimate a mean vector of the corresponding reference phoneme model by a maximum a posteriori estimation method. The adapted phoneme model and the corresponding reference phoneme model are used to estimate a mean vector for an unadapted phoneme model through interpolation by a vector field smoothing method. Alternatively, the mean vector of the adapted phoneme model is further smoothed by the vector field smoothing method. In this way, an adapted model is obtained whose parameters are the mean vector obtained for each phoneme together with the other corresponding parameters.
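The smoothing step admits a compact sketch: the transfer vector (adapted mean minus reference mean) for an unadapted phoneme is interpolated from the transfer vectors of adapted phonemes, weighted by proximity in the feature space. The Gaussian weighting below is illustrative, not the patent's exact interpolation formula:

```python
import numpy as np

def interpolate_transfer(ref_mean, adapted_refs, adapted_means, sigma=1.0):
    """Estimate a mean for an unadapted model from neighbours' transfer vectors."""
    transfers = adapted_means - adapted_refs              # observed mean shifts
    d2 = ((adapted_refs - ref_mean) ** 2).sum(axis=1)     # squared distances
    w = np.exp(-d2 / (2 * sigma ** 2))                    # proximity weights
    return ref_mean + (w[:, None] * transfers).sum(axis=0) / w.sum()

refs = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])    # adapted reference means
adapted = refs + np.array([[0.5, 0.1], [0.4, 0.2], [0.6, 0.0]])
print(interpolate_transfer(np.array([1.0, 1.0]), refs, adapted))
```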

Journal ArticleDOI
TL;DR: A maximum a posteriori (MAP) approach for iterative reconstruction based on a weighted least-squares conjugate gradient (WLS-CG) algorithm; the MAP-CG algorithm requires 10%-25% of the processing time of EM techniques and provides images of comparable or superior quality.
Abstract: We have derived a maximum a posteriori (MAP) approach for iterative reconstruction based on a weighted least-squares conjugate gradient (WLS-CG) algorithm. The WLS-CG algorithm has been shown to have initial convergence rates up to 10× faster than the maximum-likelihood expectation maximization (ML-EM) algorithm, but WLS-CG suffers from rapidly increasing image noise at higher iteration numbers. In our MAP-CG algorithm, the increasing noise is controlled by a Gibbs smoothing prior, resulting in stable, convergent solutions. Our formulation assumes a Gaussian noise model for the likelihood function. When a linear transformation of the pixel space is performed (the "relaxation" acceleration method), the MAP-CG algorithm obtains a low-noise, stable solution (one that does not change with further iterations) in 10-30 iterations, compared to 100-200 iterations for MAP-EM. Each iteration of MAP-CG requires approximately the same amount of processing time as one iteration of ML-EM or MAP-EM. We show that the use of an initial image estimate obtained from a single iteration of the Chang method helps the algorithm to converge faster when acceleration is not used, but does not help when acceleration is applied. While both the WLS-CG and MAP-CG methods suffer from the potential for obtaining negative pixel values in the iterated image estimates, the use of the Gibbs prior substantially reduces the number of pixels with negative values and restricts them to regions of little or no activity. We use SPECT data from simulated hot-sphere phantoms and from patient studies to demonstrate the advantages of the MAP-CG algorithm. We conclude that the MAP-CG algorithm requires 10%-25% of the processing time of EM techniques, and provides images of comparable or superior quality.
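For a quadratic (Gaussian-type) smoothing prior, the MAP-CG objective leads to symmetric positive-definite normal equations that conjugate gradients solves directly. A hedged one-dimensional sketch, with a toy system matrix standing in for the SPECT projector:

```python
import numpy as np
from scipy.sparse.linalg import cg

n = 64
A = np.random.rand(80, n)                      # toy projection matrix
x_true = np.zeros(n); x_true[20:40] = 1.0      # piecewise-constant activity
y = A @ x_true + 0.05 * np.random.randn(80)    # noisy measurements
W = np.eye(80)                                 # noise weighting (identity here)
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian prior
beta = 0.5                                     # prior strength

# Minimise (y - Ax)^T W (y - Ax) + beta * x^T L x via the normal equations
M = A.T @ W @ A + beta * L
b = A.T @ W @ y
x_map, info = cg(M, b, maxiter=200)
print(info, np.round(x_map[18:42], 2))         # info == 0 means converged
```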

Journal ArticleDOI
TL;DR: A new criterion for classifying multispectral remote sensing images or textured images by using spectral and spatial information is proposed and a stepwise classification algorithm is derived.
Abstract: A new criterion for classifying multispectral remote sensing images or textured images by using spectral and spatial information is proposed. The images are modeled with a hierarchical Markov Random Field (MRF) model that consists of the observed intensity process and the hidden class label process. The class labels are estimated according to the maximum a posteriori (MAP) criterion, but some reasonable approximations are used to reduce the computational load. A stepwise classification algorithm is derived and is confirmed by simulation and experimental results.
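A hedged sketch of what such a stepwise MAP scheme looks like for a single-band image: each pixel's label is updated in turn to minimize a local posterior energy combining a Gaussian class likelihood with a Potts-type neighbour-agreement term. The class means and smoothing weight are illustrative:

```python
import numpy as np

def stepwise_classify(img, means, sigma=0.5, beta=1.5, sweeps=5):
    """ICM-style stepwise approximation to the MAP labelling."""
    labels = np.abs(img[..., None] - means).argmin(-1)   # ML initialisation
    H, W = img.shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                cost = (img[i, j] - means) ** 2 / (2 * sigma ** 2)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    if 0 <= i + di < H and 0 <= j + dj < W:
                        cost -= beta * (np.arange(len(means))
                                        == labels[i + di, j + dj])
                labels[i, j] = cost.argmin()
    return labels

img = np.kron([[0, 1], [1, 0]], np.ones((8, 8))) + 0.4 * np.random.randn(16, 16)
print(stepwise_classify(img, means=np.array([0.0, 1.0])))
```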

Proceedings ArticleDOI
20 Jun 1995
TL;DR: In simulations, it is shown that the MLM method performs better than the MAP estimator, and better than two standard color constancy algorithms, and may prove useful in other vision problems as well.
Abstract: Vision algorithms are often developed in a Bayesian framework. Two estimators are commonly used: maximum a posteriori (MAP), and minimum mean squared error (MMSE). We argue that neither is appropriate for perception problems. The MAP estimator makes insufficient use of structure in the posterior probability. The squared error penalty of the MMSE estimator does not reflect typical penalties. We describe a new estimator, which we call maximum local mass (MLM) [10, 26, 65], which integrates the local probability density. The MLM method is sensitive to the local structure of the posterior probability, which MAP is not. The new method uses an optimality criterion that is appropriate for perception tasks: it finds the most probable approximately correct answer. For the case of low observation noise, we provide an efficient approximation. We apply this new estimator to color constancy. An unknown illuminant falls on surfaces of unknown colors. We seek to estimate both the illuminant spectrum and the surface spectra from photosensor responses which depend on the product of the unknown spectra. In simulations, we show that the MLM method performs better than the MAP estimator, and better than two standard color constancy algorithms. The MLM method may prove useful in other vision problems as well.
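The difference between MAP and MLM is easy to see in one dimension: MLM is the argmax of the posterior convolved with a local kernel, so a broad mode can beat a taller but narrower spike. A hedged toy illustration (the bimodal posterior and kernel width are invented for the example):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

theta = np.linspace(0, 10, 2001)
dx = theta[1] - theta[0]
# Tall narrow spike near 2 plus a broad mode near 7 (unnormalised posterior)
post = (0.25 * np.exp(-0.5 * ((theta - 2.0) / 0.05) ** 2) / 0.05
        + 0.75 * np.exp(-0.5 * ((theta - 7.0) / 1.0) ** 2) / 1.0)

map_est = theta[post.argmax()]                         # picks the spike
local_mass = gaussian_filter1d(post, sigma=0.5 / dx)   # integrate locally
mlm_est = theta[local_mass.argmax()]                   # picks the broad mode
print(f"MAP: {map_est:.2f}  MLM: {mlm_est:.2f}")
```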

Journal ArticleDOI
TL;DR: The multispectral model is used in a Bayesian algorithm for the restoration of color images, in which the resulting nonlinear estimates are shown to be quantitatively and visually superior to linear estimates generated by multichannel Wiener and least squares restoration.
Abstract: Multispectral images consist of multiple channels, each containing data acquired from a different band within the frequency spectrum. Since most objects emit or reflect energy over a large spectral bandwidth, there usually exists a significant correlation between channels. Due to often harsh imaging environments, the acquired data may be degraded by both blur and noise. Simply applying a monochromatic restoration algorithm to each frequency band ignores the cross-channel correlation present within a multispectral image. A Gibbs prior is proposed for multispectral data modeled as a Markov random field, containing both spatial and spectral cliques. Spatial components of the model use a nonlinear operator to preserve discontinuities within each frequency band, while spectral components incorporate nonstationary cross-channel correlations. The multispectral model is used in a Bayesian algorithm for the restoration of color images, in which the resulting nonlinear estimates are shown to be quantitatively and visually superior to linear estimates generated by multichannel Wiener and least squares restoration.

Journal Article
TL;DR: An overview of the optimization of drug therapy is presented, with special reference to maximum a posteriori probability (MAP) Bayesian fitting, which results in improved patient outcome through improved efficacy of therapy and a reduction of adverse reactions, and in reduced costs, mainly due to a reduction of hospitalization.
Abstract: Optimal drug therapy can only be achieved if a drug is given in the right dosage regimen. The dosage regimen therefore needs to be optimized using the available information about the drug, the patient, and the disease. The optimization of drug therapy comprises two major steps: first, the clinician should define explicit therapeutic goals for each patient individually; second, a strategy to achieve these goals with the greatest possible precision should be chosen. An overview of the optimization of drug therapy is presented, with special reference to maximum a posteriori probability (MAP) Bayesian fitting. Drug dosage optimization requires (1) measurement of a performance index related to the therapeutic goal, generally one or more plasma concentration measurements; (2) population pharmacokinetic parameters, including mean values, standard deviations, covariances, and information on the statistical distribution; and (3) reliable software for adaptive control strategy and optimal dosage regimen calculation. The benefit of optimizing drug therapy by adaptive control using MAP Bayesian fitting has been proven: it improves patient outcome through greater efficacy of therapy and fewer adverse reactions, and it reduces costs, mainly by reducing hospitalization. Newer strategies might replace the MAP Bayesian fitting procedure if their advantage is demonstrated convincingly and if reliable, user-friendly software becomes available.
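The MAP Bayesian fitting step itself is a small optimization: the posterior cost trades off the fit to measured plasma concentrations against the deviation of the patient's parameters from the population values, each scaled by its variance. A hedged sketch with a one-compartment bolus model; all parameter values, observations, and names are illustrative only:

```python
import numpy as np
from scipy.optimize import minimize

def conc(params, dose, times):
    """One-compartment IV bolus model: C(t) = (dose/V) * exp(-(CL/V) * t)."""
    cl, v = params
    return (dose / v) * np.exp(-(cl / v) * times)

pop_mean, pop_sd = np.array([5.0, 40.0]), np.array([1.5, 8.0])   # CL, V
obs_t = np.array([1.0, 4.0, 8.0])            # sampling times (h)
obs_c = np.array([2.1, 1.5, 0.9])            # measured concentrations
assay_sd = 0.15

def map_cost(params):
    fit = ((obs_c - conc(params, dose=100.0, times=obs_t)) / assay_sd) ** 2
    prior = ((params - pop_mean) / pop_sd) ** 2
    return fit.sum() + prior.sum()

patient = minimize(map_cost, pop_mean, method="Nelder-Mead").x
print(patient)        # individualised CL and V, used to compute the next dose
```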

Journal ArticleDOI
TL;DR: The estimation method addresses a wide class of models widely used in signal and image processing, and it is shown that a non-parametric estimation of the probability density of the non-observed variables can be performed.

Journal ArticleDOI
Michael Lavine1
TL;DR: In this article, a method of approximating the likelihood function is proposed and shown to provide conservative inferences; the method applies to the case where a prior distribution is available only for a vector of quantiles of an uncertain distribution function.
Abstract: Let Z_1, ..., Z_n be a random sample from F, an uncertain one-dimensional distribution function, and suppose that a prior distribution is available only for θ, a vector of quantiles of F. Bayesian inference is difficult because the likelihood function is not fully specified. This paper considers a method of approximating the likelihood function and shows that it provides conservative inferences.

Journal ArticleDOI
TL;DR: A new algorithm for the approximation of the maximum a posteriori (MAP) restoration of noisy images, considered in a Bayesian setting, which runs in polynomial time and is based on the coding of the colours.
Abstract: We propose a new algorithm for the approximation of the maximum a posteriori (MAP) restoration of noisy images. The image restoration problem is considered in a Bayesian setting. We assume as prior distribution multicolour Markov random fields on a graph whose main restriction is the presence of only pairwise site interactions. The noise is modelled as a Bernoulli field. Computing the mode of the posterior distribution is NP-complete, i.e. it can (very likely) be done only in time exponential in the number of sites of the underlying graph. Our algorithm runs in polynomial time and is based on the coding of the colours. It produces an image with the following property: either a pixel is coloured with one of the possible colours or it is left blank. In the first case we prove that this is the colour of the site in the exact MAP restoration. The quality of the approximation is then measured by the number of sites left blank. We assess the performance of the new algorithm by numerical experiments on the simple three-colour Potts model. More rigorously, we present a probabilistic analysis of the algorithm. The results indicate that the approximation is quite often sufficiently good for the interpretation of the image.

Journal ArticleDOI
TL;DR: In this article, two spectral/spatial scene segmentation algorithms, sequential maximum a posteriori (SMAP) and the extraction and classification of homogeneous objects (ECHO), were compared with traditional maximum likelihood (ML) estimation in a supervised classification of multispectral data.
Abstract: Sequential maximum a posteriori (SMAP) and the extraction and classification of homogeneous objects (ECHO), two spectral/spatial scene segmentation algorithms, were compared with traditional maximum likelihood (ML) estimation in a supervised classification of multispectral data. SMAP generalized better than both ECHO and ML. Significant differences were found in all mean class classification accuracies: SMAP>ECHO>ML.

Book ChapterDOI
03 Apr 1995
TL;DR: The recognition process is formalized as a labelling problem whose solution, defined as the maximum a posteriori estimate of a Markovian random field (MRF) is obtained using simulated annealing.
Abstract: This paper presents a project aiming at the automatic detection and recognition of the human cortical sulci in a 3D magnetic resonance image. The first two steps of this project (automatic extraction of an attributed relational graph (ARG) representing the individual cortical topography, and constitution of a database of labelled ARGs) are briefly described. Then, a probabilistic structural model of the cortical topography is inferred from the database. This model, which is a structural prototype whose nodes can split into pieces according to syntactic constraints, relies on several original interpretations of the inter-individual structural variability of the cortical topography. This prototype is endowed with a random graph structure taking this anatomical variability into account. The recognition process is formalized as a labelling problem whose solution, defined as the maximum a posteriori estimate of a Markovian random field (MRF), is obtained using simulated annealing.
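The final optimization step is a standard Markov-field relabelling by simulated annealing, which can be sketched generically (this toy uses a neighbour-agreement Potts energy on a ring graph, not the paper's cortical model):

```python
import numpy as np

rng = np.random.default_rng(0)

def anneal_labels(adj, n_labels, n_steps=5000, t0=2.0, cooling=0.999):
    """Minimise a Potts energy by single-site Metropolis moves with cooling."""
    labels = rng.integers(n_labels, size=len(adj))
    local = lambda i, l: -sum(l == labels[j] for j in adj[i])  # local energy
    T = t0
    for _ in range(n_steps):
        i = int(rng.integers(len(adj)))
        new = int(rng.integers(n_labels))
        delta = local(i, new) - local(i, labels[i])
        if delta <= 0 or rng.random() < np.exp(-delta / T):
            labels[i] = new
        T *= cooling
    return labels

ring = {i: [(i - 1) % 12, (i + 1) % 12] for i in range(12)}   # toy graph
print(anneal_labels(ring, n_labels=3))
```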

Journal ArticleDOI
TL;DR: A recursive model-based algorithm for obtaining the maximum a posteriori estimate of the displacement vector field (DVF) from successive image frames of an image sequence is presented and demonstrates the superior performance of the proposed algorithm with respect to prediction error, interpolation error, and robustness to noise.
Abstract: A recursive model-based algorithm for obtaining the maximum a posteriori (MAP) estimate of the displacement vector field (DVF) from successive image frames of an image sequence is presented. To model the DVF, we develop a nonstationary vector field model called the vector coupled Gauss-Markov (VCGM) model. The VCGM model consists of two levels: an upper level, which is made up of several submodels with various characteristics, and a lower level or line process, which governs the transitions between the submodels. A detailed line process is proposed. The VCGM model is well suited for estimating the DVF since the resulting estimates preserve the boundaries between the differently moving areas in an image sequence. A Kalman type estimator results, followed by a decision criterion for choosing the appropriate line process. Several experiments demonstrate the superior performance of the proposed algorithm with respect to prediction error, interpolation error, and robustness to noise.


Journal ArticleDOI
17 Sep 1995
TL;DR: This paper proposes an efficient modified MAP algorithm for obtaining P_c for the outputs of convolutional inner decoders for the purposes of soft decision decoding.
Abstract: The reliability measure for a decoded symbol is the probability P_c that the symbol is correct, or the probability of error P_e = 1 - P_c. Such quantities can be obtained by the symbol-by-symbol MAP (maximum a posteriori probability) algorithm. Unfortunately, this algorithm is computationally inefficient. A soft output Viterbi algorithm (SOVA) can provide an estimate of P_e which is accurate only for large SNR. This paper proposes an efficient modified MAP algorithm for obtaining P_c for the outputs of convolutional inner decoders. The outer decoder uses P_c to perform soft decision decoding by choosing a codeword which maximizes the maximum likelihood (ML) metric. Decoding based on this ML metric is referred to as generalised soft decision decoding since it includes the Euclidean metric on AWGN channels and binary memoryless channels as special cases.
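The reliability measure has a particularly simple form when the decoder emits log-likelihood ratios. A hedged sketch of the conversion an outer decoder would apply (the LLR values are illustrative):

```python
import numpy as np

def correctness_prob(llr):
    """P(hard decision correct) for a binary symbol with log-likelihood
    ratio L: P_c = 1 / (1 + exp(-|L|))."""
    return 1.0 / (1.0 + np.exp(-np.abs(llr)))

llrs = np.array([4.2, -0.3, 1.1, -6.0])
print(correctness_prob(llrs))   # near 0.5 = unreliable, near 1 = reliable
```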

Journal ArticleDOI
TL;DR: In this paper, the posterior distribution of the vector of regression coefficients is obtained, as well as the predictive distribution and a Bayes estimate of the error density, and a new approximation method is described.
Abstract: Bayes methods are provided for a multiple linear regression model in which the error terms have densities that are symmetric and unimodal at zero, but whose form is otherwise unknown. The posterior distribution of the vector of regression coefficients is obtained, as well as the predictive distribution and a Bayes estimate of the error density. A new approximation method is described. A set of real data with outliers and a set of simulated data are used to compare this method to parametric methods and to an existing Monte Carlo approach.

Proceedings ArticleDOI
20 Jun 1995
TL;DR: The approach consists of extending a recent iterative method of estimation, called iterative conditional estimation (ICE) to a hierarchical Markovian model, and proposing unsupervised image classification algorithms using a hierarchical model.
Abstract: The paper deals with the problem of unsupervised classification of images modeled by Markov random fields (MRF). If the model parameters are known then we have various methods to solve the segmentation problem (simulated annealing, ICM, etc.). However, when they are not known, the problem becomes more difficult. One has to estimate the hidden label field parameters from the only observable image. Our approach consists of extending a recent iterative method of estimation, called iterative conditional estimation (ICE), to a hierarchical Markovian model. The idea resembles the expectation-maximization (EM) algorithm: we recursively compute the maximum a posteriori (MAP) estimate of the label field given the estimated parameters, and then the maximum likelihood (ML) estimate of the parameters given the tentative labeling obtained at the previous step. We propose unsupervised image classification algorithms using a hierarchical model. The only parameter supposed to be known is the number of regions; all the other parameters are estimated. The presented algorithms have been implemented on a Connection Machine CM200. Comparative tests have been done on noisy synthetic and real images (remote sensing).

Book ChapterDOI
11 Dec 1995
TL;DR: A new and general segmentation algorithm involving 3D adaptive K-Means clustering in a multiresolution wavelet basis is proposed and demonstrated via application to phantom images as well as MR brain scans.
Abstract: Segmentation of MR brain scans has received an enormous amount of attention in the medical imaging community over the past several years. In this paper we propose a new and general segmentation algorithm involving 3D adaptive K-Means clustering in a multiresolution wavelet basis. The voxel image of the brain is segmented into five classes namely, cerebrospinal fluid, gray matter, white matter, bone and background (remaining pixels). The segmentation problem is formulated as a maximum a posteriori (MAP) estimation problem wherein, the prior is assumed to be a Markov Random Field (MRF). The MAP estimation is achieved using an iterated conditional modes technique (ICM) in wavelet basis. Performance of the segmentation algorithm is demonstrated via application to phantom images as well as MR brain scans.