
Showing papers on "Maximum a posteriori estimation published in 2001"


Journal ArticleDOI
TL;DR: This work develops a maximum a posteriori probability (MAP) estimation approach for interferometric radar techniques, and derives an algorithm that approximately maximizes the conditional probability of its phase-unwrapped solution given observable quantities such as wrapped phase, image intensity, and interferogram coherence.
Abstract: Interferometric radar techniques often necessitate two-dimensional (2-D) phase unwrapping, defined here as the estimation of unambiguous phase data from a 2-D array known only modulo 2π rad. We develop a maximum a posteriori probability (MAP) estimation approach for this problem, and we derive an algorithm that approximately maximizes the conditional probability of its phase-unwrapped solution given observable quantities such as wrapped phase, image intensity, and interferogram coherence. Examining topographic and differential interferometry separately, we derive simple, working models for the joint statistics of the estimated and the observed signals. We use generalized, nonlinear cost functions to reflect these probability relationships, and we employ nonlinear network-flow techniques to approximate MAP solutions. We apply our algorithm both to a topographic interferogram exhibiting rough terrain and layover and to a differential interferogram measuring the deformation from a large earthquake. The MAP solutions are complete and are more accurate than those of other tested algorithms.

642 citations
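
The core quantity involved is easy to show in a few lines. Below is a minimal NumPy sketch, on an assumed smooth synthetic ramp, of the wrapped-gradient data that a MAP unwrapper must reconcile with its solution; it is not the authors' network-flow solver, which is what handles the noisy, undersampled cases where simple integration fails.

```python
# A minimal sketch (assumed toy signal), not the paper's MAP solver:
# it shows the wrapped gradients that unwrapping must explain.
import numpy as np

def wrap(phi):
    """Map phase values into (-pi, pi]."""
    return np.angle(np.exp(1j * phi))

# Synthetic 1-D example: a smooth phase ramp observed modulo 2*pi.
true_phase = np.linspace(0.0, 25.0, 200)   # unambiguous phase (rad)
observed = wrap(true_phase)                # wrapped measurement

# Where the true phase changes by less than pi per sample, the wrapped
# finite differences equal the true ones, so integrating them unwraps
# the signal up to an additive constant. Noise and undersampling break
# this, which is what the probabilistic cost functions must handle.
grad = wrap(np.diff(observed))
recovered = observed[0] + np.concatenate(([0.0], np.cumsum(grad)))
assert np.allclose(recovered, true_phase)
```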


Proceedings Article
01 Jan 2001
TL;DR: By setting the mixing coefficients to maximize the marginal log-likelihood, unwanted components can be suppressed, and the appropriate number of components for the mixture can be determined in a single training run without recourse to cross-validation.

Abstract: Mixture models, in which a probability distribution is represented as a linear superposition of component distributions, are widely used in statistical modelling and pattern recognition. One of the key tasks in the application of mixture models is the determination of a suitable number of components. Conventional approaches based on cross-validation are computationally expensive, are wasteful of data, and give noisy estimates for the optimal number of components. A fully Bayesian treatment, based on Markov chain Monte Carlo methods for instance, will return a posterior distribution over the number of components. However, in practical applications it is generally convenient, or even computationally essential, to select a single, most appropriate model. Recently it has been shown, in the context of linear latent variable models, that hierarchical priors governed by continuous hyper-parameters whose values are set by type-II maximum likelihood can be used to optimize model complexity. In this paper we extend this framework to mixture distributions by considering the classical task of density estimation using mixtures of Gaussians. We show that, by setting the mixing coefficients to maximize the marginal log-likelihood, unwanted components can be suppressed, and the appropriate number of components for the mixture can be determined in a single training run without recourse to cross-validation. Our approach uses a variational treatment based on a factorized approximation to the posterior distribution.

419 citations
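
The behaviour described here can be reproduced with off-the-shelf tools. The sketch below uses scikit-learn's BayesianGaussianMixture, a variational GMM in the same spirit (not the paper's type-II maximum likelihood treatment of the mixing coefficients): fitted with more components than needed, the surplus ones are driven to negligible weight in a single training run.

```python
# Not the paper's exact estimator, but an analogous variational GMM:
# surplus components receive near-zero mixing weight, so the effective
# number of components is selected without cross-validation.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Two well-separated Gaussian clusters in 1-D, fitted with 10 components.
X = np.concatenate([rng.normal(-3, 1, 500),
                    rng.normal(3, 1, 500)]).reshape(-1, 1)

gmm = BayesianGaussianMixture(n_components=10, max_iter=500, random_state=0)
gmm.fit(X)
print(np.round(gmm.weights_, 3))
print("effective components:", int(np.sum(gmm.weights_ > 0.01)))
```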


Proceedings ArticleDOI
07 Jul 2001
TL;DR: The DDMCMC paradigm provides a unifying framework in which existing segmentation algorithms, such as edge detection, clustering, region growing, split-merge, SNAKEs, and region competition, are revealed as either realizing Markov chain dynamics or computing importance proposal probabilities.

Abstract: This paper presents a computational paradigm called Data Driven Markov Chain Monte Carlo (DDMCMC) for image segmentation in the Bayesian statistical framework. The paper contributes to image segmentation in three aspects. Firstly, it designs effective and well-balanced Markov chain dynamics to explore the solution space and makes the split and merge process reversible at a middle-level vision formulation, thus achieving a globally optimal solution independent of the initial segmentation. Secondly, instead of computing a single maximum a posteriori solution, it proposes a mathematical principle for computing multiple distinct solutions to incorporate the intrinsic ambiguities in image segmentation; a k-adventurers algorithm is proposed for extracting distinct multiple solutions from the Markov chain sequence. Thirdly, it utilizes data-driven (bottom-up) techniques, such as clustering and edge detection, to compute importance proposal probabilities, which effectively drive the Markov chain dynamics and achieve a tremendous speedup in comparison to the traditional jump-diffusion method. The DDMCMC paradigm thus provides a unifying framework in which existing segmentation algorithms, such as edge detection, clustering, region growing, split-merge, SNAKEs, and region competition, are revealed as either realizing Markov chain dynamics or computing importance proposal probabilities. We report results on color and grey-level image segmentation in this paper and refer to a detailed report and a web site for extensive discussion.

313 citations


Book ChapterDOI
01 Jan 2001
TL;DR: In this article, a program for maximum likelihood estimation of general stable parameters is described, and the Fisher information matrix is computed, making large sample estimation of stable parameters a practical tool.
Abstract: A program for maximum likelihood estimation of general stable parameters is described. The Fisher information matrix is computed, making large sample estimation of stable parameters a practical tool. In addition, diagnostics are developed for assessing the stability of a data set. Applications to simulated data, stock price data, foreign exchange rate data, radar data, and ocean wave energy are presented.

300 citations
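
For illustration only, SciPy now exposes a generic numerical maximum likelihood fit for the four-parameter stable family. This is not the dedicated program the chapter describes, but it shows the estimation task and why purpose-built, fast implementations matter.

```python
# Illustrative sketch: ML fit of the 4-parameter stable family with
# SciPy (assumed simulated data; the chapter describes a dedicated
# program with Fisher-information-based standard errors).
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
data = levy_stable.rvs(alpha=1.7, beta=0.0, loc=0.0, scale=1.0,
                       size=300, random_state=rng)
# Numerical ML estimation of (alpha, beta, loc, scale); this is
# computationally heavy, since the stable density has no closed form.
alpha, beta, loc, scale = levy_stable.fit(data)
print(f"alpha={alpha:.2f} beta={beta:.2f} loc={loc:.2f} scale={scale:.2f}")
```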


Journal ArticleDOI
TL;DR: A structural maximum a posteriori (SMAP) approach to improve the MAP estimates obtained when the amount of adaptation data is small and the recognition results obtained in unsupervised adaptation experiments showed that SMAP estimation was effective even when only one utterance from a new speaker was used for adaptation.
Abstract: Maximum a posteriori (MAP) estimation has been successfully applied to speaker adaptation in speech recognition systems using hidden Markov models. When the amount of data is sufficiently large, MAP estimation yields recognition performance as good as that obtained using maximum-likelihood (ML) estimation. This paper describes a structural maximum a posteriori (SMAP) approach to improve the MAP estimates obtained when the amount of adaptation data is small. A hierarchical structure in the model parameter space is assumed and the probability density functions for model parameters at one level are used as priors for those of the parameters at adjacent levels. Results of supervised adaptation experiments using nonnative speakers' utterances showed that SMAP estimation reduced error rates by 61% when ten utterances were used for adaptation and that it yielded the same accuracy as MAP and ML estimation when the amount of data was sufficiently large. Furthermore, the recognition results obtained in unsupervised adaptation experiments showed that SMAP estimation was effective even when only one utterance from a new speaker was used for adaptation. An effective way to combine rapid supervised adaptation and on-line unsupervised adaptation was also investigated.

172 citations
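
The conjugate MAP update that both MAP and SMAP adaptation build on is compact. The sketch below (an assumed scalar Gaussian with an arbitrarily chosen prior weight tau, not the SMAP hierarchy itself) shows the key behaviour: the estimate shrinks toward the speaker-independent prior mean when adaptation data are scarce and approaches the ML estimate as data accumulate.

```python
# Simplified sketch of the conjugate MAP mean update behind MAP
# adaptation (scalar case; the paper's hierarchical priors are omitted).
import numpy as np

def map_mean(x, mu0, tau):
    """MAP estimate of a Gaussian mean under a conjugate prior.

    x:   adaptation samples
    mu0: prior (speaker-independent) mean
    tau: prior weight, in pseudo-counts
    """
    n = len(x)
    return (tau * mu0 + n * np.mean(x)) / (tau + n)

x = np.array([1.9, 2.2, 2.0])                          # few adaptation frames
print(map_mean(x, mu0=0.0, tau=10.0))                  # pulled toward prior
print(map_mean(np.full(1000, 2.0), mu0=0.0, tau=10.0)) # approaches ML (2.0)
```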


Journal ArticleDOI
TL;DR: In this article, the maximum a posteriori (MAP) sequence estimation in non-linear non-Gaussian dynamic models is performed using a particle cloud representation of the filtering distribution which evolves through time using importance sampling and resampling.
Abstract: We develop methods for performing maximum a posteriori (MAP) sequence estimation in non-linear non-Gaussian dynamic models. The methods rely on a particle cloud representation of the filtering distribution which evolves through time using importance sampling and resampling ideas. MAP sequence estimation is then performed using a classical dynamic programming technique applied to the discretised version of the state space. In contrast with standard approaches to the problem which essentially compare only the trajectories generated directly during the filtering stage, our method efficiently computes the optimal trajectory over all combinations of the filtered states. A particular strength of the method is that MAP sequence estimation is performed sequentially in one single forwards pass through the data without the requirement of an additional backward sweep. An application to estimation of a non-linear time series model and to spectral estimation for time-varying autoregressions is described.

151 citations
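
A toy version of the approach fits in a short script. The sketch below (an assumed linear-Gaussian model, chosen so the result is checkable) runs a bootstrap particle filter and then finds the MAP path over all combinations of filtered states by Viterbi-style dynamic programming. For clarity it uses a conventional backtracking pass, whereas the paper organizes the recursion into a single forward sweep.

```python
# Particle-based MAP sequence estimation, minimal sketch (assumed toy
# model): particle filter first, then Viterbi over the particle clouds.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
T, N, a, sig_x, sig_y = 50, 80, 0.9, 1.0, 0.5

# Simulate x_t = a*x_{t-1} + v_t,  y_t = x_t + w_t.
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + sig_x * rng.standard_normal()
y = x + sig_y * rng.standard_normal(T)

# Bootstrap particle filter; keep every cloud for the DP stage.
P = np.empty((T, N))
P[0] = 2.0 * rng.standard_normal(N)
for t in range(1, T):
    w = norm.pdf(y[t - 1], P[t - 1], sig_y)
    w /= w.sum()
    parents = rng.choice(N, size=N, p=w)                     # resample
    P[t] = a * P[parents] + sig_x * rng.standard_normal(N)   # propagate

# Viterbi over the particle grid: O(T * N^2) dynamic programming.
delta = norm.logpdf(y[0], P[0], sig_y)
back = np.zeros((T, N), dtype=int)
for t in range(1, T):
    trans = norm.logpdf(P[t][None, :], a * P[t - 1][:, None], sig_x)
    scores = delta[:, None] + trans                  # (from, to)
    back[t] = scores.argmax(axis=0)
    delta = scores.max(axis=0) + norm.logpdf(y[t], P[t], sig_y)

# Backtrack the MAP trajectory through the stored clouds.
j = int(delta.argmax())
map_path = np.empty(T)
for t in range(T - 1, -1, -1):
    map_path[t] = P[t, j]
    j = back[t, j]
print("RMSE of MAP path:", np.sqrt(np.mean((map_path - x) ** 2)))
```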


Journal ArticleDOI
TL;DR: An extension of RAMLA for MAP reconstruction is presented, and it is shown that if the sequence generated by this method converges, then it must converge to the true MAP solution.

Abstract: The maximum-likelihood (ML) approach in emission tomography provides images with superior noise characteristics compared to conventional filtered backprojection (FBP) algorithms. The expectation-maximization (EM) algorithm is an iterative algorithm for maximizing the Poisson likelihood in emission computed tomography that became very popular for solving the ML problem because of its attractive theoretical and practical properties. Recently, block-sequential versions of the EM algorithm that take advantage of the scanner's geometry (Browne and DePierro, 1996; Hudson and Larkin, 1991) have been proposed in order to accelerate its convergence. In Hudson and Larkin, 1991, the ordered subsets EM (OS-EM) method was applied to the ML problem, together with a modification (OS-GP) for the maximum a posteriori (MAP) regularized approach, without showing convergence. In Browne and DePierro, 1996, we presented a relaxed version of OS-EM (RAMLA) that converges to an ML solution. In this paper, we present an extension of RAMLA for MAP reconstruction. We show that, if the sequence generated by this method converges, then it must converge to the true MAP solution. Experimental evidence of this convergence is also shown. To illustrate this behavior we apply the algorithm to simulated positron emission tomography data, comparing its performance to OS-GP.

151 citations
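
These block-sequential methods accelerate the basic multiplicative MLEM update. A minimal sketch on an assumed random toy system (no realistic scanner geometry, and without RAMLA's relaxation, ordered subsets, or MAP prior):

```python
# MLEM for Poisson emission data, minimal sketch (assumed toy system).
import numpy as np

rng = np.random.default_rng(3)
A = rng.uniform(0.0, 1.0, (200, 50))   # system matrix (detector x voxel)
lam_true = rng.uniform(1.0, 5.0, 50)   # true emission intensities
y = rng.poisson(A @ lam_true)          # Poisson projection data

lam = np.ones(50)                      # positive initial image
sens = A.sum(axis=0)                   # sensitivity image, A^T 1
for _ in range(200):
    # Multiplicative EM update: lam <- lam / (A^T 1) * A^T (y / (A lam)).
    lam *= (A.T @ (y / (A @ lam))) / sens
print("relative error:",
      np.linalg.norm(lam - lam_true) / np.linalg.norm(lam_true))
```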


Journal ArticleDOI
TL;DR: This work derives a reweighted interacting multiple model algorithm that is a recursive implementation of a maximum a posteriori (MAP) state sequence estimator and indicates that this algorithm is a competitive alternative to the popular IMM algorithm and GPB methods.
Abstract: Computing the optimal conditional mean state estimate for a jump Markov linear system requires exponential complexity, and hence, practical filtering algorithms are necessarily suboptimal. In the target tracking literature, suboptimal multiple-model filtering algorithms, such as the interacting multiple model (IMM) method and generalized pseudo-Bayesian (GPB) schemes, are widely used for state estimation of such systems. We derive a reweighted interacting multiple model algorithm. Although the IMM algorithm is an approximation of the conditional mean state estimator, our algorithm is a recursive implementation of a maximum a posteriori (MAP) state sequence estimator. This MAP estimator is an instance of a variant of the EM algorithm known as the alternating expectation conditional maximization (AECM) algorithm. Computer simulations indicate that the proposed reweighted IMM algorithm is a competitive alternative to the popular IMM algorithm and GPB methods.

149 citations


Journal ArticleDOI
TL;DR: A suitable Markov model with a finite number of states is introduced, designed to approximate both the values and the statistical properties of the correlated flat fading channel phase, which poses a more severe challenge to PSK transmission than amplitude fading.

Abstract: This paper addresses the design and performance evaluation with respect to capacity of M-PSK turbo-coded systems operating in frequency-flat time-selective Rayleigh fading. The receiver jointly performs channel estimation and turbo decoding, allowing the two processes to benefit from each other. To this end, we introduce a suitable Markov model with a finite number of states, designed to approximate both the values and the statistical properties of the correlated flat fading channel phase, which poses a more severe challenge to PSK transmission than amplitude fading. Then, the forward-backward algorithm determines both the maximum a posteriori probability (MAP) value for each symbol in the data sequence and the MAP channel phase in each iteration. Simulations show good performance in standard correlated Rayleigh fading channels. A sequence of progressively tighter upper bounds to the capacity of a simplified Markov-phase channel is derived, and performance of a turbo code with joint iterative channel estimation and decoding is demonstrated to approach these capacity bounds.

138 citations
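
The forward-backward (BCJR-type) recursion at the heart of such receivers is generic. Below is a minimal sketch for an assumed two-state Markov chain with toy numbers; the paper applies the same machinery jointly over data symbols and the quantized channel-phase states.

```python
# Generic forward-backward recursion: per-step MAP posteriors for a
# discrete-state Markov model (assumed toy parameters).
import numpy as np

def forward_backward(pi, Ptrans, lik):
    """Return P(state_t = s | all observations) for each t.

    pi:     (S,) initial distribution
    Ptrans: (S, S) transition matrix, rows sum to 1
    lik:    (T, S) observation likelihoods p(y_t | state = s)
    """
    T, S = lik.shape
    alpha = np.zeros((T, S))
    beta = np.zeros((T, S))
    alpha[0] = pi * lik[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                       # forward pass
        alpha[t] = (alpha[t - 1] @ Ptrans) * lik[t]
        alpha[t] /= alpha[t].sum()
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):              # backward pass
        beta[t] = Ptrans @ (lik[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

pi = np.array([0.5, 0.5])
Ptrans = np.array([[0.95, 0.05], [0.05, 0.95]])   # slowly varying state
lik = np.array([[0.9, 0.2], [0.8, 0.3], [0.1, 0.7], [0.2, 0.9]])
print(forward_backward(pi, Ptrans, lik))
```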


Journal ArticleDOI
TL;DR: A general nonlinear multigrid optimization technique suitable for reducing the computational burden in a range of nonquadratic optimization problems is presented; applied to MAP reconstruction in optical diffusion tomography, it dramatically reduces the required computation and improves the reconstructed image quality.
Abstract: Optical diffusion tomography is a technique for imaging a highly scattering medium using measurements of transmitted modulated light. Reconstruction of the spatial distribution of the optical properties of the medium from such data is a difficult nonlinear inverse problem. Bayesian approaches are effective, but are computationally expensive, especially for three-dimensional (3-D) imaging. This paper presents a general nonlinear multigrid optimization technique suitable for reducing the computational burden in a range of nonquadratic optimization problems. This multigrid method is applied to compute the maximum a posteriori (MAP) estimate of the reconstructed image in the optical diffusion tomography problem. The proposed multigrid approach both dramatically reduces the required computation and improves the reconstructed image quality.

113 citations


Journal ArticleDOI
TL;DR: It turns out that it is not possible to present one decoder structure as being optimal, and there are several tradeoffs, which depend on the specific turbo code, the implementation target, and the selected cost function.
Abstract: Turbo codes are the most recent breakthrough in coding theory. However, the decoder's implementation cost limits their incorporation in commercial systems. Although the decoding algorithm is highly data dominated, no true memory optimization study has been performed yet. We have extensively and systematically investigated different memory optimizations for the maximum a posteriori (MAP) class of decoding algorithms. It turns out that it is not possible to present one decoder structure as being optimal. In fact, there are several tradeoffs, which depend on the specific turbo code, the implementation target (hardware or software), and the selected cost function. We therefore end up with a parametric family of new optimized algorithms out of which the designer can choose. The impact of our optimizations is illustrated by a representative example, which shows a significant decrease in both decoding energy (factor 2.5) and delay (factor 1.7).

Journal ArticleDOI
TL;DR: A new fractionally-spaced maximum a posteriori (MAP) equalizer for data transmission over frequency-selective fading channels is presented and embedded in an iterative (turbo) receiver structure.
Abstract: This paper presents a new fractionally-spaced maximum a posteriori (MAP) equalizer for data transmission over frequency-selective fading channels. The technique is applicable to any standard modulation technique. The MAP equalizer uses an expanded hypothesis trellis for the purpose of joint channel estimation and equalization. The fading channel is estimated by coupling minimum mean square error techniques with the (fixed size) expanded trellis. The new MAP equalizer is also presented in an iterative (turbo) receiver structure. Both uncoded and conventionally coded systems (including iterative processing) are studied. Even on frequency-flat fading channels, the proposed receiver outperforms conventional techniques. Simulations demonstrate the performance of the proposed equalizer.

Proceedings ArticleDOI
01 Jan 2001
TL;DR: This paper presents a novel approach for model-based real-time tracking of highly articulated structures such as humans based on an algorithm which efficiently propagates statistics of probability distributions through a kinematic chain to obtain maximum a posteriori estimates of the motion of the entire structure.
Abstract: This paper presents a novel approach for model-based real-time tracking of highly articulated structures such as humans. This approach is based on an algorithm which efficiently propagates statistics of probability distributions through a kinematic chain to obtain maximum a posteriori estimates of the motion of the entire structure. This algorithm yields the least squares solution in linear time (in the number of components of the model) and can also be applied to non-Gaussian statistics using a simple but powerful trick. The resulting implementation runs in real-time on standard hardware without any pre-processing of the video data and can thus operate on live video. Results from experiments performed using this system are presented and discussed.

Proceedings ArticleDOI
04 Jun 2001
TL;DR: In this paper, the authors developed a model for explaining reported expression levels under an assumption of primarily multiplicative variation, and derived maximum likelihood and maximum a posteriori estimates for the parameters characterizing the multiplicative variations in reported spiked control expression levels.
Abstract: Data from expression arrays must be comparable before it can be analyzed rigorously on a large scale. Accurate normalization improves the comparability of expression data because it seeks to account for sources of variation obscuring the underlying variation of interest. Undesirable variation in reported expression levels originates in the preparation and hybridization of the sample as well as in the manufacture of the array itself, and may differ depending on the array technology being employed. Published research to date has not characterized the degree of variation associated with these sources, and results are often reported without tight statistical bounds on their significance. We analyze the distributions of reported levels of exogenous control species spiked into samples applied to 1280 Affymetrix arrays. We develop a model for explaining reported expression levels under an assumption of primarily multiplicative variation. To compute the scaling factors needed for normalization, we derive maximum likelihood and maximum a posteriori estimates for the parameters characterizing the multiplicative variation in reported spiked control expression levels. We conclude that the optimal scaling factors in this context are weighted geometric means and determine the appropriate weights. The optimal scaling factor estimates so computed can be used for subsequent array normalization.
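
The paper's conclusion is simple to apply. A sketch with assumed toy numbers: given reported-to-nominal ratios of spiked controls on an array and weights reflecting each control's log-scale precision, the scaling factor is their weighted geometric mean.

```python
# Weighted geometric mean scaling factor under a multiplicative
# (log-normal) error model, minimal sketch (assumed toy numbers).
import numpy as np

def weighted_geometric_mean(levels, weights):
    """exp of the weighted mean of logs; weights need not be normalized."""
    w = np.asarray(weights, dtype=float)
    return np.exp(np.sum(w * np.log(levels)) / np.sum(w))

# Reported levels of spiked controls relative to their nominal values,
# weighted by inverse variance on the log scale (assumed values).
ratios = np.array([1.30, 1.10, 1.25, 0.95])
weights = 1.0 / np.array([0.04, 0.09, 0.05, 0.20])
scale = weighted_geometric_mean(ratios, weights)
print("scaling factor:", round(scale, 3))   # divide the array by this
```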

Proceedings ArticleDOI
01 Jan 2001
TL;DR: An improved algorithm for solving blind sparse linear inverse problems where both the dictionary and the sources are unknown is developed, and it is shown that a learned overcomplete representation can encode the data more efficiently than a complete basis at the same level of accuracy.
Abstract: We develop an improved algorithm for solving blind sparse linear inverse problems where both the dictionary (possibly overcomplete) and the sources are unknown. The algorithm is derived in the Bayesian framework by the maximum a posteriori method, with the choice of prior distribution restricted to the class of concave/Schur-concave functions, which has been shown previously to be a sufficient condition for sparse solutions. This formulation leads to a constrained and regularized minimization problem which can be solved in part using the FOCUSS (focal underdetermined system solver) algorithm for vector selection. We introduce three key improvements in the algorithm: an efficient way of adjusting the regularization parameter; column normalization that restricts the learned dictionary; reinitialization to escape from local optima. Experiments were performed using synthetic data with matrix sizes up to 64×128; the algorithm solves the blind identification problem, recovering both the dictionary and the sparse sources. The improved algorithm is much more accurate than the original FOCUSS-dictionary learning algorithm when using large matrices. We also test our algorithm on natural images, and show that a learned overcomplete representation can encode the data more efficiently than a complete basis at the same level of accuracy.
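
The vector-selection core that the dictionary-learning loop wraps is the regularized FOCUSS iteration. A sketch with an assumed random dictionary (the learning loop, column normalization, and reinitialization heuristics of the paper are omitted):

```python
# Regularized FOCUSS for sparse vector selection, minimal sketch
# (fixed, assumed dictionary; dictionary learning not included).
import numpy as np

def focuss(A, y, p=0.5, lam=1e-3, iters=50):
    """Iteratively reweighted min-norm solve promoting sparse x, Ax ~ y."""
    m, n = A.shape
    # Minimum-norm least-squares initialization.
    x = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(m), y)
    for _ in range(iters):
        W2 = np.abs(x) ** (2.0 - p)      # squared affine-scaling weights
        AW2 = A * W2                     # equals A @ diag(W2)
        x = W2 * (A.T @ np.linalg.solve(AW2 @ A.T + lam * np.eye(m), y))
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((20, 64))        # overcomplete dictionary
x_true = np.zeros(64)
x_true[[3, 17, 40]] = [2.0, -1.5, 1.0]   # sparse source
y = A @ x_true
x_hat = focuss(A, y)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```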

Journal ArticleDOI
TL;DR: For ideal positron emission tomography (PET) systems, the MAP reconstruction has a higher SNR for lesion detection than FBP reconstruction due to the modeling of the Poisson noise, and for realistic systems, MAP reconstruction further benefits from accurately modeling the physical photon detection process in PET.
Abstract: The low signal-to-noise ratio (SNR) in emission data has stimulated the development of statistical image reconstruction methods based on the maximum a posteriori (MAP) principle. Experimental examples have shown that statistical methods improve image quality compared to the conventional filtered backprojection (FBP) method. However, these results depend on isolated data sets. Here, the authors study the lesion detectability of MAP reconstruction theoretically, using computer observers. These theoretical results can be applied to different object structures. They show that for a quadratic smoothing prior, the lesion detectability using the prewhitening observer is independent of the smoothing parameter and the neighborhood of the prior, while the nonprewhitening observer exhibits an optimum smoothing point. The authors also compare the results to those of FBP reconstruction. The comparison shows that for ideal positron emission tomography (PET) systems (where data are true line integrals of the tracer distribution) the MAP reconstruction has a higher SNR for lesion detection than FBP reconstruction due to the modeling of the Poisson noise. For realistic systems, MAP reconstruction further benefits from accurately modeling the physical photon detection process in PET.

Journal ArticleDOI
TL;DR: This Monte Carlo study evaluated the relative accuracy of the weighted likelihood estimate (WLE) compared to the maximum likelihood estimate (MLE), expected a posteriori (EAP) estimate, and maximum a posteriori (MAP) estimate, and found that WLE was more accurate than MLE with a fixed-length CAT.

Abstract: This Monte Carlo study evaluated the relative accuracy of Warm's (1989) weighted likelihood estimate (WLE) compared to the maximum likelihood estimate (MLE), expected a posteriori (EAP) estimate, and maximum a posteriori (MAP) estimate. The generalized partial-credit model was used under a variety of computerized adaptive testing (CAT) conditions. The results indicated that WLE was more accurate than MLE with a fixed-length CAT, consistent with previous findings. WLE and MLE had smaller bias and larger standard errors than EAP and MAP. EAP was more accurate than MAP in a variety of CAT conditions. Although root mean squared errors were different among the four estimation methods, no statistically significant mean differences were found. EAP and MAP had advantages over WLE and MLE in terms of test efficiency. These results suggest that the test termination rule has more impact on the accuracy of θ estimation methods than does the item bank size.
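
The four estimators are easy to contrast on a grid. The sketch below uses an assumed two-parameter logistic model with a standard normal prior rather than the study's generalized partial-credit model: EAP is the posterior mean, MAP the posterior mode, MLE the likelihood mode, and WLE maximizes the likelihood weighted by the square root of the test information.

```python
# Grid-based MLE / WLE / EAP / MAP for theta under an assumed 2PL model.
import numpy as np
from scipy.stats import norm

theta = np.linspace(-4, 4, 801)             # quadrature grid
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])     # item discriminations
b = np.array([-1.0, 0.0, 0.5, 1.0, -0.5])   # item difficulties
u = np.array([1, 1, 0, 0, 1])               # observed responses

p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))   # P(correct | theta)
loglik = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p), axis=1)
logpost = loglik + norm.logpdf(theta)
info = np.sum(a**2 * p * (1 - p), axis=1)             # test information

post = np.exp(logpost - logpost.max())
post /= post.sum()
eap = np.sum(theta * post)                            # posterior mean
map_ = theta[np.argmax(logpost)]                      # posterior mode
mle = theta[np.argmax(loglik)]                        # likelihood mode
wle = theta[np.argmax(loglik + 0.5 * np.log(info))]   # Warm's WLE
print(f"MLE={mle:.2f}  WLE={wle:.2f}  EAP={eap:.2f}  MAP={map_:.2f}")
```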

Journal ArticleDOI
TL;DR: This letter investigates the sensitivity of the iterative maximum a posteriori probability (MAP) decoder to carrier phase offsets and proposes simple carrier phase recovery algorithms that operate within the iterative MAP decoding iterations and require low hardware complexity.
Abstract: In this letter, we investigate the sensitivity of the iterative maximum a posteriori probability (MAP) decoder to carrier phase offsets and propose simple carrier phase recovery algorithms operating within the iterative MAP decoding iterations. The algorithms exploit the information contained in the extrinsic values generated within the iterative MAP decoder to perform carrier recovery, thus requiring low hardware complexity.

Proceedings ArticleDOI
07 Jul 2001
TL;DR: A shape-from-texture method that constructs a maximum a posteriori estimate of surface coefficients using both the deformation of individual texture elements (as in local methods) and the overall distribution of elements (as in global methods).

Abstract: We describe a shape-from-texture method that constructs a maximum a posteriori estimate of surface coefficients using both the deformation of individual texture elements (as in local methods) and the overall distribution of elements (as in global methods). The method described applies to a much larger family of textures than any previous method, local or global. We demonstrate an analogy with shape from shading, and use this to produce a numerical method. Examples of reconstructions for synthetic images of surfaces are provided and compared with ground truth. The method is defined for orthographic views, but generalises straightforwardly to perspective views.

Journal ArticleDOI
TL;DR: Segmentation results show that the proposed JMCMS improves classification accuracy, and in particular boundary localization and detection, over methods using a single context, at comparable computational complexity.

Abstract: In this paper, a joint multicontext and multiscale (JMCMS) approach to Bayesian image segmentation is proposed. In addition to the multiscale framework, the JMCMS applies multiple context models to jointly use their distinct advantages, and a heuristic multistage problem-solving technique is used to estimate the sequential maximum a posteriori solution of the JMCMS. Segmentation results on both synthetic mosaics and remotely sensed images show that the proposed JMCMS improves classification accuracy, and in particular boundary localization and detection, over methods using a single context, at comparable computational complexity.

Journal ArticleDOI
TL;DR: The proposed multiuser receiver is based on the Gibbs sampler, a Markov chain Monte Carlo method for numerically computing the marginal a posteriori probabilities of different users' data symbols, and exploiting the orthogonality property of the STBC and the multicarrier modulation reduces the computational complexity of the receiver.
Abstract: We consider the design of optimal multiuser receivers for space-time block coded (STBC) multicarrier code-division multiple-access (MC-CDMA) systems in unknown frequency-selective fading channels. Under a Bayesian framework, the proposed multiuser receiver is based on the Gibbs sampler, a Markov chain Monte Carlo (MCMC) method for numerically computing the marginal a posteriori probabilities of different users' data symbols. By exploiting the orthogonality property of the STBC and the multicarrier modulation, the computational complexity of the receiver is significantly reduced. Furthermore, being a soft-input soft-output algorithm, the Bayesian Monte Carlo multiuser detector is capable of exchanging the so-called extrinsic information with the maximum a posteriori (MAP) outer channel code decoders of all users, and successively improving the overall receiver performance. Several practical issues, such as testing the convergence of the Gibbs sampler in fading channel applications, resolving the phase ambiguity as well as the antenna ambiguity, and adapting the proposed receiver to multirate MC-CDMA systems, are also discussed. Finally, the performance of the Bayesian Monte Carlo multiuser receiver is demonstrated through computer simulations.

Journal ArticleDOI
TL;DR: In this paper, the authors compare likelihood and Bayesian analyses of finite mixture distributions and express reservations about the latter, in particular the role of prior assumptions in the full Monte Car...
Abstract: This paper compares likelihood and Bayesian analyses of finite mixture distributions, and expresses reservations about the latter. In particular, the role of prior assumptions in the full Monte Car...

Journal ArticleDOI
TL;DR: The experiments show that an MRF is a valid representation of the activation patterns obtained in functional brain images, and the present technique provides a segmentation scheme superior to the context-free approach and the SPM approach.

Abstract: A contextual segmentation technique to detect brain activation from functional brain images is presented in the Bayesian framework. Unlike earlier similar approaches [Holmes and Ford (1993) and Descombes et al. (1998)], a Markov random field (MRF) is used to represent configurations of activated brain voxels, and likelihoods given by statistical parametric maps (SPM's) are directly used to find the maximum a posteriori (MAP) estimation of segmentation. The iterative segmentation algorithm, which is based on a simulated annealing scheme, is fully data-driven and capable of analyzing experiments involving multiple-input stimuli. Simulation results and comparisons with the simple thresholding and the statistical parametric mapping (SPM) approaches are presented with synthetic images, and functional MR images acquired in memory retrieval and event-related working memory tasks. The experiments show that an MRF is a valid representation of the activation patterns obtained in functional brain images, and the present technique provides a segmentation scheme superior to the context-free approach and the SPM approach.
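
The MAP-with-MRF-prior computation can be miniaturized. The sketch below segments an assumed noisy synthetic activation map with an Ising prior, using greedy ICM sweeps in place of the paper's simulated annealing (ICM is faster but only finds a local optimum of the posterior).

```python
# MAP segmentation with an Ising MRF prior, minimal sketch (assumed toy
# data; greedy ICM updates stand in for the paper's simulated annealing).
import numpy as np

rng = np.random.default_rng(5)
truth = np.zeros((32, 32))
truth[8:24, 8:24] = 1                                  # active block
img = truth + 0.8 * rng.standard_normal(truth.shape)  # noisy map

beta, sigma = 1.5, 0.8
labels = (img > 0.5).astype(int)                       # initial guess
for _ in range(10):                                    # ICM sweeps
    for i in range(32):
        for j in range(32):
            best, best_e = labels[i, j], np.inf
            for l in (0, 1):
                # Gaussian likelihood term for label l ...
                e = (img[i, j] - l) ** 2 / (2 * sigma**2)
                # ... plus Ising penalty for disagreeing neighbours.
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < 32 and 0 <= nj < 32:
                        e += beta * (l != labels[ni, nj])
                if e < best_e:
                    best, best_e = l, e
            labels[i, j] = best
print("misclassified voxels:", int(np.sum(labels != truth)))
```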

Journal ArticleDOI
TL;DR: The maximum a posteriori (MAP) Bayesian iterative algorithm using priors that are gamma distributed, due to Lange, Bahn and Little, is extended to include parameter choices that fall outside the gamma distribution model.
Abstract: The maximum a posteriori (MAP) Bayesian iterative algorithm using priors that are gamma distributed, due to Lange, Bahn and Little, is extended to include parameter choices that fall outside the gamma distribution model. Special cases of the resulting iterative method include the expectation maximization maximum likelihood (EMML) method based on the Poisson model in emission tomography, as well as algorithms obtained by Parra and Barrett and by Huesman et al. that converge to maximum likelihood and maximum conditional likelihood estimates of radionuclide intensities for list-mode emission tomography. The approach taken here is optimization-theoretic and does not rely on the usual expectation maximization (EM) formalism. Block-iterative variants of the algorithms are presented. A self-contained, elementary proof of convergence of the algorithm is included.

Journal ArticleDOI
TL;DR: By providing a prior distribution for the model parameters and the transformation parameters, it is possible to jointly estimate these two sets of parameters using maximum a posteriori estimation (MAP) using a single estimation criterion based on Bayesian statistics.
Abstract: Model adaptation techniques are an efficient way to reduce the mismatch that typically occurs between the training and test condition of any speech recognizer. Adaptation techniques can usually be divided into two families of approaches. On one hand, direct model adaptation attempts to directly reestimate the model parameters, for example using MAP adaptation. Since direct adaptation only reestimates model parameters of the corresponding units appearing in the adaptation data, a large amount of such data is needed to observe any significant improvement in performance. However, nice asymptotic properties are usually observed, meaning that the performance improves as the amount of adaptation data increases. On the other hand, indirect model adaptation applies a general transformation on some clusters of model parameters. Because each individual model is transformed, the approach is quite effective when a small amount of adaptation data is available. However, as the amount of adaptation data increases, the performance improvement quickly saturates. We propose to jointly estimate model parameters and transformation parameters using a single estimation criterion based on Bayesian statistics. We show that by providing a prior distribution for the model parameters and the transformation parameters, it is possible to jointly estimate these two sets of parameters using maximum a posteriori estimation (MAP). Experimental evaluation on nonnative speaker and channel adaptation illustrates the effectiveness of the proposed approach.

Journal ArticleDOI
TL;DR: The new algorithm effectively removed the bias in k21 estimates due to inconsistent projections for sampling schedules as slow as 60 s per timeframe, but no improvement in wash-out parameter estimates was observed in this work.
Abstract: A 4D ordered-subsets maximum a posteriori (OSMAP) algorithm for dynamic SPECT is described which uses a temporal prior that constrains each voxel's behaviour in time to conform to a compartmental model. No a priori limitations on kinetic parameters are applied; rather, the parameter estimates evolve as the algorithm iterates to a solution. The estimated parameters and time-activity curves are used within the reconstruction algorithm to model changes in the activity distribution as the camera rotates, avoiding artefacts due to inconsistencies of data between projection views. This potentially allows for fewer, longer-duration scans to be used and may have implications for noise reduction. The algorithm was evaluated qualitatively using dynamic 99mTc-teboroxime SPECT scans in two patients, and quantitatively using a series of simulated phantom experiments. The OSMAP algorithm resulted in images with better myocardial uniformity and definition, gave time-activity curves with reduced noise variations, and provided wash-in parameter estimates with better accuracy and lower statistical uncertainty than those obtained from conventional ordered-subsets expectation-maximization (OSEM) processing followed by compartmental modelling. The new algorithm effectively removed the bias in k21 estimates due to inconsistent projections for sampling schedules as slow as 60 s per timeframe, but no improvement in wash-out parameter estimates was observed in this work. The proposed dynamic OSMAP algorithm provides a flexible framework which may benefit a variety of dynamic tomographic imaging applications.

Journal ArticleDOI
TL;DR: This paper addresses how to use the Neumann boundary condition on the image and the preconditioned conjugate gradient method with cosine transform preconditioners to solve linear systems arising from high-resolution image reconstruction with multisensors.
Abstract: In many applications, it is required to reconstruct a high-resolution image from multiple undersampled and shifted noisy images. Using regularization techniques such as classical Tikhonov regularization and the maximum a posteriori (MAP) procedure, a high-resolution image reconstruction algorithm is developed. Because of the blurring process, the boundary values of the low-resolution image are not completely determined by the original image inside the scene. This paper addresses how to use (i) the Neumann boundary condition on the image, i.e., we assume that the scene immediately outside is a reflection of the original scene at the boundary, and (ii) the preconditioned conjugate gradient method with cosine transform preconditioners, to solve linear systems arising from the high-resolution image reconstruction with multisensors. The usefulness of the algorithm is demonstrated through simulated examples.
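
The regularized solve at the center of such algorithms is standard. Below is a 1-D sketch with an assumed symmetric toy blur, using plain conjugate gradients on the Tikhonov normal equations; the paper's specific contributions (reflective Neumann boundaries and cosine-transform preconditioning) are not reproduced here.

```python
# Tikhonov/MAP reconstruction via conjugate gradients, 1-D minimal
# sketch (assumed toy blur; no Neumann BC or DCT preconditioner).
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import cg, LinearOperator

n = 200
col = np.zeros(n)
col[:3] = [0.4, 0.25, 0.05]
H = toeplitz(col)                               # symmetric blurring matrix

rng = np.random.default_rng(6)
x_true = np.zeros(n)
x_true[60:140] = 1.0                            # piecewise-constant scene
b = H @ x_true + 0.01 * rng.standard_normal(n)  # blurred, noisy data

lam = 1e-2                                      # regularization weight
# Solve (H^T H + lam I) x = H^T b without forming the normal matrix.
normal = LinearOperator((n, n), matvec=lambda v: H.T @ (H @ v) + lam * v)
x_map, info = cg(normal, H.T @ b)
print("CG converged:", info == 0)
print("relative error:",
      np.linalg.norm(x_map - x_true) / np.linalg.norm(x_true))
```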

Journal ArticleDOI
TL;DR: The design of a blind receiver for coded orthogonal frequency-division multiplexing communication systems in the presence of frequency offset and frequency-selective fading is investigated; the proposed Bayesian blind turbo receiver achieves good performance and is robust against modeling mismatch.

Abstract: The design of a blind receiver for coded orthogonal frequency-division multiplexing communication systems in the presence of frequency offset and frequency-selective fading is investigated. The proposed blind receiver iterates between a Bayesian demodulation stage and a maximum a posteriori channel decoding stage. The extrinsic a posteriori probabilities of data symbols are iteratively exchanged between these two stages to achieve successively improved performance. The Bayesian demodulator computes the a posteriori data symbol probabilities, based on the received signals (without knowing or explicitly estimating the frequency offset and the fading channel states), by using Markov chain Monte Carlo (MCMC) techniques. In particular, two MCMC methods, the Metropolis-Hastings algorithm and the Gibbs sampler, are studied for this purpose. Computer simulation results show that the proposed Bayesian blind turbo receiver can achieve good performance and is robust against modeling mismatch.

Journal ArticleDOI
TL;DR: A Bayesian approach in which the functional properties of the underlying signal in noise are modeled directly through Besov-norm priors on its wavelet decomposition coefficients; in particular, it is shown that nonstandard soft-thresholding estimators are obtained in possibly non-Gaussian noise settings.
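
The connection such work builds on is that MAP estimation with heavy-tailed priors on wavelet coefficients yields thresholding rules. A sketch using PyWavelets and the standard soft threshold (the paper's Besov priors produce nonstandard variants of this rule):

```python
# Wavelet-domain denoising by soft thresholding, minimal sketch
# (assumed toy signal; standard universal threshold, not the paper's
# Besov-prior estimators).
import numpy as np
import pywt

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 1024)
signal = np.sin(6 * np.pi * t) * (t > 0.3)          # piecewise-smooth signal
noisy = signal + 0.2 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(noisy, 'db4', level=5)
thr = 0.2 * np.sqrt(2 * np.log(noisy.size))         # universal threshold
den = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
denoised = pywt.waverec(den, 'db4')
print("residual std before:", np.std(noisy - signal).round(3),
      "after:", np.std(denoised - signal).round(3))
```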

Journal ArticleDOI
TL;DR: In this article, it was shown that a wide class of estimators, including all Bayes estimators with respect to orthogonally invariant priors, dominate the maximum likelihood estimator when the loss is squared error.
Abstract: We consider the problem of estimating the mean of a $p$-variate normal distribution with identity covariance matrix when the mean lies in a ball of radius $m$. It follows from general theory that dominating estimators of the maximum likelihood estimator always exist when the loss is squared error. We provide and describe explicit classes of improvements for all problems $(m, p)$. We show that, for small enough $m$, a wide class of estimators, including all Bayes estimators with respect to orthogonally invariant priors, dominate the maximum likelihood estimator. When $m$ is not so small, we establish general sufficient conditions for dominance over the maximum likelihood estimator. These include, when $m \le \sqrt{p}$, the Bayes estimator with respect to a uniform prior on the boundary of the parameter space. We also study the resulting Bayes estimators for orthogonally invariant priors and obtain conditions of dominance involving the choice of the prior. Finally, these Bayesian dominance results are further discussed and illustrated with examples, which include (1) the Bayes estimator for a uniform prior on the whole parameter space and (2) a new Bayes estimator derived from an exponential family of priors.