
Showing papers on "Maximum a posteriori estimation" published in 2003


Journal ArticleDOI
TL;DR: The purpose of this paper is to provide a clear conceptual explanation of maximum likelihood estimation (MLE), with illustrative examples, so that the reader can grasp some of its basic principles.
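As a minimal illustration of the principle the tutorial explains (not an example from the paper itself), the sketch below fits a Gaussian to made-up data by numerically minimizing the negative log-likelihood; the data, model, and starting values are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy data: the sample and the Gaussian model are illustrative choices,
# not taken from the paper.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=200)

def neg_log_likelihood(params, x):
    mu, log_sigma = params          # optimize log(sigma) so that sigma stays positive
    sigma = np.exp(log_sigma)
    return -np.sum(norm.logpdf(x, loc=mu, scale=sigma))

# MLE: the parameter values that maximize the likelihood of the observed data.
result = minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(data,))
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(f"MLE estimates: mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")
```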

1,542 citations


Journal ArticleDOI
TL;DR: This paper formulates the stereo matching problem as a Markov network and solves it using Bayesian belief propagation to obtain the maximum a posteriori (MAP) estimate in the Markov network.
Abstract: In this paper, we formulate the stereo matching problem as a Markov network and solve it using Bayesian belief propagation. The stereo Markov network consists of three coupled Markov random fields that model the following: a smooth field for depth/disparity, a line process for depth discontinuity, and a binary process for occlusion. After eliminating the line process and the binary process by introducing two robust functions, we apply the belief propagation algorithm to obtain the maximum a posteriori (MAP) estimation in the Markov network. Other low-level visual cues (e.g., image segmentation) can also be easily incorporated in our stereo model to obtain better stereo results. Experiments demonstrate that our methods are comparable to the state-of-the-art stereo algorithms for many test cases.
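As a hedged illustration of MAP inference on a Markov network (not the authors' belief-propagation implementation), the sketch below computes the exact MAP labeling of a one-dimensional chain MRF with a unary data term and a truncated-linear smoothness term; on a chain, min-sum belief propagation reduces to this Viterbi-style recursion. The cost values and sizes are made up.

```python
import numpy as np

def map_chain(data_cost, smooth_weight=1.0, trunc=2.0):
    """Exact MAP labels for a chain MRF with unary costs `data_cost`
    (shape: n_sites x n_labels) and a truncated-linear pairwise cost.
    On a chain, min-sum belief propagation is equivalent to this
    dynamic program."""
    n, L = data_cost.shape
    labels = np.arange(L)
    pair = smooth_weight * np.minimum(np.abs(labels[:, None] - labels[None, :]), trunc)

    cost = data_cost[0].copy()
    back = np.zeros((n, L), dtype=int)
    for i in range(1, n):
        total = cost[:, None] + pair          # cost of (previous label, current label)
        back[i] = np.argmin(total, axis=0)
        cost = total[back[i], labels] + data_cost[i]

    # Backtrack the minimizing (i.e., MAP) label sequence.
    out = np.empty(n, dtype=int)
    out[-1] = int(np.argmin(cost))
    for i in range(n - 1, 0, -1):
        out[i - 1] = back[i, out[i]]
    return out

# Made-up 1D "scanline" matching costs for 8 sites and 5 disparity labels.
rng = np.random.default_rng(1)
unary = rng.uniform(0, 5, size=(8, 5))
print(map_chain(unary))
```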

1,272 citations


Journal ArticleDOI
TL;DR: A novel Bayesian-based algorithm within the framework of wavelet analysis is proposed, which reduces speckle in SAR images while preserving the structural features and textural information of the scene.
Abstract: Synthetic aperture radar (SAR) images are inherently affected by multiplicative speckle noise, which is due to the coherent nature of the scattering phenomenon. This paper proposes a novel Bayesian-based algorithm within the framework of wavelet analysis, which reduces speckle in SAR images while preserving the structural features and textural information of the scene. First, we show that the subband decompositions of logarithmically transformed SAR images are accurately modeled by alpha-stable distributions, a family of heavy-tailed densities. Consequently, we exploit this a priori information by designing a maximum a posteriori (MAP) estimator. We use the alpha-stable model to develop a blind speckle-suppression processor that performs a nonlinear operation on the data and we relate this nonlinearity to the degree of non-Gaussianity of the data. Finally, we compare our proposed method to current state-of-the-art soft thresholding techniques applied on real SAR imagery and we quantify the achieved performance improvement.
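A minimal homomorphic wavelet-shrinkage sketch in the spirit of the abstract, assuming PyWavelets is available; a universal soft threshold stands in for the paper's alpha-stable MAP shrinkage, and the speckled test image is synthetic.

```python
import numpy as np
import pywt  # PyWavelets

def despeckle_log_wavelet(img, wavelet="db4", level=3, k=1.0):
    """Homomorphic wavelet despeckling sketch: log-transform the speckled
    image, shrink the detail coefficients, and exponentiate back.  A
    universal soft threshold stands in for the alpha-stable MAP shrinkage."""
    log_img = np.log(img + 1e-6)                      # multiplicative -> additive noise
    coeffs = pywt.wavedec2(log_img, wavelet, level=level)
    out = [coeffs[0]]                                 # keep the approximation band
    for detail in coeffs[1:]:
        shrunk = []
        for band in detail:                           # (horizontal, vertical, diagonal)
            sigma = np.median(np.abs(band)) / 0.6745  # robust noise estimate
            thr = k * sigma * np.sqrt(2 * np.log(band.size))
            shrunk.append(pywt.threshold(band, thr, mode="soft"))
        out.append(tuple(shrunk))
    return np.exp(pywt.waverec2(out, wavelet))

# Made-up speckled test image: a clean ramp times unit-mean gamma speckle.
rng = np.random.default_rng(2)
clean = np.outer(np.linspace(1, 4, 128), np.linspace(1, 4, 128))
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
filtered = despeckle_log_wavelet(speckled)
```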

435 citations


Journal ArticleDOI
TL;DR: A Bayesian framework is used to account for noise in the data, and maximum a posteriori (MAP) estimation leads to an iterative procedure that is a regularized version of the focal underdetermined system solver (FOCUSS) algorithm and is shown to be superior to OMP in noisy environments.
Abstract: We develop robust methods for subset selection based on the minimization of diversity measures. A Bayesian framework is used to account for noise in the data and a maximum a posteriori (MAP) estimation procedure leads to an iterative procedure which is a regularized version of the focal underdetermined system solver (FOCUSS) algorithm. The convergence of the regularized FOCUSS algorithm is established and it is shown that the stable fixed points of the algorithm are sparse. We investigate three different criteria for choosing the regularization parameter: quality of fit; sparsity criterion; L-curve. The L-curve method, as applied to the problem of subset selection, is found not to be robust, and we propose a novel modified L-curve procedure that solves this problem. Each of the regularized FOCUSS algorithms is evaluated through simulation of a detection problem, and the results are compared with those obtained using a sequential forward selection algorithm termed orthogonal matching pursuit (OMP). In each case, the regularized FOCUSS algorithm is shown to be superior to the OMP in noisy environments.
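A compact sketch of a regularized FOCUSS-style iteration for the p=1 diversity measure with a fixed regularization parameter; the dictionary and sparse signal are synthetic, and the update follows the general reweighting form described in the abstract rather than the authors' exact algorithm (which also selects the regularization parameter automatically).

```python
import numpy as np

def regularized_focuss(A, y, lam=1e-2, p=1.0, n_iter=50, eps=1e-8):
    """Regularized FOCUSS-style iteration for sparse subset selection:
    x_{k+1} = Pi_k A^T (A Pi_k A^T + lam*I)^{-1} y,  Pi_k = diag(|x_k|^(2-p)).
    This is a sketch of the Tikhonov-regularized reweighting idea only."""
    m, n = A.shape
    x = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(m), y)   # initial estimate
    for _ in range(n_iter):
        pi = np.abs(x) ** (2.0 - p) + eps
        G = (A * pi) @ A.T + lam * np.eye(m)                  # A Pi A^T + lam I
        x = pi * (A.T @ np.linalg.solve(G, y))
    return x

# Synthetic underdetermined problem with a 3-sparse ground truth.
rng = np.random.default_rng(3)
A = rng.standard_normal((20, 60))
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(20)
x_hat = regularized_focuss(A, y)
print(np.flatnonzero(np.abs(x_hat) > 0.1))
```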

325 citations


Journal ArticleDOI
TL;DR: A novel technique to simultaneously estimate the depth map and the true high-resolution focused image of a scene, both at super-resolution, from its defocused observations.
Abstract: This paper presents a novel technique to simultaneously estimate the depth map and the focused image of a scene, both at super-resolution, from its defocused observations. Super-resolution refers to the generation of high spatial resolution images from a sequence of low resolution images. Hitherto, the super-resolution technique has been restricted mostly to the intensity domain. In this paper, we extend the scope of super-resolution imaging to acquire depth estimates at high spatial resolution simultaneously. Given a sequence of low resolution, blurred, and noisy observations of a static scene, the problem is to generate a dense depth map at a resolution higher than what can be generated from the observations, as well as to estimate the true high resolution focused image. Both the depth and the image are modeled as separate Markov random fields (MRF) and a maximum a posteriori estimation method is used to recover the high resolution fields. Since there is no relative motion between the scene and the camera, unlike the case with most super-resolution and structure recovery techniques, we do away with the correspondence problem.
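A much-simplified sketch of the MAP idea for the intensity field only: gradient descent on a squared data-fit term plus a quadratic (Gaussian-MRF) smoothness prior, with an average-pooling observation model. The depth field, defocus blur, and the paper's MRF models are omitted, and all operators and data are made up.

```python
import numpy as np

def downsample(x, f):
    """Average-pooling decimation by factor f (a simple observation model)."""
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample_adjoint(y, f):
    """Adjoint of `downsample` (spreads each low-res pixel back over its block)."""
    return np.kron(y, np.ones((f, f))) / (f * f)

def laplacian(x):
    """Discrete Laplacian, used as the gradient of a quadratic smoothness prior."""
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0)
            + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)

def map_super_resolve(obs, f=2, lam=0.1, step=0.5, n_iter=300):
    """Gradient descent on sum_k ||D(x) - y_k||^2 / 2 + lam * smoothness(x),
    i.e. a MAP estimate under Gaussian noise and a quadratic prior."""
    x = np.kron(obs[0], np.ones((f, f)))                 # initialize from one frame
    for _ in range(n_iter):
        grad = np.zeros_like(x)
        for y in obs:                                    # data term over all frames
            grad += upsample_adjoint(downsample(x, f) - y, f)
        grad -= lam * laplacian(x)                       # prior term
        x -= step * grad
    return x

# Synthetic low-resolution, noisy observations of a made-up scene.
rng = np.random.default_rng(4)
truth = np.outer(np.hanning(64), np.hanning(64))
frames = [downsample(truth, 2) + 0.01 * rng.standard_normal((32, 32)) for _ in range(4)]
hi_res = map_super_resolve(frames)
```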

159 citations


Journal ArticleDOI
TL;DR: Positron emission tomography and single photon emission computed tomography simulations are performed to compare the performance of the new method with that of the postsmoothed maximum-likelihood (ML) approach, using the impulse response of the former as the postsmoothing filter for the latter.
Abstract: Regularization is desirable for image reconstruction in emission tomography. A powerful regularization method is the penalized-likelihood (PL) reconstruction algorithm (or equivalently, maximum a posteriori reconstruction), where the sum of the likelihood and a noise suppressing penalty term (or Bayesian prior) is optimized. Usually, this approach yields position-dependent resolution and bias. However, for some applications in emission tomography, a shift-invariant point spread function would be advantageous. Recently, a new method has been proposed, in which the penalty term is tuned in every pixel to impose a uniform local impulse response. In this paper, an alternative way to tune the penalty term is presented. We performed positron emission tomography and single photon emission computed tomography simulations to compare the performance of the new method to that of the postsmoothed maximum-likelihood (ML) approach, using the impulse response of the former method as the postsmoothing filter for the latter. For this experiment, the noise properties of the PL algorithm were not superior to those of postsmoothed ML reconstruction.
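A toy sketch of penalized-likelihood (MAP) reconstruction for Poisson data using a one-step-late update (in the style of Green's OSL algorithm) with a quadratic neighbor penalty; the system matrix and phantom are synthetic, and the paper's scheme for tuning the penalty to obtain a uniform impulse response is not reproduced.

```python
import numpy as np

def osl_map_em(A, y, beta=0.05, n_iter=100):
    """One-step-late MAP-EM for Poisson emission data: the usual ML-EM update,
    with the derivative of a quadratic neighbor penalty added to the
    sensitivity in the denominator.  Generic sketch only."""
    n_pix = A.shape[1]
    lam = np.ones(n_pix)
    sens = A.sum(axis=0)                                  # sum_i a_ij
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ lam, 1e-12)
        backproj = A.T @ ratio
        # Quadratic penalty on differences of 1D neighbors: dU/dlam_j.
        dU = 2 * lam - np.roll(lam, 1) - np.roll(lam, -1)
        lam = lam * backproj / np.maximum(sens + beta * dU, 1e-12)
    return lam

# Synthetic 1D "emission" problem with a random nonnegative system matrix.
rng = np.random.default_rng(5)
A = rng.uniform(0, 1, size=(80, 40))
lam_true = np.concatenate([np.full(20, 2.0), np.full(20, 6.0)])
y = rng.poisson(A @ lam_true)
lam_hat = osl_map_em(A, y, beta=0.05)
```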

121 citations


Journal ArticleDOI
TL;DR: The experimental results show that computational reduction by a factor of 17 can be achieved with 5% relative reduction in equal error rate (EER) compared with the baseline, and the SGMM-SBM shows some advantages over the recently proposed hash GMM, including higher speed and better verification performance.
Abstract: We present an integrated system with structural Gaussian mixture models (SGMMs) and a neural network for purposes of achieving both computational efficiency and high accuracy in text-independent speaker verification. A structural background model (SBM) is constructed first by hierarchically clustering all Gaussian mixture components in a universal background model (UBM). In this way the acoustic space is partitioned into multiple regions in different levels of resolution. For each target speaker, a SGMM can be generated through multilevel maximum a posteriori (MAP) adaptation from the SBM. During test, only a small subset of Gaussian mixture components are scored for each feature vector in order to reduce the computational cost significantly. Furthermore, the scores obtained in different layers of the tree-structured models are combined via a neural network for final decision. Different configurations are compared in the experiments conducted on the telephony speech data used in the NIST speaker verification evaluation. The experimental results show that computational reduction by a factor of 17 can be achieved with 5% relative reduction in equal error rate (EER) compared with the baseline. The SGMM-SBM also shows some advantages over the recently proposed hash GMM, including higher speed and better verification performance.
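A sketch of the standard relevance-MAP adaptation of UBM means that the multilevel scheme builds on; only the means are adapted, the UBM parameters and enrollment frames are made up, and the tree-structured SBM and the neural-network fusion are omitted.

```python
import numpy as np

def map_adapt_means(weights, means, variances, X, relevance=16.0):
    """MAP (relevance) adaptation of diagonal-covariance GMM means:
    mu_k_hat = alpha_k * E_k[x] + (1 - alpha_k) * mu_k,  alpha_k = n_k / (n_k + r).
    Weights and variances are left unchanged in this sketch."""
    # Log-likelihood of each frame under each diagonal Gaussian component.
    diff = X[:, None, :] - means[None, :, :]                       # (N, K, D)
    log_gauss = -0.5 * (np.sum(diff**2 / variances, axis=2)
                        + np.sum(np.log(2 * np.pi * variances), axis=1))
    log_post = np.log(weights) + log_gauss
    log_post -= log_post.max(axis=1, keepdims=True)
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)                        # responsibilities
    n_k = post.sum(axis=0)                                         # soft counts
    first_moment = post.T @ X / np.maximum(n_k, 1e-10)[:, None]    # E_k[x]
    alpha = (n_k / (n_k + relevance))[:, None]
    return alpha * first_moment + (1.0 - alpha) * means

# Made-up 8-component UBM in 2-D and some enrollment frames for one speaker.
rng = np.random.default_rng(6)
K, D = 8, 2
ubm_w = np.full(K, 1.0 / K)
ubm_mu = rng.standard_normal((K, D)) * 3
ubm_var = np.ones((K, D))
frames = rng.standard_normal((300, D)) + np.array([1.0, -0.5])
speaker_mu = map_adapt_means(ubm_w, ubm_mu, ubm_var, frames)
```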

117 citations


Journal ArticleDOI
TL;DR: It is shown that the critical path of the algorithm can be reduced if the add-MAX* operation is reordered into an offset-add-compare-select operation by adjusting the location of registers.
Abstract: This paper presents several techniques for the very large-scale integration (VLSI) implementation of the maximum a posteriori (MAP) algorithm. In general, knowledge about the implementation of the Viterbi (1967) algorithm can be applied to the MAP algorithm. Bounds are derived for the dynamic range of the state metrics which enable the designer to optimize the word length. The computational kernel of the algorithm is the add-MAX* operation, which is the add-compare-select operation of the Viterbi algorithm with an added offset. We show that the critical path of the algorithm can be reduced if the add-MAX* operation is reordered into an offset-add-compare-select operation by adjusting the location of registers. A general scheduling for the MAP algorithm is presented which gives the tradeoffs between computational complexity, latency, and memory size. Some of these architectures eliminate the need for RAM blocks with unusual form factors or can replace the RAM with registers. These architectures are suited to VLSI implementation of turbo decoders.
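The add-MAX* kernel in software form, assuming a made-up two-state trellis: max* is computed as an add-compare-select plus a correction offset, and a log-domain forward (alpha) recursion normalizes the state metrics to limit their dynamic range.

```python
import numpy as np

def max_star(a, b):
    """max*(a, b) = log(exp(a) + exp(b)) = max(a, b) + log(1 + exp(-|a - b|)).
    In hardware this is the add-compare-select of the Viterbi algorithm plus
    a small correction offset (often a lookup table)."""
    return np.maximum(a, b) + np.log1p(np.exp(-np.abs(a - b)))

def forward_log_map(gamma):
    """Log-domain forward (alpha) recursion for a 2-state trellis.
    gamma[k, s_prev, s] is the log branch metric at step k; each new state
    metric is the max* combination of the two incoming add-MAX* branches."""
    n_steps = gamma.shape[0]
    alpha = np.array([0.0, -np.inf])           # start in state 0
    history = [alpha]
    for k in range(n_steps):
        alpha = np.array([
            max_star(alpha[0] + gamma[k, 0, s], alpha[1] + gamma[k, 1, s])
            for s in range(2)
        ])
        alpha -= alpha.max()                   # normalize to control dynamic range
        history.append(alpha)
    return np.array(history)

# Made-up log branch metrics for a short 2-state trellis.
rng = np.random.default_rng(7)
gamma = rng.standard_normal((6, 2, 2))
print(forward_log_map(gamma))
```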

116 citations


Journal ArticleDOI
TL;DR: This work formulates two well-known music recognition problems, namely tempo tracking and automatic transcription, as filtering and maximum a posteriori (MAP) state estimation tasks, and introduces Monte Carlo methods for integration and optimization.
Abstract: We present a probabilistic generative model for timing deviations in expressive music performance. The structure of the proposed model is equivalent to a switching state space model. The switch variables correspond to discrete note locations as in a musical score. The continuous hidden variables denote the tempo. We formulate two well known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization) as filtering and maximum a posteriori (MAP) state estimation tasks. Exact computation of posterior features such as the MAP state is intractable in this model class, so we introduce Monte Carlo methods for integration and optimization. We compare Markov Chain Monte Carlo (MCMC) methods (such as Gibbs sampling, simulated annealing and iterative improvement) and sequential Monte Carlo methods (particle filters). Our simulation results suggest better results with sequential methods. The methods can be applied in both online and batch scenarios such as tempo tracking and transcription and are thus potentially useful in a number of music applications such as adaptive automatic accompaniment, score typesetting and music information retrieval.
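A bootstrap particle filter for a toy tempo-tracking model, assuming the log-period follows a random walk and that noisy inter-onset intervals are observed directly; this is a generic sequential Monte Carlo sketch, not the paper's switching state-space model with discrete score locations.

```python
import numpy as np

def particle_filter_tempo(intervals, n_particles=2000,
                          tempo_drift=0.01, obs_noise=0.02, seed=8):
    """Bootstrap particle filter for a toy tempo model: the log-period performs
    a random walk, and each observed inter-onset interval equals the period
    plus Gaussian noise.  Returns the filtered posterior-mean period."""
    rng = np.random.default_rng(seed)
    log_period = np.log(0.5) + 0.1 * rng.standard_normal(n_particles)  # prior
    estimates = []
    for y in intervals:
        # Propagate particles through the random-walk tempo dynamics.
        log_period += tempo_drift * rng.standard_normal(n_particles)
        period = np.exp(log_period)
        # Weight by the Gaussian observation likelihood.
        w = np.exp(-0.5 * ((y - period) / obs_noise) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * period))
        # Multinomial resampling.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        log_period = log_period[idx]
    return np.array(estimates)

# Made-up performance: the period slowly accelerates from 0.50 s to 0.40 s.
true_period = np.linspace(0.50, 0.40, 40)
rng = np.random.default_rng(9)
observed = true_period + 0.02 * rng.standard_normal(40)
print(particle_filter_tempo(observed)[-5:])
```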

114 citations


Book ChapterDOI
TL;DR: Two classifier approaches, namely classifiers based on Multi Layer Perceptrons (MLPs) and Gaussian Mixture Models (GMMs), are compared for use in a face verification system and it is shown that for low resolution faces the MLP approach has slightly lower error rates than the GMM approach; however, the GMM approach easily outperforms the MLP approach for high resolution faces and is significantly more robust to imperfectly located faces.
Abstract: We compare two classifier approaches, namely classifiers based on Multi Layer Perceptrons (MLPs) and Gaussian Mixture Models (GMMs), for use in a face verification system. The comparison is carried out in terms of performance, robustness and practicability. Apart from structural differences, the two approaches use different training criteria; the MLP approach uses a discriminative criterion, while the GMM approach uses a combination of Maximum Likelihood (ML) and Maximum a Posteriori (MAP) criteria. Experiments on the XM2VTS database show that for low resolution faces the MLP approach has slightly lower error rates than the GMM approach; however, the GMM approach easily outperforms the MLP approach for high resolution faces and is significantly more robust to imperfectly located faces. The experiments also show that the computational requirements of the GMM approach can be significantly smaller than the MLP approach at a cost of small loss of performance.

108 citations


Journal ArticleDOI
TL;DR: It is shown that more accurate and robust results may be obtained by seeking a joint solution to the linked processes of segmentation and registration, applying Markov random fields in the solution of a maximum a posteriori model.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of estimating the mean of an infinite-dimensional normal distribution from the Bayesian perspective and derive the convergence rate of the posterior distribution for a prior that is the infinite product of certain normal distributions.
Abstract: We consider the problem of estimating the mean of an infinite-dimensional normal distribution from the Bayesian perspective. Under the assumption that the unknown true mean satisfies a "smoothness condition," we first derive the convergence rate of the posterior distribution for a prior that is the infinite product of certain normal distributions and compare with the minimax rate of convergence for point estimators. Although the posterior distribution can achieve the optimal rate of convergence, the required prior depends on a "smoothness parameter" q. When this parameter q is unknown, besides the estimation of the mean, we encounter the problem of selecting a model. In a Bayesian approach, this uncertainty in the model selection can be handled simply by further putting a prior on the index of the model. We show that if q takes values only in a discrete set, the resulting hierarchical prior leads to the same convergence rate of the posterior as if we had a single model. A slightly weaker result is presented when q is unrestricted. An adaptive point estimator based on the posterior distribution is also constructed. Primary Subjects: 62G20. Secondary Subjects: 62C10, 62G05. Keywords: Adaptive Bayes procedure; convergence rate; minimax risk; posterior distribution; model selection.
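To make the setting concrete, an illustrative coordinatewise conjugate computation in the white-noise formulation (not the paper's derivation); the prior variances tau_i^2 are where the smoothness level q would enter.

```latex
% Illustrative coordinatewise conjugate computation (white-noise formulation);
% the prior variances tau_i^2 encode the assumed smoothness level q.
\[
X_i = \theta_i + \frac{1}{\sqrt{n}}\,\varepsilon_i, \qquad
\varepsilon_i \sim N(0,1)\ \mathrm{i.i.d.}, \qquad
\theta_i \sim N(0,\tau_i^2)\ \mathrm{independently},
\]
\[
\theta_i \mid X_i \;\sim\; N\!\left( \frac{\tau_i^2}{\tau_i^2 + 1/n}\, X_i,\;
\frac{\tau_i^2/n}{\tau_i^2 + 1/n} \right).
\]
```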

Journal ArticleDOI
TL;DR: This work addresses the problem of finding the bounding contour of an object in an image when some prior knowledge about the object is available, and introduces a framework for combining prior probabilistic knowledge of the appearance of the object with probabilistic models for contour grouping.
Abstract: Conventional approaches to perceptual grouping assume little specific knowledge about the object(s) of interest. However, there are many applications in which such knowledge is available and useful. Here, we address the problem of finding the bounding contour of an object in an image when some prior knowledge about the object is available. We introduce a framework for combining prior probabilistic knowledge of the appearance of the object with probabilistic models for contour grouping. A constructive search technique is used to compute candidate closed object boundaries, which are then evaluated by combining figure, ground, and prior probabilities to compute the maximum a posteriori estimate. A significant advantage of our formulation is that it rigorously combines probabilistic local cues with important global constraints such as simplicity (no self-intersections), closure, completeness, and nontrivial scale priors. We apply this approach to the problem of computing exact lake boundaries from satellite imagery, given approximate prior knowledge from an existing digital database. We quantitatively evaluate the performance of our algorithm and find that it exceeds the performance of human mapping experts and a competing active contour approach, even with relatively weak prior knowledge. While the priors may be task-specific, the approach is general, as we demonstrate by applying it to a completely different problem: the computation of human skin boundaries in natural imagery.

Proceedings ArticleDOI
18 Jun 2003
TL;DR: It is shown how the use of a class-specific prior in a visual hull reconstruction can reduce the effect of segmentation errors from the silhouette extraction process.
Abstract: We present a Bayesian approach to image-based visual hull reconstruction. The 3D (three-dimensional) shape of an object of a known class is represented by sets of silhouette views simultaneously observed from multiple cameras. We show how the use of a class-specific prior in a visual hull reconstruction can reduce the effect of segmentation errors from the silhouette extraction process. In our representation, 3D information is implicit in the joint observations of multiple contours from known viewpoints. We model the prior density using a probabilistic principal components analysis-based technique and estimate a maximum a posteriori reconstruction of multi-view contours. The proposed method is applied to a dataset of pedestrian images, and improvements in the approximate 3D models under various noise conditions are shown.
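A sketch of the core prior step only: a probabilistic PCA model (Tipping and Bishop style) is fitted to training vectors, and a noisy observation is replaced by its reconstruction from the posterior mean of the latent variables. The contour-like training data and dimensions are made up, and the multi-camera silhouette machinery is omitted.

```python
import numpy as np

def fit_ppca(X, n_components):
    """Fit probabilistic PCA by eigendecomposition: returns the mean, the
    factor matrix W, and the isotropic noise variance sigma2."""
    mu = X.mean(axis=0)
    Xc = X - mu
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    eigvals = s**2 / X.shape[0]
    sigma2 = eigvals[n_components:].mean()                     # discarded variance
    W = Vt[:n_components].T * np.sqrt(np.maximum(eigvals[:n_components] - sigma2, 0))
    return mu, W, sigma2

def ppca_map_reconstruct(x_noisy, mu, W, sigma2):
    """Reconstruction under the PPCA prior: posterior mean of the latent
    variables, mapped back to data space."""
    M = W.T @ W + sigma2 * np.eye(W.shape[1])
    z = np.linalg.solve(M, W.T @ (x_noisy - mu))               # posterior mean of z
    return mu + W @ z

# Made-up training set of contour-like vectors (radius samples of wobbly circles).
rng = np.random.default_rng(10)
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
train = np.stack([1.0 + 0.2 * rng.standard_normal() * np.cos(t)
                  + 0.1 * rng.standard_normal() * np.sin(2 * t)
                  + 0.02 * rng.standard_normal(50) for _ in range(200)])
mu, W, sigma2 = fit_ppca(train, n_components=5)
noisy = train[0] + 0.2 * rng.standard_normal(50)               # corrupted contour
clean_estimate = ppca_map_reconstruct(noisy, mu, W, sigma2)
```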

Journal ArticleDOI
TL;DR: A novel marginal estimator of the phase alone by maximum a posteriori is proposed, obtained by integrating the observed object out of the problem, and it is shown that the marginal method is also appropriate for the restoration of the object.
Abstract: We propose a novel method called marginal estimator for estimating the aberrations and the object from phase-diversity data. The conventional estimator found in the literature concerning the technique first proposed by Gonsalves has its basis in a joint estimation of the aberrated phase and the observed object. By means of simulations, we study the behavior of the conventional estimator, which is interpretable as a joint maximum a posteriori approach, and we show in particular that it has undesirable asymptotic properties and does not permit an optimal joint estimation of the object and the aberrated phase. We propose a novel marginal estimator of the sole phase by maximum a posteriori. It is obtained by integrating the observed object out of the problem. This reduces drastically the number of unknowns, allows the unsupervised estimation of the regularization parameters, and provides better asymptotic properties. We show that the marginal method is also appropriate for the restoration of the object. This estimator is implemented and its properties are validated by simulations. The performance of the joint method and the marginal one is compared on both simulated and experimental data in the case of Earth observation. For the studied object, the comparison of the quality of the phase restoration shows that the performance of the marginal approach is better under high-noise-level conditions.

Journal ArticleDOI
TL;DR: It is shown that, by subtracting out the estimated single-trial components from each of the single- trial recordings, one can estimate the ongoing activity, thus providing additional information concerning task-related brain dynamics.
Abstract: A Bayesian inference framework for estimating the parameters of single-trial, multicomponent, event-related potentials is presented. Single-trial recordings are modeled as the linear combination of ongoing activity and multicomponent waveforms that are relatively phase-locked to certain sensory or motor events. Each component is assumed to have a trial-invariant waveform with trial-dependent amplitude scaling factors and latency shifts. A Maximum a Posteriori solution of this model is implemented via an iterative algorithm from which the component's waveform, single-trial amplitude scaling factors and latency shifts are estimated. Multiple components can be derived from a single-channel recording based on their differential variability, an aspect in contrast with other component analysis techniques (e.g., independent component analysis) where the number of components estimated is equal to or smaller than the number of recording channels. Furthermore, we show that, by subtracting out the estimated single-trial components from each of the single-trial recordings, one can estimate the ongoing activity, thus providing additional information concerning task-related brain dynamics. We test this approach, which we name differentially variable component analysis (dVCA), on simulated data and apply it to an experimental dataset consisting of intracortically recorded local field potentials from monkeys performing a visuomotor pattern discrimination task.
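A simplified stand-in for the described iteration (one component, flat priors, integer-sample latencies, no formal MAP objective): alternate between per-trial latency and amplitude estimation and re-estimation of the shared waveform, then treat the residual as ongoing activity. All data are synthetic.

```python
import numpy as np

def estimate_single_trial_component(trials, max_shift=20, n_iter=10):
    """Alternating estimation of one trial-invariant waveform with per-trial
    amplitude scaling and latency shift.  A simplified stand-in for the
    MAP-based dVCA iteration described in the abstract."""
    n_trials, n_samp = trials.shape
    waveform = trials.mean(axis=0)                     # initial waveform estimate
    amps = np.ones(n_trials)
    lags = np.zeros(n_trials, dtype=int)
    for _ in range(n_iter):
        for i, trial in enumerate(trials):
            # Latency: best integer shift by cross-correlation.
            scores = [np.dot(trial, np.roll(waveform, s))
                      for s in range(-max_shift, max_shift + 1)]
            lags[i] = np.argmax(scores) - max_shift
            shifted = np.roll(waveform, lags[i])
            # Amplitude: least-squares scale of the shifted waveform.
            amps[i] = np.dot(trial, shifted) / np.dot(shifted, shifted)
        # Waveform: average of the realigned, rescaled trials.
        aligned = np.stack([np.roll(trials[i], -lags[i]) / amps[i]
                            for i in range(n_trials)])
        waveform = aligned.mean(axis=0)
    # Residual after removing the estimated single-trial components.
    ongoing = trials - np.stack([amps[i] * np.roll(waveform, lags[i])
                                 for i in range(n_trials)])
    return waveform, amps, lags, ongoing

# Made-up single-trial data: a bump with random amplitude/latency plus noise.
rng = np.random.default_rng(13)
t = np.arange(300)
template = np.exp(-((t - 150) / 15.0) ** 2)
trials = np.stack([(0.8 + 0.4 * rng.random()) * np.roll(template, rng.integers(-10, 11))
                   + 0.2 * rng.standard_normal(300) for _ in range(40)])
w, a, l, ongoing = estimate_single_trial_component(trials)
```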

Proceedings Article
13 Oct 2003
TL;DR: An iterative method for reconstructing a 3D polygonal mesh and color texture map from multiple views of an object is presented, and shape adjustments can be constrained such that the recovered model's silhouettes match those of the input images.
Abstract: An iterative method for reconstructing a 3D polygonal mesh and color texture map from multiple views of an object is presented. In each iteration, the method first estimates a texture map given the current shape estimate. The texture map and its associated residual error image are obtained via maximum a posteriori estimation and reprojection of the multiple views into texture space. Next, the surface shape is adjusted to minimize residual error in texture space. The surface is deformed towards a photometrically-consistent solution via a series of 1D epipolar searches at randomly selected surface points. The texture space formulation has improved computational complexity over standard image-based error approaches, and allows computation of the reprojection error and uncertainty for any point on the surface. Moreover, shape adjustments can be constrained such that the recovered model's silhouettes match those of the input images. Experiments with real world imagery demonstrate the validity of the approach.

Journal ArticleDOI
TL;DR: Experimental results in speaker-independent, continuous speech recognition over Italian digit-strings validate the novel hybrid framework, allowing for improved recognition performance over HMMs with mixtures of Gaussian components, as well as over Bourlard and Morgan's paradigm.
Abstract: Acoustic modeling in state-of-the-art speech recognition systems usually relies on hidden Markov models (HMMs) with Gaussian emission densities. HMMs suffer from intrinsic limitations, mainly due to their arbitrary parametric assumption. Artificial neural networks (ANNs) appear to be a promising alternative in this respect, but they historically failed as a general solution to the acoustic modeling problem. This paper introduces algorithms based on a gradient-ascent technique for global training of a hybrid ANN/HMM system, in which the ANN is trained for estimating the emission probabilities of the states of the HMM. The approach is related to the major hybrid systems proposed by Bourlard and Morgan and by Bengio, with the aim of combining their benefits within a unified framework and to overcome their limitations. Several viable solutions to the "divergence problem"-that may arise when training is accomplished over the maximum-likelihood (ML) criterion-are proposed. Experimental results in speaker-independent, continuous speech recognition over Italian digit-strings validate the novel hybrid framework, allowing for improved recognition performance over HMMs with mixtures of Gaussian components, as well as over Bourlard and Morgan's paradigm. In particular, it is shown that the maximum a posteriori (MAP) version of the algorithm yields a 46.34% relative word error rate reduction with respect to standard HMMs.

01 Jan 2003
TL;DR: In this paper, the authors describe the steps involved in registering images of different subjects into roughly the same co-ordinate system, where the co-ordinate system is defined by a template image (or series of images).
Abstract: This chapter describes the steps involved in registering images of different subjects into roughly the same co-ordinate system, where the co-ordinate system is defined by a template image (or series of images). The method only uses up to a few hundred parameters, so it can only model global brain shape. It works by estimating the optimum coefficients for a set of bases, by minimizing the sum of squared differences between the template and source image, while simultaneously maximizing the smoothness of the transformation using a maximum a posteriori (MAP) approach.
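A one-dimensional toy version of the idea, under made-up signals and a cosine basis: displacement coefficients are estimated by minimizing the sum of squared differences to a template plus a quadratic penalty on the coefficients, which plays the role of the MAP smoothness prior.

```python
import numpy as np
from scipy.optimize import minimize

def make_signals(n=200):
    """Made-up 1D 'template' and a warped, noisy 'source' signal."""
    x = np.linspace(0, 1, n)
    template = np.exp(-((x - 0.4) / 0.08) ** 2) + 0.5 * np.exp(-((x - 0.7) / 0.05) ** 2)
    true_shift = 0.03 * np.sin(2 * np.pi * x)                  # smooth deformation
    noise = 0.02 * np.random.default_rng(11).standard_normal(n)
    source = np.interp(x + true_shift, x, template) + noise
    return x, template, source

def register(x, template, source, n_basis=6, lam=1.0):
    """Estimate low-dimensional deformation coefficients by minimizing
    SSD(template, warped source) + lam * ||c||^2, a MAP-style objective
    with a zero-mean Gaussian prior on the basis coefficients."""
    basis = np.stack([np.cos(np.pi * k * x) for k in range(1, n_basis + 1)])  # (K, n)

    def objective(c):
        displacement = c @ basis                              # smooth displacement field
        warped = np.interp(x + displacement, x, source)       # resample the source
        return np.sum((warped - template) ** 2) + lam * np.sum(c**2)

    res = minimize(objective, x0=np.zeros(n_basis), method="Powell")
    return res.x, res.x @ basis

x, template, source = make_signals()
coeffs, displacement = register(x, template, source)
print("estimated coefficients:", np.round(coeffs, 3))
```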

Journal ArticleDOI
TL;DR: This work proposes an entirely new class of convex priors that depends on f and also on m, an auxiliary field in register with f, and specializes this class to a median prior (MP) whose action is similar to that of the median root prior (MRP), an empirical method that has been applied to emission and transmission tomography.
Abstract: In a Bayesian tomographic maximum a posteriori (MAP) reconstruction, an estimate of the object f is computed by iteratively minimizing an objective function that typically comprises the sum of a log-likelihood (data consistency) term and prior (or penalty) term. The prior can be used to stabilize the solution and to also impose spatial properties on the solution. One such property, preservation of edges and locally monotonic regions, is captured by the well-known median root prior (MRP), an empirical method that has been applied to emission and transmission tomography. We propose an entirely new class of convex priors that depends on f and also on m, an auxiliary field in register with f. We specialize this class to our median prior (MP). The approximate action of the median prior is to draw, at each iteration, an object voxel toward its own local median. This action is similar to that of MRP and results in solutions that impose the same sorts of object properties as does MRP. Our MAP method is not empirical, since the problem is stated completely as the minimization of a joint (on f and m) objective. We propose an alternating algorithm to compute the joint MAP solution and apply this to emission tomography, showing that the reconstructions are qualitatively similar to those obtained using MRP.
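A sketch of the familiar median-root-prior style correction that "draws each voxel toward its own local median", applied as a one-step-late factor after a plain ML-EM update; the system matrix and phantom are synthetic, and the paper's joint objective over f and the auxiliary field m is not reproduced.

```python
import numpy as np
from scipy.ndimage import median_filter

def mlem_with_median_prior(A, y, shape, beta=0.3, n_iter=50):
    """ML-EM for Poisson emission data followed, at each iteration, by a
    median-root-prior style one-step-late correction that pulls every pixel
    toward its local median.  Sketch only; not the paper's joint MAP method."""
    lam = np.ones(np.prod(shape))
    sens = A.sum(axis=0)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ lam, 1e-12)
        lam_em = lam * (A.T @ ratio) / np.maximum(sens, 1e-12)   # plain ML-EM step
        med = median_filter(lam_em.reshape(shape), size=3).ravel()
        med = np.maximum(med, 1e-12)
        lam = lam_em / (1.0 + beta * (lam_em - med) / med)        # draw toward local median
    return lam.reshape(shape)

# Tiny synthetic 2D emission phantom and a random projection matrix.
rng = np.random.default_rng(12)
shape = (16, 16)
phantom = np.ones(shape)
phantom[4:12, 4:12] = 4.0
A = rng.uniform(0, 1, size=(400, phantom.size))
y = rng.poisson(A @ phantom.ravel())
recon = mlem_with_median_prior(A, y, shape)
```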

Journal ArticleDOI
TL;DR: A maximum a posteriori probability (MAP) estimation method for estimating the mixing proportions of Lambertian and specular reflectance, and also for recovering local surface normals, which reveals not only that the method accurately estimates the proportion of specular reflection, but also that it results in good surface normal reconstruction in the proximity of specular highlights.

Proceedings Article
16 Sep 2003
TL;DR: A method for incorporating prior information into the discriminative training framework is described; MMI-MAP is shown to be effective for task adaptation, and MPE-MAP for generating gender-dependent models for Broadcast News transcription.
Abstract: This paper investigates the use of discriminative schemes based on the maximum mutual information (MMI) and minimum phone error (MPE) objective functions for both task and gender adaptation. A method for incorporating prior information into the discriminative training framework is described. If an appropriate form of prior distribution is used, then this may be implemented by simply altering the values of the counts used for parameter estimation. The prior distribution can be based around maximum likelihood parameter estimates, giving a technique known as I-smoothing, or for adaptation it can be based around a MAP estimate of the ML parameters, leading to MMI-MAP or MPE-MAP. MMI-MAP is shown to be effective for task adaptation, where data from one task (Voicemail) is used to adapt a HMM set trained on another task (Switchboard). MPE-MAP is shown to be effective for generating gender-dependent models for Broadcast News transcription.

Proceedings ArticleDOI
06 Apr 2003
TL;DR: MMI-MAP results in a 2.1% absolute reduction in word error rate relative to standard ML-MAP with 30 hours of Voicemail task adaptation data starting from a MMI-trained Switchboard system.
Abstract: In this paper we show how a discriminative objective function such as Maximum Mutual Information (MMI) can be combined with a prior distribution over the HMM parameters to give a discriminative Maximum A Posteriori (MAP) estimate for HMM training. The prior distribution can be based around the Maximum Likelihood (ML) parameter estimates, leading to a technique previously referred to as I-smoothing; or for adaptation it can be based around a MAP estimate of the ML parameters, leading to what we call MMI-MAP. This latter approach is shown to be effective for task adaptation, where data from one task (Voicemail) is used to adapt a HMM set trained on another task (Switchboard). It is shown that MMI-MAP results in a 2.1% absolute reduction in word error rate relative to standard ML-MAP with 30 hours of Voicemail task adaptation data starting from a MMI-trained Switchboard system.
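A much-simplified sketch of the count-altering idea for a single Gaussian mean: an extended Baum-Welch style update computed from numerator and denominator statistics, with the prior applied by adding tau "prior observations" centered on a prior mean. The constants and statistics are illustrative and this is not the paper's exact formulation.

```python
import numpy as np

def ebw_mean_update(gamma_num, x_num, gamma_den, x_den,
                    mu_old, mu_prior, D=200.0, tau=50.0):
    """Discriminative (extended Baum-Welch style) update of a Gaussian mean
    with a prior applied by count alteration: tau extra 'observations' with
    mean mu_prior are added to the numerator statistics.  With mu_prior set
    to the ML estimate this mimics I-smoothing; with mu_prior set to a MAP
    estimate of the ML parameters it mimics MMI-MAP.  Simplified sketch only."""
    num_stat = x_num + tau * mu_prior          # first-order stats plus prior counts
    num_occ = gamma_num + tau
    return (num_stat - x_den + D * mu_old) / (num_occ - gamma_den + D)

# Illustrative sufficient statistics for one Gaussian (all numbers made up).
gamma_num, mean_num = 120.0, 1.30     # occupancy and mean of numerator (reference) data
gamma_den, mean_den = 100.0, 1.10     # occupancy and mean of denominator (competing) data
mu_old, mu_prior = 1.20, 1.25         # current model mean and prior (e.g. MAP-adapted) mean
mu_new = ebw_mean_update(gamma_num, gamma_num * mean_num,
                         gamma_den, gamma_den * mean_den, mu_old, mu_prior)
print(round(mu_new, 4))
```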

Journal ArticleDOI
TL;DR: A methodology to discover cluster structure in home videos, which uses video shots as the unit of organization and is based on two concepts: the development of statistical models of visual similarity, duration, and temporal adjacency of consumer video segments, and the reformulation of hierarchical clustering as a sequential binary Bayesian classification process.
Abstract: Accessing, organizing, and manipulating home videos present technical challenges due to their unrestricted content and lack of storyline. We present a methodology to discover cluster structure in home videos, which uses video shots as the unit of organization, and is based on two concepts: (1) the development of statistical models of visual similarity, duration, and temporal adjacency of consumer video segments and (2) the reformulation of hierarchical clustering as a sequential binary Bayesian classification process. A Bayesian formulation allows for the incorporation of prior knowledge of the structure of home video and offers the advantages of a principled methodology. Gaussian mixture models are used to represent the class-conditional distributions of intra- and inter-segment visual and temporal features. The models are then used in the probabilistic clustering algorithm, where the merging order is a variation of highest confidence first, and the merging criterion is maximum a posteriori. The algorithm does not need any ad-hoc parameter determination. We present extensive results on a 10-h home-video database with ground truth which thoroughly validate the performance of our methodology with respect to cluster detection, individual shot-cluster labeling, and the effect of prior selection.

Proceedings ArticleDOI
10 Mar 2003
TL;DR: Mr Bayes (a Bayesian inference approach) is found to be the most accurate and fastest of the likelihood-based methods studied, consistently outperforming the other methods in terms of accuracy and running time.
Abstract: We analyze the performance of likelihood-based approaches used to reconstruct phylogenetic trees. Unlike other techniques such as Neighbor-joining (NJ) and Maximum Parsimony (MP), relatively little is known regarding the behavior of algorithms founded on the principle of likelihood. We study the accuracy, speed, and likelihood scores of four representative likelihood-based methods (fastDNAml, Mr Bayes, PAUP*-ML, and TREE-PUZZLE) that use either Maximum Likelihood (ML) or Bayesian inference to find the optimal tree. NJ is also studied to provide a baseline comparison. Our simulation study is based on random birth-death trees, which deviate from ultrametricity, and uses the Kimura 2-parameter + Gamma model of sequence evolution. We find that Mr Bayes (a Bayesian inference approach) consistently outperforms the other methods in terms of accuracy and running time.

Journal ArticleDOI
TL;DR: The minimum mean square error and maximum a posteriori estimators of the changepoint positions are studied and a hyperparameter estimation procedure is proposed that alleviates the requirement of knowing the values of the hyperparameters.

Journal ArticleDOI
TL;DR: The proposed dual MMRF (DMMRF) modeling method offers significant improvement in both objective peak signal-to-noise ratio (PSNR) and subjective visual quality of the restored video sequences.
Abstract: A novel error concealment algorithm based on a stochastic modeling approach is proposed as a post-processing tool at the decoder side for recovering the lost information incurred during the transmission of encoded digital video bitstreams. In our proposed scheme, both the spatial and the temporal contextual features in video signals are separately modeled using the multiscale Markov random field (MMRF). The lost information is then estimated using a maximum a posteriori (MAP) probabilistic approach based on the spatial and temporal MMRF models; hence, a unified MMRF-MAP framework. To preserve the high frequency information (in particular, the edges) of the damaged video frames through iterative optimization, a new adaptive potential function is also introduced in this paper. Compared to the existing MRF-based schemes and other traditional concealment algorithms, the proposed dual MMRF (DMMRF) modeling method offers significant improvement in both objective peak signal-to-noise ratio (PSNR) measurement and subjective visual quality of the restored video sequences.

Journal ArticleDOI
TL;DR: It is shown that using a MAP decoding algorithm based on the Gaussian noise assumptions, however, may significantly degrade the TC decoder performance in an optical-fiber channel with non-Gaussian ASE noise.
Abstract: In this paper, we study the effects of different ASE noise models on the performance of turbo code (TC) decoders. A soft-decoding algorithm, the Bahl, Cocke, Jelinek, and Raviv (BCJR) decoding algorithm, is generally used in TC decoders. The BCJR algorithm is a maximum a posteriori probability (MAP) algorithm, and is very sensitive to noise statistics. The Gaussian approximation of ASE noise is widely used in the study of optical-fiber communication systems, and there exist standard TCs for additive white Gaussian noise (AWGN) channels. We show that using a MAP decoding algorithm based on the Gaussian noise assumptions, however, may significantly degrade the TC decoder performance in an optical-fiber channel with non-Gaussian ASE noise. To take full advantage of TC, accurate noise statistics in optical-fiber transmissions should be used in the MAP decoding algorithm.

Journal ArticleDOI
TL;DR: In this article, a systematic approach to model-parameter identification using maximum a posteriori estimation is employed, combining the maximum likelihood parameter estimates and their uncertainties with after-anneal boron SIMS profiles to obtain accurate TED energetics.
Abstract: Transient enhanced diffusion (TED) of boron limits the formation of ultrashallow junctions needed in next-generation microelectronic devices. A comprehensive TED model needs many parameters governing the physical and chemical processes. Prior estimates of the most likely values for the parameters, as well as their accuracies, are determined from maximum likelihood estimation applied to estimates from focused individual experiments and density functional theory calculations. Here, a systematic approach to model-parameter identification using maximum a posteriori estimation is employed, combining the maximum likelihood parameter estimates and their uncertainties with after-anneal boron SIMS profiles to obtain accurate TED energetics. Guidance on future experimental and ab initio efforts is given based on the agreement (and disagreement) between the prior and posterior distributions.
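A generic linear-Gaussian version of the approach, with made-up parameters and sensitivities unrelated to the actual TED chemistry: prior (maximum-likelihood) estimates with their covariance are combined with new linearized measurements to give a closed-form MAP estimate and posterior covariance.

```python
import numpy as np

def gaussian_map(theta_prior, cov_prior, J, y, cov_noise):
    """Closed-form MAP estimate for a linear(ized)-Gaussian model
    y = J theta + noise with Gaussian prior N(theta_prior, cov_prior):
    theta_MAP = (J^T R^-1 J + P^-1)^-1 (J^T R^-1 y + P^-1 theta_prior)."""
    P_inv = np.linalg.inv(cov_prior)
    R_inv = np.linalg.inv(cov_noise)
    H = J.T @ R_inv @ J + P_inv
    b = J.T @ R_inv @ y + P_inv @ theta_prior
    theta_map = np.linalg.solve(H, b)
    return theta_map, np.linalg.inv(H)          # MAP estimate and posterior covariance

# Made-up example: two model parameters, a prior from earlier (ML) estimates,
# and three new measurements with an invented linear sensitivity matrix J.
theta_prior = np.array([1.8, 0.35])             # illustrative "energetics" parameters
cov_prior = np.diag([0.10**2, 0.05**2])         # prior uncertainties
J = np.array([[1.0, 2.0], [0.5, 1.0], [2.0, 0.3]])
cov_noise = 0.02**2 * np.eye(3)
y = J @ np.array([1.75, 0.40]) + np.array([0.01, -0.02, 0.015])
theta_map, cov_post = gaussian_map(theta_prior, cov_prior, J, y, cov_noise)
print(theta_map, np.sqrt(np.diag(cov_post)))
```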

Journal ArticleDOI
TL;DR: Several recursive forms of the classical Baum-Welch algorithm and its Bayesian counterpart (often referred to as the Bayesian EM algorithm) are derived in a unified way and lead to computationally attractive algorithms which avoid matrix inversions while using sequential processing over the time and trellis branch indices.
Abstract: The problems of adaptive maximum a posteriori (MAP) symbol detection for uncoded transmission and of adaptive soft-input soft-output (SISO) demodulation for coded transmission of data symbols over time-varying frequency-selective channels are explored within the framework of the expectation-maximization (EM) algorithm. In particular, several recursive forms of the classical Baum-Welch (BW) algorithm and its Bayesian counterpart (often referred to as the Bayesian EM (BEM) algorithm) are derived in a unified way. In contrast to earlier developments of the BW and BEM algorithms, these formulations lead to computationally attractive algorithms which avoid matrix inversions while using sequential processing over the time and trellis branch indices. Moreover, it is shown how these recursive versions of the BW and BEM algorithms can be integrated with the well-known forward-backward processing SISO algorithms, resulting in adaptive SISOs with embedded soft decision directed (SDD) channel estimators. An application of the proposed algorithms to iterative "turbo-processing" receivers illustrates how these SDD channel estimators can efficiently exploit the extrinsic information obtained as feedback from the SISO decoder in order to enhance their estimation accuracy.