
Showing papers on "Maximum a posteriori estimation published in 2005"


Proceedings ArticleDOI
17 Oct 2005
TL;DR: The human detection problem is formulated as maximum a posteriori (MAP) estimation; edgelet features, a new type of silhouette-oriented feature, are introduced, and part detectors based on them are learned by a boosting method.
Abstract: This paper proposes a method for human detection in crowded scenes from static images. An individual human is modeled as an assembly of natural body parts. We introduce edgelet features, which are a new type of silhouette-oriented features. Part detectors, based on these features, are learned by a boosting method. Responses of part detectors are combined to form a joint likelihood model that includes cases of multiple, possibly inter-occluded humans. The human detection problem is formulated as maximum a posteriori (MAP) estimation. We show results on a commonly used previous dataset as well as new data sets that could not be processed by earlier methods.

903 citations


Journal ArticleDOI
TL;DR: This work develops and analyzes methods for computing provably optimal maximum a posteriori probability (MAP) configurations for a subclass of Markov random fields defined on graphs with cycles, and establishes a connection between a certain LP relaxation of the mode-finding problem and a reweighted form of the max-product (min-sum) message-passing algorithm.
Abstract: We develop and analyze methods for computing provably optimal maximum a posteriori probability (MAP) configurations for a subclass of Markov random fields defined on graphs with cycles. By decomposing the original distribution into a convex combination of tree-structured distributions, we obtain an upper bound on the optimal value of the original problem (i.e., the log probability of the MAP assignment) in terms of the combined optimal values of the tree problems. We prove that this upper bound is tight if and only if all the tree distributions share an optimal configuration in common. An important implication is that any such shared configuration must also be a MAP configuration for the original distribution. Next we develop two approaches to attempting to obtain tight upper bounds: a) a tree-relaxed linear program (LP), which is derived from the Lagrangian dual of the upper bounds; and b) a tree-reweighted max-product message-passing algorithm that is related to but distinct from the max-product algorithm. In this way, we establish a connection between a certain LP relaxation of the mode-finding problem and a reweighted form of the max-product (min-sum) message-passing algorithm.
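
To make the bound concrete, here is a minimal numerical sketch (not from the paper; the 3-node binary cycle, the potentials, and the uniform tree weights are illustrative assumptions): the edge potentials of a cyclic MRF are split across spanning trees so that their convex combination reproduces the original model, and the weighted sum of the tree MAP values upper-bounds the MAP value of the original problem, with equality only when the trees share an optimizer.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (0, 2)]                      # a 3-node cycle, binary states
theta = {e: rng.normal(size=(2, 2)) for e in edges}   # edge log-potentials
unary = [rng.normal(size=2) for _ in range(3)]        # node log-potentials

def map_value(node_pot, edge_pot, edge_list):
    """Brute-force MAP value (max of the log-probability up to a constant)."""
    best = -np.inf
    for x in itertools.product(range(2), repeat=3):
        val = sum(node_pot[i][x[i]] for i in range(3))
        val += sum(edge_pot[e][x[e[0]], x[e[1]]] for e in edge_list)
        best = max(best, val)
    return best

# Spanning trees of the cycle (each drops one edge), combined with uniform
# weights rho; edge potentials are rescaled by their appearance probability so
# that the rho-weighted combination of the trees reproduces the original model.
trees = [[(0, 1), (1, 2)], [(1, 2), (0, 2)], [(0, 1), (0, 2)]]
rho = [1 / 3, 1 / 3, 1 / 3]
appear = {e: sum(e in t for t in trees) / 3 for e in edges}   # = 2/3 for every edge

original_map = map_value(unary, theta, edges)
bound = sum(r * map_value(unary, {e: theta[e] / appear[e] for e in t}, t)
            for t, r in zip(trees, rho))

print(f"MAP value of the cyclic model   : {original_map:.3f}")
print(f"convex combination of tree MAPs : {bound:.3f}  (always >= the MAP value)")
```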

770 citations


Journal ArticleDOI
TL;DR: The main conclusion is that as the number of sensors in the network grows, in-network processing will always use less energy than a centralized algorithm, while maintaining a desired level of accuracy.
Abstract: Wireless sensor networks are capable of collecting an enormous amount of data. Often, the ultimate objective is to estimate a parameter or function from these data, and such estimators are typically the solution of an optimization problem (e.g., maximum likelihood, minimum mean-squared error, or maximum a posteriori). This paper investigates a general class of distributed optimization algorithms for "in-network" data processing, aimed at reducing the amount of energy and bandwidth used for communication. Our intuition tells us that processing the data in-network should, in general, require less energy than transmitting all of the data to a fusion center. In this paper, we address the questions: When, in fact, does in-network processing use less energy, and how much energy is saved? The proposed distributed algorithms are based on incremental optimization methods. A parameter estimate is circulated through the network, and along the way each node makes a small gradient descent-like adjustment to the estimate based only on its local data. Applying results from the theory of incremental subgradient optimization, we find that the distributed algorithms converge to an approximate solution for a broad class of problems. We extend these results to the case where the optimization variable is quantized before being transmitted to the next node and find that quantization does not affect the rate of convergence. Bounds on the number of incremental steps required for a certain level of accuracy provide insight into the tradeoff between estimation performance and communication overhead. Our main conclusion is that as the number of sensors in the network grows, in-network processing will always use less energy than a centralized algorithm, while maintaining a desired level of accuracy.
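
A hedged sketch of the incremental idea in the abstract (the ring topology, quadratic local losses, and step size are assumptions chosen for illustration): a scalar estimate circulates through the nodes, each applying a small gradient step using only its local data, and approaches the centralized maximum-likelihood answer.

```python
import numpy as np

rng = np.random.default_rng(1)
true_theta = 3.0
n_nodes, n_obs = 50, 20
# Each sensor holds noisy scalar observations of the same parameter.
data = true_theta + rng.normal(scale=0.5, size=(n_nodes, n_obs))

def local_gradient(theta, samples):
    """Gradient of the node's local least-squares cost 0.5 * sum((theta - y)^2)."""
    return np.sum(theta - samples)

theta, step = 0.0, 1e-3
for cycle in range(200):                     # passes around the ring
    for node in range(n_nodes):              # the estimate visits the nodes in order
        theta -= step * local_gradient(theta, data[node])

centralized = data.mean()                    # ML solution with all data in one place
print(f"incremental estimate : {theta:.4f}")
print(f"centralized ML       : {centralized:.4f}")
```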

419 citations


Journal ArticleDOI
TL;DR: Based on the super-Gaussian statistical model, computationally efficient maximum a posteriori speech estimators are derived, which outperform the commonly applied Ephraim-Malah algorithm.
Abstract: This contribution presents two spectral amplitude estimators for acoustical background noise suppression based on maximum a posteriori estimation and super-Gaussian statistical modelling of the speech DFT amplitudes. The probability density function of the speech spectral amplitude is modelled with a simple parametric function, which allows a high approximation accuracy for Laplace- or Gamma-distributed real and imaginary parts of the speech DFT coefficients. Also, the statistical model can be adapted to optimally fit the distribution of the speech spectral amplitudes for a specific noise reduction system. Based on the super-Gaussian statistical model, computationally efficient maximum a posteriori speech estimators are derived, which outperform the commonly applied Ephraim-Malah algorithm.
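
As a hedged illustration of the general recipe (noisy spectral amplitudes in, a statistically motivated gain out), the sketch below applies a per-bin Wiener-type gain, which is the MAP amplitude estimate under a Gaussian speech prior; the paper's closed-form estimators for super-Gaussian (Laplace- or Gamma-like) priors are not reproduced here, and the test signal, window, and SNR heuristic are assumptions.

```python
import numpy as np

def enhance_frame(noisy_frame, noise_psd, window, eps=1e-12):
    """Apply a per-bin spectral gain to one analysis frame.

    This is the MAP amplitude estimate under a Gaussian speech prior
    (a Wiener gain); the super-Gaussian MAP gains from the paper are not
    reproduced here.
    """
    spec = np.fft.rfft(noisy_frame * window)
    snr_post = np.abs(spec) ** 2 / (noise_psd + eps)
    snr_prio = np.maximum(snr_post - 1.0, 0.0)      # crude a-priori SNR estimate
    gain = snr_prio / (snr_prio + 1.0)
    return np.fft.irfft(gain * spec, n=len(noisy_frame))

# Toy usage: a sinusoid buried in white noise.
rng = np.random.default_rng(2)
n = 512
window = np.hanning(n)
clean = np.sin(2 * np.pi * 8 * np.arange(n) / n)
noise = 0.3 * rng.normal(size=n)
noise_psd = np.abs(np.fft.rfft(noise * window)) ** 2     # noise-only reference frame
enhanced = enhance_frame(clean + noise, noise_psd, window)

print("error before:", np.mean(((clean + noise) * window - clean * window) ** 2))
print("error after :", np.mean((enhanced - clean * window) ** 2))
```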

343 citations


Journal ArticleDOI
TL;DR: The ReML approach proved useful: the regularisation (i.e., the influence of the a priori source covariance) increased as the noise level increased, and the localisation error was negligible when accurate location priors were used.

207 citations


Proceedings ArticleDOI
07 Aug 2005
TL;DR: An algorithm is presented to estimate simultaneously both the mean and variance of a nonparametric regression problem; unlike standard Gaussian process regression or SVMs, it estimates the variance locally, and the resulting convex optimization problem can be solved via Newton's method.
Abstract: This paper presents an algorithm to estimate simultaneously both mean and variance of a nonparametric regression problem. The key point is that we are able to estimate variance locally, unlike standard Gaussian Process regression or SVMs. This means that our estimator adapts to the local noise. The problem is cast in the setting of maximum a posteriori estimation in exponential families. Unlike previous work, we obtain a convex optimization problem which can be solved via Newton's method.
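
A hedged sketch of the idea under simplifying assumptions (per-point mean and log-variance parameters with quadratic smoothness priors, optimized with SciPy's quasi-Newton L-BFGS-B rather than the paper's exponential-family formulation and exact Newton step): it reproduces the key behaviour claimed above, a variance estimate that adapts to the local noise level.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 200)
noise_sd = 0.05 + 0.4 * x                      # heteroscedastic: noise grows with x
y = np.sin(2 * np.pi * x) + noise_sd * rng.normal(size=x.size)

lam_m, lam_s = 50.0, 200.0                     # smoothness weights (assumed)

def neg_log_posterior_and_grad(params):
    """Negative log-posterior with per-point mean m and log-variance s,
    plus quadratic smoothness priors on both, and its analytic gradient."""
    m, s = np.split(params, 2)
    r, inv_v = y - m, np.exp(-s)
    dm, ds = np.diff(m), np.diff(s)
    value = 0.5 * np.sum(r ** 2 * inv_v + s) \
            + lam_m * np.sum(dm ** 2) + lam_s * np.sum(ds ** 2)
    g_m = -r * inv_v
    g_s = 0.5 * (1.0 - r ** 2 * inv_v)
    g_m[:-1] -= 2 * lam_m * dm; g_m[1:] += 2 * lam_m * dm
    g_s[:-1] -= 2 * lam_s * ds; g_s[1:] += 2 * lam_s * ds
    return value, np.concatenate([g_m, g_s])

init = np.concatenate([y, np.full(x.size, np.log(np.var(y)))])
res = minimize(neg_log_posterior_and_grad, init, jac=True, method="L-BFGS-B")
m_hat, s_hat = np.split(res.x, 2)

# The recovered noise level should track the true, spatially varying one.
for i in (20, 180):
    print(f"x={x[i]:.2f}  estimated sd {np.exp(0.5 * s_hat[i]):.3f}"
          f"  true sd {noise_sd[i]:.3f}")
```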

194 citations


Journal ArticleDOI
TL;DR: This work developed an alternating minimization scheme, based on maximum a posteriori estimation with an a priori distribution of the blurs derived from the multichannel framework and an a priori distribution of the original image defined by a variational integral, to recover the blurs and the original image from channels severely corrupted by noise.
Abstract: Existing multichannel blind restoration techniques assume perfect spatial alignment of channels, correct estimation of blur size, and are prone to noise. We developed an alternating minimization scheme based on a maximum a posteriori estimation with a priori distribution of blurs derived from the multichannel framework and a priori distribution of original images defined by the variational integral. This stochastic approach enables us to recover the blurs and the original image from channels severely corrupted by noise. We observe that the exact knowledge of the blur size is not necessary, and we prove that translation misregistration up to a certain extent can be automatically removed in the restoration process.
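
A heavily simplified, single-channel sketch of the alternating-minimization idea (quadratic priors and periodic convolution stand in for the paper's multichannel blur prior and variational image prior; the scene, kernels, and weights are assumptions): with the blur fixed, the image update is a closed-form Fourier-domain regularized deconvolution, and with the image fixed, the blur update has the same form with the roles swapped. With priors this weak the toy drifts toward trivial solutions, which is exactly why the multichannel and variational priors in the paper matter.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
image = np.zeros((n, n)); image[20:44, 20:44] = 1.0          # toy scene
true_blur = np.zeros((n, n)); true_blur[:3, :3] = 1.0 / 9.0  # 3x3 box blur (periodic)
observed = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(true_blur)))
observed += 0.01 * rng.normal(size=observed.shape)

lam_u, lam_h = 1e-2, 1e-2                                    # quadratic prior weights
Y = np.fft.fft2(observed)

blur_guess = np.zeros((n, n)); blur_guess[:5, :5] = 1.0 / 25.0   # deliberately wrong size
H = np.fft.fft2(blur_guess)

for it in range(30):
    # u-step: argmin_u ||h*u - y||^2 + lam_u ||u||^2  (closed form per frequency)
    U = np.conj(H) * Y / (np.abs(H) ** 2 + lam_u)
    # h-step: the same quadratic problem with the image fixed
    H = np.conj(U) * Y / (np.abs(U) ** 2 + lam_h)

restored = np.real(np.fft.ifft2(U))
print("RMSE of blurred observation:", np.sqrt(np.mean((observed - image) ** 2)))
print("RMSE of restored image     :", np.sqrt(np.mean((restored - image) ** 2)))
```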

162 citations


Journal ArticleDOI
TL;DR: A new method for analyzing low-density parity-check codes and low-density generator-matrix codes under bit maximum a posteriori probability (MAP) decoding is introduced, based on a rigorous approach to spin glasses, which allows one to construct lower bounds on the entropy of the transmitted message conditional to the received one.
Abstract: A new method for analyzing low-density parity-check (LDPC) codes and low-density generator-matrix (LDGM) codes under bit maximum a posteriori probability (MAP) decoding is introduced. The method is based on a rigorous approach to spin glasses developed by Francesco Guerra. It allows one to construct lower bounds on the entropy of the transmitted message conditional to the received one. Based on heuristic statistical mechanics calculations, we conjecture such bounds to be tight. The result holds for standard irregular ensembles when used over binary-input output-symmetric (BIOS) channels. The method is first developed for Tanner-graph ensembles with Poisson left-degree distribution. It is then generalized to "multi-Poisson" graphs, and, by a completion procedure, to arbitrary degree distributions.

145 citations


Journal ArticleDOI
TL;DR: The results indicate that the Bayesian inference method can provide accurate point estimates as well as uncertainty quantification to the solution of the inverse radiation problem.

136 citations


Journal ArticleDOI
TL;DR: It is shown that the maximum a posteriori (MAP) symbol detection strategy, usually implemented by using the Forney observation model, can be equivalently implemented based on the samples at the output of a filter matched to the received pulse, i.e.,based on the Ungerboeck observation model.
Abstract: In this letter, the well-known problem of a transmission over an additive white Gaussian noise channel affected by known intersymbol interference is considered. We show that the maximum a posteriori (MAP) symbol detection strategy, usually implemented by using the Forney observation model, can be equivalently implemented based on the samples at the output of a filter matched to the received pulse, i.e., based on the Ungerboeck observation model. Although interesting from a conceptual viewpoint, the derived algorithm has a practical relevance in turbo equalization schemes for partial response signalling, where the implementation of a whitening filter can be avoided.
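
For context, a hedged note on the observation model involved (this is the classical Ungerboeck formulation for maximum-likelihood sequence detection, written in assumed notation; the letter's contribution, MAP symbol detection via forward-backward recursions on these matched-filter statistics, is not reproduced here). With matched-filter outputs z_n and pulse autocorrelation g_l, the sequence metric accumulates through branch increments:

```latex
% Classical Ungerboeck observation model (assumed notation):
%   z_n : samples at the output of the filter matched to the received pulse
%   g_l : autocorrelation of the received pulse, channel memory L
\begin{aligned}
\Lambda(\mathbf{a}) &= 2\,\mathrm{Re}\Big\{\sum_n a_n^{*} z_n\Big\}
                       - \sum_n \sum_m a_n^{*}\, g_{n-m}\, a_m,\\
\lambda_n(\mathbf{a}) &= \mathrm{Re}\Big\{ a_n^{*}\Big( 2 z_n - g_0 a_n
                       - 2\sum_{l=1}^{L} g_l\, a_{n-l} \Big)\Big\}.
\end{aligned}
```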

123 citations


Journal ArticleDOI
TL;DR: Novel supervised algorithms for the CCP and the CPP estimations are proposed which are appropriate for remote sensing images, where the estimation process might need to be done in high-dimensional spaces, and results show that the proposed density estimation algorithm outperforms other algorithms for remote sensing data over a wide range of spectral dimensions.
Abstract: A complete framework is proposed for applying the maximum a posteriori (MAP) estimation principle in remote sensing image segmentation. The MAP principle provides an estimate for the segmented image by maximizing the posterior probabilities of the classes defined in the image. The posterior probability can be represented as the product of the class conditional probability (CCP) and the class prior probability (CPP). In this paper, novel supervised algorithms for the CCP and the CPP estimations are proposed which are appropriate for remote sensing images where the estimation process might need to be done in high-dimensional spaces. For the CCP, a supervised algorithm which uses the support vector machines (SVM) density estimation approach is proposed. This algorithm uses a novel learning procedure, derived from mean field theory, which avoids the (hard) quadratic optimization problem arising from the traditional formulation of the SVM density estimation. For the CPP estimation, Markov random field (MRF) is a common choice which incorporates contextual and geometrical information in the estimation process. Instead of using predefined values for the parameters of the MRF, an analytical algorithm is proposed which automatically identifies the values of the MRF parameters. The proposed framework is built in an iterative setup which refines the estimated image to get the optimum solution. Experiments using both synthetic and real remote sensing data (multispectral and hyperspectral) show the powerful performance of the proposed framework. The results show that the proposed density estimation algorithm outperforms other algorithms for remote sensing data over a wide range of spectral dimensions. The MRF modeling raises the segmentation accuracy by up to 10% in remote sensing images.
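
A compact, hedged sketch of the generic MAP segmentation recipe that the framework instantiates (Gaussian class-conditional probabilities and a fixed-weight Potts MRF prior optimized by a synchronous ICM-style update stand in for the paper's SVM density estimator and automatically identified MRF parameters; the synthetic image and all parameter values are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)
# Synthetic 2-class "remote sensing" image: two intensity levels plus noise.
labels_true = (gaussian_filter(rng.normal(size=(64, 64)), 4) > 0).astype(int)
img = np.where(labels_true == 1, 0.7, 0.3) + 0.15 * rng.normal(size=(64, 64))

means, sigma, beta = np.array([0.3, 0.7]), 0.15, 1.5   # CCP params + Potts weight

# Class-conditional log-probability (CCP) for each pixel and class.
log_ccp = -0.5 * ((img[..., None] - means) / sigma) ** 2

labels = log_ccp.argmax(-1)                 # ML initialization
for sweep in range(10):                     # synchronous ICM-style posterior maximization
    for c in range(2):
        # Class prior (CPP) term: Potts model counts agreeing 4-neighbours.
        same = sum(np.roll(labels, s, a) == c
                   for s, a in [(1, 0), (-1, 0), (1, 1), (-1, 1)])
        log_post_c = log_ccp[..., c] + beta * same
        if c == 0:
            best, best_c = log_post_c, np.zeros_like(labels)
        else:
            best_c = np.where(log_post_c > best, c, best_c)
            best = np.maximum(best, log_post_c)
    labels = best_c

print("pixel agreement with ground truth:", (labels == labels_true).mean())
```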

Journal ArticleDOI
TL;DR: The Partition Rescaling and Shift Algorithm (PARSA) is based on a maximum a posteriori approach in which an optimal estimate of a 2-D wave spectrum is calculated given a measured SAR look cross spectrum (SLCS) and additional prior knowledge.
Abstract: A parametric inversion scheme for the retrieval of two-dimensional (2-D) ocean wave spectra from look cross spectra acquired by spaceborne synthetic aperture radar (SAR) is presented. The scheme uses SAR observations to adjust numerical wave model spectra. The Partition Rescaling and Shift Algorithm (PARSA) is based on a maximum a posteriori approach in which an optimal estimate of a 2-D wave spectrum is calculated given a measured SAR look cross spectrum (SLCS) and additional prior knowledge. The method is based on explicit models for measurement errors as well as on uncertainties in the SAR imaging model and the model wave spectra used as prior information. Parameters of the SAR imaging model are estimated as part of the retrieval. Uncertainties in the prior wave spectrum are expressed in terms of transformation variables, which are defined for each wave system in the spectrum, describing rotations and rescaling of wave numbers and energy as well as changes of directional spreading. Technically, the PARSA wave spectra retrieval is based on the minimization of a cost function. A Levenberg-Marquardt method is used to find a numerical solution. The scheme is tested using both simulated SLCS and ERS-2 SAR data. It is demonstrated that the algorithm makes use of the phase information contained in SLCS, which is of particular importance for multimodal sea states. Statistics are presented for a global data set of 11,000 ERS-2 SAR wave mode SLCS acquired in southern winter 1996.
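
A hedged, one-dimensional toy of the cost-function formulation (a two-parameter spectrum with a Gaussian prior penalty stands in for PARSA's transformation variables and SAR imaging model; all names and numbers are assumptions), solved with SciPy's Levenberg-Marquardt routine:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(12)
k = np.linspace(0.05, 0.5, 60)                       # wavenumber grid (assumed)

def wave_spectrum(k, energy, k_peak):
    """Toy 1-D 'wave spectrum' parameterised by total energy and peak wavenumber."""
    return energy * (k / k_peak) ** 2 * np.exp(-(k / k_peak) ** 2)

# Prior (first-guess) parameters from a "wave model", and the unknown truth.
prior = np.array([1.0, 0.20])
truth = np.array([1.6, 0.25])
observed = wave_spectrum(k, *truth) + 0.02 * rng.normal(size=k.size)

def residuals(params):
    # Stack data-misfit residuals with prior-departure residuals: minimizing the
    # sum of squares is the MAP estimate under Gaussian errors on both terms.
    data_term = (wave_spectrum(k, *params) - observed) / 0.02
    prior_term = (params - prior) / np.array([1.0, 0.1])
    return np.concatenate([data_term, prior_term])

sol = least_squares(residuals, prior, method="lm")   # Levenberg-Marquardt
print("prior parameters :", prior)
print("MAP parameters   :", np.round(sol.x, 3))
print("true parameters  :", truth)
```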

Journal ArticleDOI
TL;DR: It is shown that with the periodic boundary condition, the high-resolution image can be restored efficiently by using fast Fourier transforms and the preconditioned conjugate gradient method is applied.
Abstract: In this paper, we study the problem of reconstructing a high-resolution image from several blurred low-resolution image frames. The image frames consist of decimated, blurred and noisy versions of the high-resolution image. The high-resolution image is modeled as a Markov random field (MRF), and a maximum a posteriori (MAP) estimation technique is used for the restoration. We show that with the periodic boundary condition, the high-resolution image can be restored efficiently by using fast Fourier transforms. We also apply the preconditioned conjugate gradient method to restore the high-resolution image. Computer simulations are given to illustrate the effectiveness of the proposed method.
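
A hedged, single-frame sketch of why the periodic boundary condition helps (decimation and the multi-frame setup are omitted, and the blur, Laplacian prior, and regularization weight are assumptions): under circular convolution every operator is diagonalized by the FFT, so the MAP estimate with a quadratic MRF-style prior reduces to a per-frequency division.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 128
x_true = np.zeros((n, n)); x_true[40:90, 30:100] = 1.0       # toy high-res scene

# Periodic (circular) blur and additive noise.
h = np.zeros((n, n)); h[:5, :5] = 1.0 / 25.0                 # 5x5 box blur
H = np.fft.fft2(h)
y = np.real(np.fft.ifft2(H * np.fft.fft2(x_true))) + 0.01 * rng.normal(size=(n, n))

# Gaussian MRF-style prior: penalize the discrete (circular) Laplacian of the image.
lap = np.zeros((n, n))
lap[0, 0] = 4; lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1
L = np.fft.fft2(lap)
lam = 0.01

# MAP estimate: argmin ||Hx - y||^2 + lam ||Lx||^2, diagonal in the Fourier domain.
X = np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + lam * np.abs(L) ** 2)
x_map = np.real(np.fft.ifft2(X))

print("RMSE blurred/noisy :", np.sqrt(np.mean((y - x_true) ** 2)))
print("RMSE MAP restored  :", np.sqrt(np.mean((x_map - x_true) ** 2)))
```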

Journal ArticleDOI
TL;DR: Upper and lower mutual information thresholds are stated for per-bit maximum a posteriori probability (MAP) decoding and low-density parity-check (LDPC) code ensembles.
Abstract: Extreme densities for information combining are found for two important channel models: the binary-input symmetric parallel broadcast channel and the parity-constrained-input symmetric parallel channels. From these, upper and lower mutual information thresholds are stated for per-bit maximum a posteriori probability (MAP) decoding and low-density parity-check (LDPC) code ensembles.

Journal ArticleDOI
TL;DR: This paper considers the problem of finding the maximum weight matching (MWM) in a weighted complete bipartite graph and uses the max-product algorithm to solve it.
Abstract: Max-product "belief propagation" is an iterative, local, message-passing algorithm for finding the maximum a posteriori (MAP) assignment of a discrete probability distribution specified by a graphical model. Despite the spectacular success of the algorithm in many application areas such as iterative decoding, computer vision and combinatorial optimization which involve graphs with many cycles, theoretical results about both correctness and convergence of the algorithm are known in few cases (Weiss-Freeman, Wainwright, Yedidia-Weiss-Freeman, Richardson-Urbanke). In this paper we consider the problem of finding the Maximum Weight Matching (MWM) in a weighted complete bipartite graph. We define a probability distribution on the bipartite graph whose MAP assignment corresponds to the MWM. We use the max-product algorithm for finding the MAP of this distribution or, equivalently, the MWM on the bipartite graph. Even though the underlying bipartite graph has many short cycles, we find that, surprisingly, the max-product algorithm always converges to the correct MAP assignment as long as the MAP assignment is unique. We provide a bound on the number of iterations required by the algorithm and evaluate the computational cost of the algorithm. We find that for a graph of size n, the computational cost of the algorithm scales as O(n^3), which is the same as the computational cost of the best known algorithm. Finally, we establish the precise relation between the max-product algorithm and the celebrated auction algorithm proposed by Bertsekas. This suggests possible connections between dual algorithms and max-product algorithms for discrete optimization problems.
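
A small, hedged sanity check of the correspondence the paper exploits (the max-product message updates themselves are not reproduced; brute force over permutations plays the role of exact MAP on a tiny instance, and SciPy's Hungarian-method solver provides the reference MWM):

```python
import itertools
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(7)
n = 4
w = rng.uniform(size=(n, n))          # edge weights of the complete bipartite graph

# MAP side: a distribution over perfect matchings whose log-probability is
# proportional to the matching weight has the MWM as its MAP assignment.
best_perm, best_val = None, -np.inf
for perm in itertools.permutations(range(n)):
    val = sum(w[i, j] for i, j in enumerate(perm))
    if val > best_val:
        best_perm, best_val = perm, val

# Reference MWM via the Hungarian method.
rows, cols = linear_sum_assignment(w, maximize=True)

print("brute-force MAP matching :", best_perm, f"(weight {best_val:.3f})")
print("Hungarian MWM            :", tuple(cols), f"(weight {w[rows, cols].sum():.3f})")
```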

Posted Content
TL;DR: In this paper, an area theorem for transmission over general memoryless channels is introduced and some of its many consequences are discussed, including an upper bound on the maximum a posteriori threshold for sparse graph codes.
Abstract: There is a fundamental relationship between belief propagation and maximum a posteriori decoding. The case of transmission over the binary erasure channel was investigated in detail in a companion paper. This paper investigates the extension to general memoryless channels (paying special attention to the binary case). An area theorem for transmission over general memoryless channels is introduced and some of its many consequences are discussed. We show that this area theorem gives rise to an upper bound on the maximum a posteriori threshold for sparse graph codes. In situations where this bound is tight, the extrinsic soft bit estimates delivered by the belief propagation decoder coincide with the correct a posteriori probabilities above the maximum a posteriori threshold. More generally, it is conjectured that the fundamental relationship between the maximum a posteriori and the belief propagation decoder which was observed for transmission over the binary erasure channel carries over to the general case. We finally demonstrate that in order for the design rate of an ensemble to approach the capacity under belief propagation decoding the component codes have to be perfectly matched, a statement which is well known for the special case of transmission over the binary erasure channel.

Journal ArticleDOI
TL;DR: A new gridding algorithm is proposed for determining the individual spots and their borders, and a Gaussian mixture model (GMM) approach is presented for analysing the individual spot images; the main advantages of the proposed methodology are modeling flexibility and adaptability to the data, which are well-known strengths of GMM.
Abstract: In this paper, we propose a new methodology for analysis of microarray images. First, a new gridding algorithm is proposed for determining the individual spots and their borders. Then, a Gaussian mixture model (GMM) approach is presented for the analysis of the individual spot images. The main advantages of the proposed methodology are modeling flexibility and adaptability to the data, which are well-known strengths of GMM. The maximum likelihood and maximum a posteriori approaches are used to estimate the GMM parameters via the expectation maximization algorithm. The proposed approach has the ability to detect and compensate for artifacts that might occur in microarray images. This is accomplished by a model-based criterion that selects the number of the mixture components. We present numerical experiments with artificial and real data where we compare the proposed approach with previous ones and existing software tools for microarray image analysis and demonstrate its advantages.
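
A hedged sketch of the spot-modelling step using scikit-learn (synthetic foreground/background intensities stand in for real spot pixels, and BIC stands in for the paper's model-based criterion for selecting the number of mixture components):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
# Synthetic pixel intensities for one spot: background + foreground populations.
background = rng.normal(loc=0.2, scale=0.05, size=400)
foreground = rng.normal(loc=0.8, scale=0.10, size=200)
pixels = np.concatenate([background, foreground]).reshape(-1, 1)

# Fit GMMs with 1..4 components via EM and pick the number of components by BIC,
# loosely mirroring the model-based selection criterion described in the paper.
models = [GaussianMixture(n_components=k, random_state=0).fit(pixels)
          for k in range(1, 5)]
best = min(models, key=lambda m: m.bic(pixels))

print("chosen number of components:", best.n_components)
print("component means            :", np.round(best.means_.ravel(), 3))
labels = best.predict(pixels)               # per-pixel foreground/background call
print("fraction assigned to brighter component:",
      np.mean(labels == best.means_.ravel().argmax()))
```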

Journal ArticleDOI
TL;DR: The analysis and simulations indicate that the postprocessing method is inferior, unless the noise correlations between neighboring pixels are taken into account, and it seems that MAP reconstruction is the more efficient method.
Abstract: Previously, the noise characteristics obtained with penalized-likelihood reconstruction [or maximum a posteriori (MAP)] have been compared to those obtained with postsmoothed maximum-likelihood (ML) reconstruction, for emission tomography applications requiring uniform resolution. It was found that penalized-likelihood reconstruction was not superior to postsmoothed ML. In this paper, a similar comparison is made, but now for applications where the noise suppression is tuned with anatomical information. It is assumed that limited but exact anatomical information is available. Two methods were compared. In the first method, the anatomical information is incorporated in the prior of a MAP-algorithm and is, therefore, imposed during MAP-reconstruction. The second method starts from an unconstrained ML-reconstruction, and imposes the anatomical information in a postprocessing step. The theoretical analysis was verified with simulations: small lesions were inserted in two different objects, and noisy PET data were produced and reconstructed with both methods. The resulting images were analyzed with bias-noise curves, and by computing the detection performance of the nonprewhitening observer and a channelized Hotelling observer. Our analysis and simulations indicate that the postprocessing method is inferior, unless the noise correlations between neighboring pixels are taken into account. This can be done by applying a so-called prewhitening filter. However, because the prewhitening filter is shift variant and object dependent, it seems that MAP reconstruction is the more efficient method.

Proceedings Article
30 Jul 2005
TL;DR: A new class of networks with bounded width is defined, and a new decision problem, maximin a posteriori, is introduced for Bayesian networks.
Abstract: This paper presents new results on the complexity of graph-theoretical models that represent probabilities (Bayesian networks) and that represent interval and set valued probabilities (credal networks). We define a new class of networks with bounded width, and introduce a new decision problem for Bayesian networks, the maximin a posteriori. We present new links between the Bayesian and credal networks, and present new results both for Bayesian networks (most probable explanation with observations, maximin a posteriori) and for credal networks (bounds on probabilities a posteriori, most probable explanation with and without observations, maximum a posteriori).

Journal ArticleDOI
04 Apr 2005
TL;DR: The main advantage of the new method over the existing techniques is that it suppresses speckle noise well, while retaining the structure of the image, particularly the thin bright streaks, which tend to occur along boundaries between tissue layers.
Abstract: The authors present a statistical approach to speckle reduction in medical ultrasound B-scan images based on maximum a posteriori (MAP) estimation in the wavelet domain. In this framework, a new class of statistical model for speckle noise is proposed to obtain a simple and tractable solution in a closed analytical form. The proposed method uses the Rayleigh distribution for speckle noise and a Gaussian distribution for modelling the statistics of wavelet coefficients in a logarithmically transformed ultrasound image. The method combines the MAP estimation with the assumption that speckle is spatially correlated within a small window and designs a locally adaptive Bayesian processor whose parameters are computed from the neighboring coefficients. Further, the locally adaptive estimator is extended to the redundant wavelet representation, which yields better results than the decimated wavelet transform. The experimental results show that the proposed method clearly outperforms the state-of-the-art medical image denoising algorithm of Pizurica et al., spatially adaptive single-resolution methods and band-adaptive multi-scale soft-thresholding techniques in terms of quantitative performance as well as in terms of visual quality of the images. The main advantage of the new method over the existing techniques is that it suppresses speckle noise well, while retaining the structure of the image, particularly the thin bright streaks, which tend to occur along boundaries between tissue layers.
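
A hedged, simplified sketch of the pipeline (log-transform, wavelet decomposition, locally adaptive Bayesian shrinkage, inverse transform) using PyWavelets; a Gaussian-Gaussian locally adaptive estimator and a decimated transform stand in for the paper's Rayleigh speckle model and redundant representation, and the synthetic test image and window size are assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(9)
n = 128
clean = np.ones((n, n)); clean[32:96, 32:96] = 4.0           # synthetic "tissue" image
speckled = clean * rng.rayleigh(scale=1.0, size=(n, n))      # multiplicative speckle

log_img = np.log(speckled + 1e-6)                            # speckle becomes ~additive
coeffs = pywt.wavedec2(log_img, "db4", level=3)

# Noise level from the finest diagonal subband (median absolute deviation).
sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745

shrunk = [coeffs[0]]
for cH, cV, cD in coeffs[1:]:
    bands = []
    for c in (cH, cV, cD):
        # Locally adaptive Gaussian-Gaussian estimator (MAP = MMSE here):
        # signal variance from a small window, then a Wiener-type gain.
        local_var = uniform_filter(c ** 2, size=5)
        sig_var = np.maximum(local_var - sigma ** 2, 0.0)
        bands.append(c * sig_var / (sig_var + sigma ** 2))
    shrunk.append(tuple(bands))

despeckled = np.exp(pywt.waverec2(shrunk, "db4"))
print("MSE speckled   :", np.mean((speckled - clean) ** 2))
print("MSE despeckled :", np.mean((despeckled[:n, :n] - clean) ** 2))
```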

Journal ArticleDOI
TL;DR: A unified framework is proposed, based on the maximum a posteriori probability principle, by taking all these effects into account simultaneously in order to improve image segmentation performance of brain magnetic resonance (MR) images.
Abstract: Noise, partial volume (PV) effect, and image-intensity inhomogeneity render a challenging task for segmentation of brain magnetic resonance (MR) images. Most of the current MR image segmentation methods focus on only one or two of the above-mentioned effects. The objective of this paper is to propose a unified framework, based on the maximum a posteriori probability principle, by taking all these effects into account simultaneously in order to improve image segmentation performance. Instead of labeling each image voxel with a unique tissue type, the percentage of each voxel belonging to different tissues, which we call a mixture, is considered to address the PV effect. A Markov random field model is used to describe the noise effect by considering the nearby spatial information of the tissue mixture. The inhomogeneity effect is modeled as a bias field characterized by a zero mean Gaussian prior probability. The well-known fuzzy C-mean model is extended to define the likelihood function of the observed image. This framework reduces theoretically, under some assumptions, to the adaptive fuzzy C-mean (AFCM) algorithm proposed by Pham and Prince. Digital phantom and real clinical MR images were used to test the proposed framework. Improved performance over the AFCM algorithm was observed in a clinical environment where the inhomogeneity, noise level, and PV effect are commonly encountered.

Book ChapterDOI
03 Oct 2005
TL;DR: The conjugate distribution for one-dependence estimators is developed, and it is empirically shown that uniform averaging is clearly superior to Bayesian model averaging for this family of models, while the maximum a posteriori linear mixture weights improve accuracy significantly over uniform aggregation.
Abstract: Ensemble classifiers combine the classification results of several classifiers. Simple ensemble methods such as uniform averaging over a set of models usually provide an improvement over selecting the single best model. Usually probabilistic classifiers restrict the set of possible models that can be learnt in order to lower computational complexity costs. In these restricted spaces, where incorrect modeling assumptions are possibly made, uniform averaging sometimes performs even better than Bayesian model averaging. Linear mixtures over sets of models provide a space that includes uniform averaging as a particular case. We develop two algorithms for learning maximum a posteriori weights for linear mixtures, based on expectation maximization and on constrained optimization. We provide a nontrivial example of the utility of these two algorithms by applying them to one-dependence estimators. We develop the conjugate distribution for one-dependence estimators and empirically show that uniform averaging is clearly superior to Bayesian model averaging for this family of models. After that, we empirically show that the maximum a posteriori linear mixture weights improve accuracy significantly over uniform aggregation.
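
A hedged sketch of the EM route to MAP mixture weights (the Dirichlet prior, its pseudo-counts, and the synthetic per-example predictive probabilities from three fixed "models" are assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(10)
n_examples, n_models = 500, 3

# p[i, k] = probability model k assigns to the true label of example i.
# Model 0 is made genuinely better than models 1 and 2 for this toy.
p = np.clip(rng.beta([6, 2, 2], 2, size=(n_examples, n_models)), 1e-6, 1 - 1e-6)

alpha = np.full(n_models, 2.0)            # Dirichlet prior (pseudo-counts) on the weights
w = np.full(n_models, 1.0 / n_models)     # start from uniform averaging

for it in range(100):
    # E-step: responsibility of each model for each example under current weights.
    resp = w * p
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step (MAP): expected counts plus Dirichlet pseudo-counts, renormalized.
    counts = resp.sum(axis=0) + alpha - 1.0
    w = counts / counts.sum()

print("MAP mixture weights            :", np.round(w, 3))
print("uniform-average log-likelihood :", np.sum(np.log(p.mean(axis=1))))
print("MAP-weighted log-likelihood    :", np.sum(np.log(p @ w)))
```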

Journal ArticleDOI
TL;DR: In this article, the authors presented maximum likelihood, Bayes, and empirical Bayes estimators of the truncated first moment and hazard function of the Maxwell distribution and compared the relative efficiency of these estimators via a Monte Carlo simulation study.
Abstract: This article presents maximum likelihood, Bayes, and empirical Bayes estimators of the truncated first moment and hazard function of the Maxwell distribution. A comparison of the relative efficiency of these three estimators is performed via a Monte Carlo simulation study.

Proceedings ArticleDOI
17 Oct 2005
TL;DR: In this work, modes of the likelihood function are found using efficient example-based matching followed by local refinement to find peaks and estimate peak bandwidth, and an estimate of the full posterior model is obtained by reweighting these peaks according to the temporal prior.
Abstract: Classic methods for Bayesian inference effectively constrain search to lie within regions of significant probability of the temporal prior. This is efficient with an accurate dynamics model, but otherwise is prone to ignore significant peaks in the true posterior. A more accurate posterior estimate can be obtained by explicitly finding modes of the likelihood function and combining them with a weak temporal prior. In our approach, modes are found using efficient example-based matching followed by local refinement to find peaks and estimate peak bandwidth. By reweighting these peaks according to the temporal prior we obtain an estimate of the full posterior model. We show comparative results on real and synthetic images in a high degree of freedom articulated tracking task.
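
A hedged, one-dimensional toy of the mode-finding-plus-reweighting idea (a two-mode likelihood, a coarse grid in place of example-based matching, and a Gaussian temporal prior are assumptions; peak-bandwidth terms are omitted):

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-D likelihood with two well-separated modes (a stand-in for the image
# likelihood of an articulated pose; all numbers are assumptions).
def likelihood(x):
    return 0.6 * np.exp(-0.5 * ((x - 1.0) / 0.2) ** 2) + \
           0.4 * np.exp(-0.5 * ((x + 2.0) / 0.3) ** 2)

def neg_log_lik(x):
    return -np.log(likelihood(x[0]) + 1e-300)

# "Example-based matching" stand-in: a coarse grid of starting points,
# each locally refined to a likelihood mode.
starts = np.linspace(-4, 4, 9)
modes = []
for s in starts:
    res = minimize(neg_log_lik, [s], method="BFGS")
    if not any(abs(res.x[0] - m) < 1e-2 for m in modes):
        modes.append(res.x[0])

# Weak temporal prior centred on the previous-frame estimate.
prior_mean, prior_sd = 0.5, 1.0
prior = lambda x: np.exp(-0.5 * ((x - prior_mean) / prior_sd) ** 2)

# Reweight the modes by likelihood value times prior.
weights = np.array([likelihood(m) * prior(m) for m in modes])
weights /= weights.sum()
for m, wgt in zip(modes, weights):
    print(f"mode at {m:+.2f}  posterior weight {wgt:.3f}")
```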

Journal ArticleDOI
TL;DR: An analytical expression, based on a first-order Taylor series approximation, is derived for calculating artefacts in a reconstructed image caused by errors in the system matrix, and it is used to determine the required minimum accuracy of the system matrix in emission tomography.
Abstract: Statistically based iterative image reconstruction methods have been developed for emission tomography. One important component in iterative image reconstruction is the system matrix, which defines the mapping from the image space to the data space. Several groups have demonstrated that an accurate system matrix can improve image quality in both single photon emission computed tomography (SPECT) and positron emission tomography (PET). While iterative methods are amenable to arbitrary and complicated system models, the true system response is never known exactly. In practice, one also has to sacrifice the accuracy of the system model because of limited computing and imaging resources. This paper analyses the effect of errors in the system matrix on iterative image reconstruction methods that are based on the maximum a posteriori principle. We derived an analytical expression for calculating artefacts in a reconstructed image that are caused by errors in the system matrix using the first-order Taylor series approximation. The theoretical expression is used to determine the required minimum accuracy of the system matrix in emission tomography. Computer simulations show that the theoretical results work reasonably well in low-noise situations.
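
A hedged numerical analogue of the first-order analysis (a quadratic, Gaussian-prior toy MAP problem replaces the emission-tomography likelihood, and the matrix sizes and perturbation scale are assumptions): differentiating the normal equations gives a closed-form first-order prediction of the artefact caused by a small system-matrix error, which the script checks against the actual perturbed reconstruction.

```python
import numpy as np

rng = np.random.default_rng(11)
m, n, beta = 80, 40, 0.1
A = rng.normal(size=(m, n))                       # "true" system matrix
x_true = rng.normal(size=n)
y = A @ x_true + 0.05 * rng.normal(size=m)

def map_recon(Asys):
    """MAP estimate for the quadratic toy model: argmin ||Ax - y||^2 + beta ||x||^2."""
    return np.linalg.solve(Asys.T @ Asys + beta * np.eye(n), Asys.T @ y)

x_hat = map_recon(A)

E = 1e-3 * rng.normal(size=(m, n))                # small error in the system model
x_pert = map_recon(A + E)                         # reconstruction with the wrong matrix

# First-order (Taylor) prediction of the artefact, obtained by differentiating
# the normal equations (A^T A + beta I) x = A^T y with respect to A.
M = A.T @ A + beta * np.eye(n)
delta_pred = np.linalg.solve(M, E.T @ (y - A @ x_hat) - A.T @ E @ x_hat)

print("actual artefact norm      :", np.linalg.norm(x_pert - x_hat))
print("first-order prediction    :", np.linalg.norm(delta_pred))
print("relative prediction error :",
      np.linalg.norm((x_pert - x_hat) - delta_pred) / np.linalg.norm(x_pert - x_hat))
```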

Journal ArticleDOI
TL;DR: It is shown that the SGM can be extended to such penalized ML objective functions, allowing new algorithms that lead to stable maximum a posteriori solutions, and various classical penalization-regularization terms are introduced to impose a smoothness property on the solution.
Abstract: We consider the problem of restoring astronomical images acquired with charge coupled device cameras. The astronomical object is first blurred by the point spread function of the instrument-atmosphere set. The resulting convolved image is corrupted by a Poissonian noise due to low light intensity, then, a Gaussian white noise is added during the electronic read-out operation. We show first that the split gradient method (SGM) previously proposed can be used to obtain maximum likelihood (ML) iterative algorithms adapted in such noise combinations. However, when ML algorithms are used for image restoration, whatever the noise process is, instabilities due to noise amplification appear when the iteration number increases. To avoid this drawback and to obtain physically meaningful solutions, we introduce various classical penalization-regularization terms to impose a smoothness property on the solution. We show that the SGM can be extended to such penalized ML objective functions, allowing us to obtain new algorithms leading to maximum a posteriori stable solutions. The proposed algorithms are checked on typical astronomical images and the choice of the penalty function is discussed following the kind of object.

Journal ArticleDOI
TL;DR: The optimal diversity-combining technique is investigated for a multipath Rayleigh fading channel with imperfect channel state information at the receiver, and the bit-error performance using the optimal diversity combining is derived and compared with that of the suboptimal application of maximal ratio combining.
Abstract: The optimal diversity-combining technique is investigated for a multipath Rayleigh fading channel with imperfect channel state information at the receiver. Applying minimum mean-square error channel estimation, the channel state can be decomposed into the channel estimator spanned by channel observation, and the estimation error orthogonal to channel observation. The optimal combining weight is obtained from the first principle of maximum a posteriori detection, taking into consideration the imperfect channel estimation. The bit-error performance using the optimal diversity combining is derived and compared with that of the suboptimal application of maximal ratio combining. Numerical results are presented for specific channel models and estimation methods to illustrate the combined effect of channel estimation and detection on bit-error rate performance.

Journal ArticleDOI
TL;DR: The fully correlated approach to the simultaneous localization and map building (SLAM) problem is analysed from a control systems theory point of view, for both linear and nonlinear vehicle models, allowing the formulation of measurement models that make SLAM observable.
Abstract: This paper presents an analysis of the fully correlated approach to the simultaneous localization and map building (SLAM) problem from a control systems theory point of view, both for linear and nonlinear vehicle models. We show how partial observability hinders full reconstructibility of the state space, making the final map estimate dependent on the initial observations. Nevertheless, marginal filter stability guarantees convergence of the state error covariance to a positive semidefinite covariance matrix. By characterizing the form of the total Fisher information, we are able to determine the unobservable state space directions. Moreover, we give a closed-form expression that links the amount of reconstruction error to the number of landmarks used. The analysis allows the formulation of measurement models that make SLAM observable.

Journal ArticleDOI
TL;DR: In the evaluation of continuous speech recognition using decision tree HMMs, the PIC criterion outperforms ML and MDL criteria in building a compact tree structure with moderate tree size and higher recognition rate.
Abstract: This paper surveys a series of model selection approaches and presents a novel predictive information criterion (PIC) for hidden Markov model (HMM) selection. The approximate Bayesian using Viterbi approach is applied for PIC selection of the best HMMs providing the largest prediction information for generalization of future data. When the perturbation of HMM parameters is expressed by a product of conjugate prior densities, the segmental prediction information is derived at the frame level without Laplacian integral approximation. In particular, a multivariate t distribution is attained to characterize the prediction information corresponding to HMM mean vector and precision matrix. When performing model selection in tree structure HMMs, we develop a top-down prior/posterior propagation algorithm for estimation of structural hyperparameters. The prediction information is determined so as to choose the best HMM tree model. Different from maximum likelihood (ML) and minimum description length (MDL) selection criteria, the parameters of PIC chosen HMMs are computed via maximum a posteriori estimation. In the evaluation of continuous speech recognition using decision tree HMMs, the PIC criterion outperforms ML and MDL criteria in building a compact tree structure with moderate tree size and higher recognition rate.

Journal ArticleDOI
TL;DR: The results showed that the proposed method, named GNDShrink, yielded a signal-to-noise ratio (SNR) gain of 0.42 dB over the best state-of-the-art despeckling method reported in the literature, while preserving the texture and organ surfaces.
Abstract: Most existing wavelet-based image denoising techniques are developed for additive white Gaussian noise. In applications to speckle reduction in medical ultrasound (US) images, the traditional approach is first to perform the logarithmic transform (homomorphic processing) to convert the multiplicative speckle noise model to an additive one, and then the wavelet filtering is performed on the log-transformed image, followed by an exponential operation. However, this non-linear operation leads to biased estimation of the signal and increases the computational complexity of the filtering method. To overcome these drawbacks, an efficient, non-homomorphic technique for speckle reduction in medical US images is proposed. The method relies on the true characterisation of the marginal statistics of the signal and speckle wavelet coefficients. The speckle component was modelled using the generalised Nakagami distribution, which is versatile enough to model the speckle statistics under various scattering conditions of interest in medical US images. By combining this speckle model with a generalised Gaussian signal prior, the Bayesian shrinkage functions were derived using the maximum a posteriori (MAP) criterion. The resulting Bayesian processor used the local image statistics to achieve soft-adaptation from homogeneous to highly heterogeneous areas. Finally, the results showed that the proposed method, named GNDShrink, yielded a signal-to-noise ratio (SNR) gain of 0.42 dB over the best state-of-the-art despeckling method reported in the literature, 1.73 dB over the Lee filter and 1.31 dB over the Kuan filter at an input SNR of 12.0 dB, when tested on a US image. Further, the visual comparison of despeckled US images indicated that the new method suppressed the speckle noise well, while preserving the texture and organ surfaces.