
Showing papers on "Maximum a posteriori estimation" published in 2012


Journal ArticleDOI
TL;DR: Developments that reduce the computational costs of the underlying maximum a posteriori (MAP) algorithm, as well as statistical considerations that yield new insights into the accuracy with which the relative orientations of individual particles may be determined are described.

4,554 citations


Journal ArticleDOI
TL;DR: A Bayesian interpretation of cryo-EM structure determination is described, where smoothness in the reconstructed density is imposed through a Gaussian prior in the Fourier domain, so that the optimal 3D linear filter is obtained without the need for arbitrariness and objective resolution estimates may be obtained.

760 citations


Journal ArticleDOI
TL;DR: In this article, a Bayesian maximum a posteriori (MAP) approach is presented, where a subset of highly correlated and quiet stars is used to generate a cotrending basis vector set, which is in turn used to establish a range of "reasonable" robust fit parameters.
Abstract: With the unprecedented photometric precision of the Kepler spacecraft, significant systematic and stochastic errors on transit signal levels are observable in the Kepler photometric data. These errors, which include discontinuities, outliers, systematic trends, and other instrumental signatures, obscure astrophysical signals. The presearch data conditioning (PDC) module of the Kepler data analysis pipeline tries to remove these errors while preserving planet transits and other astrophysically interesting signals. The completely new noise and stellar variability regime observed in Kepler data poses a significant problem to standard cotrending methods. Variable stars are often of particular astrophysical interest, so the preservation of their signals is of significant importance to the astrophysical community. We present a Bayesian maximum a posteriori (MAP) approach, where a subset of highly correlated and quiet stars is used to generate a cotrending basis vector set, which is in turn used to establish a range of "reasonable" robust fit parameters. These robust fit parameters are then used to generate a Bayesian prior and a Bayesian posterior probability distribution function (PDF) which, when maximized, finds the best fit that simultaneously removes systematic effects while reducing the signal distortion and noise injection that commonly afflicts simple least-squares (LS) fitting. A numerical and empirical approach is taken where the Bayesian prior PDFs are generated from fits to the light-curve distributions themselves.
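
To make the fit concrete, here is a minimal numpy sketch of the core step, not the actual Kepler PDC-MAP pipeline: cotrending basis vectors are fitted to a light curve with a Gaussian prior on the coefficients, so maximizing the posterior reduces to a regularized least-squares solve. The basis vectors, prior parameters, and light curve below are synthetic placeholders.

```python
# Minimal sketch (not the Kepler PDC-MAP pipeline): MAP fit of cotrending
# basis vectors with a Gaussian prior on the coefficients. With Gaussian
# noise and a Gaussian prior, maximizing the posterior reduces to a
# regularized (ridge-like) least-squares problem.
import numpy as np

rng = np.random.default_rng(0)

n_cadences, n_basis = 500, 4
t = np.linspace(0.0, 1.0, n_cadences)

# Hypothetical cotrending basis vectors (in practice these come from
# highly correlated, quiet stars).
U = np.column_stack([np.sin(2 * np.pi * (k + 1) * t) for k in range(n_basis)])

# Hypothetical prior built from robust fits to an ensemble of light curves.
prior_mean = np.array([0.8, -0.3, 0.1, 0.0])
prior_var = np.array([0.2, 0.2, 0.1, 0.1]) ** 2

# Synthetic light curve: systematics plus white noise plus an astrophysical signal.
sigma = 0.05
flux = U @ prior_mean + 0.02 * np.sin(30 * t) + sigma * rng.normal(size=n_cadences)

# MAP coefficients: argmin ||flux - U c||^2 / sigma^2 + (c - mu)^T Lambda^{-1} (c - mu)
A = U.T @ U / sigma**2 + np.diag(1.0 / prior_var)
b = U.T @ flux / sigma**2 + prior_mean / prior_var
c_map = np.linalg.solve(A, b)

# Plain least squares for comparison (prone to absorbing the astrophysical signal).
c_ls, *_ = np.linalg.lstsq(U, flux, rcond=None)

corrected = flux - U @ c_map
print("MAP coefficients:", np.round(c_map, 3))
print("LS  coefficients:", np.round(c_ls, 3))
```

With an informative prior, the MAP coefficients stay close to the ensemble-derived values instead of chasing the astrophysical signal, which is the behavior the abstract describes for reducing signal distortion relative to simple LS fitting.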

721 citations


Journal ArticleDOI
TL;DR: This paper introduces a new supervised segmentation algorithm for remotely sensed hyperspectral image data which integrates the spectral and spatial information in a Bayesian framework and represents an innovative contribution in the literature.
Abstract: This paper introduces a new supervised segmentation algorithm for remotely sensed hyperspectral image data which integrates the spectral and spatial information in a Bayesian framework. A multinomial logistic regression (MLR) algorithm is first used to learn the posterior probability distributions from the spectral information, using a subspace projection method to better characterize noise and highly mixed pixels. Then, contextual information is included using a multilevel logistic Markov-Gibbs Markov random field prior. Finally, a maximum a posteriori segmentation is efficiently computed by the min-cut-based integer optimization algorithm. The proposed segmentation approach is experimentally evaluated using both simulated and real hyperspectral data sets, exhibiting state-of-the-art performance when compared with recently introduced hyperspectral image classification methods. The integration of subspace projection methods with the MLR algorithm, combined with the use of spatial-contextual information, represents an innovative contribution in the literature. This approach is shown to provide accurate characterization of hyperspectral imagery in both the spectral and the spatial domain.
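
As an illustration of how the pieces fit together, the sketch below combines per-pixel class posteriors (standing in for the multinomial logistic regression outputs) with a Potts-type Markov random field prior, and approximates the MAP labeling with a few iterated conditional modes (ICM) sweeps rather than the paper's min-cut solver; the class probabilities are random placeholders.

```python
# Illustrative sketch only: combine per-pixel class posteriors (standing in
# for the paper's multinomial logistic regression outputs) with a Potts MRF
# smoothness prior, and approximate the MAP labeling with iterated
# conditional modes (ICM) instead of the paper's graph-cut solver.
import numpy as np

rng = np.random.default_rng(1)
H, W, K = 40, 40, 3           # image height, width, number of classes
beta = 1.5                    # MRF smoothness weight

# Hypothetical pixel-wise posteriors p(y_i = k | x_i).
probs = rng.dirichlet(np.ones(K), size=(H, W))
log_probs = np.log(probs)

labels = probs.argmax(axis=-1)          # initialize with the pixel-wise MAP

def neighbors(i, j):
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < H and 0 <= nj < W:
            yield ni, nj

# A few ICM sweeps: each pixel takes the label maximizing its local
# (log-likelihood + Potts prior) contribution given its neighbors.
for _ in range(5):
    for i in range(H):
        for j in range(W):
            scores = log_probs[i, j].copy()
            for ni, nj in neighbors(i, j):
                scores[labels[ni, nj]] += beta    # reward agreeing neighbors
            labels[i, j] = scores.argmax()

print("label counts:", np.bincount(labels.ravel(), minlength=K))
```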

678 citations


Journal ArticleDOI
TL;DR: A comparison with recent implementations of path sampling and stepping-stone sampling shows reassuringly that MAP identification and its Bayes factor provide similar performance to PS and SS and that these approaches considerably outperform HME, sHME, and AICM in selecting the correct underlying clock model.
Abstract: Recent implementations of path sampling (PS) and stepping-stone sampling (SS) have been shown to outperform the harmonic mean estimator (HME) and a posterior simulation-based analog of Akaike’s information criterion through Markov chain Monte Carlo (AICM), in Bayesian model selection of demographic and molecular clock models. Almost simultaneously, a Bayesian model averaging approach was developed that avoids conditioning on a single model but averages over a set of relaxed clock models. This approach returns estimates of the posterior probability of each clock model through which one can estimate the Bayes factor in favor of the maximum a posteriori (MAP) clock model; however, this Bayes factor estimate may suffer when the posterior probability of the MAP model approaches 1. Here, we compare these two recent developments with the HME, stabilized/smoothed HME (sHME), and AICM, using both synthetic and empirical data. Our comparison shows reassuringly that MAP identification and its Bayes factor provide similar performance to PS and SS and that these approaches considerably outperform HME, sHME, and AICM in selecting the correct underlying clock model. We also illustrate the importance of using proper priors on a large set of empirical data sets.
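
Under a uniform prior over the candidate clock models, the Bayes factor in favor of the MAP model can be read off the posterior model probabilities returned by model averaging. The toy sketch below computes it against all other models combined, with made-up probabilities, and is only meant to show why the estimate degrades as the MAP model's posterior probability approaches 1.

```python
# Sketch, assuming equal prior probabilities over the candidate clock models:
# Bayes factor in favor of the maximum a posteriori (MAP) model computed from
# posterior model probabilities returned by Bayesian model averaging.
import numpy as np

def bayes_factor_map(posterior_probs):
    """Bayes factor of the MAP model against all other models combined."""
    p = np.asarray(posterior_probs, dtype=float)
    p = p / p.sum()
    k = int(p.argmax())
    p_map = p[k]
    n = p.size
    # Posterior odds divided by prior odds (prior odds = 1/(n-1) under a
    # uniform prior over models).
    return k, (p_map / (1.0 - p_map)) / (1.0 / (n - 1))

# Example: posterior probabilities of four relaxed-clock models.
idx, bf = bayes_factor_map([0.72, 0.20, 0.05, 0.03])
print(f"MAP model index {idx}, Bayes factor {bf:.2f}")
# Note: as p_map -> 1 the estimate blows up, which is the failure mode the
# abstract mentions for this estimator.
```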

556 citations


Journal ArticleDOI
TL;DR: Thorough experimental results suggest that the proposed SR method can reconstruct higher quality results both quantitatively and perceptually and propose a maximum a posteriori probability framework for SR recovery.
Abstract: Image super-resolution (SR) reconstruction is essentially an ill-posed problem, so it is important to design an effective prior. For this purpose, we propose a novel image SR method by learning both non-local and local regularization priors from a given low-resolution image. The non-local prior takes advantage of the redundancy of similar patches in natural images, while the local prior assumes that a target pixel can be estimated by a weighted average of its neighbors. Based on the above considerations, we utilize the non-local means filter to learn a non-local prior and the steering kernel regression to learn a local prior. By assembling the two complementary regularization terms, we propose a maximum a posteriori probability framework for SR recovery. Thorough experimental results suggest that the proposed SR method can reconstruct higher quality results both quantitatively and perceptually.

527 citations


Journal ArticleDOI
TL;DR: In this article, a Bayesian Maximum A Posteriori (MAP) approach is presented where a subset of highly correlated and quiet stars is used to generate a cotrending basis vector set which is in turn used to establish a range of "reasonable" robust fit parameters.
Abstract: With the unprecedented photometric precision of the Kepler Spacecraft, significant systematic and stochastic errors on transit signal levels are observable in the Kepler photometric data. These errors, which include discontinuities, outliers, systematic trends and other instrumental signatures, obscure astrophysical signals. The Presearch Data Conditioning (PDC) module of the Kepler data analysis pipeline tries to remove these errors while preserving planet transits and other astrophysically interesting signals. The completely new noise and stellar variability regime observed in Kepler data poses a significant problem to standard cotrending methods such as SYSREM and TFA. Variable stars are often of particular astrophysical interest so the preservation of their signals is of significant importance to the astrophysical community. We present a Bayesian Maximum A Posteriori (MAP) approach where a subset of highly correlated and quiet stars is used to generate a cotrending basis vector set which is in turn used to establish a range of "reasonable" robust fit parameters. These robust fit parameters are then used to generate a Bayesian Prior and a Bayesian Posterior Probability Distribution Function (PDF) which when maximized finds the best fit that simultaneously removes systematic effects while reducing the signal distortion and noise injection which commonly afflicts simple least-squares (LS) fitting. A numerical and empirical approach is taken where the Bayesian Prior PDFs are generated from fits to the light curve distributions themselves.

520 citations


Journal ArticleDOI
TL;DR: A dual mathematical interpretation of the proposed framework with a structured sparse estimation is described, which shows that the resulting piecewise linear estimate stabilizes the estimation when compared with traditional sparse inverse problem techniques.
Abstract: A general framework for solving image inverse problems with piecewise linear estimations is introduced in this paper. The approach is based on Gaussian mixture models, which are estimated via a maximum a posteriori expectation-maximization algorithm. A dual mathematical interpretation of the proposed framework with a structured sparse estimation is described, which shows that the resulting piecewise linear estimate stabilizes the estimation when compared with traditional sparse inverse problem techniques. We demonstrate that, in a number of image inverse problems, including interpolation, zooming, and deblurring of narrow kernels, the same simple and computationally efficient algorithm yields results in the same ballpark as those of the state of the art.
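
A minimal sketch of the piecewise linear estimate for denoising, assuming the Gaussian mixture over signal vectors is already known: each component contributes a linear (Wiener) estimate, the component posterior given the noisy data is evaluated, and the estimate of the MAP component is kept. The MAP-EM step that learns the mixture in the paper is omitted, and the mixture parameters below are arbitrary.

```python
# Sketch of the piecewise linear estimate idea for denoising with a *known*
# Gaussian mixture over signal vectors: for each Gaussian component, form the
# linear (Wiener) estimate, score the component by its posterior probability
# given the noisy data, and keep the estimate of the MAP component.
import numpy as np

rng = np.random.default_rng(2)
d, sigma = 8, 0.3

# Hypothetical two-component Gaussian mixture over signal vectors.
means = [np.zeros(d), np.ones(d)]
covs = [np.diag(np.linspace(1.0, 0.1, d)), 0.5 * np.eye(d)]
weights = np.array([0.5, 0.5])

# Noisy observation y = x + n of a signal drawn from component 1.
x_true = rng.multivariate_normal(means[1], covs[1])
y = x_true + sigma * rng.normal(size=d)

def log_gauss(v, mu, cov):
    diff = v - mu
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet + d * np.log(2 * np.pi))

noise_cov = sigma**2 * np.eye(d)
log_post, estimates = [], []
for w, mu, cov in zip(weights, means, covs):
    # Marginal of y under this component: N(mu, cov + sigma^2 I).
    log_post.append(np.log(w) + log_gauss(y, mu, cov + noise_cov))
    # Wiener (linear) estimate of x under this component.
    gain = cov @ np.linalg.inv(cov + noise_cov)
    estimates.append(mu + gain @ (y - mu))

k_map = int(np.argmax(log_post))
x_hat = estimates[k_map]
print("selected component:", k_map)
print("error of MAP piecewise linear estimate:", np.linalg.norm(x_hat - x_true))
```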

505 citations


Posted Content
TL;DR: Using the insights gained from this comparative study, it is shown how accurate topic models can be learned in several seconds on text corpora with thousands of documents.
Abstract: Latent Dirichlet analysis, or topic modeling, is a flexible latent variable framework for modeling high-dimensional sparse count data. Various learning algorithms have been developed in recent years, including collapsed Gibbs sampling, variational inference, and maximum a posteriori estimation, and this variety motivates the need for careful empirical comparisons. In this paper, we highlight the close connections between these approaches. We find that the main differences are attributable to the amount of smoothing applied to the counts. When the hyperparameters are optimized, the differences in performance among the algorithms diminish significantly. The ability of these algorithms to achieve solutions of comparable accuracy gives us the freedom to select computationally efficient approaches. Using the insights gained from this comparative study, we show how accurate topic models can be learned in several seconds on text corpora with thousands of documents.
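
For reference, here is a toy EM sketch of MAP estimation for LDA with symmetric Dirichlet priors: the M-step adds (alpha - 1) and (beta - 1) to the expected counts, which is exactly the kind of count smoothing the abstract identifies as the main source of differences between the algorithms. The corpus and hyperparameters are placeholders, not taken from the paper.

```python
# Toy sketch of MAP estimation for LDA via EM with symmetric Dirichlet priors
# (alpha on document-topic, beta on topic-word distributions). The M-step adds
# (alpha - 1) and (beta - 1) to the expected counts, i.e. smooths the counts.
import numpy as np

rng = np.random.default_rng(3)
D, V, K = 20, 50, 4              # documents, vocabulary size, topics
alpha, beta = 1.1, 1.05          # Dirichlet hyperparameters (> 1 keeps the MAP interior)

counts = rng.poisson(0.5, size=(D, V)).astype(float)   # document-term matrix

theta = rng.dirichlet(np.ones(K), size=D)               # p(topic | doc)
phi = rng.dirichlet(np.ones(V), size=K)                 # p(word | topic)

for _ in range(100):
    # E-step: responsibilities gamma[d, v, k] proportional to theta[d, k] * phi[k, v]
    gamma = theta[:, None, :] * phi.T[None, :, :]        # (D, V, K)
    gamma /= gamma.sum(axis=2, keepdims=True)

    # M-step: smoothed expected counts, renormalized.
    expected = counts[:, :, None] * gamma                # (D, V, K)
    theta = expected.sum(axis=1) + (alpha - 1.0)
    theta /= theta.sum(axis=1, keepdims=True)
    phi = expected.sum(axis=0).T + (beta - 1.0)          # (K, V)
    phi /= phi.sum(axis=1, keepdims=True)

print("top words of topic 0:", np.argsort(phi[0])[::-1][:5])
```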

496 citations


Journal ArticleDOI
01 Nov 2012
TL;DR: The idea is to formulate the registration problem in a Maximum A Posteriori (MAP) framework and iteratively register a 3D articulated human body model with monocular depth cues via linear system solvers.
Abstract: We present a fast, automatic method for accurately capturing full-body motion data using a single depth camera. At the core of our system lies a realtime registration process that accurately reconstructs 3D human poses from single monocular depth images, even in the case of significant occlusions. The idea is to formulate the registration problem in a Maximum A Posteriori (MAP) framework and iteratively register a 3D articulated human body model with monocular depth cues via linear system solvers. We integrate depth data, silhouette information, full-body geometry, temporal pose priors, and occlusion reasoning into a unified MAP estimation framework. Our 3D tracking process, however, requires manual initialization and recovery from failures. We address this challenge by combining 3D tracking with 3D pose detection. This combination not only automates the whole process but also significantly improves the robustness and accuracy of the system. Our whole algorithm is highly parallel and is therefore easily implemented on a GPU. We demonstrate the power of our approach by capturing a wide range of human movements in real time and achieve state-of-the-art accuracy in our comparison against alternative systems such as Kinect [2012].

240 citations


Journal ArticleDOI
TL;DR: It is shown that with random linear measurements and Gaussian noise, the replica-symmetric prediction of the asymptotic behavior of the postulated MAP estimate of an n-dimensional vector “decouples” into n scalar postulated MAP estimators.
Abstract: The replica method is a nonrigorous but well-known technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method, under the assumption of replica symmetry, to study estimators that are maximum a posteriori (MAP) under a postulated prior distribution. It is shown that with random linear measurements and Gaussian noise, the replica-symmetric prediction of the asymptotic behavior of the postulated MAP estimate of an n-dimensional vector “decouples” as n scalar postulated MAP estimators. The result is based on applying a hardening argument to the replica analysis of postulated posterior mean estimators of Tanaka and of Guo and Verdú. The replica-symmetric postulated MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, least absolute shrinkage and selection operator (LASSO), linear estimation with thresholding, and zero norm-regularized estimation. In the case of LASSO estimation, the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation, it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for precisely predicting various performance metrics including mean-squared error and sparsity pattern recovery probability.
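
The two scalar estimators mentioned at the end of the abstract are easy to state directly. The snippet below implements soft thresholding (the scalar equivalent of LASSO) and hard thresholding (the scalar equivalent of zero norm-regularized estimation), with arbitrary threshold values rather than the replica-predicted effective parameters.

```python
# Scalar estimators referenced in the abstract: soft thresholding (scalar
# equivalent of LASSO / l1-regularized estimation) and hard thresholding
# (scalar equivalent of zero norm-regularized estimation). Threshold values
# are arbitrary, not the replica-predicted effective parameters.
import numpy as np

def soft_threshold(z, t):
    """argmin_x 0.5*(x - z)^2 + t*|x|  ->  shrink toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def hard_threshold(z, t):
    """argmin_x 0.5*(x - z)^2 + 0.5*t^2*1[x != 0]  ->  keep or kill."""
    return np.where(np.abs(z) > t, z, 0.0)

z = np.array([-2.0, -0.3, 0.1, 0.8, 3.0])
print("soft:", soft_threshold(z, 0.5))
print("hard:", hard_threshold(z, 0.5))
```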

Journal ArticleDOI
TL;DR: It is proved that under a certain condition, when the kernel size in correntropy is larger than some value, the MC estimation will have a unique optimal solution lying in a strictly concave region of the smoothed posterior distribution.
Abstract: As a new measure of similarity, the correntropy can be used as an objective function for many applications. In this letter, we study Bayesian estimation under the maximum correntropy (MC) criterion. We show that the MC estimation is, in essence, a smoothed maximum a posteriori (MAP) estimation, including the MAP and the minimum mean square error (MMSE) estimation as the extreme cases. We also prove that under a certain condition, when the kernel size in correntropy is larger than some value, the MC estimation will have a unique optimal solution lying in a strictly concave region of the smoothed posterior distribution.
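
The interpretation can be illustrated numerically: on a gridded toy posterior, the MC estimate maximizes the posterior smoothed by a Gaussian kernel of width sigma (the correntropy kernel size), recovering the MAP estimate as sigma shrinks and approaching the posterior mean (MMSE) as sigma grows. The bimodal posterior below is an arbitrary example, not taken from the letter.

```python
# Toy 1-D illustration of the letter's interpretation: the maximum correntropy
# (MC) estimate maximizes the posterior smoothed by a Gaussian kernel of width
# sigma. Small sigma recovers the MAP estimate; large sigma approaches the
# posterior mean (MMSE).
import numpy as np

x = np.linspace(-6, 6, 1201)
dx = x[1] - x[0]

# Arbitrary bimodal posterior p(x | y): a narrow tall mode and a wide heavy mode.
posterior = 0.45 * np.exp(-0.5 * ((x - 2.0) / 0.3) ** 2) / 0.3 \
          + 0.55 * np.exp(-0.5 * ((x + 1.5) / 1.2) ** 2) / 1.2
posterior /= posterior.sum() * dx

def mc_estimate(sigma):
    """argmax over xhat of E[ exp(-(X - xhat)^2 / (2 sigma^2)) | y ] on the grid."""
    kernel_vals = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)
    smoothed = kernel_vals @ posterior * dx        # posterior convolved with the kernel
    return x[np.argmax(smoothed)]

x_map = x[np.argmax(posterior)]
x_mmse = np.sum(x * posterior) * dx
for sigma in (0.05, 0.5, 2.0, 20.0):
    print(f"sigma={sigma:5.2f}  MC estimate = {mc_estimate(sigma):+.3f}")
print(f"MAP  = {x_map:+.3f}   MMSE = {x_mmse:+.3f}")
```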

Proceedings Article
03 Dec 2012
TL;DR: This work addresses the problem of generating multiple hypotheses for structured prediction tasks that involve interaction with users or successive components in a cascaded architecture by formulating this task as a multiple-output structured-output prediction problem with a loss-function that effectively captures the setup of the problem.
Abstract: We address the problem of generating multiple hypotheses for structured prediction tasks that involve interaction with users or successive components in a cascaded architecture. Given a set of multiple hypotheses, such components/users typically have the ability to retrieve the best (or approximately the best) solution in this set. The standard approach for handling such a scenario is to first learn a single-output model and then produce M-Best Maximum a Posteriori (MAP) hypotheses from this model. In contrast, we learn to produce multiple outputs by formulating this task as a multiple-output structured-output prediction problem with a loss-function that effectively captures the setup of the problem. We present a max-margin formulation that minimizes an upper-bound on this loss-function. Experimental results on image segmentation and protein side-chain prediction show that our method outperforms conventional approaches used for this type of scenario and leads to substantial improvements in prediction accuracy.

Journal ArticleDOI
TL;DR: A joint learning technique is applied to train two projection matrices simultaneously and to map the original LR and HR feature spaces onto a unified feature subspace to overcome or at least to reduce the problem for NE-based SR reconstruction.
Abstract: The neighbor-embedding (NE) algorithm for single-image super-resolution (SR) reconstruction assumes that the feature spaces of low-resolution (LR) and high-resolution (HR) patches are locally isometric. However, this is not true for SR because of one-to-many mappings between LR and HR patches. To overcome or at least to reduce the problem for NE-based SR reconstruction, we apply a joint learning technique to train two projection matrices simultaneously and to map the original LR and HR feature spaces onto a unified feature subspace. Subsequently, the k-nearest neighbor selection of the input LR image patches is conducted in the unified feature subspace to estimate the reconstruction weights. To handle a large number of samples, joint learning locally exploits a coupled constraint by linking the LR-HR counterparts together with the K-nearest grouping patch pairs. In order to refine further the initial SR estimate, we impose a global reconstruction constraint on the SR outcome based on the maximum a posteriori framework. Preliminary experiments suggest that the proposed algorithm outperforms NE-related baselines.

Book ChapterDOI
07 Oct 2012
TL;DR: A new method for recovering the blur kernel in motion-blurred images based on statistical irregularities their power spectrum exhibits is described, achieved by a power-law that refines the one traditionally used for describing natural images.
Abstract: We describe a new method for recovering the blur kernel in motion-blurred images based on statistical irregularities their power spectrum exhibits. This is achieved by a power-law that refines the one traditionally used for describing natural images. The new model better accounts for biases arising from the presence of large and strong edges in the image. We use this model together with an accurate spectral whitening formula to estimate the power spectrum of the blur. The blur kernel is then recovered using a phase retrieval algorithm with improved convergence and disambiguation capabilities. Unlike many existing methods, the new approach does not perform a maximum a posteriori estimation, which involves repeated reconstructions of the latent image, and hence offers attractive running times. We compare the new method with state-of-the-art methods and report various advantages, both in terms of efficiency and accuracy.

Journal ArticleDOI
TL;DR: A maximum a posteriori (MAP) based multi-frame super-resolution algorithm for hyperspectral images is presented, in which principal component analysis (PCA) is utilized in both parts of the algorithm: motion estimation and image reconstruction.

Journal ArticleDOI
TL;DR: A new interpolation-based method of image super-resolution reconstruction uses multisurface fitting to take full advantage of spatial structure information, and the method is extended to a more general noise model.
Abstract: In this paper, we propose a new interpolation-based method of image super-resolution reconstruction. The idea is using multisurface fitting to take full advantage of spatial structure information. Each site of low-resolution pixels is fitted with one surface, and the final estimation is made by fusing the multisampling values on these surfaces in the maximum a posteriori fashion. With this method, the reconstructed high-resolution images preserve image details effectively without any hypothesis on image prior. Furthermore, we extend our method to a more general noise model. Experimental results on the simulated and real-world data show the superiority of the proposed method in both quantitative and visual comparisons.

Journal ArticleDOI
TL;DR: A probabilistic tracking method is proposed to detect blood vessels in retinal images, achieving effective detection with fewer false detections than Sun's and Chaudhuri's methods.

Journal ArticleDOI
TL;DR: A novel data reduction method is developed that requires no inter-sensor collaboration and results in only a subset of the sensor measurements being transmitted to the FC; it performs competitively with alternative methods under different sensing conditions while having lower computational complexity.
Abstract: Consider a wireless sensor network (WSN) with a fusion center (FC) deployed to estimate signal parameters from noisy sensor measurements. If the WSN has a large number of low-cost, battery-operated sensor nodes with limited transmission bandwidth, then conservation of transmission resources (power and bandwidth) is paramount. To this end, the present paper develops a novel data reduction method which requires no inter-sensor collaboration and results in only a subset of the sensor measurements transmitted to the FC. Using interval censoring as a data-reduction method, each sensor decides separately whether to censor its acquired measurements based on a rule that promotes censoring of measurements with least impact on the estimator mean-square error (MSE). Leveraging the statistical distribution of sensor data, the censoring mechanism and the received uncensored data, FC-based estimators are derived for both deterministic (via maximum likelihood estimation) and random parameters (via maximum a posteriori probability estimation) for a linear-Gaussian model. Quantization of the uncensored measurements at the sensor nodes offers an additional degree of freedom in the resource conservation versus estimator MSE reduction tradeoff. Cramer-Rao bound analysis for the different censor-estimators and censor-quantizer estimators is also provided to benchmark and facilitate MSE-based performance comparisons. Numerical simulations corroborate the analytical findings and demonstrate that the proposed censoring-estimation approach performs competitively with alternative methods, under different sensing conditions, while having lower computational complexity.
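
As a reference point for the estimators involved, here is a minimal sketch of the fusion-center estimates for the linear-Gaussian model, comparing maximum likelihood (for deterministic parameters) with MAP under a Gaussian prior (for random parameters). The censoring and quantization mechanisms that are the paper's actual contribution, and that modify the likelihood, are deliberately omitted; all model parameters are placeholders.

```python
# Minimal sketch of fusion-center estimation for the linear-Gaussian model
# y = H*theta + n: ML for deterministic theta, MAP with a Gaussian prior for
# random theta. The paper's censoring/quantization of measurements is omitted.
import numpy as np

rng = np.random.default_rng(4)
n_sensors, p = 30, 3

H = rng.normal(size=(n_sensors, p))            # known regression matrix
sigma = 0.5                                     # measurement noise std
prior_mean = np.zeros(p)
prior_cov = np.eye(p)

theta_true = rng.multivariate_normal(prior_mean, prior_cov)
y = H @ theta_true + sigma * rng.normal(size=n_sensors)

# ML estimate (deterministic-parameter case): ordinary least squares.
theta_ml, *_ = np.linalg.lstsq(H, y, rcond=None)

# MAP estimate (random-parameter case): with Gaussian prior and likelihood the
# posterior is Gaussian, so the MAP coincides with the posterior mean:
# theta_map = (H^T H / sigma^2 + C^{-1})^{-1} (H^T y / sigma^2 + C^{-1} mu)
A = H.T @ H / sigma**2 + np.linalg.inv(prior_cov)
b = H.T @ y / sigma**2 + np.linalg.inv(prior_cov) @ prior_mean
theta_map = np.linalg.solve(A, b)

print("true:", np.round(theta_true, 3))
print("ML  :", np.round(theta_ml, 3))
print("MAP :", np.round(theta_map, 3))
```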

Journal ArticleDOI
17 Feb 2012-PLOS ONE
TL;DR: A graphical Bayesian model for SNP genotyping data is introduced that can infer genotypes even when the ploidy of the population is unknown and can be trivially adapted to use models that utilize prior information about any platform or species.
Abstract: The problem of genotyping polyploids is extremely important for the creation of genetic maps and assembly of complex plant genomes. Despite its significance, polyploid genotyping still remains largely unsolved and suffers from a lack of statistical formality. In this paper a graphical Bayesian model for SNP genotyping data is introduced. This model can infer genotypes even when the ploidy of the population is unknown. We also introduce an algorithm for finding the exact maximum a posteriori genotype configuration with this model. This algorithm is implemented in a freely available web-based software package SuperMASSA. We demonstrate the utility, efficiency, and flexibility of the model and algorithm by applying them to two different platforms, each of which is applied to a polyploid data set: Illumina GoldenGate data from potato and Sequenom MassARRAY data from sugarcane. Our method achieves state-of-the-art performance on both data sets and can be trivially adapted to use models that utilize prior information about any platform or species.

Book ChapterDOI
07 Oct 2012
TL;DR: A new model for the task of word recognition in natural images that simultaneously models visual and lexicon consistency of words in a single probabilistic model is proposed and outperforms state-of-the-art methods for cropped word recognition.
Abstract: This paper proposes a new model for the task of word recognition in natural images that simultaneously models visual and lexicon consistency of words in a single probabilistic model. Our approach combines local likelihood and pairwise positional consistency priors with higher order priors that enforce consistency of characters (lexicon) and their attributes (font and colour). Unlike traditional stage-based methods, word recognition in our framework is performed by estimating the maximum a posteriori (MAP) solution under the joint posterior distribution of the model. MAP inference in our model is performed through the use of weighted finite-state transducers (WFSTs). We show how the efficiency of certain operations on WFSTs can be utilized to find the most likely word under the model in an efficient manner. We evaluate our method on a range of challenging datasets (ICDAR'03, SVT, ICDAR'11). Experimental results demonstrate that our method outperforms state-of-the-art methods for cropped word recognition.

Proceedings ArticleDOI
16 Mar 2012
TL;DR: This work presents the Maximum a Posteriori HMM approach for forecasting stock values for the next day given historical data, and compares the performance to some of the existing methods using HMMs and Artificial Neural Networks using Mean Absolute Percentage Error (MAPE).
Abstract: Stock market prediction is a classic problem which has been analyzed extensively using tools and techniques of Machine Learning. Interesting properties which make this modeling non-trivial are the time dependence, volatility, and other similar complex dependencies of this problem. To incorporate these, Hidden Markov Models (HMMs) have recently been applied to forecast and predict the stock market. We present the Maximum a Posteriori HMM approach for forecasting stock values for the next day given historical data. In our approach, we consider the fractional change in stock value and the intra-day high and low values of the stock to train the continuous HMM. This HMM is then used to make a Maximum a Posteriori decision over all the possible stock values for the next day. We test our approach on several stocks, and compare the performance to some of the existing methods using HMMs and Artificial Neural Networks using Mean Absolute Percentage Error (MAPE).

Journal ArticleDOI
TL;DR: The problem of sampling from the posterior density, which is less explored in the inverse problems community than computing MAP estimators (regularized solutions), is addressed using a Markov chain Monte Carlo (MCMC) method.
Abstract: The connection between Bayesian statistics and the technique of regularization for inverse problems has been given significant attention in recent years. For example, Bayes' law is frequently used as motivation for variational regularization methods of Tikhonov type. In this setting, the regularization function corresponds to the negative-log of the prior probability density; the fit-to-data function corresponds to the negative-log of the likelihood; and the regularized solution corresponds to the maximizer of the posterior density function, known as the maximum a posteriori (MAP) estimator of the unknown, which in our case is an image. Much of the work in this direction has focused on the development of techniques for efficient computation of MAP estimators (or regularized solutions). Less explored in the inverse problems community, and of interest to us in this paper, is the problem of sampling from the posterior density. To do this, we use a Markov chain Monte Carlo (MCMC) method, which has previously ...
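
A structural sketch of the sampling idea on a deliberately tiny problem: random-walk Metropolis draws from a posterior whose negative log is a fit-to-data term plus a Tikhonov-type regularization term, here for a two-parameter ill-conditioned linear model rather than an image, and with a far simpler sampler than the paper uses.

```python
# Structural sketch only: random-walk Metropolis sampling from a posterior
# whose negative log is a fit-to-data term plus a Tikhonov-type regularization
# term, for a tiny two-parameter linear inverse problem.
import numpy as np

rng = np.random.default_rng(5)

# Small ill-conditioned forward model A x = y with additive Gaussian noise.
A = np.array([[1.0, 0.99], [0.99, 0.98]])
x_true = np.array([1.0, -1.0])
sigma, lam = 0.05, 1.0
y = A @ x_true + sigma * rng.normal(size=2)

def neg_log_posterior(x):
    fit = np.sum((A @ x - y) ** 2) / (2 * sigma**2)   # negative log-likelihood
    reg = 0.5 * lam * np.sum(x**2)                    # negative log-prior (Tikhonov)
    return fit + reg

# Random-walk Metropolis.
x = np.zeros(2)
current = neg_log_posterior(x)
samples = []
for _ in range(20000):
    proposal = x + 0.05 * rng.normal(size=2)
    cand = neg_log_posterior(proposal)
    if np.log(rng.uniform()) < current - cand:        # accept with prob min(1, ratio)
        x, current = proposal, cand
    samples.append(x.copy())

samples = np.array(samples[5000:])                     # discard burn-in
print("posterior mean :", samples.mean(axis=0))
print("posterior std  :", samples.std(axis=0))
```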

Journal ArticleDOI
TL;DR: This paper considers the family of total Bregman divergences (tBDs) as an efficient and robust “distance” measure to quantify the dissimilarity between shapes, and proves that for any tBD, there exists a distribution which belongs to the lifted exponential family (lEF) of statistical distributions.
Abstract: In this paper, we consider the family of total Bregman divergences (tBDs) as an efficient and robust “distance” measure to quantify the dissimilarity between shapes. We use the tBD-based l1-norm center as the representative of a set of shapes, and call it the t-center. First, we briefly present and analyze the properties of the tBDs and t-centers following our previous work in [1]. Then, we prove that for any tBD, there exists a distribution which belongs to the lifted exponential family (lEF) of statistical distributions. Further, we show that finding the maximum a posteriori (MAP) estimate of the parameters of the lifted exponential family distribution is equivalent to minimizing the tBD to find the t-centers. This leads to a new clustering technique, namely, the total Bregman soft clustering algorithm. We evaluate the tBD, t-center, and the soft clustering algorithm on shape retrieval applications. Our shape retrieval framework is composed of three steps: 1) extraction of the shape boundary points, 2) affine alignment of the shapes and use of a Gaussian mixture model (GMM) [2], [3], [4] to represent the aligned boundaries, and 3) comparison of the GMMs using tBD to find the best matches given a query shape. To further speed up the shape retrieval algorithm, we perform hierarchical clustering of the shapes using our total Bregman soft clustering algorithm. This enables us to compare the query with a small subset of shapes which are chosen to be the cluster t-centers. We evaluate our method on various public domain 2D and 3D databases, and demonstrate comparable or better results than state-of-the-art retrieval techniques.

Journal ArticleDOI
TL;DR: In experiments on several real-world data sets, it is shown that exploiting a silhouette coherency criterion in a multiview setting allows for dramatic improvements of silhouette quality over independent 2D segmentations without any significant increase of computational efforts.
Abstract: We propose a probabilistic formulation of joint silhouette extraction and 3D reconstruction given a series of calibrated 2D images. Instead of segmenting each image separately in order to construct a 3D surface consistent with the estimated silhouettes, we compute the most probable 3D shape that gives rise to the observed color information. The probabilistic framework, based on Bayesian inference, enables robust 3D reconstruction by optimally taking into account the contribution of all views. We solve the arising maximum a posteriori shape inference in a globally optimal manner by convex relaxation techniques in a spatially continuous representation. For an interactively provided user input in the form of scribbles specifying foreground and background regions, we build corresponding color distributions as multivariate Gaussians and find a volume occupancy that best fits to this data in a variational sense. Compared to classical methods for silhouette-based multiview reconstruction, the proposed approach does not depend on initialization and enjoys significant resilience to violations of the model assumptions due to background clutter, specular reflections, and camera sensor perturbations. In experiments on several real-world data sets, we show that exploiting a silhouette coherency criterion in a multiview setting allows for dramatic improvements of silhouette quality over independent 2D segmentations without any significant increase of computational efforts. This results in more accurate visual hull estimation, needed by a multitude of image-based modeling approaches. We made use of recent advances in parallel computing with a GPU implementation of the proposed method generating reconstructions on volume grids of more than 20 million voxels in up to 4.41 seconds.

Journal ArticleDOI
TL;DR: It is proved that many distributions revolving around maximum a posteriori (MAP) interpretation of sparse regularized estimators are in fact incompressible, in the limit of large problem sizes.
Abstract: We develop a principled way of identifying probability distributions whose independent and identically distributed realizations are compressible, i.e., can be well approximated as sparse. We focus on Gaussian compressed sensing, an example of underdetermined linear regression, where compressibility is known to ensure the success of estimators exploiting sparse regularization. We prove that many distributions revolving around maximum a posteriori (MAP) interpretation of sparse regularized estimators are in fact incompressible, in the limit of large problem sizes. We especially highlight the Laplace distribution and ℓ1-regularized estimators such as the Lasso and basis pursuit denoising. We rigorously disprove the myth that the success of ℓ1 minimization for compressed sensing image reconstruction is a simple corollary of a Laplace model of images combined with Bayesian MAP estimation, and show that in fact quite the reverse is true. To establish this result, we identify nontrivial undersampling regions where the simple least-squares solution almost surely outperforms an oracle sparse solution, when the data are generated from the Laplace distribution. We also provide simple rules of thumb to characterize classes of compressible and incompressible distributions based on their second and fourth moments. Generalized Gaussian and generalized Pareto distributions serve as running examples.
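
A small empirical illustration of the compressible/incompressible distinction (not the paper's moment-based criteria): compare how much energy survives after keeping only the largest one percent of i.i.d. Laplace coefficients versus heavy-tailed generalized Pareto coefficients. The sample sizes and tail parameters below are arbitrary.

```python
# Illustration in the spirit of the compressible / incompressible distinction:
# how much energy remains after keeping only the largest 1% of coefficients.
# Laplace samples keep most of their energy in the small coefficients
# (incompressible), while heavy-tailed Pareto samples concentrate energy in a
# few large ones (compressible).
import numpy as np

rng = np.random.default_rng(6)
N, k = 100_000, 1_000      # keep the k = 1% largest-magnitude coefficients

def relative_residual(x, k):
    """l2 norm of what the best k-term approximation discards, relative to ||x||."""
    mags = np.sort(np.abs(x))[::-1]
    return np.sqrt(np.sum(mags[k:] ** 2) / np.sum(mags**2))

laplace = rng.laplace(size=N)
pareto = rng.pareto(1.2, size=N)      # heavy-tailed (Lomax with shape 1.2)

print("Laplace residual after best 1% approx:", round(relative_residual(laplace, k), 3))
print("Pareto  residual after best 1% approx:", round(relative_residual(pareto, k), 3))
```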

Journal ArticleDOI
TL;DR: An analytical expression for the mean square error on location estimates under an incorrect PLE assumption is derived, the effects of error in the PLE on the location accuracy are examined, and a maximum a posteriori (MAP) estimator is proposed by considering the PLE as an unknown random variable.
Abstract: Due to its straightforward implementation, the received signal strength (RSS) has been an advantageous approach for low cost localization systems. Although the propagation model is difficult to characterize in uncertain environments, the majority of current studies assume exact knowledge of the path-loss exponent (PLE). This letter deals with RSS based localization in an unknown path-loss model. First, we derive an analytical expression for the mean square error on location estimates for incorrect PLE assumption and examine, via simulation, the effects of error in the PLE on the location accuracy. Second, we enhance a previously proposed RSS-PLE joint estimator (JE) by reducing its complexity. We also propose a maximum a posteriori (MAP) estimator by considering the PLE as an unknown random variable. Finally, we derive the Hybrid Cramer-Rao Bound (HCRB) as a benchmark for the MAP estimator. Error analysis results predict large errors due to an incorrect PLE assumption, which are in agreement with the simulation results. Further simulations show that the MAP estimator exhibits better performance at low signal to noise ratio (SNR) and that the relation between the HCRB and CRB depends on the network geometry.
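
To make the MAP formulation concrete, the toy sketch below treats the path-loss exponent as a random variable with a Gaussian prior and estimates it jointly with the source location from RSS measurements by brute-force grid search; the anchor geometry, prior, and noise parameters are placeholders, and the letter's estimator and HCRB analysis are not reproduced.

```python
# Toy sketch of the MAP idea: treat the path-loss exponent (PLE) as a random
# variable with a Gaussian prior and estimate it jointly with the source
# location from RSS measurements via a simple grid search.
import numpy as np

rng = np.random.default_rng(7)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 6.0])
P0, ple_true, sigma = -40.0, 3.2, 2.0          # dBm at 1 m, true PLE, shadowing std (dB)
ple_prior_mean, ple_prior_std = 3.0, 0.5       # Gaussian prior on the PLE

dists = np.linalg.norm(anchors - source, axis=1)
rss = P0 - 10.0 * ple_true * np.log10(dists) + sigma * rng.normal(size=len(anchors))

# Grid search over candidate locations and PLE values for the joint MAP estimate.
xs = ys = np.linspace(0.0, 10.0, 51)
ples = np.linspace(2.0, 5.0, 31)
best = (np.inf, None, None)
for ple in ples:
    prior_cost = 0.5 * ((ple - ple_prior_mean) / ple_prior_std) ** 2
    for x in xs:
        for y in ys:
            d = np.linalg.norm(anchors - np.array([x, y]), axis=1)
            d = np.maximum(d, 1e-3)                       # avoid log10(0)
            pred = P0 - 10.0 * ple * np.log10(d)
            cost = np.sum((rss - pred) ** 2) / (2 * sigma**2) + prior_cost
            if cost < best[0]:
                best = (cost, np.array([x, y]), ple)

_, loc_map, ple_map = best
print("MAP location:", loc_map, " MAP PLE:", round(float(ple_map), 2))
```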

Book ChapterDOI
07 Oct 2012
TL;DR: A probabilistic formulation is introduced that seamlessly incorporates such constraints as priors to arrive at the maximum a posteriori estimates of reflectance and natural illumination.
Abstract: Estimating reflectance and natural illumination from a single image of an object of known shape is a challenging task due to the ambiguities between reflectance and illumination. Although there is an inherent limitation in what can be recovered as the reflectance band-limits the illumination, explicitly estimating both is desirable for many computer vision applications. Achieving this estimation requires that we derive and impose strong constraints on both variables. We introduce a probabilistic formulation that seamlessly incorporates such constraints as priors to arrive at the maximum a posteriori estimates of reflectance and natural illumination. We begin by showing that reflectance modulates the natural illumination in a way that increases its entropy. Based on this observation, we impose a prior on the illumination that favors lower entropy while conforming to natural image statistics. We also impose a prior on the reflectance based on the directional statistics BRDF model that constrains the estimate to lie within the bounds and variability of real-world materials. Experimental results on a number of synthetic and real images show that the method is able to achieve accurate joint estimation for different combinations of materials and lighting.

Journal ArticleDOI
TL;DR: This paper proposes to learn a dictionary from the logarithmic transformed image, and then to use it in a variational model built for noise removal, suggesting that in terms of visual quality, peak signal-to-noise ratio, and mean absolute deviation error, the proposed algorithm outperforms state-of-the-art methods.
Abstract: Multiplicative noise removal is a challenging image processing problem, and most existing methods are based on the maximum a posteriori formulation and the logarithmic transformation of multiplicative denoising problems into additive denoising problems. Sparse representations of images have been shown to be efficient approaches for image recovery. Following this idea, in this paper, we propose to learn a dictionary from the log-transformed image, and then to use it in a variational model built for noise removal. Extensive experimental results suggest that in terms of visual quality, peak signal-to-noise ratio, and mean absolute deviation error, the proposed algorithm outperforms state-of-the-art methods.

Journal ArticleDOI
TL;DR: This paper exploits a Boltzmann machine, which allows a large variety of structures to be taken into account, and resorts to a mean-field approximation and the “variational Bayes expectation-maximization” algorithm to solve a marginalized maximum a posteriori problem.
Abstract: Taking advantage of the structures inherent in many sparse decompositions constitutes a promising research axis. In this paper, we address this problem from a Bayesian point of view. We exploit a Boltzmann machine, which allows a large variety of structures to be taken into account, and focus on the resolution of a marginalized maximum a posteriori problem. To solve this problem, we resort to a mean-field approximation and the “variational Bayes expectation-maximization” algorithm. This approach results in a soft procedure making no hard decision on the support or the values of the sparse representation. We show that this characteristic leads to an improvement of the performance over state-of-the-art algorithms.