
Showing papers by "Rafael Molina published in 2012"


Journal ArticleDOI
TL;DR: Starting from a matrix factorization formulation and enforcing the low-rank constraint as a sparsity constraint on the factors, the proposed sparse Bayesian learning approach determines the correct rank while providing high recovery performance.
Abstract: Recovery of low-rank matrices has recently seen significant activity in many areas of science and engineering, motivated by recent theoretical results for exact reconstruction guarantees and interesting practical applications. In this paper, we present novel recovery algorithms for estimating low-rank matrices in matrix completion and robust principal component analysis based on sparse Bayesian learning (SBL) principles. Starting from a matrix factorization formulation and enforcing the low-rank constraint in the estimates as a sparsity constraint, we develop an approach that is very effective in determining the correct rank while providing high recovery performance. We provide connections with existing methods in other similar problems and empirical results and comparisons with current state-of-the-art methods that illustrate the effectiveness of this approach.
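The core idea — expressing the low-rank constraint as sparsity on the columns of a factorization — can be sketched with ordinary alternating ridge regression plus column pruning. This is a minimal stand-in, not the paper's SBL algorithm: all names, sizes, and the pruning rule are our assumptions.

```python
import numpy as np

def factorized_completion(Y, mask, k=8, n_iters=50, lam=1e-2, tol=1e-6):
    """Matrix completion via X ~= A @ B.T with alternating ridge-regularized
    least squares on the observed entries only. Columns whose energy falls
    below `tol` are pruned - a crude stand-in for the automatic rank
    determination that sparsity-promoting SBL priors provide."""
    m, n = Y.shape
    rng = np.random.default_rng(0)
    A = rng.standard_normal((m, k))
    B = rng.standard_normal((n, k))
    for _ in range(n_iters):
        for i in range(m):                      # update rows of A
            Bi = B[mask[i]]
            A[i] = np.linalg.solve(Bi.T @ Bi + lam * np.eye(B.shape[1]),
                                   Bi.T @ Y[i, mask[i]])
        for j in range(n):                      # update rows of B
            Aj = A[mask[:, j]]
            B[j] = np.linalg.solve(Aj.T @ Aj + lam * np.eye(A.shape[1]),
                                   Aj.T @ Y[mask[:, j], j])
        # Prune negligible columns: sparsity on the factors = low rank on X.
        keep = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=0) > tol
        A, B = A[:, keep], B[:, keep]
    return A @ B.T, A.shape[1]
```

On a rank-2 matrix with 60 % of entries observed, this fits the observed entries closely while keeping the factor width bounded; the Bayesian treatment in the paper additionally handles noise levels and rank selection in a principled way.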

249 citations


Book ChapterDOI
07 Oct 2012
TL;DR: A general method for blind image deconvolution using Bayesian inference with super-Gaussian sparse image priors is presented and theoretical and experimental results demonstrate that the proposed formulation is very effective, efficient, and flexible.
Abstract: We present a general method for blind image deconvolution using Bayesian inference with super-Gaussian sparse image priors. We consider a large family of priors suitable for modeling natural images, and develop the general procedure for estimating the unknown image and the blur. Our formulation includes a number of existing modeling and inference methods as special cases while providing additional flexibility in image modeling and algorithm design. We also present an analysis of the proposed inference compared to other methods and discuss its advantages. Theoretical and experimental results demonstrate that the proposed formulation is very effective, efficient, and flexible.
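Super-Gaussian priors admit quadratic upper bounds, which turns the image-update step into iteratively reweighted least squares (IRLS). The sketch below shows only that half of the alternation, with the blur assumed known; the full blind method also updates the blur and uses the paper's variational inference. All parameter values and names here are ours.

```python
import numpy as np

def irls_restore(y, h, p=0.8, lam=5e-3, n_iters=15, eps=1e-6):
    """Restore x from y = h (*) x (circular convolution, blur known here)
    under a super-Gaussian prior sum_i |(Dx)_i|^p. Each IRLS step solves a
    quadratic problem with weights w_i = p * ((Dx)_i^2 + eps)^(p/2 - 1),
    which are large in flat regions (strong smoothing) and small at edges."""
    n = len(y)
    hp = np.zeros(n)
    hp[:len(h)] = h
    H = np.stack([np.roll(hp, j) for j in range(n)], axis=1)  # blur matrix
    D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)             # differences
    x = y.copy()
    for _ in range(n_iters):
        w = p * ((D @ x) ** 2 + eps) ** (p / 2 - 1)
        x = np.linalg.solve(H.T @ H + lam * D.T @ (w[:, None] * D),
                            H.T @ y)
    return x
```

Because the weights adapt to the current estimate, edges are penalized far less than flat regions, which is exactly the sparsity-inducing behavior of the super-Gaussian family.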

191 citations


Journal ArticleDOI
TL;DR: The proposed acquisition and recovery method provides light field images with high spatial resolution and signal-to-noise ratio, and therefore is not affected by limitations common to existing light field camera designs.
Abstract: We propose a novel design for light field image acquisition based on compressive sensing principles. By placing a randomly coded mask at the aperture of a camera, incoherent measurements of the light passing through different parts of the lens are encoded in the captured images. Each captured image is a random linear combination of different angular views of a scene. The encoded images are then used to recover the original light field image via a novel Bayesian reconstruction algorithm. Using the principles of compressive sensing, we show that light field images with a large number of angular views can be recovered from only a few acquisitions. Moreover, the proposed acquisition and recovery method provides light field images with high spatial resolution and signal-to-noise ratio, and therefore is not affected by limitations common to existing light field camera designs. We present a prototype camera design based on the proposed framework by modifying a regular digital camera. Finally, we demonstrate the effectiveness of the proposed system using experimental results with both synthetic and real images.
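The measurement model is simple to state per pixel: each capture is a random linear mix of the angular views. The toy below (sizes and names are ours) uses more captures than views so that plain least squares already inverts the mixing; the point of the paper's Bayesian compressive-sensing recovery is to work with *fewer* captures than views.

```python
import numpy as np

rng = np.random.default_rng(1)
K, M, npix = 5, 7, 16                 # 5 angular views, 7 coded captures (toy sizes)
X = rng.standard_normal((K, npix))    # true angular views, one row per view
C = rng.random((M, K))                # per-capture mask code mixing the views
Y = C @ X                             # each capture is a linear mix of the views
# With M > K and generic codes, least squares inverts the mixing exactly;
# Bayesian CS recovery is what makes M < K possible.
X_hat, *_ = np.linalg.lstsq(C, Y, rcond=None)
```

In the actual camera, `C` is realized physically by the coded mask at the aperture, so the mixing happens optically before the sensor.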

106 citations


Proceedings ArticleDOI
18 Oct 2012
TL;DR: The proposed method exploits the correlation between the objects present in the color and depth map images via joint segmentation, which is then used to increase the resolution and remove noise via estimating conditional modes.
Abstract: The recent development of low-cost and fast time-of-flight cameras enabled measuring depth information at video frame rates. Although these cameras provide invaluable information for many 3D applications, their imaging capabilities are very limited both in terms of resolution and noise level. In this paper, we present a novel method for obtaining a high resolution depth map from a pair of a low resolution depth map and a corresponding high resolution color image. The proposed method exploits the correlation between the objects present in the color and depth map images via joint segmentation, which is then used to increase the resolution and remove noise via estimating conditional modes. Regions with inconsistent color and depth information are detected and corrected with our algorithm for increased robustness. Experimental results in terms of image quality and running times demonstrate the high performance of the method.
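The way the color/depth correlation is exploited can be illustrated with a much cruder rule than the paper's joint segmentation and conditional-mode estimation: give every pixel in a color-image segment the median of the reliable depth samples inside that segment. Names and the median rule are our assumptions.

```python
import numpy as np

def depth_from_segments(labels, depth, valid):
    """labels: (H, W) integer segment ids from the color image.
    depth: (H, W) upsampled depth map, trusted only where valid is True.
    Each segment is filled with the median of its valid depth samples:
    pixels sharing a color segment share a depth estimate, which both
    upsamples and denoises the low-resolution measurement."""
    out = np.zeros_like(depth, dtype=float)
    for s in np.unique(labels):
        m = labels == s
        vals = depth[m & valid]
        out[m] = np.median(vals) if vals.size else 0.0
    return out
```

The median makes the fill robust to occasional noisy depth samples inside a segment, a simpler analogue of the inconsistency detection described in the abstract.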

81 citations


Proceedings ArticleDOI
18 Oct 2012
TL;DR: This paper presents a novel algorithm for simultaneous image reconstruction, restoration and parameter estimation, using hierarchical Bayesian modeling followed by an Expectation-Maximization approach to estimate the unknown image, blur and hyperparameters of the global distribution.
Abstract: The idea of compressive sensing in imaging refers to the reconstruction of an unknown image through a small number of incoherent measurements. Blind deconvolution is the recovery of a sharp version of a blurred image when the blur kernel is unknown. In this paper, we combine these two problems, trying to estimate the unknown sharp image and blur kernel solely through the compressive sensing measurements of a blurred image. We present a novel algorithm for simultaneous image reconstruction, restoration and parameter estimation. Using hierarchical Bayesian modeling followed by an Expectation-Maximization approach, we estimate the unknown image, blur and hyperparameters of the global distribution. Experimental results on simulated blurred images support the effectiveness of our method. Moreover, real passive millimeter-wave images are used for evaluating the proposed method as well as strengthening its practical aspects.
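The forward model being inverted is y = Φ(h ∗ x) + n: the image is first blurred, then compressively measured. The sketch below sets up that model and applies only a naive data-consistent inverse with the blur assumed known (all sizes and the tiny Tikhonov term are our choices); the paper's contribution is estimating the blur and hyperparameters too, via EM.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 24                      # 32 unknowns, 24 compressive measurements
x = np.zeros(n); x[10:20] = 1.0    # toy sharp signal
hp = np.zeros(n); hp[:3] = [0.2, 0.6, 0.2]                # toy blur kernel
H = np.stack([np.roll(hp, j) for j in range(n)], axis=1)  # circular blur matrix
Phi = rng.standard_normal((m, n)) / np.sqrt(m)            # CS measurement matrix
y = Phi @ (H @ x)                  # compressive measurements of the blurred image
# Naive inverse, blur assumed known: a lightly regularized least-squares fit.
A = Phi @ H
x_hat = np.linalg.solve(A.T @ A + 1e-8 * np.eye(n), A.T @ y)
```

With fewer measurements than unknowns this fit is data-consistent but not unique, which is precisely why the hierarchical prior and EM machinery in the paper are needed to pin down the image and the blur simultaneously.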

16 citations


Journal ArticleDOI
TL;DR: Applying a variational posterior distribution approximation to each posterior produces as many posterior approximations as there are priors to combine; a unique approximation is then obtained by finding the distribution on the unknown image, given the observations, that minimizes a linear convex combination of the Kullback-Leibler divergences associated with each posterior distribution.
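The combination rule described in the TL;DR has a convenient closed form. Minimizing a convex combination of Kullback-Leibler divergences over the approximating distribution q (notation ours, not necessarily the paper's) yields a normalized weighted geometric mean of the individual posterior approximations:

```latex
\hat{q} \;=\; \arg\min_{q}\ \sum_{i}\lambda_i\,
\mathrm{KL}\!\left(q(\mathbf{x})\,\middle\|\,p_i(\mathbf{x}\mid\mathbf{y})\right),
\qquad \lambda_i \ge 0,\ \ \textstyle\sum_i \lambda_i = 1,
\]
and setting the functional derivative with respect to $q$ to zero gives
\[
\log \hat{q}(\mathbf{x}) \;=\; \sum_i \lambda_i \log p_i(\mathbf{x}\mid\mathbf{y}) + \mathrm{const}
\quad\Longrightarrow\quad
\hat{q}(\mathbf{x}) \;\propto\; \prod_i p_i(\mathbf{x}\mid\mathbf{y})^{\lambda_i}.
```

The weights λ_i thus control how much each prior's posterior contributes to the single combined estimate.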

13 citations


Journal ArticleDOI
TL;DR: The results obtained validate the previous study and are improved by the addition of ADA, CRP and %PN; tumour markers in serous effusions and serum could be useful for the diagnostic assessment of patients with serous effusions.
Abstract: The utility of tumour markers (TM) in the differential diagnosis of cancer in serous effusion (fluid effusion (FE)) has been the subject of controversy. The aim of this study was to prospectively validate our previous study and to assess whether the addition of adenosine deaminase (ADA), C-reactive protein (CRP) or the percentage of polymorphonuclear cells (%PN) allows the identification of false positives. In this study, carcinoembryonic antigen, cancer antigen 15-3, cancer antigen 19-9, ADA, CRP and %PN in FE were determined in 347 patients with 391 effusions. An effusion was considered malignant when at least one TM in serum exceeded its cutoff and the fluid-to-serum (FE/S) ratio was higher than 1.2. Cases with values of ADA, CRP and %PN above the established cutoffs in the serous effusion were considered potential false positives. The combined sensitivity and specificity of the three TM were 76.2 % (95 % confidence interval (CI) 67.8–83.3 %) and 97.0 % (95 % CI 94.1–98.7 %), respectively. Subanalysis of the 318 cases meeting the previous criteria with negative ADA, CRP and %PN yielded a sensitivity of 78.4 % (95 % CI 69.4–85.6 %) and a specificity of 100 % (95 % CI 98.2–100 %). The results obtained validate our previous study and are improved by the addition of ADA, CRP and %PN. TM in serous effusions and serum could be useful for the diagnostic assessment of patients with serous effusions.
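The diagnostic criterion in the abstract is a plain decision rule, paraphrased below; every variable name and the example cutoff values are ours, and the actual study cutoffs are not given in this listing.

```python
def classify_effusion(tm_fluid, tm_serum, cutoffs,
                      ada, crp, pn_pct, ada_cut, crp_cut, pn_cut):
    """Decision rule paraphrased from the abstract (all names ours): an
    effusion is called malignant when at least one tumour marker exceeds
    its cutoff in serum AND its fluid/serum (FE/S) ratio exceeds 1.2;
    elevated ADA, CRP or %PN additionally flags a potential false positive."""
    malignant = any(tm_serum[k] > cutoffs[k] and
                    tm_fluid[k] / tm_serum[k] > 1.2
                    for k in tm_serum)
    potential_fp = ada > ada_cut or crp > crp_cut or pn_pct > pn_cut
    return malignant, potential_fp
```

Excluding the flagged cases is what lifted the specificity to 100 % in the reported subanalysis.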

12 citations


Proceedings ArticleDOI
01 Sep 2012
TL;DR: A novel blind image deconvolution (BID) regularization framework for compressive passive millimeter-wave (PMMW) imaging systems based on the variable-splitting optimization technique, which allows for existing compressive sensing reconstruction algorithms in compressive BID problems to be utilized.
Abstract: We propose a novel blind image deconvolution (BID) regularization framework for compressive passive millimeter-wave (PMMW) imaging systems. The proposed framework is based on the variable-splitting optimization technique, which allows us to utilize existing compressive sensing reconstruction algorithms in compressive BID problems. In addition, a non-convex ℓp quasi-norm with 0 < p < 1 is employed as a regularization term for the image, while a simultaneous auto-regressive (SAR) regularization term is utilized for the blur. Furthermore, the proposed framework is very general and it can be easily adapted to other state-of-the-art BID approaches that utilize different image/blur regularization terms. Experimental results, obtained with simulations using a synthetic image and real PMMW images, show the advantage of the proposed approach compared to existing ones.

8 citations


Journal ArticleDOI
TL;DR: By performing variational Bayesian inference with a generalized Gaussian prior, this work derives an algorithm that simultaneously estimates both the sources and the model parameters, and allows the degree of smoothness of the estimated sources to be controlled.
Abstract: Although in recent decades the use of Magnetic Resonance Imaging has grown in popularity as a tool for the structural analysis of the brain, including MRI, fMRI and, more recently, DTI, ElectroEncephaloGraphy (EEG) remains, still today, an interesting technique for understanding brain organization and function. The main reason for this is that the EEG is a direct measure of brain bioelectrical activity, and such activity can be monitored in the millisecond time window. For some situations and cognitive scenarios, such fine temporal resolution might suffice for studying some aspects of brain function; however, the EEG spatial resolution is very poor, since it is based on a small number of scalp recordings, which turns source localization into an ill-posed problem in which infinitely many configurations of the neuronal generators are possible. This is an old problem in computational neuroimaging, and many methods have been proposed to overcome it. Here, by performing a variational Bayesian inference procedure with a generalized Gaussian prior, we derive an algorithm that simultaneously estimates both the sources and the model parameters. The inclusion of the generalized Gaussian prior, a novelty of this work, makes it possible to control the degree of smoothness of the estimated sources. Finally, the suggested algorithm is validated on simulated data.
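For reference, the generalized Gaussian density on a source amplitude s_i has the form below (notation ours), where the shape parameter is what gives the smoothness control mentioned in the abstract:

```latex
p(s_i \mid \beta, \rho) \;=\; \frac{\rho}{2\beta\,\Gamma(1/\rho)}
\exp\!\left(-\left|\frac{s_i}{\beta}\right|^{\rho}\right),
\qquad \beta > 0,\ \rho > 0 .
```

Setting ρ = 2 recovers the Gaussian and ρ = 1 the Laplacian, while ρ < 1 gives increasingly super-Gaussian, sparsity-promoting priors; estimating ρ inside the variational scheme therefore tunes how smooth or how focal the recovered sources are.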

7 citations


Proceedings Article
18 Oct 2012
TL;DR: A new Bayesian method is proposed for the non-invasive localization of EEG sources that incorporates a Total Variation (TV) prior, previously used in image processing for edge detection, and applies variational methods to approximate the probability distributions needed to estimate the unknown parameters and the sources.
Abstract: In this work we propose a new Bayesian method for the non-invasive localization of EEG sources. Most of the existing methods for this problem assume that the sources are distributed throughout the brain volume according to smooth 3D patterns. However, this assumption might fail in pathological conditions, such as in an epileptic brain, where the neurophysiological generators can be confined to a narrow, highly compact region, giving rise to abrupt profiles of electrical activity. The new method incorporates a Total Variation (TV) prior, which has been used before in image processing for edge detection, and applies variational methods to approximate the probability distributions needed to estimate the unknown parameters and the sources. The procedure is tested and validated on synthetic EEG data.
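The effect of the TV prior, as opposed to a smooth 3D prior, can be illustrated with a plain penalized least-squares reconstruction (this is not the paper's variational Bayesian algorithm; the smoothing constant, step size, and 1D setting are our simplifications):

```python
import numpy as np

def tv_reconstruct(y, A, lam=0.1, eps=1e-3, step=0.05, n_iters=2000):
    """Gradient descent on 0.5*||Ax - y||^2 + lam*sum_i sqrt((Dx)_i^2 + eps):
    a smoothed total-variation penalty that favors piecewise-constant source
    profiles with abrupt jumps rather than smooth spatial patterns. In EEG,
    A would be the (underdetermined) lead-field matrix; here it is generic."""
    n = A.shape[1]
    D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)   # circular differences
    x = np.zeros(n)
    for _ in range(n_iters):
        d = D @ x
        grad = A.T @ (A @ x - y) + lam * D.T @ (d / np.sqrt(d ** 2 + eps))
        x -= step * grad
    return x
```

Because the penalty grows only linearly with jump size, large jumps are tolerated while small oscillations are flattened, which is exactly the behavior wanted for compact epileptic generators.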