
Showing papers by "Rafael Molina published in 2014"


Journal ArticleDOI
TL;DR: This paper exploits the Bayesian modeling and inference paradigm to tackle the problem of kernel-based remote sensing image classification and proposes an incremental/active learning method based on three criteria: the maximum differential of entropies; the minimum distance to the decision boundary; and the minimum normalized distance.
Abstract: In recent years, kernel methods, in particular support vector machines (SVMs), have been successfully introduced to remote sensing image classification. Their properties make them appropriate for dealing with a high number of image features and a low number of available labeled spectra. Alternative approaches based on (parametric) Bayesian inference have been introduced only scarcely in recent years. Assuming a particular prior data distribution may lead to poor results in remote sensing problems because of the specificities and complexity of the data. In this context, the emerging field of nonparametric Bayesian methods constitutes a proper theoretical framework to tackle the remote sensing image classification problem. This paper exploits the Bayesian modeling and inference paradigm to tackle the problem of kernel-based remote sensing image classification. This Bayesian methodology is appropriate for both finite- and infinite-dimensional feature spaces. The particular problem of active learning is addressed through an incremental/active learning approach based on three different criteria: 1) the maximum differential of entropies; 2) the minimum distance to the decision boundary; and 3) the minimum normalized distance. Parameters are estimated using the evidence Bayesian approach, the kernel trick, and the marginal distribution of the observations instead of the posterior distribution of the adaptive parameters. This approach allows us to deal with infinite-dimensional feature spaces. The proposed approach is tested on the challenging problem of urban monitoring from multispectral and synthetic aperture radar data and on multiclass land cover classification of hyperspectral images, in both purely supervised and active learning settings. In the supervised setting, results are similar to those obtained with SVMs, with the advantage of providing posterior estimates for classification and automatic parameter learning.
Comparison with random sampling as well as standard active learning methods such as margin sampling and entropy-query-by-bagging reveals a systematic overall accuracy gain and faster convergence with the number of queries.

48 citations


Journal ArticleDOI
TL;DR: %p2PSA and phi improve the accuracy of prostate cancer detection, reducing the number of unnecessary biopsies and improving the prediction of tumor aggressiveness.
Abstract: BACKGROUND %p2PSA and the prostate health index (phi) have shown valuable results in the detection of prostate cancer (PCa), improving the prediction of tumor aggressiveness. The goal of the present study was to evaluate %p2PSA and phi in the detection of PCa, estimating their relationship with the aggressiveness of PCa. METHODS A total of 354 patients with a positive or negative prostatic biopsy were included: 150 were enrolled prospectively and 204 retrospectively from our serum bank. RESULTS The best performance was observed for %p2PSA and phi, with AUCs of 0.723 and 0.732, respectively. The highest specificity at a sensitivity of around 90% was obtained for phi (27.4%). Using a cut-off of 31.94 for phi, a 19% reduction in biopsies could be achieved, while 17 PCa cases would have been missed, including only four patients with a Gleason score ≥7. Similarly, using a cut-off of 1.21 for %p2PSA, a 12.7% reduction in biopsies could be achieved, while 16 PCa cases would have been missed, including only four patients with a Gleason score ≥7. Moreover, among patients with PCa, phi (median: 69.75 vs. 48.04) and %p2PSA (median: 2.60 vs. 1.98) values were significantly higher (p<0.0001) in patients with a biopsy Gleason score ≥7. CONCLUSIONS Our results confirm previous evaluations, showing AUCs, sensitivity, and specificity similar to other studies. %p2PSA and phi improve the accuracy of prostate cancer detection, reducing the number of unnecessary biopsies and improving the prediction of tumor aggressiveness.

36 citations


Journal ArticleDOI
TL;DR: This tutorial presents an introduction to Variational Bayesian (VB) methods in the context of probabilistic graphical models, and shows the connections between VB and other posterior approximation methods such as the marginalization-based Loopy Belief Propagation and Expectation Propagation algorithms.
Abstract: In this paper we present an introduction to Variational Bayesian (VB) methods in the context of probabilistic graphical models, and discuss their application in multimedia-related problems. VB is a family of deterministic procedures for approximating probability distributions that offer distinct advantages over alternative approaches based on stochastic sampling and those providing only point estimates. VB inference is flexible enough to be applied to many different practical problems, yet broad enough to subsume several alternative inference approaches as special cases, including Maximum A Posteriori (MAP) estimation and the Expectation-Maximization (EM) algorithm. In this paper we also show the connections between VB and other posterior approximation methods such as the marginalization-based Loopy Belief Propagation (LBP) and Expectation Propagation (EP) algorithms. Specifically, both VB and EP are variational methods that minimize functionals based on the Kullback-Leibler (KL) divergence. LBP, traditionally developed using graphical models, can also be viewed as a VB inference procedure. We present several multimedia-related applications illustrating the use and effectiveness of the VB algorithms discussed herein. We hope that by reading this tutorial readers will obtain a general understanding of Bayesian methods and establish connections among popular algorithms used in practice.
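The KL-divergence distinction drawn in this abstract can be illustrated numerically. The sketch below is a toy illustration (not code from the tutorial): it fits a unit-variance Gaussian q to a bimodal target p by scanning the mean, showing that minimizing KL(q||p), as VB does, locks onto a single mode, while minimizing KL(p||q), the direction used by EP's moment-matching projections, matches the overall mean.

```python
import numpy as np

# Bimodal target density p(x): a mixture of two narrow Gaussians.
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

p = 0.5 * gauss(x, -2.0, 0.5) + 0.5 * gauss(x, 2.0, 0.5)

# Approximate p with q = N(mu, 1) by scanning the mean mu.
mus = np.linspace(-4.0, 4.0, 801)
kl_qp = np.empty_like(mus)   # KL(q || p): the functional VB minimizes
kl_pq = np.empty_like(mus)   # KL(p || q): the direction EP's projections use
for i, mu in enumerate(mus):
    q = gauss(x, mu, 1.0)
    kl_qp[i] = np.sum(q * np.log((q + 1e-300) / (p + 1e-300))) * dx
    kl_pq[i] = np.sum(p * np.log((p + 1e-300) / (q + 1e-300))) * dx

mu_vb = mus[np.argmin(kl_qp)]   # mode-seeking: lands on one mode (near +/-2)
mu_ep = mus[np.argmin(kl_pq)]   # moment-matching: lands between modes (near 0)
```

The mode-seeking behavior of the reverse KL is the reason VB posteriors tend to underestimate uncertainty on multimodal targets.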

35 citations


Proceedings ArticleDOI
28 Jan 2014
TL;DR: This paper derives the IRLS method from the perspective of majorization minimization and proposes an Alternating Direction Method of Multipliers (ADMM) algorithm to solve the reweighted linear equations; the resulting algorithm has a shrinkage operator that pushes each component to zero in a multiplicative fashion.
Abstract: Iteratively reweighted least squares (IRLS) is one of the most effective methods for minimizing the lp-regularized linear inverse problem. Unfortunately, the regularizer is nonsmooth and nonconvex when 0 < p < 1, and, mainly due to its high computational cost, IRLS is not widely used in image deconvolution and reconstruction. In this paper, we first derive the IRLS method from the perspective of majorization minimization and then propose an Alternating Direction Method of Multipliers (ADMM) algorithm to solve the reweighted linear equations. Interestingly, the resulting algorithm has a shrinkage operator that pushes each component to zero in a multiplicative fashion. Experimental results on both image deconvolution and reconstruction demonstrate that the proposed method outperforms state-of-the-art algorithms in terms of speed and recovery quality.
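The multiplicative shrinkage mentioned in the abstract is easiest to see on the separable case. The sketch below is a toy illustration (not the paper's ADMM solver): it applies IRLS majorization-minimization to min_x 0.5*(x - b)^2 + lam*|x|^p with p = 0.5, where each step rescales b by a data-dependent factor, driving small components to zero while leaving large ones almost untouched.

```python
import numpy as np

# Separable toy problem: min_x 0.5*(x - b)**2 + lam*|x|**p, 0 < p < 1.
# Majorizing |x|**p at the current iterate x_k by a quadratic gives the
# weight w_k = p*(x_k**2 + eps)**((p - 2)/2) and the closed-form update
#   x_{k+1} = b / (1 + lam*w_k),
# i.e. a shrinkage that acts multiplicatively on each component.
b = np.array([0.05, 3.0])      # one small and one large component
lam, p, eps = 0.1, 0.5, 1e-8
x = b.copy()
for _ in range(100):
    w = p * (x**2 + eps) ** ((p - 2) / 2)
    x = b / (1 + lam * w)
# x[0] is driven to (numerically) zero; x[1] stays close to 3.
```

Because the weight w_k blows up as x_k approaches zero, small components are shrunk ever more aggressively, which is exactly the sparsity-inducing behavior the lp regularizer is meant to produce.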

29 citations


Journal ArticleDOI
TL;DR: Prostate health index (phi), a measure calculated as p2PSA/fPSA × √tPSA, has shown valuable results in the detection of prostate cancer (PCa), improving the prediction of the aggressiveness of the tumor.
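For concreteness, the index can be computed directly from the three serum measurements. In the sketch below the marker values are hypothetical, and the unit convention (p2PSA in pg/mL, fPSA and tPSA in ng/mL) is the one commonly used for this index rather than something stated in the abstract; the cut-offs are those reported in the study above.

```python
import math

# Hypothetical marker values; units follow common laboratory practice
# (p2PSA in pg/mL, fPSA and tPSA in ng/mL) -- an assumption here.
p2psa, fpsa, tpsa = 15.0, 0.8, 5.0

# %p2PSA: share of free PSA made up by the [-2]proPSA isoform.
pct_p2psa = (p2psa / 1000.0) / fpsa * 100.0

# phi = (p2PSA / fPSA) * sqrt(tPSA), computed on the mixed units
# conventionally used for this index.
phi = (p2psa / fpsa) * math.sqrt(tpsa)

# Cut-offs reported in the study above: biopsy would be flagged when
# phi > 31.94 or %p2PSA > 1.21.
flag_phi = phi > 31.94
flag_pct = pct_p2psa > 1.21
```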

28 citations


Journal ArticleDOI
TL;DR: A new Bayesian Super-Resolution image registration and reconstruction method that utilizes a prior distribution based on a general combination of spatially adaptive, or non-stationary, image filters that includes an adaptive local strength parameter able to preserve both image edges and textures is proposed.

28 citations


Journal ArticleDOI
TL;DR: This work addresses the problem of analyzing and understanding dynamic video scenes by proposing a two-level motion pattern mining approach, in which moving speed is used to describe visual words and traffic states are detected and assigned to every video frame.
Abstract: Our work addresses the problem of analyzing and understanding dynamic video scenes. A two-level motion pattern mining approach is proposed. At the first level, activities are modeled as distributions over patch-based features, including spatial location, moving direction, and speed. At the second level, traffic states are modeled as distributions over activities. Both types of patterns are shared among video clips. Compared to other works, one advantage of our method is that moving speed is used to describe visual words. The other advantage is that traffic states are detected and assigned to every video frame. These enable finer semantic interpretation, more precise video segmentation, and anomaly detection. Specifically, every video frame is labeled with a certain traffic state, and the video is segmented frame by frame accordingly. Moving pixels in each frame that do not belong to any activity, or that cannot exist in the corresponding traffic state, are detected as anomalies. We have successfully tested our approach on challenging traffic surveillance sequences containing both pedestrian and vehicle motions.

15 citations


Journal ArticleDOI
TL;DR: A method that uses causality to obtain a measure of effective connectivity from fMRI data, combining a vector autoregressive model for the latent variables describing neuronal activity with a linear observation model based on convolution with a hemodynamic response function.
Abstract: The ability to accurately estimate effective connectivity among brain regions from neuroimaging data could help answer many open questions in neuroscience. We propose a method which uses causality to obtain a measure of effective connectivity from fMRI data. The method uses a vector autoregressive model for the latent variables describing neuronal activity in combination with a linear observation model based on a convolution with a hemodynamic response function. Due to the employed modeling, it is possible to efficiently estimate all latent variables of the model using a variational Bayesian inference algorithm. The computational efficiency of the method enables us to apply it to large-scale problems with high sampling rates and several hundred regions of interest. We use a comprehensive empirical evaluation with synthetic and real fMRI data to evaluate the performance of our method under various conditions.
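The generative model described in the abstract can be sketched in a few lines: a VAR process for latent neuronal activity, observed through convolution with a hemodynamic response. The code below is a simplified forward simulation in which the HRF shape and all parameter values are illustrative, and it estimates the VAR coefficients from the known latents by ordinary least squares, rather than inferring the latents with the paper's variational Bayesian algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent neuronal activity: a 2-region VAR(1) process with a directed
# influence from region 0 to region 1 (A[1, 0] != 0).
A = np.array([[0.6, 0.0],
              [0.4, 0.5]])
T = 2000
z = np.zeros((T, 2))
for n in range(1, T):
    z[n] = A @ z[n - 1] + 0.1 * rng.standard_normal(2)

# Observation model: convolution with a hemodynamic response, here a
# crude gamma-shaped kernel rather than a canonical HRF, plus noise.
k = np.arange(20.0)
hrf = k**5 * np.exp(-k)
hrf /= hrf.sum()
y = np.stack([np.convolve(z[:, i], hrf)[:T] for i in range(2)], axis=1)
y += 0.01 * rng.standard_normal(y.shape)

# With the latents known, the VAR coefficients follow from ordinary
# least squares; the paper's contribution is inferring the latents
# themselves efficiently via variational Bayes.
A_hat = np.linalg.lstsq(z[:-1], z[1:], rcond=None)[0].T
```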

14 citations


Proceedings ArticleDOI
13 Nov 2014
TL;DR: This work shows how the introduction into the SG distribution of a global strength (not necessarily scale) parameter can be used to improve the quality of the obtained restorations as well as to introduce additional information on the global weight of the prior.
Abstract: Super Gaussian (SG) distributions have proven to be very powerful prior models for inducing sparsity in Bayesian Blind Deconvolution (BD) problems. Their conjugate-based representations make them especially attractive when Variational Bayes (VB) inference is used, since their variational parameters can be calculated in closed form with the sole knowledge of the energy function of the prior model. In this work we show how the introduction into the SG distribution of a global strength (not necessarily scale) parameter can be used to improve the quality of the obtained restorations as well as to introduce additional information on the global weight of the prior. A model to estimate the new unknown parameter within the Bayesian framework is provided. Experimental results, on both synthetic and real images, demonstrate the effectiveness of the proposed approach.

9 citations


Journal ArticleDOI
TL;DR: A novel Bayesian image restoration method based on a combination of TV and PSI priors, which preserves image textures and is competitive with state-of-the-art restoration methods.

9 citations


Proceedings Article
13 Nov 2014
TL;DR: The proposed Bayesian framework is applied to Image Segmentation problems on both synthetic and real datasets, showing higher accuracy than state-of-the-art approaches.
Abstract: In this paper we utilize Bayesian modeling and inference to learn a softmax classification model which performs Supervised Classification and Active Learning. For p < 1, lp-priors are used to impose sparsity on the adaptive parameters. Using variational inference, all model parameters are estimated and the posterior probabilities of the classes given the samples are calculated. A relationship between the prior model used and the independent Gaussian prior model is provided. The posterior probabilities are used to classify new samples and to define two Active Learning methods to improve classifier performance: Minimum Probability and Maximum Entropy. In the experimental section the proposed Bayesian framework is applied to Image Segmentation problems on both synthetic and real datasets, showing higher accuracy than state-of-the-art approaches.
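The two Active Learning criteria named in the abstract are straightforward to compute once class posteriors are available. The sketch below uses stand-in probabilities from a plain softmax (in the paper these would come from the variational fit of the sparse softmax model) to score a pool of unlabeled samples both ways.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Stand-in class posteriors for a pool of 50 unlabeled samples and
# 3 classes (illustrative only).
probs = softmax(rng.standard_normal((50, 3)))

# Minimum Probability: query the sample whose most probable class has
# the lowest posterior probability.
q_minprob = int(np.argmin(probs.max(axis=1)))

# Maximum Entropy: query the sample whose predictive distribution has
# the largest entropy.
H = -(probs * np.log(probs)).sum(axis=1)
q_maxent = int(np.argmax(H))
```

Both criteria target samples the classifier is least sure about; they coincide for binary problems but can select different samples with three or more classes.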

Journal ArticleDOI
TL;DR: A Bayesian based algorithm to recover sparse signals from compressed noisy measurements in the presence of a smooth background component is proposed and its advantage over the current state-of-the-art solutions is demonstrated.
Abstract: We propose a Bayesian based algorithm to recover sparse signals from compressed noisy measurements in the presence of a smooth background component. This problem is closely related to robust principal component analysis and compressive sensing, and is found in a number of practical areas. The proposed algorithm adopts a hierarchical Bayesian framework for modeling, and employs approximate inference to estimate the unknowns. Numerical examples demonstrate the effectiveness of the proposed algorithm and its advantage over the current state-of-the-art solutions.
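As a rough illustration of the signal model, and not of the paper's hierarchical Bayesian algorithm, the toy example below separates a few spikes from a smooth background by fitting a low-order polynomial with iteratively downweighted residuals and then thresholding; the signal, spike locations, and all tuning constants are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy signal: a smooth background (slow sinusoid) plus three spikes.
n = 200
t = np.linspace(0.0, 1.0, n)
background = np.sin(2 * np.pi * t)
sparse = np.zeros(n)
sparse[[40, 120, 170]] = [3.0, -2.5, 2.0]
y = background + sparse + 0.02 * rng.standard_normal(n)

# Crude separation: model the background with a degree-7 polynomial,
# estimate it while iteratively downweighting spike-like residuals,
# then threshold the residual to locate the sparse component.
B = np.vander(t, 8)
w = np.ones(n)
for _ in range(10):
    coef = np.linalg.lstsq(B * w[:, None], y * w, rcond=None)[0]
    r = y - B @ coef
    w = 1.0 / (1.0 + (r / 0.1) ** 2)   # robust reweighting of residuals
spikes = np.where(np.abs(r) > 1.0)[0]
```

The hierarchical Bayesian treatment in the paper replaces the hand-tuned reweighting and threshold with priors on the sparse and smooth components and approximate inference over all unknowns.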

Proceedings Article
13 Nov 2014
TL;DR: This paper proposes two algorithmic solutions that exploit the signal temporal properties to improve the reconstruction accuracy and the effectiveness of the proposed algorithms is corroborated with experimental results.
Abstract: In this paper we consider the problem of recovering temporally smooth or correlated sparse signals from a set of undersampled measurements. We propose two algorithmic solutions that exploit the signal temporal properties to improve the reconstruction accuracy. The effectiveness of the proposed algorithms is corroborated with experimental results.

Proceedings ArticleDOI
01 Oct 2014
TL;DR: This work utilizes Bayesian modeling and inference to jointly learn a classifier and estimate an optimal filterbank, and shows that the proposed method compares favorably with other classification/filtering approaches, without the need of parameter tuning.
Abstract: Many real classification tasks are oriented to sequence (neighbor) labeling, that is, assigning a label to every sample of a signal while taking into account the sequentiality (or neighborhood) of the samples. This is normally approached by first filtering the data and then performing classification. In consequence, both processes are optimized separately, with no guarantee of global optimality. In this work we utilize Bayesian modeling and inference to jointly learn a classifier and estimate an optimal filterbank. Variational Bayesian inference is used to approximate the posterior distributions of all unknowns, resulting in an iterative procedure to estimate the classifier parameters and the filterbank coefficients. In the experimental section we show, using synthetic and real data, that the proposed method compares favorably with other classification/filtering approaches, without the need of parameter tuning.