
Showing papers by "Aggelos K. Katsaggelos published in 2014"


Journal ArticleDOI
TL;DR: Motivated by the application of incoherent tight frames in compressed sensing (CS), a methodology to construct incoherent UNTFs is proposed, which improves CS signal recovery, increasing the reconstruction accuracy.
Abstract: Despite the important properties of unit norm tight frames (UNTFs) and equiangular tight frames (ETFs), their construction has proven extremely difficult. The few known techniques produce only a small number of such frames while imposing certain restrictions on frame dimensions. Motivated by the application of incoherent tight frames in compressed sensing (CS), we propose a methodology to construct incoherent UNTFs. When frame redundancy is not very high, the achieved maximal column correlation comes close to the lowest possible bound. The proposed methodology can construct frames of any dimensions. The obtained frames are employed in CS to produce optimized projection matrices. Experimental results show that the proposed optimization technique improves CS signal recovery, increasing the reconstruction accuracy. Considering that UNTFs and ETFs are important in sparse representations, channel coding, and communications, we expect that the proposed construction will be useful in other applications besides CS.
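The "maximal column correlation" the abstract refers to is the frame's mutual coherence, which for unit-norm frames is bounded below by the Welch bound (attained exactly only by ETFs). A minimal NumPy sketch of both quantities; the frame dimensions and the random frame are purely illustrative, not the paper's construction:

```python
import numpy as np

def mutual_coherence(F):
    """Maximal absolute correlation between distinct columns of F,
    after normalizing each column to unit norm."""
    G = F / np.linalg.norm(F, axis=0)
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)   # ignore self-correlations
    return gram.max()

def welch_bound(m, n):
    """Lowest possible coherence for n unit-norm vectors in R^m (n >= m)."""
    return np.sqrt((n - m) / (m * (n - 1)))

rng = np.random.default_rng(0)
F = rng.standard_normal((8, 16))          # illustrative 8 x 16 frame
mu = mutual_coherence(F)
print(mu >= welch_bound(8, 16))           # → True (Welch is a lower bound)
```

A construction like the paper's aims to push `mutual_coherence` of the frame down toward `welch_bound`, which random frames do not achieve.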

73 citations


Journal ArticleDOI
TL;DR: This paper exploits the Bayesian modeling and inference paradigm to tackle the problem of kernel-based remote sensing image classification and proposes an incremental/active learning approach based on three criteria: the maximum differential of entropies, the minimum distance to the decision boundary, and the minimum normalized distance.
Abstract: In recent years, kernel methods, in particular support vector machines (SVMs), have been successfully introduced to remote sensing image classification. Their properties make them appropriate for dealing with a high number of image features and a low number of available labeled spectra. The introduction of alternative approaches based on (parametric) Bayesian inference has been comparatively scarce. Assuming a particular prior data distribution may lead to poor results in remote sensing problems because of the specificities and complexity of the data. In this context, the emerging field of nonparametric Bayesian methods constitutes a proper theoretical framework for the remote sensing image classification problem. This paper exploits the Bayesian modeling and inference paradigm to tackle the problem of kernel-based remote sensing image classification. This Bayesian methodology is appropriate for both finite- and infinite-dimensional feature spaces. The particular problem of active learning is addressed with an incremental/active learning approach based on three criteria: 1) the maximum differential of entropies; 2) the minimum distance to the decision boundary; and 3) the minimum normalized distance. Parameters are estimated using the evidence Bayesian approach, the kernel trick, and the marginal distribution of the observations instead of the posterior distribution of the adaptive parameters. This approach allows us to deal with infinite-dimensional feature spaces. The proposed approach is tested on the challenging problem of urban monitoring from multispectral and synthetic aperture radar data and on multiclass land cover classification of hyperspectral images, in both purely supervised and active learning settings. Results similar to those of SVMs are obtained in the supervised mode, with the advantage of providing posterior estimates for classification and automatic parameter learning. Comparison with random sampling as well as standard active learning methods such as margin sampling and entropy-query-by-bagging reveals a systematic overall accuracy gain and faster convergence with the number of queries.

48 citations


Journal ArticleDOI
TL;DR: This tutorial presents an introduction to Variational Bayesian methods in the context of probabilistic graphical models, and shows the connections between VB and other posterior approximation methods such as the marginalization-based Loopy Belief Propagation and the Expectation Propagation algorithms.
Abstract: In this paper we present an introduction to Variational Bayesian (VB) methods in the context of probabilistic graphical models, and discuss their application in multimedia-related problems. VB is a family of deterministic probability distribution approximation procedures that offer distinct advantages over alternative approaches based on stochastic sampling and those providing only point estimates. VB inference is flexible enough to be applied to many practical problems, and broad enough to subsume as special cases several alternative inference approaches, including Maximum A Posteriori (MAP) and the Expectation-Maximization (EM) algorithm. In this paper we also show the connections between VB and other posterior approximation methods such as the marginalization-based Loopy Belief Propagation (LBP) and the Expectation Propagation (EP) algorithms. Specifically, both VB and EP are variational methods that minimize functionals based on the Kullback-Leibler (KL) divergence. LBP, traditionally developed using graphical models, can also be viewed as a VB inference procedure. We present several multimedia-related applications illustrating the use and effectiveness of the VB algorithms discussed herein. We hope that readers of this tutorial will obtain a general understanding of Bayesian methods and establish connections among popular algorithms used in practice.
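The KL divergence underlying both VB and EP is asymmetric, which is why the two methods (minimizing KL(q||p) and KL(p||q), respectively) can settle on different approximations q. A small sketch for univariate Gaussians, with the parameter values chosen purely for illustration:

```python
import numpy as np

def kl_gauss(mu_q, s_q, mu_p, s_p):
    """KL( N(mu_q, s_q^2) || N(mu_p, s_p^2) ) in nats, closed form."""
    return (np.log(s_p / s_q)
            + (s_q**2 + (mu_q - mu_p)**2) / (2 * s_p**2)
            - 0.5)

# KL is zero only when the two distributions coincide...
print(kl_gauss(0, 1, 0, 1))   # → 0.0
# ...and asymmetric otherwise: swapping the arguments changes the value,
# which is the formal reason VB and EP give different fits.
print(kl_gauss(0, 1, 1, 2))
print(kl_gauss(1, 2, 0, 1))
```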

35 citations


Journal ArticleDOI
TL;DR: This work develops a completely automatic, very fast, online algorithm that demonstrates how a consumer-grade depth camera can be calibrated with a color camera with minimal user interaction.

30 citations


Proceedings ArticleDOI
28 Jan 2014
TL;DR: This paper derives the IRLS method from the perspective of majorization minimization and proposes an Alternating Direction Method of Multipliers (ADMM) to solve the reweighted linear equations, which has a shrinkage operator that pushes each component to zero in a multiplicative fashion.
Abstract: Iteratively reweighted least squares (IRLS) is one of the most effective methods to minimize the lp-regularized linear inverse problem. Unfortunately, the regularizer is nonsmooth and nonconvex when 0 < p < 1. Despite its effectiveness, and mainly due to its high computational cost, IRLS is not widely used in image deconvolution and reconstruction. In this paper, we first derive the IRLS method from the perspective of majorization minimization and then propose an Alternating Direction Method of Multipliers (ADMM) to solve the reweighted linear equations. Interestingly, the resulting algorithm has a shrinkage operator that pushes each component to zero in a multiplicative fashion. Experimental results on both image deconvolution and reconstruction demonstrate that the proposed method outperforms state-of-the-art algorithms in terms of speed and recovery quality.
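A minimal sketch of the IRLS idea for lp regularization with 0 < p < 1: each reweighting step majorizes the nonconvex lp term by a weighted l2 term. For brevity this sketch solves the reweighted system with a direct linear solve rather than the ADMM scheme the paper proposes, and the problem sizes, regularization weight, and smoothing schedule are all illustrative:

```python
import numpy as np

def irls_lp(A, b, lam=1e-3, p=0.5, iters=10):
    """IRLS sketch for min ||Ax - b||^2 + lam * ||x||_p^p, 0 < p < 1.
    The smoothing eps is reduced gradually (continuation) to avoid
    stagnating in the sharp nonconvex landscape."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # min-norm initialization
    AtA, Atb = A.T @ A, A.T @ b
    for eps in [1.0, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5]:
        for _ in range(iters):
            w = p * (x**2 + eps) ** (p / 2 - 1)        # majorization weights
            x = np.linalg.solve(AtA + lam * np.diag(w), Atb)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))              # underdetermined system
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]         # 3-sparse ground truth
b = A @ x_true
x_hat = irls_lp(A, b)
print(np.linalg.norm(x_hat - x_true))          # small: sparse signal recovered
```

The growing weights on near-zero components are what produce the multiplicative push toward zero mentioned in the abstract.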

29 citations


Journal ArticleDOI
TL;DR: A new Bayesian Super-Resolution image registration and reconstruction method that utilizes a prior distribution based on a general combination of spatially adaptive, or non-stationary, image filters that includes an adaptive local strength parameter able to preserve both image edges and textures is proposed.

28 citations


Journal ArticleDOI
TL;DR: This work addresses the problem of analyzing and understanding dynamic video scenes by proposing a two-level motion pattern mining approach, in which moving speed is used in the visual-word description and traffic states are detected and assigned to every video frame.
Abstract: Our work addresses the problem of analyzing and understanding dynamic video scenes. A two-level motion pattern mining approach is proposed. At the first level, activities are modeled as distributions over patch-based features, including spatial location, moving direction, and speed. At the second level, traffic states are modeled as distributions over activities. Both patterns are shared among video clips. Compared to other works, one advantage of our method is that moving speed is used to describe visual words; the other is that traffic states are detected and assigned to every video frame. These enable finer semantic interpretation, more precise video segmentation, and anomaly detection. Specifically, every video frame is labeled with a certain traffic state, and the video is segmented frame by frame accordingly. Moving pixels in each frame that do not belong to any activity, or that cannot exist in the corresponding traffic state, are detected as anomalies. We have successfully tested our approach on challenging traffic surveillance sequences containing both pedestrian and vehicle motions.

15 citations


Journal ArticleDOI
TL;DR: A method which uses causality to obtain a measure of effective connectivity from fMRI data using a vector autoregressive model for the latent variables describing neuronal activity in combination with a linear observation model based on a convolution with a hemodynamic response function.
Abstract: The ability to accurately estimate effective connectivity among brain regions from neuroimaging data could help answering many open questions in neuroscience. We propose a method which uses causality to obtain a measure of effective connectivity from fMRI data. The method uses a vector autoregressive model for the latent variables describing neuronal activity in combination with a linear observation model based on a convolution with a hemodynamic response function. Due to the employed modeling, it is possible to efficiently estimate all latent variables of the model using a variational Bayesian inference algorithm. The computational efficiency of the method enables us to apply it to large scale problems with high sampling rates and several hundred regions of interest. We use a comprehensive empirical evaluation with synthetic and real fMRI data to evaluate the performance of our method under various conditions.
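The paper's full model adds a hemodynamic convolution and variational Bayesian inference on top of the autoregressive backbone. As a toy illustration of that backbone alone, the sketch below simulates a first-order vector autoregressive (VAR) process and recovers its connectivity matrix by least squares; the connectivity matrix, noise level, and dimensions are all made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
R, T = 3, 5000                        # regions of interest, time points
A_true = np.array([[0.5, 0.2, 0.0],   # hypothetical effective connectivity
                   [0.0, 0.4, 0.3],
                   [0.1, 0.0, 0.6]])

# Simulate VAR(1) "neuronal" dynamics: x_t = A x_{t-1} + noise
x = np.zeros((T, R))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + 0.1 * rng.standard_normal(R)

# Least-squares estimate of A from successive samples
X_past, X_next = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X_past, X_next, rcond=None)[0].T
print(np.abs(A_hat - A_true).max())   # small estimation error
```

In the fMRI setting the latent x is only observed through the hemodynamic response, which is why the paper needs the convolution observation model and VB inference rather than this direct regression.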

14 citations


Journal ArticleDOI
TL;DR: In this article, a customized algorithm for the colorization of historical black and white photographs documenting earlier states of paintings is described, focusing on Pablo Picasso's midcentury Mediterranean masterpiece La Joie de Vivre, 1946 (Musee Picasso, Antibes, France).
Abstract: This paper describes the use of a customized algorithm for the colorization of historical black and white photographs documenting earlier states of paintings. This study specifically focuses on Pablo Picasso's mid-century Mediterranean masterpiece La Joie de Vivre, 1946 (Musee Picasso, Antibes, France). The custom-designed algorithm allows computer-controlled spreading of color information on a digital image of black and white historical photographs to obtain accurate color renditions. Expert observation of the present state of the painting, coupled with stratigraphic information from cross sections, allows the attribution of color information to selected pixels in the digitized images. The algorithm uses the localized color information and the grayscale intensities of the black and white historical photographs to formulate a set of equations for the missing color values of the remaining pixels. Solving these equations computationally allows an accurate colorization that preserves brushwork ...
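The core idea of scribble-based colorization is to require each unknown pixel's chroma to match an intensity-weighted average of its neighbors, with the expert-attributed pixels fixed, and solve the resulting linear system. A hypothetical 1-D sketch of that mechanism (not the paper's actual 2-D algorithm); the signal, scribble positions, and the similarity weight `beta` are all illustrative:

```python
import numpy as np

def colorize_1d(gray, known_idx, known_vals, beta=20.0):
    """Solve for missing chroma c: each unknown pixel's chroma equals an
    intensity-similarity-weighted average of its neighbors; pixels in
    known_idx are fixed to the attributed values."""
    n = len(gray)
    L = np.zeros((n, n))
    for i in range(n):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
        w = np.array([np.exp(-beta * (gray[i] - gray[j]) ** 2) for j in nbrs])
        w /= w.sum()
        L[i, i] = 1.0
        for j, wj in zip(nbrs, w):
            L[i, j] = -wj
    b = np.zeros(n)
    for idx, val in zip(known_idx, known_vals):   # identity rows for scribbles
        L[idx] = 0.0
        L[idx, idx] = 1.0
        b[idx] = val
    return np.linalg.solve(L, b)

gray = np.array([0.1, 0.1, 0.1, 0.9, 0.9, 0.9])   # two flat intensity regions
c = colorize_1d(gray, known_idx=[0, 5], known_vals=[0.2, 0.8])
print(np.round(c, 2))   # color stays ~0.2 on the dark run, ~0.8 on the bright run
```

Because the weights collapse across the intensity edge, each attributed color fills its own region and barely leaks across, which is how colorization can preserve brushwork boundaries.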

13 citations


Proceedings ArticleDOI
13 Nov 2014
TL;DR: This work shows how the introduction in the SG distribution of a global strength (not necessarily a scale) parameter can be used to improve the quality of the obtained restorations as well as to introduce additional information on the global weight of the prior.
Abstract: Super Gaussian (SG) distributions have proven to be very powerful prior models to induce sparsity in Bayesian Blind Deconvolution (BD) problems. Their conjugate-based representations make them especially attractive when Variational Bayes (VB) inference is used, since their variational parameters can be calculated in closed form with the sole knowledge of the energy function of the prior model. In this work we show how the introduction in the SG distribution of a global strength (not necessarily a scale) parameter can be used to improve the quality of the obtained restorations as well as to introduce additional information on the global weight of the prior. A model to estimate the new unknown parameter within the Bayesian framework is provided. Experimental results, on both synthetic and real images, demonstrate the effectiveness of the proposed approach.

9 citations


Proceedings Article
13 Nov 2014
TL;DR: This method combines the sparsity-based approaches with additional least-squares steps and exhibits robustness to outliers, achieving significant performance improvement with little additional cost.
Abstract: In this paper we present a novel approach to face recognition. We propose an adaptation and extension of state-of-the-art methods in face recognition, such as sparse representation-based classification and its extensions. Effectively, our method combines the sparsity-based approaches with additional least-squares steps and exhibits robustness to outliers, achieving significant performance improvement with little additional cost. This approach also mitigates the need for a large number of training images, since it proves robust to a varying number of training samples.

Journal ArticleDOI
TL;DR: A novel Bayesian image restoration method based on a combination of TV and PSI priors, which preserves image textures and is competitive with state-of-the-art restoration methods.

Proceedings ArticleDOI
22 Jun 2014
TL;DR: In this paper, a novel directionally adaptive cubic-spline interpolation method was proposed to enlarge images with reduced interpolation artifacts compared with both conventional and advanced interpolation methods.
Abstract: This paper presents a novel directionally adaptive cubic-spline interpolation method which is applicable to mobile camera digital zoom systems. Conventional (linear and cubic-spline) and advanced interpolation methods exhibit blurring and jagging artifacts in the digitally zoomed image. To solve this problem, the proposed method performs directionally adaptive interpolation using the optimal interpolation kernel according to the edge orientation. Experimental results show that the proposed method successfully enlarges images with reduced interpolation artifacts compared with both conventional and advanced interpolation methods. Objective evaluation reveals that the proposed method gives higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) figures.

Proceedings Article
13 Nov 2014
TL;DR: The proposed Bayesian framework is applied to Image Segmentation problems on both synthetic and real datasets, showing higher accuracy than state-of-the-art approaches.
Abstract: In this paper we utilize Bayesian modeling and inference to learn a softmax classification model which performs Supervised Classification and Active Learning. For p < 1, lp-priors are used to impose sparsity on the adaptive parameters. Using variational inference, all model parameters are estimated and the posterior probabilities of the classes given the samples are calculated. A relationship between the prior model used and the independent Gaussian prior model is provided. The posterior probabilities are used to classify new samples and to define two Active Learning methods to improve classifier performance: Minimum Probability and Maximum Entropy. In the experimental section the proposed Bayesian framework is applied to Image Segmentation problems on both synthetic and real datasets, showing higher accuracy than state-of-the-art approaches.
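The Maximum Entropy criterion mentioned above queries the unlabeled sample whose predictive class distribution is most uncertain. A minimal sketch using softmax probabilities; the logits are made up for illustration and this omits the paper's lp-prior VB training:

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def max_entropy_query(probs):
    """Index of the sample whose predictive distribution has the highest
    entropy, i.e. the one the classifier is least sure about."""
    ent = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return int(np.argmax(ent))

probs = softmax(np.array([[4.0, 0.0, 0.0],    # confidently class 0
                          [0.1, 0.0, 0.2],    # nearly uniform: uncertain
                          [0.0, 3.0, 0.0]]))  # confidently class 1
print(max_entropy_query(probs))   # → 1
```

The Minimum Probability criterion would instead pick the sample with the smallest maximum posterior probability; on this toy example both select the same sample.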

Proceedings ArticleDOI
22 Jun 2014
TL;DR: The proposed defocus-invariant image registration method can accurately estimate the phase difference between two out-of-focus images, and can be applied to phase-difference detection auto focusing, providing accurate auto focusing performance.
Abstract: This paper presents a defocus-invariant image registration method for measuring the shift between two differently located patterns in an imaging sensor. Existing registration methods fail with unfocused images, since features or regions of interest are degraded by defocus. To solve this problem, the proposed method consists of three stages: i) pre-generation of the set of point spread functions (PSFs) estimated at different focusing positions, ii) estimation of the geometric transformation using the estimated PSF data, and iii) registration using the estimated transformation matrix. The proposed method compensates for out-of-focus degradation through estimation of the PSF. For this reason, the proposed method can accurately estimate the phase difference between two out-of-focus images. Furthermore, it can be applied to phase-difference detection auto focusing, and provide accurate auto focusing performance.

Journal ArticleDOI
TL;DR: A Bayesian based algorithm to recover sparse signals from compressed noisy measurements in the presence of a smooth background component is proposed and its advantage over the current state-of-the-art solutions is demonstrated.
Abstract: We propose a Bayesian based algorithm to recover sparse signals from compressed noisy measurements in the presence of a smooth background component. This problem is closely related to robust principal component analysis and compressive sensing, and is found in a number of practical areas. The proposed algorithm adopts a hierarchical Bayesian framework for modeling, and employs approximate inference to estimate the unknowns. Numerical examples demonstrate the effectiveness of the proposed algorithm and its advantage over the current state-of-the-art solutions.

Proceedings Article
13 Nov 2014
TL;DR: This paper proposes two algorithmic solutions that exploit the signal temporal properties to improve the reconstruction accuracy and the effectiveness of the proposed algorithms is corroborated with experimental results.
Abstract: In this paper we consider the problem of recovering temporally smooth or correlated sparse signals from a set of undersampled measurements. We propose two algorithmic solutions that exploit the signal temporal properties to improve the reconstruction accuracy. The effectiveness of the proposed algorithms is corroborated with experimental results.

Proceedings ArticleDOI
22 Jun 2014
TL;DR: The proposed algorithm consists of mosaicking for covering the missing static, planar regions, estimation of local motion vectors using the hierarchical Lucas-Kanade optical flow method, and selection of the most similar patch in both spatial and temporal neighbors.
Abstract: This paper presents a video completion algorithm using block matching for video stabilization. In order to fill in missing pixels, the proposed algorithm consists of three steps: i) mosaicking for covering the missing static, planar regions, ii) estimation of local motion vectors using the hierarchical Lucas-Kanade optical flow method, and iii) selection of the most similar patch among both spatial and temporal neighbors. The proposed video completion algorithm can be applied in a wide range of consumer electronics, including camcorders, smartphone cameras, tablet cameras, and smart glasses.
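Step iii) above selects the most similar patch by block matching. A minimal exhaustive sum-of-squared-differences (SSD) sketch of that building block; the real system would restrict the search to spatial and temporal neighbors rather than scan a whole frame, and the frame and patch here are synthetic:

```python
import numpy as np

def best_match(patch, frame, stride=1):
    """Exhaustive SSD block matching: return the top-left corner (y, x)
    in `frame` whose block is most similar to `patch`."""
    ph, pw = patch.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(0, frame.shape[0] - ph + 1, stride):
        for x in range(0, frame.shape[1] - pw + 1, stride):
            ssd = np.sum((frame[y:y + ph, x:x + pw] - patch) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

rng = np.random.default_rng(2)
frame = rng.random((32, 32))
patch = frame[10:18, 5:13].copy()   # a known 8x8 block of the frame
print(best_match(patch, frame))     # → (10, 5), the block's true location
```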

Proceedings ArticleDOI
01 Oct 2014
TL;DR: This work utilizes Bayesian modeling and inference to jointly learn a classifier and estimate an optimal filterbank, and shows that the proposed method compares favorably with other classification/filtering approaches, without the need of parameter tuning.
Abstract: Many real classification tasks are oriented to sequence (neighbor) labeling, that is, assigning a label to every sample of a signal while taking into account the sequentiality (or neighborhood) of the samples. This is normally approached by first filtering the data and then performing classification. As a consequence, both processes are optimized separately, with no guarantee of global optimality. In this work we utilize Bayesian modeling and inference to jointly learn a classifier and estimate an optimal filterbank. Variational Bayesian inference is used to approximate the posterior distributions of all unknowns, resulting in an iterative procedure to estimate the classifier parameters and the filterbank coefficients. In the experimental section we show, using synthetic and real data, that the proposed method compares favorably with other classification/filtering approaches, without the need of parameter tuning.