
Showing papers by "Yannick Berthoumieu published in 2017"


Journal ArticleDOI
TL;DR: In this paper, Riemannian Gaussian distributions on the space of symmetric positive definite matrices are introduced, an exact expression of their probability density function is given, and an expectation-maximisation algorithm for mixtures of these distributions is applied to the classification of data in this space (concretely, to texture classification in computer vision).
Abstract: Data which lie in the space $\mathcal{P}_m$ of $m \times m$ symmetric positive definite matrices (sometimes called tensor data) play a fundamental role in applications, including medical imaging, computer vision, and radar signal processing. An open challenge for these applications is to find a class of probability distributions able to capture the statistical properties of data in $\mathcal{P}_m$ as they arise in real-world situations. The present paper meets this challenge by introducing Riemannian Gaussian distributions on $\mathcal{P}_m$. Distributions of this kind were first considered by Pennec in 2006. However, the present paper gives an exact expression of their probability density function for the first time in the existing literature. This leads to two original contributions. First, a detailed study of statistical inference for Riemannian Gaussian distributions, uncovering the connection between maximum likelihood estimation and the concept of Riemannian centre of mass, widely used in applications. Second, the derivation and implementation of an expectation-maximisation algorithm for the estimation of mixtures of Riemannian Gaussian distributions. The paper applies this new algorithm to the classification of data in $\mathcal{P}_m$ (concretely, to the problem of texture classification in computer vision), showing that it yields significantly better performance than recent approaches.

104 citations
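
As a concrete illustration of the density discussed in this entry, here is a minimal Python sketch of the Riemannian Gaussian kernel on $\mathcal{P}_m$, assuming the affine-invariant Riemannian distance and omitting the normalising factor $Z(\sigma)$, which depends only on $\sigma$; function names are ours.

```python
import numpy as np

def spd_log(M):
    # Matrix logarithm of a symmetric positive definite matrix.
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def riemannian_distance(Y, Ybar):
    # Affine-invariant distance d(Y, Ybar) = || log(Ybar^{-1/2} Y Ybar^{-1/2}) ||_F.
    w, V = np.linalg.eigh(Ybar)
    isqrt = (V / np.sqrt(w)) @ V.T
    return np.linalg.norm(spd_log(isqrt @ Y @ isqrt), 'fro')

def riemannian_gaussian_kernel(Y, Ybar, sigma):
    # Unnormalised density exp(-d^2(Y, Ybar) / (2 sigma^2)); the normalising
    # factor Z(sigma) depends only on sigma and is omitted in this sketch.
    return np.exp(-riemannian_distance(Y, Ybar) ** 2 / (2.0 * sigma ** 2))
```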


Proceedings ArticleDOI
17 Sep 2017
TL;DR: The proposed clustering method falls into the family of information-maximization clustering, where mutual information between data features and cluster assignments is maximized; it is adapted to the hyperspectral band selection (HBS) problem and extended to the case of multiple image features.
Abstract: This paper presents a new approach for unsupervised band selection in the context of hyperspectral imaging. The hyperspectral band selection (HBS) task is considered as a clustering problem: bands are clustered in the image space; one representative image is then kept for each cluster, to be part of the set of selected bands. The proposed clustering method falls into the family of information-maximization clustering, where mutual information between data features and cluster assignments is maximized. Inspired by a clustering method of this family, we adapt it to the HBS problem and extend it to the case of multiple image features. A pixel selection step is also integrated to reduce the spatial support of the feature vectors, thus mitigating the curse of dimensionality. Experiments with different standard data sets show that the bands selected with our algorithm lead to higher classification performance, in comparison with other state-of-the-art HBS methods.

15 citations
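
The sketch below illustrates only the clustering view of HBS described above: bands are clustered in image space and one representative band is kept per cluster. Plain k-means stands in for the information-maximization clustering actually used in the paper, and the pixel-selection step is reduced to random subsampling; function and parameter names are ours.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_bands(cube, n_bands, n_pixels=2000, seed=0):
    # cube: hyperspectral image of shape (rows, cols, bands).
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).T                 # one feature vector per band
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[1], size=min(n_pixels, X.shape[1]), replace=False)
    X = X[:, idx]                                 # pixel selection reduces the feature dimension
    km = KMeans(n_clusters=n_bands, n_init=10, random_state=seed).fit(X)
    selected = []
    for k in range(n_bands):
        members = np.where(km.labels_ == k)[0]
        d = np.linalg.norm(X[members] - km.cluster_centers_[k], axis=1)
        selected.append(members[np.argmin(d)])    # band closest to the cluster centre
    return sorted(selected)
```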


Journal ArticleDOI
TL;DR: A spectral and spatial RoF (SSRoF) is proposed to further improve the performance of RoF: pixels are first smoothed by multiscale (MS) spatial weight mean filtering, and a spectral–spatial data transformation is then introduced into the RoF.
Abstract: Rotation Forest (RoF) is a recent powerful decision tree (DT) ensemble classifier of hyperspectral images. RoF exploits random feature selection and data transformation techniques to improve both the diversity and accuracy of DT classifiers. Conventional RoF only considers data transformation on spectral information. To overcome this limitation, we propose a spectral and spatial RoF (SSRoF), to further improve the performance. In SSRoF, pixels are first smoothed by the multiscale (MS) spatial weight mean filtering. Then, spectral–spatial data transformation, which is based on a joint spectral and spatial rotation matrix, is introduced into the RoF. Finally, classification results obtained from each scale are integrated by a majority voting rule. Experimental results on two datasets indicate the competitive performance of the proposed method when compared to other state-of-the-art methods.

14 citations
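
The sketch below illustrates only the multiscale smoothing and majority-voting fusion described above, on a (rows, cols, bands) cube with non-negative integer training labels. A RandomForestClassifier stands in for the spectral-spatial Rotation Forest (which has no standard scikit-learn implementation), and the joint spectral-spatial rotation matrix is not reproduced; names and parameters are ours.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def multiscale_vote(cube, train_mask, train_labels, scales=(3, 5, 7)):
    # cube: (rows, cols, bands); train_mask: boolean (rows, cols);
    # train_labels: non-negative integer labels of the pixels where train_mask is True.
    rows, cols, bands = cube.shape
    votes = []
    for s in scales:
        smoothed = uniform_filter(cube, size=(s, s, 1))   # spatial mean filter at scale s
        X = smoothed.reshape(-1, bands)
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[train_mask.ravel()], train_labels)
        votes.append(clf.predict(X))
    votes = np.stack(votes)                               # (n_scales, n_pixels)
    # majority vote across scales for every pixel (assumes integer class labels)
    fused = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
    return fused.reshape(rows, cols)
```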


Journal ArticleDOI
TL;DR: A variational method combining texture synthesis and image reconstruction is proposed, which leads to a significant improvement, both visually and numerically, over state-of-the-art algorithms for similar problems.
Abstract: In this paper, we aim at super-resolving a low-resolution texture under the assumption that a high-resolution patch of the texture is available. To do so, we propose a variational method that combines two approaches that are texture synthesis and image reconstruction. The resulting objective function holds a nonconvex energy that involves a quadratic distance to the low-resolution image, a histogram-based distance to the high-resolution patch, and a nonlocal regularization that links the missing pixels with the patch pixels. As for the histogram-based measure, we use a sum of Wasserstein distances between the histograms of some linear transformations of the textures. The resulting optimization problem is efficiently solved with a primal-dual proximal method. Experiments show that our method leads to a significant improvement, both visually and numerically, with respect to the state-of-the-art algorithms for solving similar problems.

12 citations
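
Since the histogram-based measure is a sum of Wasserstein distances between histograms of linear transformations of the textures, i.e. between one-dimensional distributions of filter responses, each term can be computed from quantile functions. Below is a minimal sketch under that reading, with a generic filter bank standing in for the transformations used in the paper; the quadratic data term, the nonlocal regularization and the primal-dual solver are not reproduced, and the names are ours.

```python
import numpy as np
from scipy.signal import convolve2d

def wasserstein_1d(a, b, p=2, n_quantiles=256):
    # p-Wasserstein distance between two 1-D empirical distributions,
    # computed as the L^p distance between their quantile functions.
    q = np.linspace(0.0, 1.0, n_quantiles)
    qa, qb = np.quantile(a, q), np.quantile(b, q)
    return (np.mean(np.abs(qa - qb) ** p)) ** (1.0 / p)

def histogram_term(u, patch, filters, p=2):
    # Sum of 1-D Wasserstein distances between the filter responses of the
    # current estimate u and of the high-resolution patch; the filter bank
    # 'filters' is an assumption standing in for the paper's transformations.
    return sum(wasserstein_1d(convolve2d(u, f, mode='valid').ravel(),
                              convolve2d(patch, f, mode='valid').ravel(), p)
               for f in filters)
```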


Book ChapterDOI
07 Nov 2017
TL;DR: The present paper proposes a framework of asymptotic results for MLEs on manifolds (consistency, asymptotic normality and asymptotic efficiency) and extends popular testing problems to manifolds.
Abstract: The maximum likelihood estimator (MLE) is a well-known estimator in statistics. The popularity of this estimator stems from its asymptotic and universal properties. While the asymptotic properties of MLEs on Euclidean spaces have attracted a lot of interest, their study on manifolds is still insufficient. The present paper aims to give a unified study of the subject. Its contributions are twofold. First, it proposes a framework of asymptotic results for MLEs on manifolds: consistency, asymptotic normality and asymptotic efficiency. Second, it extends popular testing problems to manifolds. Some examples are discussed.

5 citations
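
For reference, here is a typical statement of the asymptotic normality result established in this setting, written in the tangent space at the true parameter; the notation (Riemannian logarithm, Fisher information) is ours and the precise regularity conditions of the chapter are not reproduced.

```latex
% Asymptotic normality of the MLE \hat{\theta}_n on a manifold, read in the
% tangent space at the true parameter \theta^{*} (requires amsmath):
\[
  \sqrt{n}\,\mathrm{Log}_{\theta^{*}}\!\bigl(\hat{\theta}_n\bigr)
  \;\xrightarrow{\ d\ }\;
  \mathcal{N}\bigl(0,\; I(\theta^{*})^{-1}\bigr),
  \qquad n \to \infty,
\]
% where \mathrm{Log}_{\theta^{*}} is the Riemannian logarithm map at \theta^{*}
% and I(\theta^{*}) is the Fisher information in an orthonormal basis of the
% tangent space; consistency and asymptotic efficiency are stated similarly.
```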


Proceedings ArticleDOI
05 Mar 2017
TL;DR: A geometric learning approach on the space of complex covariance matrices, based on a new distribution called the Riemannian Gaussian distribution, is introduced, and an application to texture recognition on the VisTex database is proposed.
Abstract: Many signal and image processing applications, including SAR polarimetry and texture analysis, require the classification of complex covariance matrices. The present paper introduces a geometric learning approach on the space of complex covariance matrices based on a new distribution called Riemannian Gaussian distribution. The proposed distribution has two parameters, the centre of mass Ȳ and the dispersion parameter σ. After having derived its maximum likelihood estimator and its extension to mixture models, we propose an application to texture recognition on the VisTex database.

4 citations
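
Under this model the maximum likelihood estimate of the centre of mass Ȳ is the Riemannian centre of mass of the samples. Below is a minimal sketch of that estimator for Hermitian (or real symmetric) positive definite matrices, using the standard fixed-point iteration; the dispersion σ is then obtained from the empirical mean of squared distances to Ȳ through a one-dimensional equation involving the normalising constant, which is not reproduced here. Function names are ours.

```python
import numpy as np

def herm_fun(M, fun):
    # Apply a scalar function to a Hermitian matrix via its eigendecomposition.
    w, V = np.linalg.eigh(M)
    return (V * fun(w)) @ V.conj().T

def karcher_mean(samples, n_iter=100, step=1.0, tol=1e-10):
    # Riemannian centre of mass of HPD/SPD matrices (fixed-point iteration);
    # under the Riemannian Gaussian model this is the MLE of Ybar.
    Ybar = np.mean(samples, axis=0)                        # Euclidean initialisation
    for _ in range(n_iter):
        isqrt = herm_fun(Ybar, lambda w: 1.0 / np.sqrt(w))
        sqrt_ = herm_fun(Ybar, np.sqrt)
        # mean of the samples mapped to the tangent space at the current estimate
        T = np.mean([herm_fun(isqrt @ Y @ isqrt, np.log) for Y in samples], axis=0)
        if np.linalg.norm(T) < tol:
            break
        Ybar = sqrt_ @ herm_fun(step * T, np.exp) @ sqrt_.conj().T
    return Ybar
```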


Proceedings ArticleDOI
01 Dec 2017
TL;DR: A novel algorithm for estimating the parameters of a mixture of Gaussian laws when data lie in a Riemannian manifold: with a slight modification, the stochastic EM algorithm originally developed for the Euclidean case can also be derived for the SPD manifold.
Abstract: This paper presents a novel algorithm for estimating the parameters of a mixture of Gaussian laws when data lie in a Riemannian manifold. We consider the stochastic variant of the well-known Expectation-Maximization (EM) algorithm in the case of Riemannian geometry. The Riemannian mixture is devoted, here, to the case of the Riemannian manifold of Symmetric Positive Definite (SPD) matrices. With a slight modification, the stochastic EM algorithm originally developed for the Euclidean case can also be derived for the SPD manifold. We provide some Monte Carlo numerical simulations in order to analyse, in detail, the proposed algorithm in comparison with the conventional EM one.

2 citations
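
A minimal sketch of the stochastic E-step that distinguishes this variant from the conventional EM: component labels are drawn from the posterior responsibilities instead of being kept as soft weights (the M-step then re-estimates each Riemannian Gaussian from its assigned matrices only). The distances d(Y_i, Ybar_k) are assumed to be precomputed and the normalising constant Z(σ) is left as a placeholder; names are ours.

```python
import numpy as np

def log_Z(sigma):
    # Placeholder: the true normalising constant Z(sigma) of the Riemannian
    # Gaussian on SPD matrices has a known but lengthy expression, omitted here.
    return 0.0

def stochastic_e_step(distances, weights, sigmas, rng):
    # distances: (n_samples, n_components) Riemannian distances d(Y_i, Ybar_k);
    # weights, sigmas: current mixture weights and dispersions.
    log_post = (np.log(weights)[None, :]
                - distances ** 2 / (2.0 * sigmas[None, :] ** 2)
                - np.array([log_Z(s) for s in sigmas])[None, :])
    log_post -= log_post.max(axis=1, keepdims=True)        # numerical stabilisation
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)
    # draw one component label per sample from its posterior
    return np.array([rng.choice(post.shape[1], p=p) for p in post])

# usage: labels = stochastic_e_step(D, w, s, np.random.default_rng(0))
```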


Book ChapterDOI
07 Nov 2017
TL;DR: A novel local model for the classification of covariance matrices, the co-occurrence matrix of covariance matrices, is introduced; it exploits the spatial distribution of the patches.
Abstract: This paper introduces a novel local model for the classification of covariance matrices: the co-occurrence matrix of covariance matrices. Contrary to state-of-the-art models (BoRW, R-VLAD and RFV), this local model exploits the spatial distribution of the patches. Starting from the generative mixture model of Riemannian Gaussian distributions, we introduce this local model. An experiment on texture image classification is then conducted on the VisTex and Outex_TC000_13 databases to evaluate its potential.

2 citations
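
A minimal sketch of the co-occurrence idea, assuming each patch covariance matrix has already been quantised to the index of its most likely Riemannian Gaussian mixture component (giving a 2-D map of labels); the set of offsets, the normalisation and the way the final descriptor is fed to a classifier follow the paper and are not reproduced here. Names are ours.

```python
import numpy as np

def cooccurrence_of_labels(label_map, n_components, offset=(0, 1)):
    # label_map: 2-D array of mixture-component indices, one per patch;
    # counts how often component i at a patch co-occurs with component j at
    # the patch shifted by 'offset', then normalises to a joint frequency.
    dr, dc = offset
    C = np.zeros((n_components, n_components))
    rows, cols = label_map.shape
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            C[label_map[r, c], label_map[r + dr, c + dc]] += 1
    if C.sum() > 0:
        C /= C.sum()
    return C
```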


Proceedings ArticleDOI
17 Sep 2017
TL;DR: A novel framework for visual content classification using jointly local mean vectors and covariance matrices of pixel-level input features is presented, together with a mixture of a finite number of Riemannian Gaussian distributions on the product space that yields a tractable descriptor.
Abstract: This paper presents a novel framework for visual content classification using jointly local mean vectors and covariance matrices of pixel-level input features. We consider the local mean and covariance as realizations of a bivariate Riemannian Gaussian density lying on a product of submanifolds. We first introduce the generalized Mahalanobis distance and then propose a formal definition of our product-space Gaussian distribution on Rm × SPD(m). This definition enables us to build a mixture model of a finite number of Riemannian Gaussian distributions and thus obtain a tractable descriptor. Mixture parameters are estimated from training data by an iterative Expectation-Maximization (EM) algorithm. Experiments on a texture classification task are conducted to evaluate this extended modeling on several color texture databases, namely the popular VisTex, 167-VisTex and CUReT. These experiments show that our new mixture model competes with the state of the art on these datasets.

1 citation
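
A hedged sketch of the product-space construction on Rm × SPD(m): a Mahalanobis-type term on the mean part combined with the affine-invariant Riemannian distance on the covariance part, balanced by a weight alpha. The exact form of the generalized Mahalanobis distance defined in the paper may differ; this only illustrates how the two submanifolds are coupled in one squared distance. Names and the parameter alpha are ours.

```python
import numpy as np

def product_space_sq_distance(x, Y, x_bar, Y_bar, alpha=1.0):
    # Squared distance on R^m x SPD(m): Mahalanobis-type term on the mean part
    # plus the affine-invariant distance on the covariance part (a sketch, not
    # necessarily the paper's exact generalized Mahalanobis distance).
    w, V = np.linalg.eigh(Y_bar)
    isqrt = (V / np.sqrt(w)) @ V.T
    dx = isqrt @ (x - x_bar)
    mean_term = float(dx @ dx)
    wm, _ = np.linalg.eigh(isqrt @ Y @ isqrt)
    cov_term = float(np.sum(np.log(wm) ** 2))
    return mean_term + alpha * cov_term
```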



Book ChapterDOI
07 Nov 2017
TL;DR: In this article, it is shown that the Rao-Fisher metric of any location-scale model is a warped metric, provided that the model satisfies a natural invariance condition, and the analytic expression of the sectional curvature of this metric as well as the exact analytic solution of its geodesic equation are derived.
Abstract: This paper argues that a class of Riemannian metrics, called warped metrics, plays a fundamental role in statistical problems involving location-scale models. The paper reports three new results: (i) the Rao-Fisher metric of any location-scale model is a warped metric, provided that this model satisfies a natural invariance condition, (ii) the analytic expression of the sectional curvature of this metric, (iii) the exact analytic solution of the geodesic equation of this metric. The paper applies these new results to several examples of interest, where it shows that warped metrics turn location-scale models into complete Riemannian manifolds of negative sectional curvature. This is a very suitable situation for developing algorithms which solve problems of classification and on-line estimation. Thus, by revealing the connection between warped metrics and location-scale models, the present paper paves the way to the introduction of new efficient statistical algorithms.
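
For reference, the general shape of a warped metric as used in this setting, with the scale parameter playing the role of the vertical coordinate and the location part living in a manifold N; the concrete warping function depends on the location-scale model and is not reproduced here (notation is ours).

```latex
% General form of a warped Riemannian metric on a product (0,\infty) \times N:
\[
  ds^{2} \;=\; dr^{2} \;+\; \beta^{2}(r)\, ds_{N}^{2},
\]
% where r is the vertical (scale-like) coordinate and ds_{N}^{2} is a fixed
% metric on N. For metrics of this form, the sectional curvature and the
% geodesics admit closed-form expressions in terms of \beta and its
% derivatives, which is what the paper exploits.
```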
