Author

Yannick Berthoumieu

Bio: Yannick Berthoumieu is an academic researcher from the University of Bordeaux. The author has contributed to research in topics: Gaussian & Covariance. The author has an h-index of 22 and has co-authored 161 publications receiving 1,864 citations. Previous affiliations of Yannick Berthoumieu include Total S.A. and Bogor Agricultural University.


Papers
Journal ArticleDOI
TL;DR: This paper proposes to affine-transform the covariance matrices of every session/subject in order to center them with respect to a reference covariance matrix, making data from different sessions/subjects comparable and providing a significant improvement in the BCI transfer learning problem.
Abstract: Objective: This paper tackles the problem of transfer learning in the context of electroencephalogram (EEG)-based brain–computer interface (BCI) classification. In particular, the problems of cross-session and cross-subject classification are considered. These problems concern the ability to use data from previous sessions or from a database of past users to calibrate and initialize the classifier, allowing a calibration-less BCI mode of operation. Methods: Data are represented using spatial covariance matrices of the EEG signals, exploiting the recent successful techniques based on the Riemannian geometry of the manifold of symmetric positive definite (SPD) matrices. Cross-session and cross-subject classification can be difficult, due to the many changes intervening between sessions and between subjects, including physiological, environmental, and instrumental changes. Here, we propose to affine-transform the covariance matrices of every session/subject in order to center them with respect to a reference covariance matrix, making data from different sessions/subjects comparable. Classification is then performed both with a standard minimum-distance-to-mean classifier and with a probabilistic classifier recently developed in the literature, based on a density function (a mixture of Riemannian Gaussian distributions) defined on the SPD manifold. Results: The improvements in classification performance achieved by introducing the affine transformation are documented with the analysis of two BCI datasets. Conclusion and significance: The proposed affine transformation thus makes data from different sessions and subjects comparable, providing a significant improvement in the BCI transfer learning problem.
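As a rough illustration of the recentering step described above, the following Python sketch applies the affine transformation C -> R^(-1/2) C R^(-1/2) to a set of covariance matrices. The log-Euclidean mean is used here only as a simple stand-in reference (the paper centers with respect to a reference matrix on the SPD manifold, such as a Riemannian mean), and the function names are illustrative, not from the paper.

```python
import numpy as np
from scipy.linalg import expm, fractional_matrix_power, logm

def log_euclidean_mean(covs):
    # Simple stand-in reference: the log-Euclidean mean of the SPD set.
    return expm(np.mean([logm(C) for C in covs], axis=0))

def recenter(covs, ref):
    # Congruence by ref^{-1/2} maps the reference matrix to the identity,
    # so covariance sets from different sessions/subjects become comparable.
    ref_isqrt = fractional_matrix_power(ref, -0.5)
    return np.array([ref_isqrt @ C @ ref_isqrt for C in covs])

# Usage: center each session at the identity before classification.
# session_centered = recenter(session_covs, log_euclidean_mean(session_covs))
```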

241 citations

Journal ArticleDOI
TL;DR: It is proved that the maximum likelihood estimator (MLE) of the scatter matrix exists and is unique up to a scalar factor, for a given shape parameter β ∈ (0,1).
Abstract: Due to its heavy-tailed and fully parametric form, the multivariate generalized Gaussian distribution (MGGD) has been receiving much attention in signal and image processing applications. Considering the estimation issue of the MGGD parameters, the main contribution of this paper is to prove that the maximum likelihood estimator (MLE) of the scatter matrix exists and is unique up to a scalar factor, for a given shape parameter β ∈ (0,1). Moreover, an estimation algorithm based on a Newton-Raphson recursion is proposed for computing the MLE of MGGD parameters. Various experiments conducted on synthetic and real data are presented to illustrate the theoretical derivations in terms of number of iterations and number of samples for different values of the shape parameter. The main conclusion of this work is that the parameters of MGGDs can be estimated using the maximum likelihood principle with good performance.
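For illustration only, here is a minimal sketch of the scatter-matrix MLE described above. The paper itself proposes a Newton-Raphson recursion; the plain fixed-point iteration below solves the same likelihood equation and is used here simply because it is short. The trace normalisation reflects the fact that the MLE is unique only up to a scalar factor.

```python
import numpy as np

def mggd_scatter_mle(X, beta, n_iter=200, tol=1e-9):
    # X: (n, p) array of centred samples; beta in (0, 1) is the shape parameter.
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    S /= np.trace(S)
    for _ in range(n_iter):
        # u_i = x_i^T S^{-1} x_i for every sample
        u = np.einsum('ij,jk,ik->i', X, np.linalg.inv(S), X)
        S_new = (beta / n) * (X * u[:, None] ** (beta - 1.0)).T @ X
        S_new /= np.trace(S_new)  # fix the scalar-factor ambiguity
        if np.linalg.norm(S_new - S, 'fro') < tol:
            return S_new
        S = S_new
    return S
```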

130 citations

Proceedings ArticleDOI
07 Nov 2009
TL;DR: This paper presents the Asymmetric Generalized Gaussian density as a model for the detail subbands resulting from multiscale decomposition, and indicates that this model achieves higher recognition rates than the conventional approach of using the Generalized Gaussian model, where asymmetry is not considered.
Abstract: This paper deals with texture analysis based on multiscale stochastic modeling. In contrast to common approaches using symmetric marginal probability density functions of subband coefficients, experiments show that the symmetric shape assumption is violated for several texture classes. We therefore propose to exploit this shape property to improve texture characterization. We present the Asymmetric Generalized Gaussian density as a model for the detail subbands resulting from multiscale decomposition. A fast estimation method is presented, and a closed form of the Kullback-Leibler divergence is provided in order to validate the model in a retrieval scheme. The experimental results indicate that this model achieves higher recognition rates than the conventional approach of using the Generalized Gaussian model, where asymmetry is not considered.
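For reference, one common parameterisation of the Asymmetric Generalized Gaussian density uses separate left/right scale parameters and a shared shape parameter. The sketch below follows that parameterisation as an assumption; it is not the estimator from the paper.

```python
import numpy as np
from scipy.special import gamma

def agg_pdf(x, alpha_l, alpha_r, beta):
    # Asymmetric generalized Gaussian density with mode at zero:
    # left scale alpha_l, right scale alpha_r, shared shape beta.
    norm = beta / ((alpha_l + alpha_r) * gamma(1.0 / beta))
    z = np.where(x < 0, (-x / alpha_l) ** beta, (x / alpha_r) ** beta)
    return norm * np.exp(-z)
```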

127 citations

Journal ArticleDOI
TL;DR: In this paper, Riemannian Gaussian distributions are introduced for data in the space of symmetric positive definite matrices, and an exact expression of their probability density function is given for the first time, enabling maximum likelihood estimation and mixture-based classification.
Abstract: Data which lie in the space $\mathcal{P}_m$ of $m \times m$ symmetric positive definite matrices (sometimes called tensor data) play a fundamental role in applications, including medical imaging, computer vision, and radar signal processing. An open challenge for these applications is to find a class of probability distributions which is able to capture the statistical properties of data in $\mathcal{P}_m$ as they arise in real-world situations. The present paper meets this challenge by introducing Riemannian Gaussian distributions on $\mathcal{P}_m$. Distributions of this kind were first considered by Pennec in 2006. However, the present paper gives an exact expression of their probability density function for the first time in the existing literature. This leads to two original contributions. First, a detailed study of statistical inference for Riemannian Gaussian distributions, uncovering the connection between maximum likelihood estimation and the concept of Riemannian centre of mass, widely used in applications. Second, the derivation and implementation of an expectation-maximisation algorithm for the estimation of mixtures of Riemannian Gaussian distributions. The paper applies this new algorithm to the classification of data in $\mathcal{P}_m$ (concretely, to the problem of texture classification in computer vision), showing that it yields significantly better performance in comparison to recent approaches.
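As a hedged sketch of the density in question: a Riemannian Gaussian on the SPD manifold has the form p(Y) = exp(-d(Y, Ybar)^2 / (2 sigma^2)) / Z(sigma), where d is the affine-invariant Riemannian distance. The normalising factor Z(sigma), whose exact expression is the paper's contribution, is omitted below.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def riemannian_distance(A, B):
    # Affine-invariant distance on the SPD manifold:
    # d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F
    A_isqrt = fractional_matrix_power(A, -0.5)
    return np.linalg.norm(logm(A_isqrt @ B @ A_isqrt), 'fro')

def riemannian_gaussian_unnormalised(Y, Ybar, sigma):
    # Unnormalised density; the constant Z(sigma) depends only on sigma
    # and the matrix dimension, so it cancels when comparing a fixed Y
    # across classes that share a common sigma.
    return np.exp(-riemannian_distance(Y, Ybar) ** 2 / (2.0 * sigma ** 2))
```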

104 citations

Journal ArticleDOI
TL;DR: In the framework of texture image retrieval, a new family of stochastic multivariate models is proposed based on Gaussian copulas and wavelet decompositions, taking advantage of the copula paradigm to separate the dependence structure from the marginal behavior.
Abstract: In the framework of texture image retrieval, a new family of stochastic multivariate models is proposed based on Gaussian copulas and wavelet decompositions. We take advantage of the copula paradigm, which makes it possible to separate the dependence structure from the marginal behavior. We introduce two new multivariate models using, respectively, generalized Gaussian and Weibull densities. These models capture both the subband marginal distributions and the correlation between wavelet coefficients. We derive, as a similarity measure, a closed-form expression of the Jeffrey divergence between Gaussian copula-based multivariate models. Experimental results on well-known databases show significant improvements in retrieval rates using the proposed method compared with the best known state-of-the-art approaches.
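To make the copula idea concrete, here is a minimal sketch of fitting the dependence part of a Gaussian copula: rank-transform each coordinate to normal scores, then estimate their correlation. The marginal models (generalized Gaussian or Weibull in the paper) are fitted separately; this sketch covers only the copula step, and the function name is mine.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_copula_correlation(X):
    # X: (n, d) matrix of observations (e.g., neighbouring wavelet
    # coefficients). Pseudo-observations in (0, 1) -> normal scores
    # -> correlation matrix of the Gaussian copula.
    n, d = X.shape
    U = np.column_stack([rankdata(X[:, j]) for j in range(d)]) / (n + 1.0)
    Z = norm.ppf(U)
    return np.corrcoef(Z, rowvar=False)
```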

101 citations


Cited by
Journal ArticleDOI


08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast at first encounter, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This textbook covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, and combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: Despite its simplicity, BRISQUE is shown to be statistically better than the full-reference peak signal-to-noise ratio and the structural similarity index, and highly competitive with all present-day distortion-generic NR IQA algorithms.
Abstract: We propose a natural scene statistic-based distortion-generic blind/no-reference (NR) image quality assessment (IQA) model that operates in the spatial domain. The new model, dubbed blind/referenceless image spatial quality evaluator (BRISQUE) does not compute distortion-specific features, such as ringing, blur, or blocking, but instead uses scene statistics of locally normalized luminance coefficients to quantify possible losses of “naturalness” in the image due to the presence of distortions, thereby leading to a holistic measure of quality. The underlying features used derive from the empirical distribution of locally normalized luminances and products of locally normalized luminances under a spatial natural scene statistic model. No transformation to another coordinate frame (DCT, wavelet, etc.) is required, distinguishing it from prior NR IQA approaches. Despite its simplicity, we are able to show that BRISQUE is statistically better than the full-reference peak signal-to-noise ratio and the structural similarity index, and is highly competitive with respect to all present-day distortion-generic NR IQA algorithms. BRISQUE has very low computational complexity, making it well suited for real time applications. BRISQUE features may be used for distortion-identification as well. To illustrate a new practical application of BRISQUE, we describe how a nonblind image denoising algorithm can be augmented with BRISQUE in order to perform blind image denoising. Results show that BRISQUE augmentation leads to performance improvements over state-of-the-art methods. A software release of BRISQUE is available online: http://live.ece.utexas.edu/research/quality/BRISQUE_release.zip for public use and evaluation.
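The locally normalized luminances BRISQUE builds on are often called MSCN (mean-subtracted, contrast-normalized) coefficients. A minimal sketch follows, using a Gaussian window as the local weighting; the window parameter and constant below are illustrative, not taken from the released implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7.0 / 6.0, c=1.0):
    # Subtract a local Gaussian-weighted mean and divide by a local
    # Gaussian-weighted standard deviation; c avoids division by zero
    # in flat regions.
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)
    var = gaussian_filter(image * image, sigma) - mu * mu
    return (image - mu) / (np.sqrt(np.clip(var, 0.0, None)) + c)
```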

3,780 citations

Journal ArticleDOI
TL;DR: This work derives a blind IQA model that only makes use of measurable deviations from statistical regularities observed in natural images, without training on human-rated distorted images and, indeed, without any exposure to distorted images.
Abstract: An important aim of research on the blind image quality assessment (IQA) problem is to devise perceptual models that can predict the quality of distorted images with as little prior knowledge of the images or their distortions as possible. Current state-of-the-art “general purpose” no-reference (NR) IQA algorithms require knowledge about anticipated distortions in the form of training examples and corresponding human opinion scores. However, we have recently derived a blind IQA model that only makes use of measurable deviations from statistical regularities observed in natural images, without training on human-rated distorted images and, indeed, without any exposure to distorted images. Thus, it is “completely blind.” The new IQA model, which we call the Natural Image Quality Evaluator (NIQE), is based on the construction of a “quality aware” collection of statistical features based on a simple and successful space domain natural scene statistic (NSS) model. These features are derived from a corpus of natural, undistorted images. Experimental results show that the new index delivers performance comparable to top-performing NR IQA models that require training on large databases of human opinions of distorted images. A software release is available at http://live.ece.utexas.edu/research/quality/niqe_release.zip.
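NIQE's quality score is a distance between two multivariate Gaussian (MVG) fits: one to NSS features from a corpus of pristine images, one to features of the test image. Below is a sketch of that final comparison step, assuming the feature means and covariances have already been estimated; the function name is mine.

```python
import numpy as np

def niqe_distance(mu_nat, cov_nat, mu_img, cov_img):
    # Distance between the natural-image MVG model and the test-image
    # MVG fit; a larger value signals a stronger departure from
    # natural-scene statistics (i.e., worse predicted quality).
    d = mu_nat - mu_img
    pooled = (cov_nat + cov_img) / 2.0
    return float(np.sqrt(d @ np.linalg.solve(pooled, d)))
```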

3,722 citations