
Showing papers by Pierre Alquier published in 2017


Posted Content
TL;DR: In this paper, a general approach to prove the concentration of variational approximations of fractional posteriors is proposed and applied to two examples: matrix completion and Gaussian VB.
Abstract: While Bayesian methods are extremely popular in statistics and machine learning, their application to massive datasets is often challenging, when possible at all. Indeed, classical MCMC algorithms are prohibitively slow when both the model dimension and the sample size are large. Variational Bayesian (VB) methods aim at approximating the posterior by a distribution in a tractable family, so that MCMC is replaced by an optimization algorithm that is orders of magnitude faster. VB methods have been applied in computationally demanding applications such as collaborative filtering, image and video processing, and natural language processing. However, despite very good results in practice, the theoretical properties of these approximations are usually not known. In this paper, we propose a general approach to prove the concentration of variational approximations of fractional posteriors. We apply our theory to two examples: matrix completion, and Gaussian VB.

47 citations
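For intuition, here is a minimal sketch of the variational idea on a toy problem; the model, prior, fractional power alpha and step sizes are illustrative choices, not the paper's setting. A Gaussian family q = N(m, s^2) is fitted to the fractional posterior pi_alpha(theta) ∝ prior(theta) * likelihood(theta)^alpha by stochastic gradient ascent on the ELBO, so that an optimization loop takes the place of MCMC.

import numpy as np

# Sketch of Gaussian VB for a fractional posterior: toy model y_i ~ N(theta, 1)
# with a N(0, 10^2) prior; all tuning constants are illustrative.
rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, size=500)   # synthetic observations
alpha = 0.5                          # fractional power of the likelihood

m, log_s = 0.0, 0.0                  # parameters of q = N(m, exp(log_s)^2)
lr = 1e-3
for _ in range(5000):
    eps = rng.normal()
    s = np.exp(log_s)
    theta = m + s * eps              # reparameterized draw from q
    # d/dtheta of [alpha * log-likelihood + log-prior]
    d_theta = alpha * np.sum(y - theta) - theta / 10.0**2
    m += lr * d_theta                          # stochastic gradient step in m
    log_s += lr * (d_theta * s * eps + 1.0)    # "+1" is the entropy gradient

print(f"q = N({m:.3f}, {np.exp(log_s)**2:.5f})")  # mean near 2, small variance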


Journal ArticleDOI
TL;DR: A novel prior for quantum states (density matrices) is proposed, pseudo-Bayesian estimators of the density matrix are defined, and rates of convergence for the posterior mean are derived.

23 citations
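The paper's novel prior is not reproduced here; as a hedged illustration of what a prior on density matrices can look like, the following sketch draws random states from the standard Ginibre-induced measure (G G† normalized to unit trace), which produces valid d x d density matrices that a pseudo-Bayesian procedure could average over.

import numpy as np

# Illustrative prior draw over density matrices (Ginibre-induced measure);
# this is a standard construction, not necessarily the paper's prior.
rng = np.random.default_rng(1)

def random_density_matrix(d):
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))  # complex Ginibre
    rho = g @ g.conj().T                                        # Hermitian, PSD
    return rho / np.trace(rho).real                             # unit trace

rho = random_density_matrix(2)
assert np.allclose(rho, rho.conj().T)           # Hermitian
assert abs(np.trace(rho).real - 1.0) < 1e-12    # trace one
print(np.linalg.eigvalsh(rho))                  # nonnegative eigenvalues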


Journal ArticleDOI
TL;DR: In this paper, an oracle inequality for a quasi-Bayesian estimator was derived for a very general class of prior distributions and showed how the prior affects the rate of convergence.
Abstract: The aim of this paper is to provide some theoretical understanding of Bayesian non-negative matrix factorization methods. We derive an oracle inequality for a quasi-Bayesian estimator. This result holds for a very general class of prior distributions and shows how the prior affects the rate of convergence. We illustrate our theoretical results with a short numerical study and a discussion of existing implementations.

21 citations
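For intuition, a minimal sketch of a quasi-Bayesian NMF estimator, assuming Exp(1) priors on the factors, a squared Frobenius loss and an illustrative inverse temperature lam (none of these choices are prescribed by the paper): the quasi-posterior pi(W, H) ∝ exp(-lam ||V - WH||_F^2) prior(W) prior(H) is sampled by random-walk Metropolis, and the estimator is the quasi-posterior mean of WH.

import numpy as np

rng = np.random.default_rng(2)
m, n, k = 20, 15, 3
V = rng.gamma(2.0, 1.0, (m, k)) @ rng.gamma(2.0, 1.0, (k, n))   # synthetic data

def log_quasi_post(W, H, lam=1.0):
    if (W < 0).any() or (H < 0).any():
        return -np.inf                        # priors supported on nonnegatives
    loss = np.sum((V - W @ H) ** 2)           # squared Frobenius loss
    log_prior = -W.sum() - H.sum()            # Exp(1) priors, up to constants
    return -lam * loss + log_prior            # lam acts as inverse temperature

W, H = rng.random((m, k)), rng.random((k, n))
lp, samples = log_quasi_post(W, H), []
for t in range(20000):
    Wp = W + 0.01 * rng.normal(size=W.shape)  # random-walk proposal
    Hp = H + 0.01 * rng.normal(size=H.shape)
    lpp = log_quasi_post(Wp, Hp)
    if np.log(rng.random()) < lpp - lp:       # Metropolis accept/reject
        W, H, lp = Wp, Hp, lpp
    if t > 10000 and t % 100 == 0:            # thin after burn-in
        samples.append(W @ H)

V_hat = np.mean(samples, axis=0)              # quasi-posterior mean estimator
print("relative error:", np.linalg.norm(V - V_hat) / np.linalg.norm(V))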


Posted Content
TL;DR: This work obtains estimation error rates and sharp oracle inequalities for regularization procedures of the form $\hat f \in \mathrm{argmin}_{f\in F}\left(\frac{1}{N}\sum_{i=1}^N\ell(f(X_i), Y_i)+\lambda \|f\|\right)$, where $\|\cdot\|$ is any norm, $F$ is a convex class of functions and $\ell$ is a Lipschitz loss function satisfying a Bernstein condition over $F$.
Abstract: We obtain estimation error rates and sharp oracle inequalities for regularization procedures of the form \begin{equation*} \hat f \in \mathrm{argmin}_{f\in F}\left(\frac{1}{N}\sum_{i=1}^N\ell(f(X_i), Y_i)+\lambda \|f\|\right) \end{equation*} when $\|\cdot\|$ is any norm, $F$ is a convex class of functions and $\ell$ is a Lipschitz loss function satisfying a Bernstein condition over $F$. We explore both the bounded and subgaussian stochastic frameworks for the distribution of the $f(X_i)$'s, with no assumption on the distribution of the $Y_i$'s. The general results rely on two main objects: a complexity function and a sparsity equation, which depend on the specific setting at hand (loss $\ell$ and norm $\|\cdot\|$). As a proof of concept, we obtain minimax rates of convergence in the following problems: 1) matrix completion with any Lipschitz loss function, including the hinge and logistic loss for the so-called 1-bit matrix completion instance of the problem, and quantile losses for the general case, which makes it possible to estimate any quantile of the entries of the matrix; 2) the logistic LASSO and variants such as the logistic SLOPE; 3) kernel methods, where the loss is the hinge loss and the regularization function is the RKHS norm.

16 citations
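As a concrete instance of the template above, here is a minimal sketch of the logistic LASSO solved by proximal gradient descent (ISTA) with soft-thresholding; the synthetic data, step size and penalty level are illustrative choices, not taken from the paper.

import numpy as np

rng = np.random.default_rng(3)
N, p = 200, 50
X = rng.normal(size=(N, p))
beta_true = np.zeros(p); beta_true[:5] = 1.0          # sparse ground truth
Y = np.where(rng.random(N) < 1 / (1 + np.exp(-X @ beta_true)), 1.0, -1.0)

lam, step = 0.05, 0.1
beta = np.zeros(p)
for _ in range(500):
    margins = Y * (X @ beta)
    grad = -(X.T @ (Y / (1 + np.exp(margins)))) / N   # gradient of logistic loss
    z = beta - step * grad                            # gradient step
    beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # prox of lam*||.||_1

print("estimated support:", np.nonzero(np.abs(beta) > 1e-3)[0])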


Proceedings Article
01 Jan 2017
TL;DR: This work proposes to use non-negative matrix factorization (NMF) to build a dictionary of travelers' temporal profiles; clustering based on the decomposition in this dictionary rather than on the full profiles leads to more interpretable clusters.
Abstract: We propose to use non-negative matrix factorization (NMF) to build a dictionary of travelers' temporal profiles. Clustering based on the decomposition in this dictionary, rather than on the full profiles (as in previous works), leads to more interpretable clusters.

4 citations
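A minimal sketch of this pipeline on synthetic hourly profiles; the data, number of atoms and number of clusters are illustrative, not the paper's tuning. The point is that travelers are clustered in the low-dimensional coefficient space rather than on the raw profiles.

import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
profiles = rng.poisson(3.0, size=(300, 24)).astype(float)  # 300 travelers x 24 hours

nmf = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
coeffs = nmf.fit_transform(profiles)      # each traveler = mix of 5 dictionary atoms
atoms = nmf.components_                   # the dictionary of temporal profiles

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coeffs)
print("cluster sizes:", np.bincount(labels))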


Posted Content
TL;DR: Informed Sub-Sampling MCMC (ISS-MCMC) is proposed as a generic and flexible approach that preserves the simplicity of the Metropolis-Hastings algorithm.
Abstract: This paper introduces a framework for speeding up Bayesian inference conducted in the presence of large datasets. We design a Markov chain whose transition kernel uses a fixed-size random subset of the available data that is refreshed throughout the algorithm. Inspired by the Approximate Bayesian Computation (ABC) literature, the subsampling process is guided by the fidelity to the observed data, as measured by summary statistics. The resulting algorithm, Informed Sub-Sampling MCMC (ISS-MCMC), is a generic and flexible approach which, contrary to existing scalable methodologies, preserves the simplicity of the Metropolis-Hastings algorithm. Even though exactness is lost, i.e. the chain distribution only approximates the posterior, we theoretically study and quantify this bias, and we show on a diverse set of examples that the method yields excellent performance when the computational budget is limited. We also show that setting the summary statistics to the maximum likelihood estimator, when it is available and cheap to compute, is supported by theoretical arguments.

1 citation
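A heavily simplified sketch of the informed subsampling idea (the kernel, fidelity weight eps and summary statistic below are our illustrative choices, not the paper's construction): Metropolis-Hastings runs on a fixed-size subsample whose composition is refreshed by moves that favor subsamples whose summary statistic stays close to that of the full dataset.

import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(1.5, 1.0, size=100_000)    # large dataset, model y ~ N(theta, 1)
n, eps = 500, 50.0                           # subsample size, fidelity scale
s_full = data.mean()                         # summary statistic of the full data

def log_lik(theta, idx):                     # subsampled log-likelihood, rescaled
    return -0.5 * (len(data) / n) * np.sum((data[idx] - theta) ** 2)

def log_fidelity(idx):                       # penalizes unrepresentative subsamples
    return -eps * (data[idx].mean() - s_full) ** 2

idx = rng.choice(len(data), n, replace=False)
theta, chain = 0.0, []
for t in range(5000):
    # (1) refresh one subsample entry, informed by summary-statistic fidelity
    prop = idx.copy()
    prop[rng.integers(n)] = rng.integers(len(data))
    if np.log(rng.random()) < log_fidelity(prop) - log_fidelity(idx):
        idx = prop
    # (2) Metropolis-Hastings step for theta on the current subsample (flat prior)
    theta_prop = theta + 0.01 * rng.normal()
    if np.log(rng.random()) < log_lik(theta_prop, idx) - log_lik(theta, idx):
        theta = theta_prop
    chain.append(theta)

print("posterior mean estimate:", np.mean(chain[1000:]))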