Nicolas Le Roux
Researcher at Google
Publications - 87
Citations - 8569
Nicolas Le Roux is an academic researcher at Google. He has contributed to research on topics including reinforcement learning and the Hessian matrix, has an h-index of 26, and has co-authored 84 publications receiving 7,724 citations. Previous affiliations of Nicolas Le Roux include the Centre national de la recherche scientifique and the École Normale Supérieure.
Papers
Proceedings Article
Out-of-Sample Extensions for LLE, Isomap, MDS, Eigenmaps, and Spectral Clustering
Yoshua Bengio, Jean-François Paiement, Pascal Vincent, Olivier Delalleau, Nicolas Le Roux, Marie-Claude Ouimet
TL;DR: Provides a unified framework for extending Locally Linear Embedding, Isomap, Laplacian Eigenmaps, and Multidimensional Scaling to out-of-sample points, as well as for Spectral Clustering.
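The extension is Nyström-style: a new point is embedded by combining the eigenvectors and eigenvalues of the training kernel matrix with kernel evaluations between the new point and the training set. Below is a minimal numpy sketch of that idea, where kernel, eigvecs, and eigvals are illustrative names and the generic kernel argument is an assumption (the paper derives a specific data-dependent kernel for each of the five methods):

    import numpy as np

    def nystrom_embedding(x_new, X_train, eigvecs, eigvals, kernel):
        # eigvecs: (n, m) eigenvectors of the n x n training kernel matrix,
        # eigvals: (m,) matching eigenvalues, kernel: assumed pairwise function.
        n = X_train.shape[0]
        # Kernel values between the new point and every training point.
        k = np.array([kernel(x_new, x_i) for x_i in X_train])
        # Coordinate k of the embedding: (sqrt(n) / lambda_k) * sum_i v_ik * K(x, x_i)
        return (np.sqrt(n) / eigvals) * (eigvecs.T @ k)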
Journal ArticleDOI
Representational power of restricted Boltzmann machines and deep belief networks
Nicolas Le Roux, Yoshua Bengio
TL;DR: This work proves that adding hidden units yields strictly improved modeling power and, in a second theorem, that RBMs are universal approximators of discrete distributions; it also suggests a new, less greedy criterion for training RBMs within DBNs.
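The improved-modeling-power result is easiest to see from the RBM's marginal over visible units, which factors into one multiplicative "expert" per hidden unit, so each added unit contributes one more factor. A minimal numpy sketch under assumed names (b, c, W for the visible biases, hidden biases, and weight matrix; none of these come from the paper's notation):

    import numpy as np

    def rbm_unnormalized_prob(v, b, c, W):
        # Marginalizing the binary hidden units in closed form gives
        # p(v) proportional to exp(b.v) * prod_j (1 + exp(c_j + W_j . v)),
        # i.e. one multiplicative factor per hidden unit (row of W).
        return np.exp(b @ v) * np.prod(1.0 + np.exp(c + W @ v))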
Proceedings Article
A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets
TL;DR: In this paper, a new stochastic gradient method was proposed to optimize the sum of a finite set of smooth functions whose sum is strongly convex; by keeping a memory of previous gradient values, it achieves a linear convergence rate.
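The memory works as follows: the method stores the most recently evaluated gradient of every training term and steps along their running average, so each iteration costs a single gradient evaluation, like SGD, while the search direction converges to the full gradient. A minimal sketch, with grad_i and the other names assumed for illustration rather than taken from the paper:

    import numpy as np

    def sag(grad_i, x0, n, step, iters, seed=0):
        # grad_i(x, i): assumed callable returning the gradient of term i at x.
        rng = np.random.default_rng(seed)
        x = x0.copy()
        memory = np.zeros((n,) + x0.shape)  # last gradient seen for each term
        avg = np.zeros_like(x0)             # running average of the memory
        for _ in range(iters):
            i = rng.integers(n)
            g = grad_i(x, i)
            avg += (g - memory[i]) / n      # refresh one slot; O(dim) per step
            memory[i] = g
            x -= step * avg
        return x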
Journal ArticleDOI
Minimizing finite sums with the stochastic average gradient
TL;DR: In this paper, the stochastic average gradient (SAG) method is used to optimize the sum of a finite number of smooth convex functions, achieving a faster convergence rate than black-box stochastic gradient methods.