Loic Matthey
Researcher at Google
Publications - 30
Citations - 6831
Loic Matthey is an academic researcher at Google. His work focuses on feature learning and reinforcement learning. He has an h-index of 21 and has co-authored 30 publications receiving 4,899 citations. Previous affiliations of Loic Matthey include École Polytechnique Fédérale de Lausanne and University College London.
Papers
Proceedings Article
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Irina Higgins, Loic Matthey, Arka Pal, Christopher P. Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, Alexander Lerchner +7 more
TL;DR: A modification of the variational autoencoder (VAE) framework is proposed that learns interpretable, factorised latent representations from raw image data in a completely unsupervised manner.
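As a hedged illustration of the framework summarised above: the β-VAE objective augments the standard VAE evidence lower bound with a weight β > 1 on the KL term, which constrains the latent channel capacity and encourages disentangled factors. The sketch below is a minimal NumPy version of that objective for a diagonal-Gaussian posterior; the function name and the default β are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def beta_vae_loss(recon_log_lik, mu, logvar, beta=4.0):
    """Negative beta-VAE objective (to be minimised), a sketch of
    Higgins et al. (2017).

    recon_log_lik : log p(x|z) per example, shape (batch,)
    mu, logvar    : parameters of the Gaussian posterior q(z|x),
                    shape (batch, latent_dim)
    beta          : weight on the KL term; beta=1 recovers a standard
                    VAE, beta>1 pressures the latents to disentangle.
    """
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1)
    # Negative ELBO with the KL term up-weighted by beta.
    return np.mean(-recon_log_lik + beta * kl)
```

With β = 1 this is the ordinary negative ELBO; raising β trades reconstruction accuracy for a more factorised latent code.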
Understanding disentangling in β-VAE
Christopher P. Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nicholas Watters, Guillaume Desjardins, Alexander Lerchner +6 more
TL;DR: A modification to the training regime of β-VAE is proposed that progressively increases the information capacity of the latent code during training, facilitating the robust learning of disentangled representations in β-VAE without the previous trade-off in reconstruction accuracy.
Posted Content
MONet: Unsupervised Scene Decomposition and Representation
Christopher P. Burgess, Loic Matthey, Nicholas Watters, Rishabh Kabra, Irina Higgins, Matthew Botvinick, Alexander Lerchner +6 more
TL;DR: The Multi-Object Network (MONet) is developed, which is capable of learning to decompose and represent challenging 3D scenes into semantically meaningful components, such as objects and background elements.
Posted Content
Towards a Definition of Disentangled Representations
Irina Higgins, David Amos, David Pfau, Sébastien Racanière, Loic Matthey, Danilo Jimenez Rezende, Alexander Lerchner +6 more
TL;DR: It is suggested that transformations that change only some properties of the underlying world state, while leaving all other properties invariant, are what give exploitable structure to any kind of data.