
Laurens van der Maaten

Researcher at Facebook

Publications: 127
Citations: 79,845

Laurens van der Maaten is an academic researcher at Facebook. The author has contributed to research in the topics of Computer science and Network architecture, has an h-index of 47, and has co-authored 118 publications receiving 54,188 citations. Previous affiliations of Laurens van der Maaten include Maastricht University and Delft University of Technology.

Papers
Journal Article

Submix: Practical Private Prediction for Large-Scale Language Models

TL;DR: This work introduces SUBMIX, a practical protocol for private next-token prediction that prevents privacy violations by language models fine-tuned on a private corpus after pre-training on a public corpus, based on a relaxation of group differentially private prediction.
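The TL;DR describes a prediction-time protocol rather than a training method. As a rough illustration of one generic building block in private prediction, namely combining a public pre-trained model's next-token distribution with privately fine-tuned predictions, here is a minimal sketch. This is not the SUBMIX protocol itself, which additionally bounds how much the private corpus can influence each prediction; the function name and the mixing weight lam are illustrative assumptions.

```python
import numpy as np

def mix_next_token_distributions(p_public, p_private, lam):
    """Convex combination of next-token distributions.

    p_public:  distribution from the public pre-trained model
    p_private: distribution from a model fine-tuned on private data
    lam:       in [0, 1]; smaller values leak less private signal
    """
    p_public = np.asarray(p_public, dtype=float)
    p_private = np.asarray(p_private, dtype=float)
    return (1.0 - lam) * p_public + lam * p_private
```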
Posted Content

Separating Self-Expression and Visual Content in Hashtag Supervision

TL;DR: An approach is presented that extends simple image-label modeling with a joint model of images, hashtags, and users. Image tagging and retrieval experiments demonstrate the efficacy of the approach and show how the joint model can be used to perform user-conditional retrieval and tagging.
Posted Content

Privacy-Preserving Contextual Bandits.

TL;DR: A privacy-preserving contextual bandit algorithm is developed that combines secure multi-party computation with a differentially private mechanism based on epsilon-greedy exploration in contextual bandits.
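For readers unfamiliar with the underlying bandit component, the sketch below shows plain epsilon-greedy action selection for a linear contextual bandit. It deliberately omits the paper's actual contributions (the secure multi-party computation and the differentially private mechanism); the function and variable names are illustrative assumptions, not the paper's API.

```python
import numpy as np

def epsilon_greedy_action(context, weights, epsilon, rng):
    """Pick an arm for a linear contextual bandit with epsilon-greedy exploration.

    context: feature vector for the current round, shape (num_features,)
    weights: per-arm linear reward estimates, shape (num_arms, num_features)
    epsilon: probability of exploring a uniformly random arm
    """
    num_arms = weights.shape[0]
    if rng.random() < epsilon:
        return int(rng.integers(num_arms))       # explore: random arm
    return int(np.argmax(weights @ context))     # exploit: highest estimated reward
```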
Posted Content

Evaluating Text-to-Image Matching using Binary Image Selection (BISON)

TL;DR: In this article, the Binary Image SelectiON (BISON) dataset is used to evaluate text-based image retrieval and image captioning systems by requiring them to select which of two images matches a given text description.
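As a minimal sketch of how a binary image selection evaluation of this kind could be scored, the snippet below computes accuracy over (caption, image pair, correct index) examples, assuming a generic score(caption, image) matching function. The names are illustrative assumptions and are not taken from the paper's released code.

```python
def binary_selection_accuracy(examples, score):
    """examples: iterable of (caption, image_a, image_b, correct), correct in {0, 1}.
    score: any text-to-image matching function, higher means a better match."""
    hits, total = 0, 0
    for caption, image_a, image_b, correct in examples:
        predicted = 0 if score(caption, image_a) >= score(caption, image_b) else 1
        hits += int(predicted == correct)
        total += 1
    return hits / total
```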
Posted Content

Marginalizing Corrupted Features

TL;DR: This paper proposes a third, alternative approach to combat overfitting: extending the training set with infinitely many artificial training examples obtained by corrupting the original training data. The resulting framework, marginalized corrupted features (MCF), trains robust predictors by minimizing the expected value of the loss function under the corruption model.
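To make the idea of minimizing the expected loss under a corruption model concrete, here is a minimal NumPy sketch for one simple instantiation: a quadratic loss under blankout (feature-dropout) corruption, where the expectation has a closed form and can be compared against explicit Monte Carlo corruption. The choice of loss, corruption model, and variable names here are illustrative assumptions, not the full MCF framework.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)    # a single feature vector
w = rng.normal(size=5)    # linear model weights
y = 1.0                   # regression target
q = 0.3                   # blankout probability: each feature is zeroed with prob q

# Explicit corruption: draw many corrupted copies of x and average the loss.
mask = rng.random((100_000, 5)) > q
mc_loss = ((mask * x @ w - y) ** 2).mean()

# Marginalized loss: E[(w^T x~ - y)^2] = w^T E[x~ x~^T] w - 2 y w^T E[x~] + y^2,
# using E[x~] = (1-q) x and E[x~ x~^T] = (1-q)^2 x x^T + q (1-q) diag(x * x).
Ex = (1 - q) * x
Exx = (1 - q) ** 2 * np.outer(x, x) + q * (1 - q) * np.diag(x * x)
marg_loss = w @ Exx @ w - 2 * y * (w @ Ex) + y ** 2

print(mc_loss, marg_loss)  # the two agree up to Monte Carlo error
```

Training then minimizes the marginalized expression directly, which avoids ever materializing the corrupted copies of the data.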