
Ruslan Salakhutdinov

Researcher at Carnegie Mellon University

Publications: 457
Citations: 142,495

Ruslan Salakhutdinov is an academic researcher at Carnegie Mellon University. His research focuses on computer science and artificial neural networks. He has an h-index of 107 and has co-authored 410 publications receiving 115,921 citations. Previous affiliations include Carnegie Learning and the University of Toronto.

Papers
Posted Content

Integrating Auxiliary Information in Self-supervised Learning.

TL;DR: In this article, the authors integrate auxiliary information (e.g., additional data attributes such as hashtags for Instagram images) into the self-supervised learning process.
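One common way to use such auxiliary information is to treat samples sharing an auxiliary attribute as extra positive pairs in a contrastive objective. The following is a minimal sketch of that idea; the function name, the InfoNCE-style loss, and the use of attribute equality to define positives are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def contrastive_loss_with_aux(embeddings, aux_labels, temperature=0.5):
    """InfoNCE-style loss where positives are pairs sharing an auxiliary label
    (e.g., the same hashtag). Purely illustrative of the general idea."""
    # cosine similarities between L2-normalized embeddings
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                 # exclude self-pairs
    # log-softmax over each row (each sample's candidates)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos_mask = aux_labels[:, None] == aux_labels[None, :]
    np.fill_diagonal(pos_mask, False)
    # average log-probability over each sample's positives
    contrib = np.where(pos_mask, log_prob, 0.0)
    per_sample = contrib.sum(axis=1) / np.maximum(pos_mask.sum(axis=1), 1)
    return -per_sample.mean()
```

In practice the embeddings would come from a trained encoder and the loss would be minimized by gradient descent; this sketch only shows how auxiliary labels reshape the positive set.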
Journal ArticleDOI

Multimodal Learning Without Labeled Multimodal Data: Guarantees and Applications

TL;DR: In this article, the authors derive lower and upper bounds to quantify the amount of multimodal interaction in a semi-supervised setting with only labeled unimodal data and naturally co-occurring multimodal data (e.g., unlabeled images and captions, or video and the corresponding audio).
Proceedings Article

Contrastive Example-Based Control

TL;DR: In this paper, the authors propose an implicit model of multi-step transitions to represent the Q-values for the example-based control problem; these Q-values can then be used directly to select good actions.
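In example-based control, the agent is given example states where the task has succeeded rather than a reward function. A rough sketch of the idea is to score each action by how similar the predicted outcome is to the success examples; the function names, the linear toy dynamics in the test, and the cosine-similarity scoring below are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def q_values(state, actions, success_examples, embed, predict_next):
    """Score each action by the mean cosine similarity between the
    embedding of its predicted next state and embeddings of the
    user-provided success examples (illustrative sketch only)."""
    succ = np.stack([embed(s) for s in success_examples])
    succ_norms = np.linalg.norm(succ, axis=1)
    scores = []
    for a in actions:
        nxt = embed(predict_next(state, a))
        sims = succ @ nxt / (succ_norms * np.linalg.norm(nxt))
        scores.append(sims.mean())                 # higher -> better action
    return np.array(scores)
```

A greedy policy would then pick `actions[np.argmax(q_values(...))]`; the paper's contribution is learning such a model contrastively over multi-step transitions rather than assuming known dynamics as this toy does.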
Posted Content

Robustness and Generalization to Nearest Categories

TL;DR: In this paper, the authors show that robust networks perform well on some out-of-distribution generalization tasks, such as transfer learning and outlier detection, and also do well on a task they call nearest category generalization.
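"Nearest category generalization" can be pictured as asking a model to map an input from an unseen class onto the closest training category. A minimal sketch of that evaluation, using nearest class centroids in feature space, is shown below; the centroid rule and all names are assumptions for illustration, not the paper's exact protocol.

```python
import numpy as np

def nearest_category(x, features, labels):
    """Assign x to the training class whose feature centroid is closest
    (illustrative stand-in for nearest-category evaluation)."""
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(centroids - x, axis=1)
    return classes[np.argmin(dists)]
```

In the paper's setting, `features` would be penultimate-layer activations of a (robust) network, and the finding is that robust training makes these assignments more semantically sensible for out-of-distribution inputs.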
Posted Content

The Information Geometry of Unsupervised Reinforcement Learning

TL;DR: The authors show that the distribution over skills provides an optimal initialization, minimizing regret against adversarially chosen reward functions under a certain type of adaptation procedure. However, they do not show that these algorithms are optimal for every possible reward function.