Ruslan Salakhutdinov

Researcher at Carnegie Mellon University

Publications - 457
Citations - 142,495

Ruslan Salakhutdinov is an academic researcher at Carnegie Mellon University. The author has contributed to research on topics including computer science and artificial neural networks, has an h-index of 107, and has co-authored 410 publications receiving 115,921 citations. Previous affiliations of Ruslan Salakhutdinov include Carnegie Learning and the University of Toronto.

Papers
Proceedings ArticleDOI

Multimodal Routing: Improving Local and Global Interpretability of Multimodal Language Analysis.

TL;DR: This paper proposes Multimodal Routing, which dynamically adjusts the weights between input modalities and output representations separately for each input sample, while maintaining performance competitive with state-of-the-art methods.
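
A generic illustration of per-sample routing (a sketch of the idea, not the paper's exact architecture; the dot-product agreement score, the softmax normalization, and all shapes below are assumptions):

    import numpy as np

    def route(modality_feats, concepts):
        """One routing step for a single input sample: each modality
        distributes its feature over output concepts according to a
        softmax over agreement scores, so the weights differ per sample.

        modality_feats: (n_modalities, d) features for one sample
        concepts:       (n_concepts, d) output/concept representations
        """
        scores = modality_feats @ concepts.T          # agreement, (n_mod, n_con)
        scores -= scores.max(axis=1, keepdims=True)   # softmax stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=1, keepdims=True) # routing coefficients
        return weights.T @ modality_feats             # updated concepts, (n_con, d)

    rng = np.random.default_rng(0)
    feats = rng.normal(size=(3, 16))      # e.g. language, audio, vision
    concepts = rng.normal(size=(4, 16))
    updated = route(feats, concepts)

Because the routing coefficients are recomputed for each sample, they can be inspected directly, which is the source of the local interpretability the summary refers to.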
Proceedings Article

On Multiplicative Integration with Recurrent Neural Networks

TL;DR: In this paper, a simple general structural design called multiplicative integration (MI) is introduced to improve recurrent neural networks (RNNs): it changes how the information flows are integrated within the computational building block of an RNN while introducing almost no extra parameters.
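
In its simplest form, MI swaps the additive combination Wx + Uh of a vanilla RNN cell for a Hadamard product of the two information flows. A minimal numpy sketch (the sizes and the tanh nonlinearity are arbitrary choices for illustration):

    import numpy as np

    def vanilla_step(x, h, W, U, b):
        # Standard additive integration: phi(Wx + Uh + b)
        return np.tanh(W @ x + U @ h + b)

    def mi_step(x, h, W, U, b):
        # Multiplicative integration, simplest form: phi(Wx * Uh + b).
        # The input and recurrent flows gate each other elementwise,
        # using no parameters beyond the vanilla cell's.
        return np.tanh((W @ x) * (U @ h) + b)

    d_in, d_h = 8, 16
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(d_h, d_in))
    U = rng.normal(scale=0.1, size=(d_h, d_h))
    b = np.zeros(d_h)
    h = np.zeros(d_h)
    h = mi_step(rng.normal(size=d_in), h, W, U, b)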
Proceedings ArticleDOI

Towards Debiasing Sentence Representations

TL;DR: This article investigated the presence of social biases in sentence-level representations and proposed a new method, Sent-Debias, to reduce them. The method is effective at removing biases while preserving performance on sentence-level downstream tasks such as sentiment analysis, linguistic acceptability, and natural language understanding.
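
The core operation in this family of debiasing methods is a projection: estimate a bias subspace, then remove each embedding's component in it. A sketch of that step (the SVD-based subspace estimate and the pair-difference construction are generic choices here, not necessarily the paper's exact recipe):

    import numpy as np

    def bias_subspace(diff_vectors, k=2):
        """Estimate a bias subspace from difference vectors between
        counterfactual sentence pairs (e.g. 'he is a doctor' vs.
        'she is a doctor'), via the top-k principal directions."""
        centered = diff_vectors - diff_vectors.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return vt[:k]                           # (k, d), orthonormal rows

    def debias(embedding, basis):
        # Subtract the embedding's projection onto the bias subspace.
        return embedding - basis.T @ (basis @ embedding)

    rng = np.random.default_rng(0)
    diffs = rng.normal(size=(50, 32))   # stand-in pair-difference vectors
    basis = bias_subspace(diffs)
    clean = debias(rng.normal(size=32), basis)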
Proceedings Article

On Reward-Free Reinforcement Learning with Linear Function Approximation

TL;DR: This paper gives an algorithm for reward-free RL in the linear Markov decision process setting, where both the transition and the reward admit linear representations; its sample complexity is polynomial in the feature dimension and the planning horizon, and completely independent of the number of states and actions.
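
For reference, the linear MDP assumption mentioned in the summary is standardly written as follows (a textbook formulation, not quoted from the paper):

    P_h(s' \mid s, a) = \langle \phi(s, a), \mu_h(s') \rangle,
    \qquad
    r_h(s, a) = \langle \phi(s, a), \theta_h \rangle,

with a known feature map \phi : S \times A \to \mathbb{R}^d. Under this assumption the sample complexity can depend polynomially on the dimension d and the horizon H rather than on |S| or |A|.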
Posted Content

Deep Generative Models with Learnable Knowledge Constraints

TL;DR: In this article, a mathematical correspondence between posterior regularization (PR) and reinforcement learning (RL) is established, and, based on this connection, PR is expanded so that the constraints themselves can be learned, playing the role of the extrinsic reward in RL.
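
For orientation, posterior regularization is typically posed as a KL objective of roughly this shape (a standard formulation; the constraint function f and weight \alpha are generic notation, not the paper's):

    \min_{q, \theta} \; \mathrm{KL}\big(q(x) \,\|\, p_\theta(x)\big) - \alpha \, \mathbb{E}_{q}[f(x)]
    \quad \Longrightarrow \quad
    q^*(x) \propto p_\theta(x) \exp\big(\alpha f(x)\big).

Reading f as a reward and q as a policy gives the RL view under which the constraint f can itself be learned.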