
Ruslan Salakhutdinov

Researcher at Carnegie Mellon University

Publications - 457
Citations - 142495

Ruslan Salakhutdinov is an academic researcher at Carnegie Mellon University. He has contributed to research on topics including Computer science and Artificial neural network. He has an h-index of 107 and has co-authored 410 publications receiving 115921 citations. His previous affiliations include Carnegie Learning and the University of Toronto.

Papers
Posted Content

Learning with the Weighted Trace-norm under Arbitrary Sampling Distributions

TL;DR: The standard weighted trace-norm can fail when the sampling distribution is not a product distribution. A corrected variant is presented for which strong learning guarantees are established, and it is suggested that even if the true distribution is known (or is uniform), weighting by the empirical distribution may be beneficial.
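A minimal NumPy sketch of the weighted trace-norm and of the empirical marginals the TL;DR suggests weighting by; the function names and the mask-based setup are illustrative assumptions, not the paper's code:

```python
import numpy as np

def weighted_trace_norm(X, p_row, q_col):
    """||diag(p)^(1/2) X diag(q)^(1/2)||_*  (sum of singular values)."""
    W = np.sqrt(p_row)[:, None] * X * np.sqrt(q_col)[None, :]
    return np.linalg.svd(W, compute_uv=False).sum()

def empirical_marginals(mask):
    """Row/column observation frequencies from a 0/1 observed-entry mask."""
    p = mask.sum(axis=1) / mask.sum()
    q = mask.sum(axis=0) / mask.sum()
    return p, q
```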
Proceedings Article

Active Neural Localization

TL;DR: Active Neural Localizer incorporates ideas from traditional filtering-based localization methods, using a structured belief over the state with multiplicative interactions to propagate it, and combines this with a policy model to localize accurately while minimizing the number of steps required for localization.
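A minimal sketch of the multiplicative filtering update the TL;DR refers to, assuming a discrete grid belief, a shift-style motion model, and a precomputed observation likelihood (all illustrative choices, not the paper's architecture):

```python
import numpy as np

def belief_update(belief, motion_shift, obs_likelihood):
    """One step of a grid-based Bayes filter with a multiplicative update.

    belief: (H, W) probability map over poses
    motion_shift: (dy, dx) deterministic motion on the grid (a toy model)
    obs_likelihood: (H, W) likelihood of the current observation per pose
    """
    predicted = np.roll(belief, shift=motion_shift, axis=(0, 1))  # transition step
    updated = predicted * obs_likelihood                          # multiplicative interaction
    return updated / updated.sum()                                # renormalize to a distribution
```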
Proceedings Article

A Multiplicative Model for Learning Distributed Text-Based Attribute Representations

TL;DR: The paper describes a third-order model in which word-context and attribute vectors interact multiplicatively to predict the next word in a sequence, leading to the notion of conditional word similarity: how the meanings of words change when conditioned on different attributes.
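A rough sketch of a factored third-order (multiplicative) interaction of the kind described, where context and attribute vectors meet elementwise in a shared factor space before scoring the next word; all dimensions and parameter names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
V, D, A, F = 1000, 64, 16, 128          # vocab, word dim, attribute dim, factors

W_c = rng.normal(scale=0.01, size=(F, D))   # projects the word context
W_a = rng.normal(scale=0.01, size=(F, A))   # projects the attribute
W_o = rng.normal(scale=0.01, size=(V, F))   # scores each vocabulary word

def next_word_logits(context_vec, attribute_vec):
    factors = (W_c @ context_vec) * (W_a @ attribute_vec)  # multiplicative gating
    return W_o @ factors                                   # one score per word
```

Conditional word similarity falls out of the same structure: comparing gated word representations under different attribute vectors gives attribute-dependent similarities.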
Proceedings Article

Cardinality Restricted Boltzmann Machines

TL;DR: It is shown that a dynamic programming algorithm can implement exact sparsity in the RBM's hidden units, and that derivatives can be passed through the resulting posterior marginals, making it possible to fine-tune a pre-trained neural network with sparse hidden layers.
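A sketch of the kind of dynamic program involved: exact posterior marginals of n independent hidden units under an at-most-k cardinality constraint, via forward/backward recursions over the number of active units. Variable names are illustrative, and a real implementation would work in log space for numerical stability:

```python
import numpy as np

def cardinality_marginals(w, k):
    """Marginals P(h_i = 1) for units with on-weights w_i = exp(activation_i),
    restricted to configurations with at most k of the n units active."""
    n = len(w)
    # F[i, c]: total weight of configurations of units < i with c active
    F = np.zeros((n + 1, k + 1)); F[0, 0] = 1.0
    # B[i, c]: total weight of configurations of units >= i with c active
    B = np.zeros((n + 1, k + 1)); B[n, 0] = 1.0
    for i in range(n):
        F[i + 1, :] = F[i, :]            # unit i off
        F[i + 1, 1:] += F[i, :-1] * w[i]  # unit i on
    for i in range(n - 1, -1, -1):
        B[i, :] = B[i + 1, :]
        B[i, 1:] += B[i + 1, :-1] * w[i]
    Z = F[n, :].sum()                    # normalizer over all valid configs
    marg = np.empty(n)
    for i in range(n):
        # unit i on: prefix has c1 active, suffix c2 active, c1 + 1 + c2 <= k
        total = sum(F[i, c1] * B[i + 1, c2]
                    for c1 in range(k) for c2 in range(k - c1))
        marg[i] = w[i] * total / Z
    return marg
```

Because each marginal is a smooth function of the activations, derivatives can be propagated through this computation, which is what enables the fine-tuning described above.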
Proceedings Article

Reinforcement Learning with General Value Function Approximation: Provably Efficient Approach via Bounded Eluder Dimension

TL;DR: In this paper, the regret bound for RL with general value function approximation was shown to be $\widetilde{O}(\mathrm{poly}(dH)\sqrt{T})$, where $d$ is the eluder dimension of the value function class, $H$ is the planning horizon, and $T$ is the number of interactions with the environment.
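For reference, a reconstruction of the result in display form; the episodic regret definition below is the standard one and is assumed here, with $K$ episodes of horizon $H$ (so $T = KH$) and $d$ the bounded eluder dimension named in the title:

```latex
\mathrm{Regret}(K) \;=\; \sum_{k=1}^{K} \Big( V^{\star}(s_1^{k}) - V^{\pi_k}(s_1^{k}) \Big)
\;\le\; \widetilde{O}\!\left( \mathrm{poly}(dH)\,\sqrt{T} \right)
```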