Ruslan Salakhutdinov
Researcher at Carnegie Mellon University
Publications: 457
Citations: 142,495
Ruslan Salakhutdinov is an academic researcher at Carnegie Mellon University. His research spans computer science and artificial neural networks. He has an h-index of 107 and has co-authored 410 publications receiving 115,921 citations. Previous affiliations of Ruslan Salakhutdinov include Carnegie Learning and the University of Toronto.
Papers
Posted Content
Reinforcement Learning with General Value Function Approximation: Provably Efficient Approach via Bounded Eluder Dimension
TL;DR: This paper establishes a provably efficient RL algorithm with general value function approximation that achieves a regret bound of $\widetilde{O}(\mathrm{poly}(dH)\sqrt{T})$ and provides a framework to justify the effectiveness of algorithms used in practice.
Posted Content
Learning Generative Models with Visual Attention
TL;DR: A deep-learning-based generative framework using attention that can robustly attend to the face region of unseen test subjects and can learn generative models of new faces from a novel dataset of large images in which the face locations are not known.
Posted Content
Architectural Complexity Measures of Recurrent Neural Networks
Saizheng Zhang,Yuhuai Wu,Tong Che,Zhouhan Lin,Roland Memisevic,Ruslan Salakhutdinov,Yoshua Bengio +6 more
TL;DR: In this article, a graph-theoretic framework is presented to analyze the connecting architectures of RNNs, and three architectural complexity measures are proposed: the recurrent depth, the feedforward depth, and the recurrent skip coefficient.
Posted Content
The MineRL 2020 Competition on Sample Efficient Reinforcement Learning using Human Priors.
William H. Guss,Mario Ynocente Castro,Sam Devlin,Brandon Houghton,Noboru Sean Kuno,Crissman Loomis,Stephanie Milani,Sharada P. Mohanty,Keisuke Nakata,Ruslan Salakhutdinov,John Schulman,Shinya Shiroshita,Nicholay Topin,Avinash Ummadisingu,Oriol Vinyals +14 more
TL;DR: The MineRL Competition on Sample Efficient Reinforcement Learning using Human Priors is introduced, to foster the development of algorithms which can efficiently leverage human demonstrations to drastically reduce the number of samples needed to solve complex, hierarchical, and sparse environments.
Proceedings Article
Point Cloud GAN
TL;DR: This paper proposes a twofold modification to the GAN algorithm for learning to generate point clouds (PC-GAN), combining ideas from hierarchical Bayesian modeling and implicit generative models by learning a hierarchical and interpretable sampling process.