Ruslan Salakhutdinov
Researcher at Carnegie Mellon University
Publications - 457
Citations - 142495
Ruslan Salakhutdinov is an academic researcher at Carnegie Mellon University. He has contributed to research topics including computer science and artificial neural networks. He has an h-index of 107 and has co-authored 410 publications receiving 115,921 citations. His previous affiliations include Carnegie Learning and the University of Toronto.
Papers
Proceedings Article
Efficient Learning of Deep Boltzmann Machines
TL;DR: A new approximate inference algorithm for Deep Boltzmann Machines (DBMs), a generative model with many layers of hidden variables, learns a separate “recognition” model that is used to quickly initialize, in a single bottom-up pass, the values of the latent variables in all hidden layers.
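The bottom-up initialization described above can be illustrated with a minimal sketch. This is not the paper's implementation (mean-field inference in a DBM involves additional details such as weight doubling); it only shows the idea of a single feed-forward recognition pass, with `recognition_pass` and the toy weight matrices being hypothetical names for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def recognition_pass(v, weights):
    # Single bottom-up pass: each hidden layer's activations are
    # initialized from the layer below via the recognition weights.
    hiddens = []
    h = v
    for W in weights:
        h = sigmoid(h @ W)
        hiddens.append(h)
    return hiddens

# Toy two-hidden-layer model: 6 visible units, then 4 and 3 hidden units.
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(6, 4)), rng.normal(size=(4, 3))]
v = rng.integers(0, 2, size=6).astype(float)
h1, h2 = recognition_pass(v, Ws)
print(h1.shape, h2.shape)  # (4,) (3,)
```

The point is that one deterministic pass yields an initialization for every hidden layer at once, which iterative inference can then refine.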
Journal ArticleDOI
Learning deep generative models
TL;DR: The aim of the thesis is to demonstrate that deep generative models that contain many layers of latent variables and millions of parameters can be learned efficiently, and that the learned high-level feature representations can be successfully applied in a wide spectrum of application domains, including visual object recognition, information retrieval, and classification and regression tasks.
Proceedings Article
Deep Sets
Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabás Póczos, Ruslan Salakhutdinov, Alexander J. Smola +5 more
TL;DR: In this paper, the authors study the problem of designing models for machine learning tasks defined on sets and provide a family of functions to which any permutation invariant objective function must belong.
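The family of functions referred to above has the sum-decomposition form ρ(Σ φ(x)): embed each set element with φ, pool by summation, then apply a readout ρ. A minimal sketch, with `phi` and `rho` as hypothetical stand-ins for learned networks:

```python
import numpy as np

def phi(x):
    # Hypothetical per-element embedding (a learned network in practice).
    return np.array([x, x ** 2])

def rho(pooled):
    # Hypothetical readout applied to the pooled representation.
    return float(pooled.sum())

def deep_set(elements):
    # Sum pooling over per-element embeddings makes the output
    # invariant to the ordering of the input set.
    pooled = sum(phi(x) for x in elements)
    return rho(pooled)

# Reordering the set leaves the output unchanged.
print(deep_set([1.0, 2.0, 3.0]) == deep_set([3.0, 1.0, 2.0]))  # True
```

Because addition is commutative, any permutation of the input produces the same pooled vector, hence the same output.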
Proceedings Article
Deep learning for neuroimaging: A validation study
TL;DR: In this article, a constraint-based approach to visualizing high-dimensional data is proposed to analyze the effect of parameter choices on data transformations, showing that deep learning methods are able to learn physiologically important representations and detect latent relations in neuroimaging data.
Posted Content
Multimodal Transformer for Unaligned Multimodal Language Sequences
Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J. Zico Kolter, Louis-Philippe Morency, Ruslan Salakhutdinov +6 more
TL;DR: Comprehensive experiments on both aligned and non-aligned multimodal time series show that the MulT model outperforms state-of-the-art methods by a large margin, and empirical analysis suggests that correlated crossmodal signals can be captured by the proposed crossmodal attention mechanism in MulT.
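The crossmodal attention mentioned above is scaled dot-product attention where queries come from one modality and keys/values from another, so the two sequences need not be aligned or equal in length. A minimal NumPy sketch (not the paper's multi-head implementation; the sequence lengths and dimensions are illustrative):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def crossmodal_attention(queries, keys, values):
    # Queries from the target modality attend over keys/values from the
    # source modality; no alignment between the sequences is required.
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (len_q, len_kv)
    weights = softmax(scores, axis=-1)       # rows sum to 1
    return weights @ values                  # (len_q, d_v)

rng = np.random.default_rng(0)
text = rng.normal(size=(5, 8))    # e.g. 5 text tokens
audio = rng.normal(size=(9, 8))   # e.g. 9 audio frames, unaligned with text
out = crossmodal_attention(text, audio, audio)
print(out.shape)  # (5, 8)
```

The output has one row per query token, each a weighted mixture of the other modality's frames, which is what lets the model fuse unaligned sequences.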