Honglak Lee

Researcher at University of Michigan

Publications - 282
Citations - 49,905

Honglak Lee is an academic researcher at the University of Michigan. His research focuses on reinforcement learning and deep learning. He has an h-index of 86 and has co-authored 260 publications that have received 41,786 citations. His previous affiliations include the University of Massachusetts Amherst and Stanford University.

Papers
Proceedings Article

Multimodal Deep Learning

TL;DR: This work presents a series of tasks for multimodal learning and shows how to train deep networks that learn features to address them. It also demonstrates cross-modality feature learning, where better features for one modality can be learned when multiple modalities are present at feature-learning time.
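The cross-modality idea above — a single shared representation that reconstructs every modality — can be illustrated with a toy bimodal linear autoencoder. This is a minimal sketch, not the paper's architecture (which uses deep networks over audio and video); all sizes and weight names here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two "modalities" (e.g. audio and video features) generated from
# one underlying latent signal, mimicking the multimodal setting.
n, d_a, d_v, d_h = 200, 8, 6, 4
z = rng.normal(size=(n, d_h))            # shared latent factors
Xa = z @ rng.normal(size=(d_h, d_a))     # modality A observations
Xv = z @ rng.normal(size=(d_h, d_v))     # modality B observations

# Bimodal linear autoencoder: encode the concatenated modalities into one
# shared code, then decode both modalities from that single code.
X = np.hstack([Xa, Xv])
W_enc = rng.normal(scale=0.1, size=(d_a + d_v, d_h))
W_dec = rng.normal(scale=0.1, size=(d_h, d_a + d_v))

loss0 = float(np.mean((X @ W_enc @ W_dec - X) ** 2))

lr = 0.005
for _ in range(300):
    H = X @ W_enc                        # shared multimodal representation
    err = H @ W_dec - X                  # reconstruction error on both modalities
    # Gradient steps on the squared reconstruction loss.
    W_dec -= lr * (H.T @ err) / n
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / n

loss = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
```

After training, the hidden code `H` reconstructs both modalities at once, which is the property the paper exploits when one modality is missing at test time.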
Proceedings Article

Efficient sparse coding algorithms

TL;DR: These algorithms are applied to natural images and it is demonstrated that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons.
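The underlying optimization alternates between inferring sparse codes and updating the dictionary. The paper's algorithms are feature-sign search and a Lagrange-dual dictionary update; the sketch below swaps in plain ISTA for the codes and a projected gradient step for the dictionary as simple stand-ins, with all variable names invented here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse coding: minimize ||X - S B||^2 + lam * ||S||_1 over codes S and
# dictionary B, with each dictionary atom constrained to unit norm.
n, d, k, lam = 100, 16, 32, 0.1
X = rng.normal(size=(n, d))
B = rng.normal(size=(k, d))
B /= np.linalg.norm(B, axis=1, keepdims=True)   # unit-norm atoms

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

S = np.zeros((n, k))
for _ in range(50):
    # Code inference: ISTA steps on the L1-regularized least squares.
    L = np.linalg.norm(B @ B.T, 2)              # Lipschitz constant of the gradient
    for _ in range(10):
        grad = (S @ B - X) @ B.T
        S = soft_threshold(S - grad / L, lam / L)
    # Dictionary update: gradient step, then re-project atoms to unit norm.
    B -= 0.01 * (S.T @ (S @ B - X))
    B /= np.linalg.norm(B, axis=1, keepdims=True)

sparsity = float(np.mean(S == 0))               # fraction of exactly-zero coefficients
```

The soft-thresholding step is what produces exactly-zero coefficients, i.e. the sparse codes whose response properties the paper compares to V1 neurons.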
Proceedings Article

Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations

TL;DR: This paper presents the convolutional deep belief network, a hierarchical generative model that scales to realistic image sizes, is translation-invariant, and supports efficient bottom-up and top-down probabilistic inference.
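A distinctive component of this model is probabilistic max-pooling: each pooling unit is tied to a small block of detection units via a softmax that includes an "all off" state. The sketch below shows one bottom-up pass for a single filter; the filter values, layer sizes, and variable names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# One bottom-up pass of a convolutional RBM with probabilistic max-pooling:
# each 2x2 block of detection units shares a pooling unit, and at most one
# detection unit per block can be on.
v = rng.normal(size=(8, 8))              # visible layer (e.g. an image patch)
W = rng.normal(scale=0.1, size=(3, 3))   # one convolutional filter
b = 0.0                                  # hidden bias

# Valid cross-correlation: detection-layer pre-activations (8-3+1 = 6x6).
H = 6
energy = np.empty((H, H))
for i in range(H):
    for j in range(H):
        energy[i, j] = np.sum(v[i:i+3, j:j+3] * W) + b

# Probabilistic max-pooling over 2x2 blocks (6x6 -> 3x3 pooling layer):
# softmax over the 4 detection units plus one "all off" state.
p_pool = np.empty((3, 3))
for bi in range(3):
    for bj in range(3):
        block = energy[2*bi:2*bi+2, 2*bj:2*bj+2]
        e = np.exp(block - block.max())              # shifted for stability
        denom = e.sum() + np.exp(-block.max())       # "+1" term = off state
        p_pool[bi, bj] = e.sum() / denom             # P(pooling unit is on)
```

Because the off state competes in the same softmax, pooling probabilities stay well-defined, which is what lets the model support both bottom-up and top-down inference through the pooling layer.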
Proceedings Article

Learning structured output representation using deep conditional generative models

TL;DR: A deep conditional generative model for structured output prediction using Gaussian latent variables is developed. It is trained efficiently in the framework of stochastic gradient variational Bayes and allows fast prediction via stochastic feed-forward inference.
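The stochastic gradient variational Bayes objective mentioned above can be sketched as a single forward pass: a recognition network proposes a Gaussian latent code from (x, y), the reparameterization trick draws a differentiable sample, and the loss combines reconstruction error with a closed-form KL term. The linear "networks" and all names below are placeholders for the paper's deep networks.

```python
import numpy as np

rng = np.random.default_rng(2)

# One SGVB forward pass of a conditional-VAE-style model: predict structured
# output y from input x via a latent z ~ q(z | x, y) at training time.
d_x, d_y, d_z, n = 5, 3, 2, 10
x = rng.normal(size=(n, d_x))
y = rng.normal(size=(n, d_y))

# Hypothetical linear stand-ins for the recognition and generation networks.
W_mu = rng.normal(scale=0.1, size=(d_x + d_y, d_z))
W_logvar = rng.normal(scale=0.1, size=(d_x + d_y, d_z))
W_dec = rng.normal(scale=0.1, size=(d_x + d_z, d_y))

xy = np.hstack([x, y])
mu, logvar = xy @ W_mu, xy @ W_logvar

# Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
eps = rng.normal(size=(n, d_z))
z = mu + np.exp(0.5 * logvar) * eps

# Decoder conditions on both the input x and the latent sample z.
y_hat = np.hstack([x, z]) @ W_dec

recon = np.mean(np.sum((y - y_hat) ** 2, axis=1))
# KL(q(z|x,y) || N(0, I)) in closed form for diagonal Gaussians.
kl = np.mean(0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1))
neg_elbo = recon + kl
```

At test time, y is unknown, so prediction instead samples z from a prior network conditioned on x alone — the "stochastic feed-forward inference" the summary refers to.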
Proceedings Article

An analysis of single-layer networks in unsupervised feature learning

TL;DR: The authors apply several off-the-shelf feature learning algorithms (sparse auto-encoders, sparse RBMs, K-means clustering, and Gaussian mixtures) to the CIFAR, NORB, and STL datasets using only single-layer networks, and show that the number of hidden nodes may matter more to performance than the choice of learning algorithm or the depth of the model.
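The simplest pipeline from that paper pairs K-means centroids with a soft "triangle" activation. The sketch below shows the two steps on random patches; the paper additionally whitens the patches first, which is omitted here, and the sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Single-layer feature learning: learn K centroids from patches with K-means,
# then encode each patch with the soft "triangle" activation.
n_patches, d, K = 500, 12, 16
patches = rng.normal(size=(n_patches, d))

# Plain K-means (Lloyd's algorithm).
centroids = patches[rng.choice(n_patches, K, replace=False)].copy()
for _ in range(20):
    dists = np.linalg.norm(patches[:, None, :] - centroids[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    for j in range(K):
        members = patches[assign == j]
        if len(members):
            centroids[j] = members.mean(axis=0)

# Triangle encoding: f_k(x) = max(0, mean_k dist(x) - dist_k(x)).
# Centroids farther than the average distance produce exact zeros,
# giving a sparse, nonnegative feature vector per patch.
dists = np.linalg.norm(patches[:, None, :] - centroids[None, :, :], axis=2)
features = np.maximum(0.0, dists.mean(axis=1, keepdims=True) - dists)
```

With more centroids, `features` simply gets wider — which is the knob the paper found to matter more than the choice of learning algorithm.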