Andrew Y. Ng

Researcher at Stanford University

Publications -  356
Citations -  184387

Andrew Y. Ng is an academic researcher from Stanford University. The author has contributed to research in topics including deep learning and supervised learning, has an h-index of 130, and has co-authored 345 publications receiving 164,995 citations. Previous affiliations of Andrew Y. Ng include the Max Planck Society and Baidu.

Papers
Proceedings Article

Learning Word Vectors for Sentiment Analysis

TL;DR: This work presents a model that uses a mix of unsupervised and supervised techniques to learn word vectors capturing semantic term-document information as well as rich sentiment content, and shows that it outperforms several previously introduced methods for sentiment classification.
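
The paper couples an unsupervised probabilistic document model with a supervised sentiment objective. As a minimal sketch of just the supervised half, the toy example below jointly trains word vectors and a logistic sentiment classifier on a mean-of-word-vectors document representation; the corpus, the representation, and all hyperparameters are invented, and the unsupervised term-document component is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy corpus: each document is a list of word ids plus a label (1 = positive).
vocab = ["great", "awful", "movie", "plot", "loved", "hated"]
docs = [([0, 2, 4], 1), ([1, 2, 5], 0), ([4, 3], 1), ([5, 3], 0)]

V, D = len(vocab), 8                          # vocabulary size, embedding dimension
W = rng.normal(scale=0.1, size=(V, D))        # word vectors
psi = rng.normal(scale=0.1, size=D)           # sentiment weight vector

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, lam = 0.5, 1e-3
for _ in range(200):
    for word_ids, y in docs:
        doc_vec = W[word_ids].mean(axis=0)    # document = mean of its word vectors
        g = sigmoid(doc_vec @ psi) - y        # gradient of the logistic loss
        psi -= lr * (g * doc_vec + lam * psi)
        W[word_ids] -= lr * (g * psi / len(word_ids) + lam * W[word_ids])

# Sentiment-bearing words should separate; words used in both classes stay near zero.
for i, w in enumerate(vocab):
    print(f"{w:6s} score: {W[i] @ psi:+.3f}")
```
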
Proceedings Article

Large Scale Distributed Deep Networks

TL;DR: This paper considers the problem of training a deep network with billions of parameters using tens of thousands of CPU cores and develops two algorithms for large-scale distributed training, Downpour SGD and Sandblaster L-BFGS, which increase the scale and speed of deep network training.
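
A rough sketch of the Downpour SGD idea (Sandblaster L-BFGS is not shown): workers independently read possibly stale parameters, compute gradients on their own data shards, and push updates asynchronously. Here Python threads stand in for the paper's worker machines and a locked array stands in for its sharded parameter server; the linear-regression task and all constants are invented.

```python
import threading
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0, 0.5])
X = rng.normal(size=(1200, 3))
y = X @ true_w + 0.01 * rng.normal(size=1200)

class ParameterServer:
    """Holds the global parameters; workers read and update them asynchronously."""
    def __init__(self, dim):
        self.w = np.zeros(dim)
        self.lock = threading.Lock()

    def fetch(self):
        with self.lock:
            return self.w.copy()

    def push(self, grad, lr=0.05):
        with self.lock:
            self.w -= lr * grad

def worker(ps, X_shard, y_shard, steps=300, batch=16):
    rng = np.random.default_rng()
    for _ in range(steps):
        idx = rng.integers(0, len(X_shard), size=batch)
        w = ps.fetch()                        # possibly stale read
        err = X_shard[idx] @ w - y_shard[idx]
        grad = X_shard[idx].T @ err / batch
        ps.push(grad)                         # asynchronous update

ps = ParameterServer(dim=3)
shards = np.array_split(np.arange(len(X)), 4)   # one data shard per worker
threads = [threading.Thread(target=worker, args=(ps, X[s], y[s])) for s in shards]
for t in threads: t.start()
for t in threads: t.join()
print("recovered weights:", np.round(ps.w, 3))
```
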
Proceedings Article

Distance Metric Learning with Application to Clustering with Side-Information

TL;DR: This paper presents an algorithm that, given examples of similar (and, if desired, dissimilar) pairs of points in ℝⁿ, learns a distance metric over ℝⁿ that respects these relationships.
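
The paper formulates this as a convex optimization over the matrix A in the metric d_A(x, y) = sqrt((x−y)ᵀA(x−y)). Below is a sketch of only the diagonal-A special case, using projected gradient descent on a normalized variant of the paper's objective g(A) = Σ_S ‖x−y‖²_A − log(Σ_D ‖x−y‖_A); the two-cluster dataset, step size, and iteration count are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: two clusters that differ only in the first two of five dimensions.
pos = rng.normal(loc=[3, 3, 0, 0, 0], size=(20, 5))
neg = rng.normal(loc=[-3, -3, 0, 0, 0], size=(20, 5))

# Similar pairs: within a cluster. Dissimilar pairs: across clusters.
sim = np.array([p - q for p in pos[:10] for q in pos[10:]]
               + [p - q for p in neg[:10] for q in neg[10:]])
dis = np.array([p - q for p in pos for q in neg])

a = np.ones(5)                                  # diagonal of the metric matrix A
for _ in range(500):
    # g(a) = mean_S ||d||_a^2 - log(mean_D ||d||_a), with ||d||_a^2 = sum_k a_k d_k^2
    dnorm = np.sqrt(dis**2 @ a)
    grad = (sim**2).mean(axis=0) \
         - (dis**2 / (2 * dnorm[:, None])).mean(axis=0) / dnorm.mean()
    a = np.maximum(a - 0.01 * grad, 0.0)        # project onto a >= 0

# The two informative dimensions should receive the largest weights.
print("learned diagonal (normalized):", np.round(a / a.max(), 3))
```
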
Proceedings Article

Apprenticeship learning via inverse reinforcement learning

TL;DR: This work models the expert as trying to maximize a reward function that is expressible as a linear combination of known features, and gives an algorithm for learning the task demonstrated by the expert, based on using "inverse reinforcement learning" to recover the unknown reward function.
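
A compact way to see the loop is the paper's projection variant: estimate the expert's feature expectations μ_E, set the reward weights to w = μ_E − μ̄, solve the MDP under that reward, and project μ̄ toward μ_E until the margin is small. The sketch below runs this on an invented 5-state chain MDP with one-hot state features; the environment, horizon, and tolerance are not from the paper.

```python
import numpy as np

GAMMA, N = 0.9, 5
def step(s, a):                      # a=0: move left, a=1: move right (deterministic chain)
    return max(s - 1, 0) if a == 0 else min(s + 1, N - 1)

def feature_expectations(policy, start=0, horizon=100):
    mu, s = np.zeros(N), start
    for t in range(horizon):
        mu[s] += GAMMA ** t          # one-hot state features: phi(s) = e_s
        s = step(s, policy[s])
    return mu

def optimal_policy(w):               # value iteration for the reward r(s) = w[s]
    V = np.zeros(N)
    for _ in range(200):
        V = np.array([w[s] + GAMMA * max(V[step(s, 0)], V[step(s, 1)])
                      for s in range(N)])
    return [int(V[step(s, 1)] >= V[step(s, 0)]) for s in range(N)]

mu_E = feature_expectations([1] * N)          # expert: always move right
mu_bar = feature_expectations([0] * N)        # initial policy: always move left
for _ in range(20):
    w = mu_E - mu_bar                         # reward weights from the current margin
    if np.linalg.norm(w) < 1e-6:
        break
    mu_i = feature_expectations(optimal_policy(w))
    d = mu_i - mu_bar                         # project mu_bar toward mu_E along d
    mu_bar = mu_bar + (d @ (mu_E - mu_bar)) / (d @ d) * d

print("expert mu: ", np.round(mu_E, 2))
print("matched mu:", np.round(mu_bar, 2))
```
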
Proceedings Article

Multimodal Deep Learning

TL;DR: This work presents a series of tasks for multimodal learning and shows how to train deep networks that learn features to address these tasks; it demonstrates cross-modality feature learning, where better features for one modality can be learned when multiple modalities are present at feature-learning time.
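
To make cross-modality feature learning concrete, here is a heavily simplified linear sketch (the paper uses deep autoencoders built from sparse RBMs): a shared hidden layer is trained to reconstruct both modalities while only one modality is visible per step, so a representation learned from one modality carries information about the other. The paired toy data, the linear model, and the masking schedule are invented, and the paper additionally trains on examples where both modalities are present.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented paired data: both "modalities" are noisy linear views of a shared latent z.
Z = rng.normal(size=(2000, 2))
Xa = Z @ rng.normal(size=(2, 4)) + 0.05 * rng.normal(size=(2000, 4))  # e.g. audio
Xb = Z @ rng.normal(size=(2, 3)) + 0.05 * rng.normal(size=(2000, 3))  # e.g. video

H = 2                                          # shared hidden size
Wa = rng.normal(scale=0.1, size=(4, H)); Ua = rng.normal(scale=0.1, size=(H, 4))
Wb = rng.normal(scale=0.1, size=(3, H)); Ub = rng.normal(scale=0.1, size=(H, 3))

lr = 0.01
for step in range(3000):
    i = rng.integers(0, 2000, size=32)
    xa, xb = Xa[i], Xb[i]
    # Show only one modality per step, but always reconstruct both.
    xa_in = xa if step % 2 == 0 else np.zeros_like(xa)
    xb_in = xb if step % 2 == 1 else np.zeros_like(xb)
    h = xa_in @ Wa + xb_in @ Wb                # shared representation
    ea, eb = h @ Ua - xa, h @ Ub - xb          # reconstruction errors
    dh = ea @ Ua.T + eb @ Ub.T                 # backprop into the hidden layer
    Ua -= lr * h.T @ ea / 32; Ub -= lr * h.T @ eb / 32
    Wa -= lr * xa_in.T @ dh / 32; Wb -= lr * xb_in.T @ dh / 32

# Cross-modality inference: reconstruct modality B from modality A alone.
xb_hat = (Xa @ Wa) @ Ub
print("A-only reconstruction of B, corr:",
      round(float(np.corrcoef(xb_hat.ravel(), Xb.ravel())[0, 1]), 2))
```
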