Andrew Y. Ng

Researcher at Stanford University

Publications - 356
Citations - 184,387

Andrew Y. Ng is an academic researcher at Stanford University. He has contributed to research topics including deep learning and supervised learning, has an h-index of 130, and has co-authored 345 publications receiving 164,995 citations. His previous affiliations include the Max Planck Society and Baidu.

Papers
Proceedings Article

Learning Syntactic Patterns for Automatic Hypernym Discovery

TL;DR: This paper presents a new algorithm for automatically learning hypernym (is-a) relations from text using "dependency path" features extracted from parse trees, and introduces a general-purpose formalization and generalization of these patterns.
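
A minimal sketch of that idea, assuming spaCy (with the en_core_web_sm model) and scikit-learn are available; the dep_path helper and the toy sentences are illustrative, not from the paper:

import spacy
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

nlp = spacy.load("en_core_web_sm")

def dep_path(tok1, tok2):
    # Join the two tokens' ancestor chains at their lowest common
    # ancestor and read off the dependency labels along the way.
    anc1 = [tok1] + list(tok1.ancestors)
    anc2 = [tok2] + list(tok2.ancestors)
    pos2 = {t.i: k for k, t in enumerate(anc2)}
    k1, k2 = next((k, pos2[t.i]) for k, t in enumerate(anc1) if t.i in pos2)
    up = [t.dep_ for t in anc1[:k1]]
    down = [t.dep_ for t in anc2[:k2]]
    return ">".join(up) + "|" + "<".join(reversed(down))

# Each candidate (hyponym, hypernym) pair in a sentence becomes one
# training example whose feature is the dependency path linking the pair.
examples = [
    ("Animals such as dogs make good companions.", "dogs", "Animals", 1),
    ("The dog chased the red car.", "dog", "car", 0),
]
feats, labels = [], []
for text, a, b, y in examples:
    doc = nlp(text)
    t1 = next(t for t in doc if t.text == a)
    t2 = next(t for t in doc if t.text == b)
    feats.append({dep_path(t1, t2): 1})
    labels.append(y)

vec = DictVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(feats), labels)
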
Proceedings Article

Building high-level features using large scale unsupervised learning

TL;DR: In this paper, a 9-layered locally connected sparse autoencoder with pooling and local contrast normalization was used to learn high-level, class-specific feature detectors from only unlabeled data.
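
As a rough sketch of the core building block (not the paper's full 9-layer, locally connected architecture with pooling and local contrast normalization), here is a single-layer sparse autoencoder in PyTorch; the sizes, learning rate, and sparsity weight are illustrative:

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, n_in=784, n_hidden=256):
        super().__init__()
        self.encoder = nn.Linear(n_in, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))  # hidden activations
        return self.decoder(h), h

model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)  # stand-in for a batch of unlabeled inputs

for _ in range(100):
    recon, h = model(x)
    # Reconstruction error plus an L1 penalty that pushes hidden units
    # toward sparse activation, which is the "sparse" in sparse autoencoder.
    loss = ((recon - x) ** 2).mean() + 1e-3 * h.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
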
Posted Content

CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison

TL;DR: CheXpert is a large dataset of chest radiographs from 65,240 patients, labeled for 14 observations drawn from radiology reports in a way that captures the uncertainties inherent in radiograph interpretation, with a validation set annotated by 3 board-certified radiologists; the authors investigate different approaches to using the uncertainty labels when training convolutional neural networks that output the probability of each observation given the available frontal and lateral radiographs.
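
CheXpert encodes each observation as positive (1), negative (0), or uncertain (-1). A brief sketch of three of the label policies the paper compares; the array values below are made up:

import numpy as np

# Rows are studies, columns are observations; -1 marks an uncertain label.
labels = np.array([[1, 0, -1],
                   [-1, 1, 0]], dtype=float)

u_ones = np.where(labels == -1, 1.0, labels)   # U-Ones: treat uncertain as positive
u_zeros = np.where(labels == -1, 0.0, labels)  # U-Zeros: treat uncertain as negative
mask = labels != -1                            # U-Ignore: drop uncertain entries from the loss
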
Journal ArticleDOI

Simultaneous Localization and Mapping with Sparse Extended Information Filters

TL;DR: It is shown that, when represented in the information form, map posteriors are dominated by a small number of links that tie together nearby features in the map; this insight is developed into a sparse variant of the extended information filter (EIF), called the sparse extended information filter (SEIF).
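
For context, the information form re-parameterizes the Gaussian map posterior N(\mu, \Sigma) by its information matrix and information vector; in standard EIF notation (not necessarily the paper's),

\Omega = \Sigma^{-1}, \qquad \xi = \Omega \mu,

and a measurement z_t with Jacobian H_t and noise covariance Q_t enters additively:

\Omega_t = \bar{\Omega}_t + H_t^{\top} Q_t^{-1} H_t, \qquad
\xi_t = \bar{\xi}_t + H_t^{\top} Q_t^{-1} \left( z_t - h(\bar{\mu}_t) + H_t \bar{\mu}_t \right).

Because these updates only add local terms, pruning the weak off-diagonal links of \Omega between distant features keeps the matrix sparse, which is what makes the SEIF updates cheap.
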
Proceedings ArticleDOI

Large-scale deep unsupervised learning using graphics processors

TL;DR: It is argued that modern graphics processors far surpass the computational capabilities of multicore CPUs, and have the potential to revolutionize the applicability of deep unsupervised learning methods.
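
A minimal illustration of the claim with PyTorch, timing the same large matrix product on the CPU and on a GPU (the sizes are arbitrary, and a CUDA device is assumed for the second branch):

import time
import torch

W = torch.rand(4096, 4096)
X = torch.rand(4096, 4096)

t0 = time.time()
(W @ X).sum().item()  # force the CPU computation to finish
cpu_s = time.time() - t0

if torch.cuda.is_available():
    Wg, Xg = W.cuda(), X.cuda()
    torch.cuda.synchronize()
    t0 = time.time()
    (Wg @ Xg).sum().item()  # the same product on the GPU
    torch.cuda.synchronize()
    gpu_s = time.time() - t0
    print(f"CPU {cpu_s:.3f}s vs GPU {gpu_s:.3f}s")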