
Andrew Y. Ng

Researcher at Stanford University

Publications: 356
Citations: 184,387

Andrew Y. Ng is an academic researcher at Stanford University. His work spans topics including deep learning and supervised learning. He has an h-index of 130 and has co-authored 345 publications receiving 164,995 citations. His previous affiliations include the Max Planck Society and Baidu.

Papers
Proceedings Article

VisualCheXbert: Addressing the Discrepancy Between Radiology Report Labels and Image Labels.

TL;DR: In this article, the authors used a BERT model to directly map from a radiology report to the image labels, with a supervisory signal determined by a computer vision model trained to detect medical conditions from chest X-ray images.
Posted Content

Data augmentation with Möbius transformations

TL;DR: It is shown that the inclusion of Möbius transformations during training enables improved generalization over prior sample-level data augmentation techniques such as cutout and standard crop-and-flip transformations, most notably in low data regimes.
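The idea behind this augmentation can be sketched as follows: a Möbius transformation f(z) = (az + b)/(cz + d) is applied to image pixel coordinates treated as complex numbers, and the image is resampled under the warped grid. This is a minimal illustrative sketch, not the paper's implementation; the parameter values and the nearest-neighbour resampling are assumptions chosen for brevity.

```python
import numpy as np

def mobius_augment(img, a=1 + 0.1j, b=0.05, c=0.02j, d=1):
    """Warp an image by a Möbius transformation of its pixel coordinates.

    Pixel coordinates are normalized to the complex square [-1, 1]^2.
    For each output pixel w we invert f(z) = (az + b)/(cz + d), i.e.
    z = (dw - b)/(-cw + a), and sample the source image there with
    nearest-neighbour lookup, clipping out-of-range samples to the edge.
    """
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalize pixel indices to complex coordinates in [-1, 1].
    z = (xs / (w - 1) * 2 - 1) + 1j * (ys / (h - 1) * 2 - 1)
    # Inverse Möbius transform of the output grid.
    zw = (d * z - b) / (-c * z + a)
    # Map back to integer pixel indices and clip to image bounds.
    sx = np.clip(((zw.real + 1) / 2 * (w - 1)).round().astype(int), 0, w - 1)
    sy = np.clip(((zw.imag + 1) / 2 * (h - 1)).round().astype(int), 0, h - 1)
    return img[sy, sx]
```

With the identity parameters (a=1, b=0, c=0, d=1) the transform leaves the image unchanged; non-trivial parameters produce the kinds of conformal warps the paper mixes into training alongside crop-and-flip augmentations.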
Posted Content

Evaluating the Disentanglement of Deep Generative Models through Manifold Topology

TL;DR: This work presents a method for quantifying disentanglement that only uses the generative model, by measuring the topological similarity of conditional submanifolds in the learned representation and finds that this method ranks models similarly to existing methods.
Posted Content

DLBCL-Morph: Morphological features computed using deep learning for an annotated digital DLBCL image set

TL;DR: A morphologic analysis of histology sections from 209 DLBCL cases with associated clinical and cytogenetic data suggests that geometric features computed from tumor nuclei are of prognostic importance, and should be validated in prospective studies.
Book Chapter

Reinforcement learning and apprenticeship learning for robotic control

TL;DR: Many robotic control problems, such as autonomous helicopter flight, legged robot locomotion, and autonomous driving, remain challenging even for modern reinforcement learning algorithms, but when allowed to learn from a human demonstration of a task, a number of efficient algorithms can be used to address each of these problems.
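The simplest instance of learning from a human demonstration is behavioural cloning: fit a policy by supervised regression on the expert's (state, action) pairs. The sketch below is an illustrative toy, not the apprenticeship-learning algorithms of the chapter; the linear expert and the data dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demonstration data: 4-dimensional states and the
# expert's continuous actions, generated here by an unknown linear
# expert policy plus a small amount of noise.
states = rng.normal(size=(200, 4))
expert_w = np.array([0.5, -1.0, 0.3, 2.0])
actions = states @ expert_w + 0.01 * rng.normal(size=200)

# Behavioural cloning: least-squares fit of a linear policy to the
# demonstrated (state, action) pairs.
w_hat, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy can now act on states the expert never visited.
new_state = rng.normal(size=4)
predicted_action = new_state @ w_hat
```

More sophisticated apprenticeship-learning methods instead recover a reward function consistent with the demonstration and then solve the resulting reinforcement-learning problem, which generalizes better when the learner must act far from the demonstrated trajectories.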