scispace - formally typeset

Andrew Y. Ng

Researcher at Stanford University

Publications - 356
Citations - 184,387

Andrew Y. Ng is an academic researcher at Stanford University. He has contributed to research topics including deep learning and supervised learning. He has an h-index of 130 and has co-authored 345 publications receiving 164,995 citations. Previous affiliations of Andrew Y. Ng include the Max Planck Society and Baidu.

Papers
Proceedings ArticleDOI

Grasping with application to an autonomous checkout robot

TL;DR: A novel grasp selection algorithm enables a robot with a two-fingered end-effector to autonomously grasp unknown objects, and is used to demonstrate the robot's application as an autonomous checkout clerk.
Proceedings Article

Exponential family sparse coding with applications to self-taught learning

TL;DR: This work presents an algorithm for solving the L1-regularized optimization problem defined by this model, shows that it is especially efficient when the optimal solution is sparse, and demonstrates significantly improved self-taught learning performance.
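The paper's solver targets exponential-family models, but the flavor of L1-regularized optimization it builds on can be sketched with plain iterative soft-thresholding (ISTA) on a least-squares objective. This is a simplified stand-in, not the paper's algorithm, and the function names are illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, steps=500):
    """Minimize 0.5 * ||Ax - y||^2 + lam * ||x||_1 by proximal gradient descent."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - y)          # gradient of the least-squares term
        x = soft_threshold(x - grad / L, lam / L)  # gradient step + prox
    return x
```

The soft-thresholding step is what produces exact zeros in the solution, which is why such solvers are especially efficient when the optimum is sparse.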
Proceedings Article

Learning to Merge Word Senses

TL;DR: A discriminative classifier is trained over a wide variety of features derived from WordNet structure, corpus-based evidence, and other lexical resources; the learned similarity measure outperforms previously proposed automatic sense-clustering methods at predicting human sense-merging judgments.
Posted Content

MoCo-CXR: MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models

TL;DR: This study demonstrates that MoCo-pretraining provides high-quality representations and transferable initializations for chest X-ray interpretation and suggests that pretraining on unlabeled X-rays can provide transfer learning benefits for a target task.
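MoCo-style pretraining optimizes a contrastive (InfoNCE) objective that pulls a query embedding toward its positive key and away from a bank of negative keys. A minimal NumPy sketch of that loss for a single query (function name illustrative, not the paper's code):

```python
import numpy as np

def info_nce(q, k_pos, k_neg, tau=0.07):
    """InfoNCE loss for one query: one positive key vs. a bank of negative keys."""
    q = q / np.linalg.norm(q)                                  # L2-normalize embeddings
    k_pos = k_pos / np.linalg.norm(k_pos)
    k_neg = k_neg / np.linalg.norm(k_neg, axis=1, keepdims=True)
    logits = np.concatenate([[q @ k_pos], k_neg @ q]) / tau    # cosine sims / temperature
    logits -= logits.max()                                     # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())   # cross-entropy, positive = class 0
```

The loss is near zero when the query matches its positive key and is far from all negatives, and grows when a negative key is more similar than the positive.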

Shaping and policy search in reinforcement learning

TL;DR: A theory of reward shaping is given that shows how the distortions introduced by poorly chosen shaping rewards can be eliminated, along with guidelines for selecting good shaping rewards that in practice significantly speed up learning.
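The central construction in this line of work is potential-based shaping: adding a bonus of the form gamma * Phi(s') - Phi(s) to the environment reward, which provably leaves the optimal policy unchanged because the bonuses telescope along any trajectory. A minimal sketch (the helper name is illustrative):

```python
import numpy as np

def shaped_reward(r, phi_s, phi_s_next, gamma=0.99):
    """Potential-based shaping: r' = r + gamma * Phi(s') - Phi(s).

    Bonuses of this form telescope over a trajectory, so they change how fast
    an agent learns but not which policy is optimal.
    """
    return r + gamma * phi_s_next - phi_s
```

For a trajectory s_0, ..., s_T, the discounted sum of the bonuses collapses to gamma^T * Phi(s_T) - Phi(s_0), which depends only on the endpoints, not on the path taken.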