Jared Dunnmon

Researcher at Stanford University

Publications: 57
Citations: 2059

Jared Dunnmon is an academic researcher from Stanford University. The author has contributed to research in topics including Computer science and Deep learning. The author has an h-index of 17 and has co-authored 48 publications receiving 1217 citations. Previous affiliations of Jared Dunnmon include Duke University.

Papers
Journal ArticleDOI

Power extraction from aeroelastic limit cycle oscillations

TL;DR: In this paper, a flexible beam with piezoelectric laminates is excited by a uniform axial flow field in a manner analogous to a flapping flag such that the system delivers power to an electrical impedance load.
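
As a rough illustration (not a result from the paper): if the piezoelectric laminates produce an approximately harmonic voltage $v(t) = V_0 \sin(\omega t)$ across a purely resistive load $R$ during a limit cycle oscillation, the time-averaged extracted power is

$$\bar{P} = \frac{1}{T}\int_0^T \frac{v(t)^2}{R}\,dt = \frac{V_0^2}{2R},$$

which is the basic quantity one tunes the load against; the paper treats the more general electrical impedance load described above.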
Proceedings ArticleDOI

Hidden stratification causes clinically meaningful failures in machine learning for medical imaging

TL;DR: Evidence is found that hidden stratification can occur in unidentified imaging subsets with low prevalence, low label quality, subtle distinguishing features, or spurious correlates, and that it can result in relative performance differences of over 20% on clinically important subsets.
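
To make the failure mode concrete, here is a minimal Python sketch (not the authors' code) of the kind of subset audit this result motivates: comparing a model's overall AUROC against its AUROC on a labeled, clinically important subset. The data, the subset flag, and the prevalence value are all hypothetical.

```python
# Minimal sketch of a stratified audit: compare overall vs. subset AUROC to
# surface hidden stratification. All data below are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

def subset_auroc_gap(y_true, y_score, subset_mask):
    """Return (overall AUROC, subset AUROC, relative gap in %)."""
    overall = roc_auc_score(y_true, y_score)
    subset = roc_auc_score(y_true[subset_mask], y_score[subset_mask])
    return overall, subset, 100.0 * (overall - subset) / overall

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                       # hypothetical labels
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.3, 1000), 0, 1)  # hypothetical scores
rare_subset = rng.random(1000) < 0.15                        # hypothetical low-prevalence subset flag

print(subset_auroc_gap(y_true, y_score, rare_subset))
```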
Proceedings Article

Learning to Compose Domain-Specific Transformations for Data Augmentation.

TL;DR: In this article, a generative adversarial approach is proposed that learns a sequence model over user-specified transformation functions; the method can make use of arbitrary, non-deterministic transformation functions and is robust to misspecified user input.
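
For intuition, here is a minimal Python sketch, not the paper's implementation: it composes user-specified transformation functions (TFs) into augmentation sequences. The paper learns which sequences to apply via an adversarially trained sequence model; a uniform random policy stands in for that learned model here, and the toy TFs are hypothetical.

```python
# Composing user-specified transformation functions (TFs) into an augmentation
# sequence. A random policy replaces the learned sequence model for brevity.
import random
import numpy as np

def shift(x):     return np.roll(x, 1)
def add_noise(x): return x + np.random.normal(0, 0.01, size=x.shape)
def scale(x):     return x * 1.05

TFS = [shift, add_noise, scale]  # user-specified, possibly non-deterministic

def augment(x, seq_len=3):
    """Apply a randomly sampled sequence of TFs (stand-in for the learned policy)."""
    for tf in random.choices(TFS, k=seq_len):
        x = tf(x)
    return x

x = np.linspace(0, 1, 8)
print(augment(x))
```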
Journal ArticleDOI

Assessment of Convolutional Neural Networks for Automated Classification of Chest Radiographs.

TL;DR: CNNs trained with a modestly sized collection of prospectively labeled chest radiographs achieved high diagnostic performance in classifying chest radiographs as normal or abnormal; this capability may be useful for automated prioritization of abnormal chest radiographs.
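
As a hedged illustration of the model class (not the study's network or data), here is a minimal PyTorch sketch of a binary normal/abnormal radiograph classifier whose sigmoid output could be thresholded to prioritize likely-abnormal studies; the architecture, input size, and batch below are all assumptions.

```python
# Tiny binary classifier for single-channel radiographs (illustrative only).
import torch
import torch.nn as nn

class TinyRadiographCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)  # single logit for "abnormal"

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyRadiographCNN()
logits = model(torch.randn(4, 1, 224, 224))  # batch of 4 hypothetical images
probs = torch.sigmoid(logits)                # P(abnormal); threshold to triage
print(probs.shape)
```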
Journal ArticleDOI

Training Complex Models with Multi-Task Weak Supervision.

TL;DR: This work shows that by solving a matrix completion-style problem, the accuracies of these multi-task supervision sources can be recovered from their dependency structure without any labeled data, leading to higher-quality supervision for training an end model.
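
For a flavor of label-free accuracy recovery, here is a minimal Python sketch. It is not the paper's matrix-completion algorithm: as a simplified stand-in, it uses the classical triplet identity for conditionally independent binary sources (E[s_i s_j] = a_i a_j, with a_i = E[s_i y]) to estimate source accuracies without labels, then combines votes with accuracy-derived weights. The sources, their true accuracies, and the data are all synthetic.

```python
# Estimate source accuracies from pairwise agreement moments (no labels),
# then combine votes with accuracy-derived weights. Synthetic data only.
import numpy as np

def triplet_accuracies(votes):
    """votes: (n_examples, 3) array in {-1, +1}; returns estimated |a_i| per source."""
    m = votes.T @ votes / len(votes)  # empirical second moments E[s_i s_j]
    a = np.zeros(3)
    for i, j, k in [(0, 1, 2), (1, 0, 2), (2, 0, 1)]:
        a[i] = np.sqrt(abs(m[i, j] * m[i, k] / m[j, k]))
    return a

rng = np.random.default_rng(0)
y = rng.choice([-1, 1], size=5000)            # hidden true labels
true_acc = [0.9, 0.7, 0.6]                    # P(source agrees with y), hypothetical
votes = np.stack([np.where(rng.random(len(y)) < p, y, -y) for p in true_acc], axis=1)

a_hat = triplet_accuracies(votes)             # estimates of 2p - 1 per source
weights = np.log((1 + a_hat) / (1 - a_hat))   # log-odds weights for a weighted vote
y_hat = np.sign(votes @ weights)
print(a_hat, (y_hat == y).mean())
```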