Ali Mottaghi
Researcher at Stanford University
Publications - 6
Citations - 322
Ali Mottaghi is an academic researcher at Stanford University. His research focuses on computer science and active learning. He has an h-index of 2 and has co-authored 3 publications receiving 77 citations.
Papers
Journal ArticleDOI
Deep learning-enabled medical computer vision.
Andre Esteva, Katherine Chou, Serena Yeung, Nikhil Naik, Ali Madani, Ali Mottaghi, Yun Liu, Eric J. Topol, Jeffrey Dean, Richard Socher, +9 more
TL;DR: The authors survey recent progress in modern computer vision techniques powered by deep learning for medical applications, focusing on medical imaging, medical video, and clinical deployment.
Posted Content
Adversarial Representation Active Learning
Ali Mottaghi, Serena Yeung, +1 more
TL;DR: This work demonstrates how recent advances in deep generative models can be used to outperform the state of the art, achieving the highest classification accuracy with as few labels as possible.
Posted Content
Medical symptom recognition from patient text: An active learning approach for long-tailed multilabel distributions.
TL;DR: An active learning method is introduced that leverages the underlying structure of a continually refined, learned latent space to select the most informative examples to label. Despite the long tail in the data distribution, the method progressively increases coverage of the universe of symptoms via the learned model.
Proceedings ArticleDOI
Adaptation of Surgical Activity Recognition Models Across Operating Rooms
TL;DR: A new domain adaptation method is proposed to improve the performance of a surgical activity recognition model in a new operating room for which only unlabeled videos are available. The method is also extended to a semi-supervised domain adaptation setting in which a small portion of the target domain is labeled.
Journal Article
An Empirical Study on Activity Recognition in Long Surgical Videos
TL;DR: This paper benchmarks models' performance on a large-scale activity recognition dataset containing over 800 surgery videos captured in multiple clinical operating rooms, and empirically finds that a Swin Transformer + BiGRU temporal model yields strong performance on both datasets.