Author

Akshay S. Chaudhari

Bio: Akshay S. Chaudhari is an academic researcher at Stanford University. His research spans topics including medicine and computer science. He has an h-index of 14 and has co-authored 54 publications receiving 604 citations. His previous affiliations include the University of California, San Diego.

Papers published on a yearly basis

Papers
Journal ArticleDOI
TL;DR: In this paper, the authors focus on practical issues of increasing interest by highlighting three hot topics fundamental to understanding sarcopenia in older adults: definitions and terminology, current diagnostic imaging techniques, and the emerging role of opportunistic computed tomography.

4 citations

Posted Content
TL;DR: In this paper, the robustness of deep-learning-based image reconstruction methods is investigated; the authors find that both trained and un-trained methods are vulnerable to adversarial perturbations.
Abstract: Deep neural networks give state-of-the-art accuracy for reconstructing images from few and noisy measurements, a problem arising, for example, in accelerated magnetic resonance imaging (MRI). However, recent works have raised concerns that deep-learning-based image reconstruction methods are sensitive to perturbations and are less robust than traditional methods: Neural networks (i) may be sensitive to small, yet adversarially-selected perturbations, (ii) may perform poorly under distribution shifts, and (iii) may fail to recover small but important features in an image. In order to understand the sensitivity to such perturbations, in this work, we measure the robustness of different approaches for image reconstruction including trained and un-trained neural networks as well as traditional sparsity-based methods. We find, contrary to prior works, that both trained and un-trained methods are vulnerable to adversarial perturbations. Moreover, both trained and un-trained methods tuned for a particular dataset suffer very similarly from distribution shifts. Finally, we demonstrate that an image reconstruction method that achieves higher reconstruction quality also performs better in terms of accurately recovering fine details. Our results indicate that the state-of-the-art deep-learning-based image reconstruction methods provide improved performance over traditional methods without compromising robustness.
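The robustness measurement described above can be made concrete with a small experiment: find the worst-case measurement perturbation within a fixed norm budget and report the resulting reconstruction error. Below is a minimal PyTorch sketch of that idea, assuming a `recon_net` callable, clean measurements `y`, and a ground-truth image `x_true`; the function name, step counts, and optimizer choice are illustrative assumptions, not the authors' code.

```python
import torch

def adversarial_recon_error(recon_net, y, x_true, eps=0.01, steps=10, lr=0.005):
    """Worst-case reconstruction error under a small measurement perturbation.

    recon_net : callable mapping measurements y -> image estimate (assumed)
    y         : clean measurements (e.g. undersampled k-space data)
    x_true    : ground-truth image used to score the reconstruction
    eps       : l2-norm budget for the adversarial perturbation
    """
    delta = torch.zeros_like(y, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = recon_net(y + delta)
        loss = -torch.mean((x_hat - x_true) ** 2)  # ascend on reconstruction error
        loss.backward()
        opt.step()
        with torch.no_grad():  # project back onto the l2 ball of radius eps
            norm = delta.norm()
            if norm > eps:
                delta.mul_(eps / norm)
    with torch.no_grad():
        return torch.mean((recon_net(y + delta) - x_true) ** 2).item()
```

Running the same routine against a trained network, an un-trained network (e.g., a deep-image-prior-style method), and a sparsity-based solver would give the kind of side-by-side robustness comparison the abstract describes.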

4 citations

Journal ArticleDOI
TL;DR: In this article, the authors reviewed how AI/ML can be applied to improve upstream components of the imaging pipeline, including exam modality selection, hardware design, exam protocol selection, data acquisition, image reconstruction, and image processing.

4 citations

Journal ArticleDOI
TL;DR: In this article, a 2-dimensional U-Net with varying contraction layers and different convolutional filters was designed to estimate the specific absorption rate (SAR) distribution in realistic body models.
Abstract: The purpose of this study is to investigate the feasibility of estimating the specific absorption rate (SAR) in MRI in real time. To this end, SAR maps are predicted from 3T- and 7T-simulated magnetic resonance (MR) images in 10 realistic human body models via a convolutional neural network. Two-dimensional (2-D) U-Net architectures with varying contraction layers and different convolutional filters were designed to estimate the SAR distribution in realistic body models. Sim4Life (ZMT, Switzerland) was used to create simulated anatomical images and SAR maps at 3T and 7T imaging frequencies for the Duke, Ella, Charlie, and pregnant woman (at 3-, 7-, and 9-month gestational stages) body models. Mean squared error (MSE) was used as the cost function and the structural similarity index (SSIM) was reported. A 2-D U-Net with 4 contracting (and 4 expanding) layers and 64 convolutional filters at the initial stage showed the best compromise for estimating SAR distributions. The Adam optimizer outperformed stochastic gradient descent (SGD) for all cases, with an average SSIM of 90.5 ± 3.6% and an average MSE of 0.7 ± 0.6% for head images at 7T, and an SSIM of 85.1 ± 6.2% and an MSE of 0.4 ± 0.4% for 3T body imaging. The algorithms estimated SAR maps for 224 × 224 slices in under 30 ms. The proposed methodology shows promise for predicting SAR in real time in clinical imaging settings without using extra mapping techniques or patient-specific calibrations.
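For orientation, the architecture described (a 2-D U-Net with 4 contracting and 4 expanding stages, 64 filters at the first stage, trained with an MSE loss on 224 × 224 slices) might look roughly like the following PyTorch sketch; the channel counts per stage and the single-channel input/output are assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # two 3x3 convolutions with ReLU, as in a standard U-Net stage
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNetSAR(nn.Module):
    """4 contracting / 4 expanding stages, 64 filters at the first stage."""
    def __init__(self, ch=64):
        super().__init__()
        self.enc = nn.ModuleList([conv_block(1, ch), conv_block(ch, 2 * ch),
                                  conv_block(2 * ch, 4 * ch), conv_block(4 * ch, 8 * ch)])
        self.pool = nn.MaxPool2d(2)
        self.bottom = conv_block(8 * ch, 16 * ch)
        self.up = nn.ModuleList([nn.ConvTranspose2d(16 * ch, 8 * ch, 2, stride=2),
                                 nn.ConvTranspose2d(8 * ch, 4 * ch, 2, stride=2),
                                 nn.ConvTranspose2d(4 * ch, 2 * ch, 2, stride=2),
                                 nn.ConvTranspose2d(2 * ch, ch, 2, stride=2)])
        self.dec = nn.ModuleList([conv_block(16 * ch, 8 * ch), conv_block(8 * ch, 4 * ch),
                                  conv_block(4 * ch, 2 * ch), conv_block(2 * ch, ch)])
        self.head = nn.Conv2d(ch, 1, 1)  # one output channel: the SAR map

    def forward(self, x):
        skips = []
        for enc in self.enc:          # contracting path with skip connections
            x = enc(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottom(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = up(x)                 # expanding path: upsample, concat, convolve
            x = dec(torch.cat([x, skip], dim=1))
        return self.head(x)

# training objective from the abstract: MSE on 224 x 224 slices
model, loss_fn = UNetSAR(), nn.MSELoss()
x = torch.randn(1, 1, 224, 224)      # one simulated MR image slice
print(model(x).shape)                # -> torch.Size([1, 1, 224, 224])
```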

4 citations

10 Feb 2021
TL;DR: In this paper, a simple autoencoder and gradient update (Latent Shift) is proposed to transform the latent representation of a specific input image to exaggerate or curtail the features used for prediction.
Abstract: Motivation: Traditional image attribution methods struggle to satisfactorily explain predictions of neural networks. Prediction explanation is important, especially in medical imaging, for avoiding the unintended consequences of deploying AI systems when false positive predictions can impact patient care. Thus, there is a pressing need to develop improved methods for model explainability and introspection. Specific problem: A new approach is to transform input images to increase or decrease the features which cause the prediction. However, current approaches are difficult to implement as they are monolithic or rely on GANs. These hurdles prevent wide adoption. Our approach: Given an arbitrary classifier, we propose a simple autoencoder and gradient update (Latent Shift) that can transform the latent representation of a specific input image to exaggerate or curtail the features used for prediction. We use this method to study chest X-ray classifiers and evaluate their performance. We conduct a reader study with two radiologists assessing 240 chest X-ray predictions to identify which ones are false positives (half are) using traditional attribution maps or our proposed method. Results: We found low overlap with ground truth pathology masks for models with reasonably high accuracy. However, the results from our reader study indicate that these models are generally looking at the correct features. We also found that the Latent Shift explanation allows a user to have more confidence in true positive predictions compared to traditional approaches (0.15 ± 0.95 on a 5-point scale, p=0.01), with only a small increase in false positive predictions (0.04 ± 1.06, p=0.57). Accompanying webpage: this https URL. Source code: this https URL.
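The core of the Latent Shift update is compact enough to sketch: encode the image, take the gradient of the classifier's output with respect to the latent code, and decode shifted copies of that code. The sketch below assumes pretrained `encoder`, `decoder`, and `classifier` callables and is an illustration of the idea rather than the authors' released implementation (which is linked from the abstract).

```python
import torch

def latent_shift(encoder, decoder, classifier, x, lam):
    """Shift an image along the classifier's gradient in autoencoder latent space.

    encoder/decoder : a pretrained autoencoder (assumed available)
    classifier      : the frozen classifier being explained
    x               : input image batch
    lam             : shift size; negative values curtail the predicted
                      feature, positive values exaggerate it
    """
    z = encoder(x).detach().requires_grad_(True)   # latent code of the input
    score = classifier(decoder(z)).sum()           # prediction on the reconstruction
    grad, = torch.autograd.grad(score, z)          # direction in z that raises the score
    with torch.no_grad():
        return decoder(z + lam * grad)             # counterfactual image
```

Sweeping `lam` from a negative value up to zero yields a sequence of counterfactual images in which the predicted feature is progressively removed, which can be rendered as an animation for a reader study like the one described.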

4 citations


Cited by
More filters
Reference EntryDOI
15 Oct 2004

2,118 citations

Journal ArticleDOI
TL;DR: The authors improve some of the bonded terms in the Martini protein force field, leading to a more realistic length of α-helices and to improved numerical stability for polyalanine and glycine repeats.
Abstract: The Martini coarse-grained force field has been successfully used for simulating a wide range of (bio)molecular systems. Recent progress in our ability to test the model against fully atomistic force fields, however, has revealed some shortcomings. Most notably, phenylalanine and proline were too hydrophobic, and dimers formed by polar residues in apolar solvents did not bind strongly enough. Here, we reparametrize these residues either through reassignment of particle types or by introducing embedded charges. The new parameters are tested with respect to partitioning across a lipid bilayer, membrane binding of Wimley–White peptides, and dimerization free energy in solvents of different polarity. In addition, we improve some of the bonded terms in the Martini protein force field that lead to a more realistic length of α-helices and to improved numerical stability for polyalanine and glycine repeats. The new parameter set is denoted Martini version 2.2.

1,112 citations

Journal ArticleDOI
TL;DR: The Martini model, a coarse-grained force field for biomolecular simulations, has found a broad range of applications since its release a decade ago; based on a building-block principle, it combines speed and versatility while maintaining chemical specificity.
Abstract: The Martini model, a coarse-grained force field for biomolecular simulations, has found a broad range of applications since its release a decade ago. Based on a building block principle, the model combines speed and versatility while maintaining chemical specificity. Here we review the current state of the model. We describe recent highlights as well as shortcomings, and our ideas on the further development of the model.

1,022 citations