SciSpace (formerly Typeset)
Author

Shubham Jain

Bio: Shubham Jain is an academic researcher from Imperial College London. The author has contributed to research in the topics of computer science and convolutional neural networks. The author has an h-index of 3 and has co-authored 6 publications receiving 608 citations.

Papers
Journal ArticleDOI
TL;DR: Measures how far state-of-the-art deep learning methods can go at assessing cardiac MRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies, and shows that the best methods open the door to highly accurate and fully automatic analysis of cardiac MRI.
Abstract: Delineation of the left ventricular cavity, myocardium, and right ventricle from cardiac magnetic resonance images (multi-slice 2-D cine MRI) is a common clinical task to establish diagnosis. The automation of the corresponding tasks has thus been the subject of intense research over the past decades. In this paper, we introduce the “Automatic Cardiac Diagnosis Challenge” dataset (ACDC), the largest publicly available and fully annotated dataset for the purpose of cardiac MRI (CMR) assessment. The dataset contains data from 150 multi-equipment CMRI recordings with reference measurements and classification from two medical experts. The overarching objective of this paper is to measure how far state-of-the-art deep learning methods can go at assessing CMRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies. In the wake of the 2017 MICCAI-ACDC challenge, we report results from deep learning methods provided by nine research groups for the segmentation task and four groups for the classification task. Results show that the best methods faithfully reproduce the expert analysis, leading to a mean correlation score of 0.97 for the automatic extraction of clinical indices and an accuracy of 0.96 for automatic diagnosis. These results clearly open the door to highly accurate and fully automatic analysis of cardiac MRI. We also identify scenarios for which deep learning methods are still failing. Both the dataset and detailed results are publicly available online, while the platform will remain open for new submissions.
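The clinical indices mentioned above (e.g., ejection fraction) are derived from end-diastolic (ED) and end-systolic (ES) segmentations. Below is a minimal Python sketch of that derivation, assuming the masks are available as NumPy label maps; the label convention (3 = LV cavity) and the helper names are illustrative, not taken from the challenge code.

```python
import numpy as np

# Illustrative sketch: deriving a clinical index (ejection fraction) from
# ED/ES label maps. Label convention assumed here: 3 = LV cavity.

def structure_volume_ml(mask: np.ndarray, label: int, spacing_mm: tuple) -> float:
    """Volume of one labeled structure in millilitres (mm^3 -> mL)."""
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return float((mask == label).sum()) * voxel_volume_mm3 / 1000.0

def ejection_fraction(ed_mask, es_mask, spacing_mm, label=3) -> float:
    """EF = (EDV - ESV) / EDV * 100, computed here for the LV cavity."""
    edv = structure_volume_ml(ed_mask, label, spacing_mm)
    esv = structure_volume_ml(es_mask, label, spacing_mm)
    return 100.0 * (edv - esv) / edv

# Dummy masks standing in for real ED/ES segmentations:
rng = np.random.default_rng(0)
ed = rng.integers(0, 4, size=(10, 256, 256))
es = rng.integers(0, 4, size=(10, 256, 256))
print(ejection_fraction(ed, es, spacing_mm=(10.0, 1.25, 1.25)))
```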

1,056 citations

Book ChapterDOI
10 Sep 2017
TL;DR: In this article, 2D and 3D segmentation pipelines for fully automated cardiac MR image segmentation using deep convolutional neural networks (CNNs) were developed and trained end-to-end from scratch on the ACDC Challenge 2017 dataset comprising 100 studies.
Abstract: In this paper, we develop 2D and 3D segmentation pipelines for fully automated cardiac MR image segmentation using deep convolutional neural networks (CNNs). Our models are trained end-to-end from scratch on the ACDC Challenge 2017 dataset, comprising 100 studies, each containing cardiac MR images at the end-diastole and end-systole phases. We show that both our segmentation models achieve near state-of-the-art performance in terms of distance metrics and convincing accuracy in terms of clinical parameters. A comparative analysis is provided by introducing a novel Dice loss function and its combination with cross-entropy loss. By exploring different network structures and through comprehensive experiments, we discuss several key insights for obtaining optimal model performance, which is also central to the theme of this challenge.
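The abstract above mentions a Dice loss and its combination with cross-entropy. The sketch below shows one common way to implement that combination in PyTorch; the smoothing term and the equal weighting of the two terms are assumptions of this sketch, not the exact formulation used in the paper.

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft multi-class Dice loss. logits: (N, C, H, W); target: (N, H, W) integer labels."""
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)  # sum over batch and spatial dims, keep the class dim
    intersection = (probs * one_hot).sum(dims)
    cardinality = probs.sum(dims) + one_hot.sum(dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice.mean()

def combined_loss(logits, target, dice_weight=1.0, ce_weight=1.0):
    """Weighted sum of Dice and cross-entropy losses (weights are illustrative)."""
    return dice_weight * soft_dice_loss(logits, target) + ce_weight * F.cross_entropy(logits, target)

# Usage on dummy data:
logits = torch.randn(2, 4, 64, 64, requires_grad=True)
target = torch.randint(0, 4, (2, 64, 64))
combined_loss(logits, target).backward()
```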

88 citations

Posted Content
TL;DR: In this article, 2D and 3D segmentation pipelines for fully automated cardiac MR image segmentation using deep convolutional neural networks (CNNs) were developed and trained end-to-end from scratch on the ACDC Challenge 2017 dataset comprising 100 studies.
Abstract: In this paper, we develop 2D and 3D segmentation pipelines for fully automated cardiac MR image segmentation using deep convolutional neural networks (CNNs). Our models are trained end-to-end from scratch on the ACDC Challenge 2017 dataset, comprising 100 studies, each containing cardiac MR images at the end-diastole and end-systole phases. We show that both our segmentation models achieve near state-of-the-art performance in terms of distance metrics and convincing accuracy in terms of clinical parameters. A comparative analysis is provided by introducing a novel Dice loss function and its combination with cross-entropy loss. By exploring different network structures and through comprehensive experiments, we discuss several key insights for obtaining optimal model performance, which is also central to the theme of this challenge.

52 citations

Proceedings ArticleDOI
01 Dec 2019
TL;DR: This paper introduces OPAL (for OPen ALgorithms), an open-source, scalable, and privacy-preserving platform for location data, which relies on open algorithms to extract key aggregated statistics for a wide range of potential use cases.
Abstract: Mobile phones and other ubiquitous technologies are generating vast amounts of high-resolution location data. This data has been shown to have a great potential for the public good, e.g. to monitor human migration during crises or to predict the spread of epidemic diseases. Location data is, however, considered one of the most sensitive types of data, and a large body of research has shown the limits of traditional data anonymization methods for big data. Privacy concerns have so far strongly limited the use of location data collected by telcos, especially in developing countries. In this paper, we introduce OPAL (for OPen ALgorithms), an open-source, scalable, and privacy-preserving platform for location data. At its core, OPAL relies on an open algorithm to extract key aggregated statistics from location data for a wide range of potential use cases. We first discuss how we designed the OPAL platform, building a modular and resilient framework for efficient location analytics. We then describe the layered mechanisms we have put in place to protect privacy and discuss the example of a population density algorithm. We finally evaluate the scalability and extensibility of the platform and discuss related work. The code will be open-sourced on GitHub upon publication.
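OPAL's central design is that vetted algorithms run next to the raw location data and only aggregated, privacy-protected statistics are ever returned. Below is a purely illustrative Python sketch of such an aggregate query (a per-antenna count with small-group suppression and Laplace noise); none of the names, thresholds, or parameters come from the OPAL codebase.

```python
import numpy as np
from collections import Counter

def noisy_density(antenna_ids, min_count=20, noise_scale=2.0, seed=None):
    """Count observations per antenna, suppress small groups, add Laplace noise.
    Thresholds and noise parameters are arbitrary assumptions of this sketch."""
    rng = np.random.default_rng(seed)
    result = {}
    for antenna, count in Counter(antenna_ids).items():
        if count < min_count:  # suppress groups too small to release safely
            continue
        result[antenna] = max(0, round(count + rng.laplace(scale=noise_scale)))
    return result

# One record per (user, antenna) observation; only the aggregate leaves the platform.
observations = ["antenna_A"] * 120 + ["antenna_B"] * 45 + ["antenna_C"] * 3
print(noisy_density(observations, seed=42))
```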

8 citations

Journal ArticleDOI
01 Jan 2023
TL;DR: In this article, a highly heterogeneous and programmable compute-in-memory (CIM) accelerator architecture for deep neural network (DNN) inference is proposed, which combines spatially distributed CIM memory array tiles for weight-stationary, energy-efficient multiply-accumulate (MAC) operations, together with heterogeneous special-function compute cores for auxiliary digital computation.
Abstract: We introduce a highly heterogeneous and programmable compute-in-memory (CIM) accelerator architecture for deep neural network (DNN) inference. This architecture combines spatially distributed CIM memory array “tiles” for weight-stationary, energy-efficient multiply-accumulate (MAC) operations, together with heterogeneous special-function compute cores for auxiliary digital computation. Massively parallel vectors of neuron activation data are exchanged over short distances using a dense and efficient circuit-switched 2-D mesh, offering full end-to-end support for a wide range of DNN workloads, including CNNs, long short-term memory (LSTM) networks, and transformers. We discuss the design of the “analog fabric” (the 2-D grid of tiles and compute cores interconnected by the 2-D mesh) and address the efficiency both in mapping DNNs onto the hardware and in pipelining various DNN workloads across a range of batch sizes. We show, for the first time, system-level assessments using projected component parameters for a realistic “analog AI” system, based on dense crossbar arrays of low-power nonvolatile analog memory elements, while incorporating a single common analog fabric design that can scale to large networks by introducing data transport between multiple analog AI chips. Our performance estimates for several networks, including large LSTM and bidirectional encoder representations from transformers (BERT) models, show highly competitive throughput while offering 40× to 140× higher energy efficiency than an NVIDIA A100, illustrating the strong promise of analog AI and the proposed architecture for DNN inference applications.
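The weight-stationary mapping described above splits each weight matrix across fixed-size CIM tiles that each multiply their stored weights by the matching slice of the activation vector, with partial sums accumulated afterwards. Below is a small NumPy sketch of that tiling idea; the 512-element tile size and the ideal, noise-free arithmetic are simplifying assumptions of this sketch, not parameters from the paper.

```python
import numpy as np

def tiled_matvec(weight: np.ndarray, x: np.ndarray, tile: int = 512) -> np.ndarray:
    """Weight-stationary tiled matrix-vector product.
    Each (tile x tile) block stands in for one CIM array holding its weights in place."""
    out_dim, in_dim = weight.shape
    y = np.zeros(out_dim)
    for r in range(0, out_dim, tile):
        for c in range(0, in_dim, tile):
            # One tile performs a partial MAC; partial sums are accumulated digitally.
            y[r:r + tile] += weight[r:r + tile, c:c + tile] @ x[c:c + tile]
    return y

# Sanity check against a plain matrix-vector product:
rng = np.random.default_rng(1)
W, x = rng.standard_normal((1024, 768)), rng.standard_normal(768)
assert np.allclose(tiled_matvec(W, x), W @ x)
```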

4 citations


Cited by
Reference EntryDOI
15 Oct 2004

2,118 citations

Journal ArticleDOI
TL;DR: nnU-Net is a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training, and post-processing, for any new task.
Abstract: Biomedical imaging is a driver of scientific discovery and a core component of medical care and is being stimulated by the field of deep learning. While semantic segmentation algorithms enable image analysis and quantification in many applications, the design of respective specialized solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task. The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions on 23 public datasets used in international biomedical segmentation competitions. We make nnU-Net publicly available as an out-of-the-box tool, rendering state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training.
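The key idea in nnU-Net is that most pipeline choices can be derived automatically from a fingerprint of the dataset through fixed parameters and interdependent rules. Below is a toy Python sketch of that rule-based style of self-configuration; the specific rules, thresholds, and numbers are invented for illustration and are not nnU-Net's actual heuristics.

```python
from dataclasses import dataclass

@dataclass
class DatasetFingerprint:
    median_spacing_mm: tuple   # (z, y, x) voxel spacing
    median_shape_vox: tuple    # (z, y, x) image size in voxels
    num_classes: int

def configure_pipeline(fp: DatasetFingerprint) -> dict:
    """Toy rule-based configuration in the spirit of nnU-Net's self-configuration.
    All thresholds and choices here are illustrative, not nnU-Net's real rules."""
    anisotropic = fp.median_spacing_mm[0] > 3 * fp.median_spacing_mm[2]
    return {
        # Prefer a 2D network when slices are far apart relative to in-plane resolution.
        "network": "2d" if anisotropic else "3d_fullres",
        # Cap the patch size by the median image size.
        "patch_size": tuple(min(s, 128) for s in fp.median_shape_vox),
        "num_classes": fp.num_classes,
        "normalization": "z-score",
    }

print(configure_pipeline(DatasetFingerprint((10.0, 1.25, 1.25), (10, 256, 232), 4)))
```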

2,040 citations

Journal ArticleDOI
TL;DR: Measures how far state-of-the-art deep learning methods can go at assessing cardiac MRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies, and shows that the best methods open the door to highly accurate and fully automatic analysis of cardiac MRI.
Abstract: Delineation of the left ventricular cavity, myocardium, and right ventricle from cardiac magnetic resonance images (multi-slice 2-D cine MRI) is a common clinical task to establish diagnosis. The automation of the corresponding tasks has thus been the subject of intense research over the past decades. In this paper, we introduce the “Automatic Cardiac Diagnosis Challenge” dataset (ACDC), the largest publicly available and fully annotated dataset for the purpose of cardiac MRI (CMR) assessment. The dataset contains data from 150 multi-equipment CMRI recordings with reference measurements and classification from two medical experts. The overarching objective of this paper is to measure how far state-of-the-art deep learning methods can go at assessing CMRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies. In the wake of the 2017 MICCAI-ACDC challenge, we report results from deep learning methods provided by nine research groups for the segmentation task and four groups for the classification task. Results show that the best methods faithfully reproduce the expert analysis, leading to a mean correlation score of 0.97 for the automatic extraction of clinical indices and an accuracy of 0.96 for automatic diagnosis. These results clearly open the door to highly accurate and fully automatic analysis of cardiac MRI. We also identify scenarios for which deep learning methods are still failing. Both the dataset and detailed results are publicly available online, while the platform will remain open for new submissions.

1,056 citations

Journal ArticleDOI
TL;DR: The results show that deep learning algorithms can accurately identify head CT scan abnormalities requiring urgent attention, opening up the possibility of using these algorithms to automate the triage process.

554 citations

Journal ArticleDOI
TL;DR: An automated analysis method based on a fully convolutional network achieves a performance on par with human experts in analysing CMR images and deriving clinically relevant measures.
Abstract: Cardiovascular magnetic resonance (CMR) imaging is a standard imaging modality for assessing cardiovascular diseases (CVDs), the leading cause of death globally. CMR enables accurate quantification of the cardiac chamber volume, ejection fraction and myocardial mass, providing information for diagnosis and monitoring of CVDs. However, for years, clinicians have been relying on manual approaches for CMR image analysis, which is time-consuming and prone to subjective errors. It is a major clinical challenge to automatically derive quantitative and clinically relevant information from CMR images. Deep neural networks have shown great potential in image pattern recognition and segmentation for a variety of tasks. Here we demonstrate an automated analysis method for CMR images, which is based on a fully convolutional network (FCN). The network is trained and evaluated on a large-scale dataset from the UK Biobank, consisting of 4,875 subjects with 93,500 pixelwise annotated images. The performance of the method has been evaluated using a number of technical metrics, including the Dice metric, mean contour distance and Hausdorff distance, as well as clinically relevant measures, including left ventricle (LV) end-diastolic volume (LVEDV) and end-systolic volume (LVESV), LV mass (LVM), and right ventricle (RV) end-diastolic volume (RVEDV) and end-systolic volume (RVESV). By combining the FCN with a large-scale annotated dataset, the proposed automated method achieves high performance in segmenting the LV and RV on short-axis CMR images and the left atrium (LA) and right atrium (RA) on long-axis CMR images. On a short-axis image test set of 600 subjects, it achieves an average Dice metric of 0.94 for the LV cavity, 0.88 for the LV myocardium and 0.90 for the RV cavity. The mean absolute difference between automated and manual measurements is 6.1 mL for LVEDV, 5.3 mL for LVESV, 6.9 g for LVM, 8.5 mL for RVEDV and 7.2 mL for RVESV. On long-axis image test sets, the average Dice metric is 0.93 for the LA cavity (2-chamber view), 0.95 for the LA cavity (4-chamber view) and 0.96 for the RA cavity (4-chamber view). The performance is comparable to human inter-observer variability. We show that an automated method achieves performance on par with human experts in analysing CMR images and deriving clinically relevant measures.
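The evaluation above combines an overlap metric (Dice) with clinically relevant measures derived directly from the segmentations. Below is a short NumPy sketch of both computations; the label convention and the 1.05 g/mL myocardial density used for LV mass are assumptions of this sketch rather than values quoted from the paper.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, label: int) -> float:
    """Dice overlap for one labeled structure."""
    p, t = pred == label, truth == label
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

def volume_ml(mask: np.ndarray, label: int, spacing_mm: tuple) -> float:
    """Structure volume in mL from voxel count and voxel spacing."""
    return float((mask == label).sum()) * float(np.prod(spacing_mm)) / 1000.0

def lv_mass_g(mask, myo_label, spacing_mm, density_g_per_ml: float = 1.05) -> float:
    """LV mass from myocardial volume; the 1.05 g/mL density is an assumed constant."""
    return volume_ml(mask, myo_label, spacing_mm) * density_g_per_ml

# Dummy masks in place of predicted and manual segmentations:
rng = np.random.default_rng(2)
pred = rng.integers(0, 4, (10, 192, 192))
truth = rng.integers(0, 4, (10, 192, 192))
print(dice(pred, truth, label=3), lv_mass_g(pred, 2, (10.0, 1.8, 1.8)))
```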

512 citations