Author

Chandni Gupta

Bio: Chandni Gupta is an academic researcher from King's College London. The author has contributed to research in topics: Imaging phantom & Transformation (function). The author has an h-index of 4 and has co-authored 9 publications receiving 87 citations.

Papers
Book Chapter
16 Sep 2018
TL;DR: This work proposes a new Patch-based Iterative Network (PIN) for landmark localisation in 3D medical volumes, adopting a multi-task learning framework that combines regression and classification to improve localisation accuracy, and extends PIN to localise multiple landmarks using principal component analysis, which models the global anatomical relationships between landmarks.
Abstract: We propose a new Patch-based Iterative Network (PIN) for fast and accurate landmark localisation in 3D medical volumes. PIN utilises a Convolutional Neural Network (CNN) to learn the spatial relationship between an image patch and anatomical landmark positions. During inference, patches are repeatedly passed to the CNN until the estimated landmark position converges to the true landmark location. PIN is computationally efficient since the inference stage only selectively samples a small number of patches in an iterative fashion rather than a dense sampling at every location in the volume. Our approach adopts a multi-task learning framework that combines regression and classification to improve localisation accuracy. We extend PIN to localise multiple landmarks by using principal component analysis, which models the global anatomical relationships between landmarks. We have evaluated PIN using 72 3D ultrasound images from fetal screening examinations. Quantitatively, PIN achieves an average landmark localisation error of 5.59 mm and a runtime of 0.44 s to predict 10 landmarks per volume. Qualitatively, anatomical 2D standard scan planes derived from the predicted landmark locations are visually similar to the clinical ground truth.
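
The iterative inference described above can be sketched in a few lines. The snippet below is an illustration under simplifying assumptions, not the authors' implementation: cnn stands for a trained patch-to-displacement regressor, patches are extracted with plain zero padding, and convergence is tested with a simple step-size threshold.

import numpy as np

def extract_patch(volume, centre, size=32):
    # crop a cubic patch around `centre`, zero-padding at the volume borders
    half = size // 2
    padded = np.pad(volume, half, mode="constant")
    c = np.round(centre).astype(int) + half
    return padded[c[0]-half:c[0]+half, c[1]-half:c[1]+half, c[2]-half:c[2]+half]

def iterative_localise(volume, cnn, start, max_iters=50, tol=0.5):
    # PIN-style inference: move the estimate by the CNN-predicted displacement
    # until the update is smaller than `tol` voxels
    position = np.asarray(start, dtype=float)
    for _ in range(max_iters):
        step = np.asarray(cnn(extract_patch(volume, position)))
        position = position + step
        if np.linalg.norm(step) < tol:
            break
    return position

# toy run: a stand-in "network" that always predicts a zero update,
# so the loop converges immediately at the starting point
volume = np.random.rand(64, 64, 64).astype(np.float32)
print(iterative_localise(volume, lambda patch: np.zeros(3), start=[32, 32, 32]))

In the actual method the regressor is a multi-task CNN and several landmarks are handled jointly via a PCA shape model, but the loop above captures the selective-sampling idea that keeps inference fast.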

38 citations

Book Chapter
TL;DR: This work proposes a new Iterative Transformation Network (ITN) for the automatic detection of standard planes in 3D volumes and introduces additional classification probability outputs to the network to act as confidence measures for the regressed transformation parameters in order to further improve the localisation accuracy.
Abstract: Standard scan plane detection in fetal brain ultrasound (US) forms a crucial step in the assessment of fetal development. In clinical settings, this is done by manually manoeuvring a 2D probe to the desired scan plane. With the advent of 3D US, the entire fetal brain volume containing these standard planes can be easily acquired. However, manual standard plane identification in 3D volume is labour-intensive and requires expert knowledge of fetal anatomy. We propose a new Iterative Transformation Network (ITN) for the automatic detection of standard planes in 3D volumes. ITN uses a convolutional neural network to learn the relationship between a 2D plane image and the transformation parameters required to move that plane towards the location/orientation of the standard plane in the 3D volume. During inference, the current plane image is passed iteratively to the network until it converges to the standard plane location. We explore the effect of using different transformation representations as regression outputs of ITN. Under a multi-task learning framework, we introduce additional classification probability outputs to the network to act as confidence measures for the regressed transformation parameters in order to further improve the localisation accuracy. When evaluated on 72 US volumes of fetal brain, our method achieves an error of 3.83 mm/12.7° and 3.80 mm/12.6° for the transventricular and transcerebellar planes respectively and takes 0.46 s per plane. Source code is publicly available at this https URL.
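
To illustrate the plane-update loop described above, the sketch below keeps the current plane as a 4x4 rigid pose and nudges it by the regressed translation and rotation until the update becomes negligible. It is a simplified illustration rather than the authors' code: cnn is a placeholder for the trained regressor (assumed to output three translation and three rotation-vector components), and the plane is resampled with nearest-neighbour interpolation only.

import numpy as np
from scipy.spatial.transform import Rotation

def extract_plane_image(volume, pose, size=64):
    # nearest-neighbour resampling of a size x size image on the plane whose
    # in-plane axes and origin are the first two columns and the translation of `pose`
    u, v = np.meshgrid(np.arange(size) - size // 2, np.arange(size) - size // 2)
    pts = (pose[:3, 0][:, None] * u.ravel()
           + pose[:3, 1][:, None] * v.ravel()
           + pose[:3, 3][:, None])
    idx = np.clip(np.round(pts).astype(int), 0, np.array(volume.shape)[:, None] - 1)
    return volume[idx[0], idx[1], idx[2]].reshape(size, size)

def iterative_plane_detect(volume, cnn, pose, max_iters=30, tol=1e-2):
    # ITN-style inference: the network regresses a small rigid update
    # (3 translations + 3 rotation-vector components) for the current plane image
    for _ in range(max_iters):
        img = extract_plane_image(volume, pose)
        t, r = np.split(np.asarray(cnn(img)), 2)
        delta = np.eye(4)
        delta[:3, :3] = Rotation.from_rotvec(r).as_matrix()
        delta[:3, 3] = t
        pose = pose @ delta            # apply the update in the plane's own frame
        if np.linalg.norm(t) < tol and np.linalg.norm(r) < tol:
            break                      # converged on (approximately) the standard plane
    return pose

# toy run with a stand-in network that predicts "no update needed"
volume = np.random.rand(96, 96, 96).astype(np.float32)
start = np.eye(4); start[:3, 3] = [48, 48, 48]
print(iterative_plane_detect(volume, lambda img: np.zeros(6), start)[:3, 3])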

36 citations

Book Chapter
16 Sep 2018
TL;DR: In this article, an Iterative Transformation Network (ITN) is proposed to detect standard scan planes in 3D volumes of fetal brain ultrasound, a task whose manual counterpart is labour-intensive and requires expert knowledge of fetal anatomy.
Abstract: Standard scan plane detection in fetal brain ultrasound (US) forms a crucial step in the assessment of fetal development. In clinical settings, this is done by manually manoeuvring a 2D probe to the desired scan plane. With the advent of 3D US, the entire fetal brain volume containing these standard planes can be easily acquired. However, manual standard plane identification in 3D volume is labour-intensive and requires expert knowledge of fetal anatomy. We propose a new Iterative Transformation Network (ITN) for the automatic detection of standard planes in 3D volumes. ITN uses a convolutional neural network to learn the relationship between a 2D plane image and the transformation parameters required to move that plane towards the location/orientation of the standard plane in the 3D volume. During inference, the current plane image is passed iteratively to the network until it converges to the standard plane location. We explore the effect of using different transformation representations as regression outputs of ITN. Under a multi-task learning framework, we introduce additional classification probability outputs to the network to act as confidence measures for the regressed transformation parameters in order to further improve the localisation accuracy. When evaluated on 72 US volumes of fetal brain, our method achieves an error of 3.83 mm/12.7° and 3.80 mm/12.6° for the transventricular and transcerebellar planes respectively and takes 0.46 s per plane.
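
The classification probability outputs mentioned above act as confidence measures for the regressed parameters. The sketch below shows one simple way such confidences could down-weight unreliable updates; the fusion scheme and names are illustrative assumptions, not the exact formulation used in the paper.

import numpy as np

def confidence_weighted_update(regressed_deltas, class_probs):
    # fuse several regressed transformation updates, using the classification
    # probabilities as confidence weights (illustrative scheme only)
    w = class_probs / (class_probs.sum() + 1e-8)
    return (w[:, None] * regressed_deltas).sum(axis=0)

# toy example: three candidate updates (tx, ty, tz, rx, ry, rz); the outlier gets low confidence
deltas = np.array([[1.0, 0.0, 0.0, 0.05, 0.00, 0.0],
                   [0.8, 0.1, 0.0, 0.04, 0.00, 0.0],
                   [5.0, 2.0, 1.0, 0.50, 0.30, 0.1]])
probs = np.array([0.60, 0.35, 0.05])
print(confidence_weighted_update(deltas, probs))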

32 citations

Proceedings Article
04 Apr 2018
TL;DR: This work proposes a two-stage convolutional neural network that incorporates additional contextual and structural information into the skull segmentation process in fetal 3DUS; the resulting 3DUS-based assessment of cranial deformation significantly outperforms traditional 2D biometrics.
Abstract: 2D ultrasound (US) is still the preferred imaging method for fetal screening. However, 2D biometrics are significantly affected by the inter/intra-observer variability and operator dependence of a traditionally manual procedure. 3DUS is an alternative emerging modality with the potential to alleviate many of these problems. This paper presents a new automatic framework for skull segmentation in fetal 3DUS. We propose a two-stage convolutional neural network (CNN) able to incorporate additional contextual and structural information into the segmentation process. In the first stage of the CNN, a partial reconstruction of the skull is obtained, segmenting only those regions visible in the original US volume. From this initial segmentation, two additional channels of information are computed inspired by the underlying physics of US image acquisition: an angle incidence map and a shadow casting map. These additional information channels are combined in the second stage of the CNN to provide a complete segmentation of the skull, able to compensate for the fading and shadowing artefacts observed in the original US image. The performance of the new segmentation architecture was evaluated on a dataset of 66 cases, obtaining an average Dice coefficient of 0.83 ± 0.06. Finally, we also evaluated the clinical potential of the new 3DUS-based analysis framework for the assessment of cranial deformation, significantly outperforming traditional 2D biometrics (100% vs. 50% specificity, respectively).
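
As a rough illustration of the physics-inspired channels described above, the sketch below derives a shadow-casting map from a first-stage partial segmentation by accumulating bone labels along an assumed straight-line propagation axis; the geometry and attenuation constant are simplifications for illustration, not the paper's formulation.

import numpy as np

def shadow_casting_map(partial_seg, attenuation=0.5):
    # accumulate segmented bone along the (assumed) propagation axis so that voxels
    # lying behind skull receive a high "shadowed" value in [0, 1)
    hits = np.cumsum(partial_seg.astype(np.float32), axis=0)
    return 1.0 - np.exp(-attenuation * hits)

# toy usage: a thin "bone" slab at depth 10 shadows everything beneath it
seg = np.zeros((32, 16, 16), dtype=np.uint8)
seg[10] = 1
shadow = shadow_casting_map(seg)
print(shadow[5, 0, 0], shadow[20, 0, 0])   # ~0 above the slab, ~0.39 below it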

28 citations

Book Chapter
TL;DR: In this article, a Patch-based Iterative Network (PIN) is proposed for fast and accurate landmark localisation in 3D medical volumes, where patches are repeatedly passed to a convolutional neural network until the estimated landmark position converges to the true landmark location.
Abstract: We propose a new Patch-based Iterative Network (PIN) for fast and accurate landmark localisation in 3D medical volumes. PIN utilises a Convolutional Neural Network (CNN) to learn the spatial relationship between an image patch and anatomical landmark positions. During inference, patches are repeatedly passed to the CNN until the estimated landmark position converges to the true landmark location. PIN is computationally efficient since the inference stage only selectively samples a small number of patches in an iterative fashion rather than a dense sampling at every location in the volume. Our approach adopts a multi-task learning framework that combines regression and classification to improve localisation accuracy. We extend PIN to localise multiple landmarks by using principal component analysis, which models the global anatomical relationships between landmarks. We have evaluated PIN using 72 3D ultrasound images from fetal screening examinations. Quantitatively, PIN achieves an average landmark localisation error of 5.59 mm and a runtime of 0.44 s to predict 10 landmarks per volume. Qualitatively, anatomical 2D standard scan planes derived from the predicted landmark locations are visually similar to the clinical ground truth. Source code is publicly available at this https URL.
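
The abstract above also mentions a principal component analysis model of the global relationships between landmarks. The sketch below shows a generic version of that idea: fit a PCA shape model on training configurations and project a noisy multi-landmark prediction onto it. The formulation is assumed for illustration and is not taken from the authors' code.

import numpy as np

def fit_shape_model(train_landmarks, n_modes=5):
    # PCA over flattened landmark configurations, shape (n_subjects, n_landmarks * 3)
    X = train_landmarks.reshape(len(train_landmarks), -1)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes]          # mean configuration and principal modes

def project_to_model(landmarks, mean, modes):
    # constrain a predicted configuration to the PCA subspace, which encodes the
    # global spatial relationships between the landmarks
    coeffs = modes @ (landmarks.ravel() - mean)
    return (mean + modes.T @ coeffs).reshape(landmarks.shape)

# toy usage: 50 training configurations of 10 landmarks, then one noisy prediction
rng = np.random.default_rng(0)
mean, modes = fit_shape_model(rng.normal(size=(50, 10, 3)))
print(project_to_model(rng.normal(size=(10, 3)), mean, modes).shape)   # (10, 3)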

11 citations


Cited by
Journal Article
TL;DR: Novel deep reinforcement learning (RL) strategies for training agents that can precisely and robustly localize target landmarks in medical scans are evaluated; the performance of these agents surpasses state-of-the-art supervised and RL methods.
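
A minimal sketch of the agent-environment loop commonly used in this line of RL landmark localisation work: actions move the agent one voxel along an axis and the reward is the decrease in distance to the target. The greedy rollout uses a distance oracle purely as a stand-in for a trained Q-network; it is an illustration of the setting, not the paper's method.

import numpy as np

ACTIONS = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]])

def env_step(position, action_id, target):
    # move one voxel along an axis; the reward is the decrease in distance to the
    # target landmark, a common shaping in RL landmark localisation
    new_pos = position + ACTIONS[action_id]
    reward = np.linalg.norm(position - target) - np.linalg.norm(new_pos - target)
    done = np.linalg.norm(new_pos - target) < 1.0
    return new_pos, reward, done

# greedy rollout with a distance oracle standing in for a trained Q-network
pos, target = np.array([0, 0, 0]), np.array([5, 3, 1])
for _ in range(20):
    best = int(np.argmax([env_step(pos, a, target)[1] for a in range(6)]))
    pos, _, done = env_step(pos, best, target)
    if done:
        break
print(pos)   # ends within one voxel of the target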

126 citations

Proceedings Article
01 Jul 2019
TL;DR: In this paper, a multi-task deep convolutional neural network is proposed for automatic segmentation and estimation of the fetal head ellipse by minimizing a compound cost function composed of the segmentation Dice score and the MSE of the ellipse parameters.
Abstract: Ultrasound imaging is a standard examination during pregnancy that can be used for measuring specific biometric parameters towards prenatal diagnosis and estimating gestational age. Fetal head circumference (HC) is one of the significant factors to determine fetal growth and health. In this paper, a multi-task deep convolutional neural network is proposed for automatic segmentation and estimation of the HC ellipse by minimizing a compound cost function composed of the segmentation Dice score and the MSE of the ellipse parameters. Experimental results on a fetal ultrasound dataset spanning different trimesters of pregnancy show that the segmentation results and the extracted HC match well with the radiologist annotations. The obtained Dice scores for fetal head segmentation and the accuracy of the HC estimates are comparable to the state of the art.
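
A minimal numpy sketch of a compound cost of the kind described above, combining a soft Dice term for the segmentation head with an MSE over the ellipse parameters. The weighting factor alpha and the five-parameter ellipse encoding (centre, axes, angle) are assumptions made for illustration.

import numpy as np

def soft_dice(pred, target, eps=1e-6):
    # soft Dice coefficient over a probability/binary segmentation map
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def compound_loss(seg_pred, seg_true, ell_pred, ell_true, alpha=1.0):
    # (1 - Dice) for the segmentation head plus a weighted MSE over the ellipse
    # parameters; `alpha` and the 5-parameter encoding (cx, cy, a, b, angle) are assumptions
    return (1.0 - soft_dice(seg_pred, seg_true)) + alpha * np.mean((ell_pred - ell_true) ** 2)

# toy usage
seg_p = np.random.rand(128, 128)
seg_t = (seg_p > 0.5).astype(float)
ell_p = np.array([64.0, 64.0, 30.0, 20.0, 0.1])
ell_t = np.array([63.0, 65.0, 29.0, 21.0, 0.0])
print(compound_loss(seg_p, seg_t, ell_p, ell_t))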

77 citations

Journal Article
TL;DR: In this article, a global-to-local localization approach using fully convolutional neural networks (FCNNs) was proposed to automatically localize anatomical landmarks in medical images.
Abstract: In this study, we propose a fast and accurate method to automatically localize anatomical landmarks in medical images. We employ a global-to-local localization approach using fully convolutional neural networks (FCNNs). First, a global FCNN localizes multiple landmarks through the analysis of image patches, performing regression and classification simultaneously. In regression, displacement vectors pointing from the center of image patches towards landmark locations are determined. In classification, presence of landmarks of interest in the patch is established. Global landmark locations are obtained by averaging the predicted displacement vectors, where the contribution of each displacement vector is weighted by the posterior classification probability of the patch that it is pointing from. Subsequently, for each landmark localized with global localization, local analysis is performed. Specialized FCNNs refine the global landmark locations by analyzing local sub-images in a similar manner, i.e. by performing regression and classification simultaneously and combining the results. Evaluation was performed through localization of 8 anatomical landmarks in CCTA scans, 2 landmarks in olfactory MR scans, and 19 landmarks in cephalometric X-rays. We demonstrate that the method performs similarly to a second observer and is able to localize landmarks in a diverse set of medical images, differing in image modality, image dimensionality, and anatomical coverage.
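
The weighted averaging step described above can be sketched directly: each analysed patch votes for the landmark at its centre plus its predicted displacement, and the votes are weighted by the patch's posterior classification probability. The snippet is a simplified, assumed formulation of that scheme.

import numpy as np

def global_landmark_estimate(patch_centres, displacements, presence_probs):
    # each patch votes for the landmark at (its centre + its predicted displacement);
    # votes are weighted by the posterior probability that the patch sees the landmark
    votes = patch_centres + displacements
    w = presence_probs / (presence_probs.sum() + 1e-8)
    return (w[:, None] * votes).sum(axis=0)

# toy usage: three patches voting for a landmark near (20, 30, 40)
centres = np.array([[10.0, 30.0, 40.0], [25.0, 28.0, 41.0], [18.0, 35.0, 38.0]])
disps = np.array([[10.0, 0.0, 0.0], [-5.0, 2.0, -1.0], [2.0, -5.0, 2.0]])
probs = np.array([0.9, 0.8, 0.2])
print(global_landmark_estimate(centres, disps, probs))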

75 citations

Journal Article
TL;DR: This review is the first to cover state-of-the-art segmentation and classification methodologies for the whole fetus and, more specifically, the fetal brain, lungs, liver, heart and placenta in magnetic resonance imaging and (3D) ultrasound.

70 citations

Journal Article
TL;DR: Results indicate for the first time that computational models have performance similar to humans when classifying common planes in human fetal examination; however, the dataset leaves the door open for future research to further improve results, especially on fine-grained plane categorization.
Abstract: The goal of this study was to evaluate the maturity of current deep learning classification techniques for their application in a real maternal-fetal clinical environment. A large dataset of routinely acquired maternal-fetal screening ultrasound images (which will be made publicly available) was collected from two different hospitals by several operators and ultrasound machines. All images were manually labeled by an expert maternal-fetal clinician. Images were divided into 6 classes: four of the most widely used fetal anatomical planes (Abdomen, Brain, Femur and Thorax), the mother’s cervix (widely used for prematurity screening) and a general category to include any other less common image plane. Fetal brain images were further categorized into the 3 most common fetal brain planes (Trans-thalamic, Trans-cerebellum, Trans-ventricular) to judge fine-grained categorization performance. The final dataset comprises over 12,400 images from 1,792 patients, making it the largest ultrasound dataset to date. We then evaluated a wide variety of state-of-the-art deep convolutional neural networks on this dataset and analyzed results in depth, comparing the computational models to research technicians, who currently perform the task daily. Results indicate for the first time that computational models have performance similar to humans when classifying common planes in human fetal examination. However, the dataset leaves the door open for future research to further improve results, especially on fine-grained plane categorization.
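
As a rough sketch of the kind of benchmark described above, the snippet below fine-tunes a standard CNN for the six plane classes. ResNet-18, the optimiser settings and the dummy batch are illustrative choices rather than the paper's protocol, and pretrained weights are omitted so the snippet runs offline.

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6  # Abdomen, Brain, Femur, Thorax, maternal cervix, Other

model = models.resnet18(weights=None)      # pretrained weights omitted so this runs offline
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

# one illustrative training step on a dummy batch: grayscale US frames replicated
# to three channels to match the ImageNet input convention
images = torch.rand(8, 1, 224, 224).repeat(1, 3, 1, 1)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimiser.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimiser.step()
print(float(loss))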

67 citations