scispace - formally typeset
Author

Seyed Raein Hashemi

Other affiliations: Boston Children's Hospital
Bio: Seyed Raein Hashemi is an academic researcher from Brigham and Women's Hospital. The author has contributed to research in topics: Image segmentation & Deep learning. The author has an h-index of 6 and has co-authored 12 publications receiving 232 citations. Previous affiliations of Seyed Raein Hashemi include Boston Children's Hospital.

Papers
Journal ArticleDOI
TL;DR: This paper developed a 3D fully convolutional densely connected network (FC-DenseNet) with large overlapping image patches as input and an asymmetric similarity loss layer based on Tversky index, which led to the lowest surface distance and the best lesion true positive rate.
Abstract: Fully convolutional deep neural networks have been asserted to be fast and precise frameworks with great potential in image segmentation. One of the major challenges in training such networks arises when the data are unbalanced, which is common in many medical imaging applications, such as lesion segmentation, where lesion-class voxels are often far fewer in number than non-lesion voxels. A network trained on unbalanced data may make predictions with high precision and low recall, being severely biased toward the non-lesion class, which is particularly undesired in most medical applications, where false negatives are actually more important than false positives. Various methods have been proposed to address this problem, including two-step training, sample re-weighting, balanced sampling, and, more recently, similarity loss functions and focal loss. In this paper, we trained fully convolutional deep neural networks using an asymmetric similarity loss function to mitigate the issue of data imbalance and achieve a much better trade-off between precision and recall. To this end, we developed a 3D fully convolutional densely connected network (FC-DenseNet) with large overlapping image patches as input and an asymmetric similarity loss layer based on the Tversky index (using $F_\beta $ scores). We used large overlapping image patches as inputs for intrinsic and extrinsic data augmentation, a patch selection algorithm, and a patch prediction fusion strategy using B-spline weighted soft voting to account for the uncertainty of prediction at patch borders. We applied this method to multiple sclerosis (MS) lesion segmentation based on two different datasets, MSSEG 2016 and the ISBI longitudinal MS lesion segmentation challenge, where we achieved average Dice similarity coefficients of 69.9% and 65.74%, respectively, reaching top performance in both challenges.
We compared the performance of our network trained with $F_\beta $ loss, focal loss, and generalized Dice loss functions. Through September 2018, our network trained with focal loss ranked first according to the ISBI challenge overall score and resulted in the lowest reported lesion false positive rate among all submitted methods. Our network trained with the asymmetric similarity loss led to the lowest surface distance and the best lesion true positive rate, which is arguably the most important performance metric in a clinical decision support system for lesion detection. The asymmetric similarity loss function based on $F_\beta $ scores allows training networks that strike a better balance between precision and recall in highly unbalanced image segmentation. We achieved superior performance in MS lesion segmentation using a patch-wise 3D FC-DenseNet with a patch prediction fusion strategy, trained with asymmetric similarity loss functions.
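The asymmetric similarity loss described above can be sketched as a minimal NumPy function, assuming the common Tversky-index formulation in which one coefficient weights false positives and the other weights false negatives (variable names and default weights here are illustrative, not the paper's tuned values):

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    """Asymmetric similarity loss based on the Tversky index.

    pred:   predicted foreground probabilities (any shape)
    target: binary ground-truth mask, same shape
    alpha weights false positives, beta weights false negatives;
    alpha = beta = 0.5 recovers the soft Dice loss.
    """
    p = pred.ravel()
    t = target.ravel()
    tp = np.sum(p * t)            # soft true positives
    fp = np.sum(p * (1 - t))      # soft false positives
    fn = np.sum((1 - p) * t)      # soft false negatives
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky
```

With beta > alpha, missed lesion voxels (false negatives) cost more than spurious ones, pushing the trained network toward higher recall, which matches the recall-oriented motivation in the abstract.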

145 citations

Proceedings ArticleDOI
TL;DR: In this paper, a 2D U-Net and autocontext-based segmentation method was proposed to segment the fetal brain in real-time while fetal MRI slices are being acquired.
Abstract: Brain segmentation is a fundamental first step in neuroimage analysis. In the case of fetal MRI, it is particularly challenging and important due to the arbitrary orientation of the fetus, organs that surround the fetal head, and intermittent fetal motion. Several promising methods have been proposed but are limited in their performance in challenging cases and in real-time segmentation. We aimed to develop a fully automatic segmentation method that independently segments sections of the fetal brain in 2D fetal MRI slices in real-time. To this end, we developed and evaluated a deep fully convolutional neural network based on 2D U-net and autocontext, and compared it to two alternative fast methods based on 1) a voxelwise fully convolutional network and 2) a method based on SIFT features, random forest and conditional random field. We trained the networks with manual brain masks on 250 stacks of training images, and tested on 17 stacks of normal fetal brain images as well as 18 stacks of extremely challenging cases based on extreme motion, noise, and severely abnormal brain shape. Experimental results show that our U-net approach outperformed the other methods and achieved average Dice metrics of 96.52% and 78.83% in the normal and challenging test sets, respectively. With an unprecedented performance and a test run time of about 1 second, our network can be used to segment the fetal brain in real-time while fetal MRI slices are being acquired. This can enable real-time motion tracking, motion detection, and 3D reconstruction of fetal brain MRI.
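At inference time, the U-net-plus-autocontext pipeline described above can be sketched with two trained models wrapped as callables (`stage1` and `stage2` are hypothetical names; the paper's networks are 2D U-nets, modelled here by arbitrary functions):

```python
import numpy as np

def autocontext_segment(stage1, stage2, image):
    """Two-stage autocontext inference (illustrative sketch).

    stage1 maps an image to a posterior probability map; stage2 takes
    the image stacked with stage1's posterior as an extra channel and
    refines the segmentation using that spatial context.
    """
    post1 = stage1(image)                       # first-pass posterior
    stacked = np.stack([image, post1], axis=0)  # image + context channel
    return stage2(stacked)
```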

64 citations

Proceedings ArticleDOI
04 Apr 2018
TL;DR: A deep fully convolutional neural network based on 2D U-net and autocontext that can be used to segment the fetal brain in real-time while fetal MRI slices are being acquired and can enable real-time motion tracking, motion detection, and 3D reconstruction of fetal brain MRI.

63 citations

Posted Content
28 Mar 2018
TL;DR: This paper proposes the Tversky loss function as a generalization of the Dice similarity coefficient and Fβ scores to effectively train deep neural networks, and proposes a patch prediction fusion strategy based on B-spline weighted soft voting to take into account the uncertainty of prediction at patch borders.
Abstract: Fully convolutional deep neural networks have been asserted to be fast and precise frameworks with great potential in image segmentation. One of the major challenges in utilizing such networks is data imbalance, which is especially restraining in medical imaging applications such as lesion segmentation, where lesion-class voxels are often far fewer than non-lesion voxels. A network trained on unbalanced data may make predictions with high precision and low recall (sensitivity), being severely biased towards the non-lesion class, which is particularly undesired in medical applications where false negatives are actually more important than false positives. Several methods have been proposed to deal with this problem, including balanced sampling, two-step training, sample re-weighting, and similarity loss functions. In this paper, we propose a generalized loss function based on the Tversky index to mitigate the issue of data imbalance and achieve a much better trade-off between precision and recall in training 3D fully convolutional deep neural networks. Moreover, we extend our preliminary work on using the Tversky loss function for U-net to a patch-wise 3D densely connected network, where we use overlapping image patches for intrinsic and extrinsic data augmentation. To this end, we propose a patch prediction fusion strategy based on B-spline weighted soft voting to take into account the uncertainty of prediction at patch borders. The lesion segmentation results obtained from our patch-wise 3D densely connected network are superior to recently reported results in the literature on multiple sclerosis lesion segmentation on a magnetic resonance imaging dataset, namely MSSEG 2016, on which we obtained an average Dice coefficient of 69.8%. Significant improvement in F1 and F2 scores and the area under the precision-recall curve was achieved on the test set using the Tversky loss layer and via our 3D patch prediction fusion method.
Based on these results, we suggest the Tversky loss function as a generalization of the Dice similarity coefficient and Fβ scores for effectively training deep neural networks.
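For reference, the generalization claimed here can be written out from the standard definitions. For a predicted voxel set $P$ and ground truth $G$, the Tversky index is

$T_{\alpha,\beta}(P,G) = \dfrac{|P \cap G|}{|P \cap G| + \alpha\,|P \setminus G| + \beta\,|G \setminus P|}$

so $\alpha = \beta = \tfrac{1}{2}$ recovers the Dice similarity coefficient $\mathrm{DSC} = 2|P \cap G| / (|P| + |G|)$, while $\alpha = 1/(1+\beta_F^2)$ and $\beta = \beta_F^2/(1+\beta_F^2)$ recover the $F_{\beta_F}$ score (here $\beta_F$ denotes the F-score parameter, to avoid clashing with the Tversky weight $\beta$).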

37 citations

Posted Content
28 Mar 2018
TL;DR: A patch-wise 3D densely connected network with an asymmetric loss function is developed, using large overlapping image patches for intrinsic and extrinsic data augmentation, a patch selection algorithm, and a patch prediction fusion strategy based on B-spline weighted soft voting to take into account the uncertainty of prediction at patch borders.
Abstract: Fully convolutional deep neural networks have been asserted to be fast and precise frameworks with great potential in image segmentation. One of the major challenges in utilizing such networks arises when data are unbalanced, which is common in many medical imaging applications such as lesion segmentation, where lesion-class voxels are often far fewer in number than non-lesion voxels. A network trained on unbalanced data may make predictions with high precision and low recall, being severely biased towards the non-lesion class, which is particularly undesired in medical applications where false negatives are actually more important than false positives. Various methods have been proposed to address this problem, including two-step training, sample re-weighting, balanced sampling, and similarity loss functions. In this paper, we developed a patch-wise 3D densely connected network with an asymmetric loss function, where we used large overlapping image patches for intrinsic and extrinsic data augmentation, a patch selection algorithm, and a patch prediction fusion strategy based on B-spline weighted soft voting to take into account the uncertainty of prediction at patch borders. We applied this method to lesion segmentation based on the MSSEG 2016 and ISBI 2015 challenges, where we achieved average Dice similarity coefficients of 69.8% and 65.74%, respectively, using our proposed patch-wise 3D densely connected network. Significant improvement in $F_1$ and $F_2$ scores and the area under the precision-recall curve was achieved on the test sets using the asymmetric similarity loss layer and our 3D patch prediction fusion method. The asymmetric similarity loss function based on $F_\beta$ scores generalizes the Dice similarity coefficient and can be effectively used with the patch-wise strategy developed here to train fully convolutional deep neural networks for highly unbalanced image segmentation.
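The patch prediction fusion step can be illustrated in one dimension: each overlapping patch votes with a weight that peaks at the patch centre and decays toward the borders, so uncertain border predictions count less. The window below is a simple spline-like choice for illustration, not the authors' exact B-spline kernel:

```python
import numpy as np

def center_weighted_window(size):
    # Smooth weight peaking at the patch centre and decaying toward the
    # borders (illustrative stand-in for a B-spline weighting kernel).
    x = np.linspace(-1.0, 1.0, size)
    return (1.0 - np.abs(x)) ** 3 + 1e-3  # small floor: borders still vote

def fuse_patches(patch_preds, starts, patch_size, out_len):
    """Soft-vote overlapping 1-D patch predictions into one signal."""
    w = center_weighted_window(patch_size)
    acc = np.zeros(out_len)   # weighted sum of votes
    norm = np.zeros(out_len)  # sum of weights, for normalization
    for pred, s in zip(patch_preds, starts):
        acc[s:s + patch_size] += w * pred
        norm[s:s + patch_size] += w
    return acc / np.maximum(norm, 1e-12)
```

In the overlap region, the fused value is a weighted average of the competing patch predictions, dominated by whichever patch sees that location nearer its centre.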

15 citations


Cited by
Proceedings ArticleDOI
08 Apr 2019
TL;DR: In this article, a generalized focal loss function based on the Tversky index was proposed to address the issue of data imbalance in medical image segmentation, achieving a better trade-off between precision and recall when training on small structures such as lesions.
Abstract: We propose a generalized focal loss function based on the Tversky index to address the issue of data imbalance in medical image segmentation. Compared to the commonly used Dice loss, our loss function achieves a better trade-off between precision and recall when training on small structures such as lesions. To evaluate our loss function, we improve the attention U-Net model by incorporating an image pyramid to preserve contextual features. We experiment on the BUS 2017 dataset and the ISIC 2018 dataset, where lesions occupy 4.84% and 21.4% of the image area, and improve segmentation accuracy over the standard U-Net by 25.7% and 3.6%, respectively.
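A minimal sketch of such a focal Tversky loss, assuming the usual composition of a focal exponent with a Tversky index, i.e. a loss of the form (1 - TI)^(1/gamma); the parameter names and defaults below are illustrative, not the paper's tuned values:

```python
import numpy as np

def focal_tversky_loss(pred, target, alpha=0.3, beta=0.7,
                       gamma=0.75, eps=1e-7):
    """Focal Tversky loss sketch: alpha weights false positives,
    beta weights false negatives, and the 1/gamma exponent reshapes
    the loss to emphasize hard, low-overlap examples."""
    p, t = pred.ravel(), target.ravel()
    tp = np.sum(p * t)
    fp = np.sum(p * (1 - t))
    fn = np.sum((1 - p) * t)
    ti = (tp + eps) / (tp + alpha * fp + beta * fn + eps)  # Tversky index
    return (1.0 - ti) ** (1.0 / gamma)
```

Setting gamma = 1 reduces this to the plain Tversky loss; other values change how strongly low-overlap (hard) examples dominate the gradient.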

515 citations

Proceedings ArticleDOI
27 Oct 2020
TL;DR: A new log-cosh Dice loss function is introduced, and it is showcased that certain loss functions perform well across all datasets and can be taken as a good baseline choice in unknown data-distribution scenarios.
Abstract: Image segmentation has been an active field of research, as it has a wide range of applications ranging from automated disease detection to self-driving cars. In the past five years, various papers have proposed different objective loss functions for different cases, such as biased data, sparse segmentation, etc. In this paper, we summarize some of the well-known loss functions widely used for image segmentation and list the cases where their usage can help in fast and better convergence of a model. Furthermore, we introduce a new log-cosh Dice loss function and compare its performance on the NBFS skull-segmentation open-source dataset with widely used loss functions. We also showcase that certain loss functions perform well across all datasets and can be taken as a good baseline choice in unknown data-distribution scenarios.
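The log-cosh Dice idea composes the smooth log(cosh(x)) map with the ordinary soft Dice loss; a minimal NumPy sketch over probability maps (illustrative only):

```python
import numpy as np

def log_cosh_dice_loss(pred, target, eps=1e-7):
    """log(cosh(.)) applied to the soft Dice loss. log-cosh is smooth,
    roughly quadratic near zero and linear for large values, which can
    tame the loss landscape compared to the raw Dice loss."""
    p, t = pred.ravel(), target.ravel()
    intersection = np.sum(p * t)
    dice = (2.0 * intersection + eps) / (np.sum(p) + np.sum(t) + eps)
    return np.log(np.cosh(1.0 - dice))
```

Since log(cosh(x)) <= x for x >= 0, this variant never exceeds the plain Dice loss while keeping the same zero at perfect overlap.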

480 citations

Journal ArticleDOI
TL;DR: In this article, test-time augmentation-based aleatoric uncertainty was proposed to analyze the effect of different transformations of the input image on the segmentation output. The results showed that the proposed test-time augmentation provides a better uncertainty estimation than calculating test-time dropout-based model uncertainty alone and helps to reduce overconfident incorrect predictions.
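The idea can be sketched with flips as the transformation family: predict on each transformed copy, invert the transform on the output, and read the per-voxel variance across the ensemble as an aleatoric uncertainty estimate (the `predict` callable and the flip-only family are simplifying assumptions):

```python
import numpy as np

def tta_uncertainty(predict, image, axes=(0, 1)):
    """Test-time-augmentation uncertainty via flips (illustrative).

    predict: callable mapping an image to a per-pixel foreground
             probability map of the same shape.
    Returns (mean prediction, per-pixel variance) over the ensemble.
    """
    preds = [predict(image)]
    for ax in axes:
        flipped = np.flip(image, axis=ax)
        # Predict on the flipped copy, then undo the flip on the output
        preds.append(np.flip(predict(flipped), axis=ax))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)
```

Pixels where the model disagrees with itself across transformed views get high variance, flagging predictions that should not be trusted with full confidence.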

305 citations

Journal ArticleDOI
TL;DR: In this article, a review of the state of the art in handling label noise in deep learning for medical image analysis is presented. The authors conducted experiments with three medical imaging datasets with different types of label noise, in which they investigated several existing strategies and developed new methods to combat the negative effect of label noise.

279 citations