scispace - formally typeset
Author

Qikui Zhu

Bio: Qikui Zhu is an academic researcher from Wuhan University. The author has contributed to research in the topics of image segmentation & segmentation. The author has an h-index of 8, having co-authored 18 publications receiving 319 citations. Previous affiliations of Qikui Zhu include the University of Western Ontario & Rensselaer Polytechnic Institute.

Papers
Proceedings ArticleDOI
14 May 2017
TL;DR: With its additional deeply supervised layers, the proposed model effectively detects the prostate region, and significant segmentation accuracy improvement has been achieved compared with other reported approaches.
Abstract: Prostate segmentation from Magnetic Resonance (MR) images plays an important role in image-guided intervention. However, the lack of a clear boundary, specifically at the apex and base, and the large variation in shape and texture between images from different patients make the task very challenging. To overcome these problems, in this paper we propose a deeply supervised convolutional neural network (CNN) utilizing the convolutional information to accurately segment the prostate from MR images. With its additional deeply supervised layers, the proposed model can effectively detect the prostate region compared with other approaches. Since some information is lost after convolution, it is necessary to pass the features extracted at early stages on to later stages. The experimental results show that significant segmentation accuracy improvement has been achieved by our proposed method compared to other reported approaches.
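The deep supervision idea above can be sketched as a training loss: auxiliary segmentation heads attached to intermediate layers each contribute a loss term alongside the final output's loss. This is a minimal illustration, not the paper's implementation; the function name and the default 0.3 weight are assumptions, chosen only for the example.

```python
def deeply_supervised_loss(main_loss, aux_losses, aux_weights=None):
    """Combine the final-layer loss with losses from intermediate
    (deeply supervised) heads so early layers receive direct gradient.

    `aux_weights` defaults to 0.3 per auxiliary head -- a common
    heuristic, not a value taken from the paper.
    """
    if aux_weights is None:
        aux_weights = [0.3] * len(aux_losses)
    assert len(aux_weights) == len(aux_losses)
    return main_loss + sum(w * l for w, l in zip(aux_weights, aux_losses))
```

In practice each entry of `aux_losses` would be a cross-entropy or Dice loss computed on an upsampled intermediate prediction against the same ground-truth mask.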

180 citations

Journal ArticleDOI
TL;DR: A boundary-weighted domain adaptive neural network (BOWDA-Net) is proposed that is more sensitive to object boundaries and outperformed other state-of-the-art methods for prostate segmentation from magnetic resonance images.
Abstract: Accurate segmentation of the prostate from magnetic resonance (MR) images provides useful information for prostate cancer diagnosis and treatment. However, automated prostate segmentation from 3D MR images faces several challenges. The lack of a clear edge between the prostate and other anatomical structures makes it challenging to accurately extract the boundaries. The complex background texture and large variation in the size, shape and intensity distribution of the prostate itself make segmentation even more complicated. Recently, as deep learning, especially convolutional neural networks (CNNs), has emerged as the best-performing method for medical image segmentation, the difficulty of obtaining large numbers of annotated medical images for training CNNs has become more pronounced than ever. Since a large-scale dataset is one of the critical components for the success of deep learning, the lack of sufficient training data makes it difficult to fully train complex CNNs. To tackle the above challenges, in this paper we propose a boundary-weighted domain adaptive neural network (BOWDA-Net). To make the network more sensitive to the boundaries during segmentation, a boundary-weighted segmentation loss is proposed. Furthermore, an advanced boundary-weighted transfer learning approach is introduced to address the problem of small medical imaging datasets. We evaluate our proposed model on three different MR prostate datasets. The experimental results demonstrate that the proposed model is more sensitive to object boundaries and outperformed other state-of-the-art methods.
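The core of a boundary-weighted segmentation loss is a per-pixel weight map that penalizes errors near the object boundary more heavily. The sketch below is only an illustration of that idea on plain Python lists; BOWDA-Net's actual weighting scheme differs, and all function names and the `alpha` value are assumptions.

```python
import math

def boundary_weight_map(mask, alpha=2.0):
    """Weight map that up-weights boundary pixels of a binary mask.

    A pixel is "boundary" if any 4-neighbour has a different label;
    boundary pixels get weight 1 + alpha, all others weight 1.
    """
    h, w = len(mask), len(mask[0])
    weights = [[1.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and mask[ni][nj] != mask[i][j]:
                    weights[i][j] = 1.0 + alpha
                    break
    return weights

def boundary_weighted_cross_entropy(probs, mask, weights, eps=1e-7):
    """Mean pixel-wise cross-entropy scaled by the boundary weight map."""
    total, n = 0.0, 0
    for i in range(len(mask)):
        for j in range(len(mask[0])):
            # Probability assigned to the true class of this pixel.
            p = probs[i][j] if mask[i][j] == 1 else 1.0 - probs[i][j]
            total += -weights[i][j] * math.log(max(p, eps))
            n += 1
    return total / n
```

A real implementation would compute the weight map from a distance transform on GPU tensors, but the loss structure (weight times per-pixel log-likelihood) is the same.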

127 citations

Journal ArticleDOI
TL;DR: In this article, a boundary-weighted domain adaptive neural network (BOWDA-Net) is proposed to make the network more sensitive to boundaries during segmentation, and an advanced boundary-weighted transfer learning approach is introduced to address the problem of small medical imaging datasets.
Abstract: Accurate segmentation of the prostate from magnetic resonance (MR) images provides useful information for prostate cancer diagnosis and treatment. However, automated prostate segmentation from 3D MR images still faces several challenges. For instance, the lack of a clear edge between the prostate and other anatomical structures makes it challenging to accurately extract the boundaries. The complex background texture and large variation in the size, shape and intensity distribution of the prostate itself make segmentation even more complicated. With deep learning, especially convolutional neural networks (CNNs), emerging as a commonly used method for medical image segmentation, the difficulty of obtaining large numbers of annotated medical images for training CNNs has become more pronounced than ever before. Since a large-scale dataset is one of the critical components for the success of deep learning, the lack of sufficient training data makes it difficult to fully train complex CNNs. To tackle the above challenges, in this paper, we propose a boundary-weighted domain adaptive neural network (BOWDA-Net). To make the network more sensitive to the boundaries during segmentation, a boundary-weighted segmentation loss (BWL) is proposed. Furthermore, an advanced boundary-weighted transfer learning approach is introduced to address the problem of small medical imaging datasets. We evaluate our proposed model on the publicly available MICCAI 2012 Prostate MR Image Segmentation (PROMISE12) challenge dataset. Our experimental results demonstrate that the proposed model is more sensitive to boundary information and outperformed other state-of-the-art methods.

53 citations

Journal ArticleDOI
TL;DR: A deep neural network with bidirectional convolutional recurrent layers for MRI prostate image segmentation that treats prostate slices as a data sequence and utilizes the interslice contexts to assist segmentation.
Abstract: Segmentation of the prostate from Magnetic Resonance Imaging (MRI) plays an important role in prostate cancer diagnosis. However, the lack of a clear boundary and the significant variation of prostate shapes and appearances make automatic segmentation very challenging. In the past several years, approaches based on deep learning technology have made significant progress on prostate segmentation. However, those approaches mainly paid attention to features and contexts within each single slice of a 3D volume. As a result, such approaches face many difficulties when segmenting the base and apex of the prostate due to the limited slice boundary information. To tackle this problem, in this paper, we propose a deep neural network with bidirectional convolutional recurrent layers for MRI prostate image segmentation. In addition to utilizing the intra-slice contexts and features, the proposed model also treats prostate slices as a data sequence and utilizes the inter-slice contexts to assist segmentation. The experimental results show that the proposed approach achieved significant segmentation improvement compared to other reported methods.
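The bidirectional inter-slice idea can be illustrated with a toy recurrence over the slice axis: one pass runs apex-to-base, one base-to-apex, and the two states are fused per slice. Here scalar features and an exponential running state stand in for feature maps and convolutional recurrent layers; everything below is a simplified assumption, not the paper's architecture.

```python
def bidirectional_slice_context(slice_feats, decay=0.5):
    """Fuse inter-slice context from both scan directions.

    `slice_feats` is a list of per-slice feature values (scalars here
    for simplicity; whole feature maps in the actual network). A simple
    exponential running state plays the role of the recurrent layer,
    and the forward/backward states are averaged per slice.
    """
    n = len(slice_feats)
    fwd, bwd = [0.0] * n, [0.0] * n
    state = 0.0
    for i in range(n):                    # apex -> base pass
        state = decay * state + (1 - decay) * slice_feats[i]
        fwd[i] = state
    state = 0.0
    for i in reversed(range(n)):          # base -> apex pass
        state = decay * state + (1 - decay) * slice_feats[i]
        bwd[i] = state
    return [(f + b) / 2 for f, b in zip(fwd, bwd)]
```

The point of the bidirectional pass is that edge slices (base and apex), which are hardest to segment from intra-slice information alone, still receive context from the interior slices in at least one direction.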

48 citations

Proceedings ArticleDOI
09 Jul 2020
TL;DR: An innovative Dual-Scheme Fusion Network (DSFN) for unsupervised domain adaptation is proposed, which helps reduce the domain gap to further improve the network performance and achieve significant performance improvement over other state-of-the-art domain adaptation methods.
Abstract: Domain adaptation aims to alleviate the problem of retraining a pre-trained model when applying it to a different domain, which would otherwise require a large amount of additional training data from the target domain. Such an objective is usually achieved by establishing connections between the source domain labels and target domain data. However, this imbalanced, one-way source-to-target pass may not eliminate the domain gap, which limits the performance of the pre-trained model. In this paper, we propose an innovative Dual-Scheme Fusion Network (DSFN) for unsupervised domain adaptation. By building both source-to-target and target-to-source connections, this balanced joint information flow helps reduce the domain gap and further improves network performance. The mechanism is also applied at the inference stage, where both the original target input image and the generated source-style images are segmented with the proposed joint network, and the results are fused to obtain a more robust segmentation. Extensive experiments on unsupervised cross-modality medical image segmentation are conducted on two tasks – brain tumor segmentation and cardiac structure segmentation. The experimental results show that our method achieves significant performance improvement over other state-of-the-art domain adaptation methods.
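The inference-time fusion step described above amounts to combining two segmentations of the same anatomy: one from the original target-domain image and one from its source-style translation. A minimal sketch, assuming simple per-pixel probability averaging as the fusion rule (the paper's exact fusion may differ):

```python
def fused_prediction(seg_target, seg_translated, threshold=0.5):
    """Fuse two per-pixel probability maps of the same image.

    `seg_target` comes from segmenting the original target-domain
    input; `seg_translated` from segmenting its source-style
    translation. Averaging before thresholding makes the final mask
    more robust than either single prediction.
    """
    fused = [
        [(a + b) / 2 for a, b in zip(row_t, row_s)]
        for row_t, row_s in zip(seg_target, seg_translated)
    ]
    return [[1 if p >= threshold else 0 for p in row] for row in fused]
```

Averaging is the simplest symmetric fusion rule; it keeps a pixel foreground only when the two domain views agree on average, which is exactly the robustness the dual-scheme design is after.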

29 citations


Cited by
Journal ArticleDOI
TL;DR: UNet++ proposes an efficient ensemble of U-Nets of varying depths, which partially share an encoder and co-learn simultaneously using deep supervision, leading to a highly flexible feature fusion scheme.
Abstract: The state-of-the-art models for medical image segmentation are variants of U-Net and fully convolutional networks (FCN). Despite their success, these models have two limitations: (1) their optimal depth is a priori unknown, requiring extensive architecture search or an inefficient ensemble of models of varying depths; and (2) their skip connections impose an unnecessarily restrictive fusion scheme, forcing aggregation only at the same-scale feature maps of the encoder and decoder sub-networks. To overcome these two limitations, we propose UNet++, a new neural architecture for semantic and instance segmentation, by (1) alleviating the unknown network depth with an efficient ensemble of U-Nets of varying depths, which partially share an encoder and co-learn simultaneously using deep supervision; (2) redesigning skip connections to aggregate features of varying semantic scales at the decoder sub-networks, leading to a highly flexible feature fusion scheme; and (3) devising a pruning scheme to accelerate the inference speed of UNet++. We have evaluated UNet++ using six different medical image segmentation datasets, covering multiple imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and electron microscopy (EM), and demonstrate that (1) UNet++ consistently outperforms the baseline models for the task of semantic segmentation across different datasets and backbone architectures; (2) UNet++ enhances the segmentation quality of varying-size objects—an improvement over the fixed-depth U-Net; (3) Mask RCNN++ (Mask R-CNN with the UNet++ design) outperforms the original Mask R-CNN for the task of instance segmentation; and (4) pruned UNet++ models achieve significant speedup while showing only modest performance degradation. Our implementation and pre-trained models are available at https://github.com/MrGiovanni/UNetPlusPlus .
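Because UNet++'s deep supervision attaches a segmentation head to every embedded U-Net depth, inference can either ensemble all heads or read a single shallow head (pruning the deeper decoder nodes). A toy sketch of that choice, assuming flat probability lists per head; the function and parameter names are hypothetical, not from the UNet++ codebase:

```python
def unetpp_inference(depth_outputs, mode="ensemble", depth=None):
    """Select how deeply supervised heads are used at test time.

    `depth_outputs` maps sub-network depth -> flat probability list
    from that depth's segmentation head. "ensemble" averages every
    head; "pruned" reads one shallow head, so deeper decoder nodes
    never need to be computed (the speedup the paper reports).
    """
    if mode == "pruned":
        return list(depth_outputs[depth])
    heads = list(depth_outputs.values())
    return [sum(ps) / len(heads) for ps in zip(*heads)]
```

The trade-off is the one measured in the paper: the ensemble maximizes accuracy, while the pruned mode trades a modest accuracy drop for a large inference speedup.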

1,487 citations

Journal ArticleDOI
TL;DR: Medical imaging systems: physical principles and image reconstruction algorithms for magnetic resonance tomography, ultrasound, and computed tomography (CT), with applications in image enhancement, image registration, and functional magnetic resonance imaging (fMRI).

536 citations

Journal ArticleDOI
TL;DR: An overview of deep learning and its applications to healthcare over the last decade is provided, and three use cases from China, Korea, and Canada are presented to show deep learning applications for COVID-19 medical image processing.

282 citations