Author

Zheng Li

Bio: Zheng Li is an academic researcher from Shanghai University. The author has contributed to research in topics: Computer science & Medicine. The author has an h-index of 2 and has co-authored 7 publications receiving 62 citations.

Papers
Journal ArticleDOI
TL;DR: A progressive wide residual network with a fixed skip connection (named FSCWRN) based SR algorithm is proposed to reconstruct MR images, which combines the global residual learning and the shallow network based local residual learning.
Abstract: Spatial resolution is a critical imaging parameter in magnetic resonance imaging. Image super-resolution (SR) is an effective and cost-efficient alternative technique to improve the spatial resolution of MR images. Over the past several years, convolutional neural network (CNN)-based SR methods have achieved state-of-the-art performance. However, CNNs with very deep network structures usually suffer from the problems of degradation and diminishing feature reuse, which make network training difficult and weaken the transmission of fine details for SR. To address these problems, in this work, a progressive wide residual network with a fixed skip connection (named FSCWRN) based SR algorithm is proposed to reconstruct MR images, which combines global residual learning and shallow-network-based local residual learning. The strategy of progressive wide networks is adopted to replace deeper networks, which can partially relax the above-mentioned problems, while a fixed skip connection helps provide rich local details at high frequencies from a fixed shallow layer network to subsequent networks. The experimental results on one simulated MR image database and three real MR image databases show the effectiveness of the proposed FSCWRN SR algorithm, which achieves improved reconstruction performance compared with other algorithms.
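The residual scheme described in the abstract can be sketched numerically. The layer stand-ins, weights, and block count below are illustrative assumptions, not the authors' implementation; the point is only how the fixed skip connection re-injects shallow features into every block while a global residual adds the input back:

```python
import numpy as np

def relu_layer(x, w):
    # stand-in for one convolutional layer (scalar weight + ReLU)
    return np.maximum(0.0, w * x)

def fscwrn_sketch(x, n_blocks=3):
    shallow = relu_layer(x, 0.9)         # features from a fixed shallow layer
    feat = shallow
    for _ in range(n_blocks):
        local = relu_layer(feat, 0.5)
        feat = feat + local + shallow    # local residual + fixed skip connection
    return x + feat                      # global residual learning

sr = fscwrn_sketch(np.ones((8, 8)))
```

The fixed skip connection means every block receives the same high-frequency shallow features, rather than only the output of the block before it.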

88 citations

Book ChapterDOI
13 Oct 2019
TL;DR: A novel two-stage multi-loss super-resolution (SR) network (TSMLSRNet) is proposed for reconstruction of high resolution ASL images, which outperforms state-of-the-art image reconstruction algorithms while simultaneously reducing the noise in ASL images.
Abstract: Arterial spin labeling (ASL) perfusion magnetic resonance imaging (MRI) is a non-invasive technique for quantifying cerebral blood flow (CBF). Limited by the T1 decay rate of the labeled spins, only a short time is available for data acquisition after one spin labeling cycle, resulting in a low spatial resolution. The traditional strategy to achieve high spatial resolution in ASL MRI is to add more labeling cycles. However, the total acquisition time is greatly prolonged, making it highly sensitive to motion. Moreover, signal-to-noise ratio (SNR) drops as spatial resolution increases. An alternative approach is therefore needed to improve spatial resolution and SNR in ASL MRI without increasing scan time. To this end, we propose a novel two-stage multi-loss super-resolution (SR) network (TSMLSRNet) for reconstruction of high resolution ASL images. Specifically, the first stage network uses the mean squared error (MSE) loss function to produce a first SR estimate, while the second stage network adopts the gradient sensitive (GS) loss function to further improve high-frequency details for the output SR image. The multi-loss joint training strategy is finally used to preserve both the low-frequency and high-frequency information of the ASL images. Moreover, the noise in ASL images is simultaneously reduced. Validation results using in-vivo data clearly show the effectiveness of the proposed ASL SR algorithm, which outperforms state-of-the-art image reconstruction algorithms.
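The two losses can be sketched as follows. The abstract does not give the exact form of the gradient sensitive (GS) loss, so the finite-difference version below, and the weighting `alpha`, are assumptions for illustration only:

```python
import numpy as np

def mse_loss(pred, target):
    # first-stage loss: preserves low-frequency content
    return np.mean((pred - target) ** 2)

def gradient_sensitive_loss(pred, target):
    # assumed GS loss: penalize mismatched finite-difference gradients,
    # which emphasizes edges and other high-frequency detail
    dx = lambda im: im[:, 1:] - im[:, :-1]
    dy = lambda im: im[1:, :] - im[:-1, :]
    return (np.mean(np.abs(dx(pred) - dx(target))) +
            np.mean(np.abs(dy(pred) - dy(target))))

def joint_loss(pred, target, alpha=0.5):
    # multi-loss joint training objective; alpha is an assumed weighting
    return mse_loss(pred, target) + alpha * gradient_sensitive_loss(pred, target)
```

Joint training with both terms is what lets the network keep low-frequency fidelity (MSE) without washing out high-frequency structure (GS).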

18 citations

Journal ArticleDOI
Zheng Li, Chaofeng Wang, Jun Wang, Shihui Ying, Jun Shi
TL;DR: A novel lightweight SR network, named Adaptive Weighted Super-Resolution Network (LW-AWSRN), is proposed to address the large number of parameters in convolutional neural network based SR models, which demands heavy computation and thereby limits their real-world applications.
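The TL;DR does not detail the adaptive weighting mechanism; one common lightweight form, sketched here purely as an assumption, is a softmax-normalized scalar fusion of parallel branch outputs:

```python
import numpy as np

def adaptive_weighted_fusion(branches, logits):
    # softmax-normalized scalar weights let the network learn how much each
    # branch (e.g. paths with different receptive fields) contributes to the
    # fused feature, at the cost of only one scalar per branch
    w = np.exp(logits - np.max(logits))
    w = w / w.sum()
    return sum(wi * b for wi, b in zip(w, branches))

fused = adaptive_weighted_fusion(
    [np.zeros((2, 2)), 2.0 * np.ones((2, 2))],
    np.array([0.0, 0.0]))
```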

10 citations

Book ChapterDOI
27 Sep 2021
TL;DR: A two-stage self-supervised cycle-consistency network (TSCNet) is proposed for MR slice interpolation: paired LR-HR images are synthesized along the sagittal and coronal directions of the input LR images in the first-stage SSL, and a cyclic interpolation procedure based on triplet axial slices is designed in the second-stage SSL for further refinement.
Abstract: Thick-slice magnetic resonance (MR) images are often structurally blurred in coronal and sagittal views, which hampers diagnosis and image post-processing. Deep learning (DL) has shown great potential to reconstruct high-resolution (HR) thin-slice MR images from low-resolution (LR) cases, which we refer to as the slice interpolation task in this work. However, since it is generally difficult to sample abundant paired LR-HR MR images, classical fully supervised DL-based models cannot be effectively trained to achieve robust performance. To this end, we propose a novel Two-stage Self-supervised Cycle-consistency Network (TSCNet) for MR slice interpolation, in which a two-stage self-supervised learning (SSL) strategy is developed for unsupervised DL network training. Paired LR-HR images are synthesized along the sagittal and coronal directions of input LR images for network pretraining in the first-stage SSL, and then a cyclic interpolation procedure based on triplet axial slices is designed in the second-stage SSL for further refinement. More training samples with rich contexts along all directions are exploited as guidance to improve interpolation performance. Moreover, a new cycle-consistency constraint is proposed to supervise this cyclic procedure, which encourages the network to reconstruct more realistic HR images. The experimental results on a real MRI dataset indicate that TSCNet achieves superior performance over conventional and other SSL-based algorithms, and obtains competitive qualitative and quantitative results compared with a fully supervised algorithm.
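The cycle-consistency idea on a triplet of axial slices can be sketched as follows. The averaging interpolator is a stand-in for the learned network, and the exact loss form is an assumption; only the cyclic structure follows the abstract:

```python
import numpy as np

def interp_net(a, b):
    # stand-in for the learned slice interpolator; plain averaging here
    return 0.5 * (a + b)

def cycle_consistency_loss(s_prev, s_mid, s_next):
    # second-stage SSL sketch: synthesize the two slices adjacent to the
    # middle axial slice, then re-interpolate between them; the result
    # should reproduce the original middle slice
    left = interp_net(s_prev, s_mid)
    right = interp_net(s_mid, s_next)
    reconstructed = interp_net(left, right)
    return float(np.mean(np.abs(reconstructed - s_mid)))
```

Because the middle slice is real data, the cycle gives a supervision signal without any external HR ground truth.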

9 citations

Proceedings ArticleDOI
Zhiyang Lu, Jun Li, Zheng Li, Hongjian He, Jun Shi
13 Apr 2021
TL;DR: This work explores a new use of the high-pass filtered phase data generated in susceptibility weighted imaging (SWI), and develops an end-to-end Cross-connected Ψ-Net (CΨ-Net) to reconstruct QSM directly from these phase data in SWI without additional pre-processing.
Abstract: Quantitative Susceptibility Mapping (QSM) is a new phase-based technique for quantifying magnetic susceptibility. Existing QSM reconstruction methods generally require complicated pre-processing on high-quality phase data. In this work, we propose to explore a new use of the high-pass filtered phase data generated in susceptibility weighted imaging (SWI), and develop an end-to-end Cross-connected Ψ-Net (CΨ-Net) to reconstruct QSM directly from these phase data in SWI without additional pre-processing. CΨ-Net adds an intermediate branch to the classical U-Net to form a Ψ-like structure. A specially designed dilated interaction block is embedded in each level of this branch to enlarge the receptive fields for capturing more susceptibility information from a wider spatial range of phase images. Moreover, crossed connections are utilized between branches to implement a multi-resolution feature fusion scheme, which helps CΨ-Net capture rich contextual information for accurate reconstruction. The experimental results on a human dataset show that CΨ-Net achieves superior performance in our task over other QSM reconstruction algorithms.
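The receptive-field widening that motivates the dilated interaction blocks can be illustrated with a 1-D dilated convolution. This is a generic sketch of dilation, not the authors' block design:

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    # valid 1-D convolution with a 3-tap kernel; a larger dilation widens
    # the receptive field (span 2*dilation + 1) without adding parameters
    n = len(x) - 2 * dilation
    return np.array([w[0] * x[i] + w[1] * x[i + dilation] + w[2] * x[i + 2 * dilation]
                     for i in range(n)])

x = np.arange(10.0)
y1 = dilated_conv1d(x, (1.0, 1.0, 1.0), 1)  # receptive field of 3 samples
y3 = dilated_conv1d(x, (1.0, 1.0, 1.0), 3)  # receptive field of 7 samples
```

Stacking such blocks at each level of the extra branch lets the network aggregate susceptibility cues from a much wider spatial range of the phase image.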

2 citations


Cited by
Journal ArticleDOI
TL;DR: An approach, SMORE, based on convolutional neural networks (CNNs) that restores image quality by improving resolution and reducing aliasing in MR images is presented, and is shown to be visually and quantitatively superior to previously reported methods.
Abstract: High resolution magnetic resonance (MR) images are desired in many clinical and research applications. Acquiring such images with high signal-to-noise (SNR), however, can require a long scan duration, which is difficult for patient comfort, is more costly, and makes the images susceptible to motion artifacts. A very common practical compromise for both 2D and 3D MR imaging protocols is to acquire volumetric MR images with high in-plane resolution, but lower through-plane resolution. In addition to having poor resolution in one orientation, 2D MRI acquisitions will also have aliasing artifacts, which further degrade the appearance of these images. This paper presents an approach, SMORE, based on convolutional neural networks (CNNs) that restores image quality by improving resolution and reducing aliasing in MR images. This approach is self-supervised, which requires no external training data because the high-resolution and low-resolution data that are present in the image itself are used for training. For 3D MRI, the method consists of only one self-supervised super-resolution (SSR) deep CNN that is trained from the volumetric image data. For 2D MRI, there is a self-supervised anti-aliasing (SAA) deep CNN that precedes the SSR CNN, also trained from the volumetric image data. Both methods were evaluated on a broad collection of MR data, including filtered and downsampled images so that quantitative metrics could be computed and compared, and actual acquired low resolution images for which visual and sharpness measures could be computed and compared. The super-resolution method is shown to be visually and quantitatively superior to previously reported methods.
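The self-supervised pairing idea can be sketched as follows: degrade the high-resolution in-plane direction to mimic the low through-plane resolution, then train on (degraded, original) patch pairs. The block-average degradation model below is an illustrative assumption, not SMORE's exact kernel:

```python
import numpy as np

def degrade_inplane(img, factor=2):
    # simulate lower resolution in the high-resolution in-plane direction:
    # block-average, then nearest-neighbor upsample back to original size
    h, w = img.shape
    small = img[:h - h % factor, :w - w % factor]
    small = small.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

# self-supervised training pair drawn from the image itself (assumed workflow)
rng = np.random.default_rng(0)
hr_patch = rng.random((64, 64))
lr_patch = degrade_inplane(hr_patch)
```

Because both members of the pair come from the same volume, no external training data is needed, which is the core of the self-supervised design.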

77 citations

Journal ArticleDOI
TL;DR: The extensive experiments on various MR images, including proton density (PD), T1, and T2 images, show that the proposed CSN model achieves superior performance over other state-of-the-art SISR methods.
Abstract: High resolution magnetic resonance (MR) imaging is desirable in many clinical applications due to its contribution to more accurate subsequent analyses and early clinical diagnoses. Single image super-resolution (SISR) is an effective and cost-efficient alternative technique to improve the spatial resolution of MR images. In the past few years, SISR methods based on deep learning techniques, especially convolutional neural networks (CNNs), have achieved state-of-the-art performance on natural images. However, the information is gradually weakened and training becomes increasingly difficult as the network deepens. The problem is more serious for medical images because the lack of high quality and effective training samples makes deep models prone to underfitting or overfitting. Moreover, many current models treat the hierarchical features on different channels equally, which prevents them from handling these features in a discriminative and targeted manner. To this end, we present a novel channel splitting network (CSN) to ease the representational burden of deep models. The proposed CSN model divides the hierarchical features into two branches, i.e., a residual branch and a dense branch, with different information transmissions. The residual branch is able to promote feature reuse, while the dense branch is beneficial to the exploration of new features. Besides, we also adopt the merge-and-run mapping to facilitate information integration between the different branches. Extensive experiments on various MR images, including proton density (PD), T1, and T2 images, show that the proposed CSN model achieves superior performance over other state-of-the-art SISR methods.
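The channel split and merge-and-run exchange can be sketched as below. The even split and the specific merge-and-run form (averaged branch inputs added back to each branch) are assumptions based on the general merge-and-run idea, not the paper's exact layers:

```python
import numpy as np

def merge_and_run(res_in, dense_in):
    # merge-and-run sketch: the averaged branch inputs are added back to
    # each branch, exchanging information between the two paths
    shared = 0.5 * (res_in + dense_in)
    return res_in + shared, dense_in + shared

def channel_splitting_sketch(feat):
    # split hierarchical features along the channel axis into a residual
    # branch (feature reuse) and a dense branch (new-feature exploration)
    c = feat.shape[0] // 2
    res, dense = feat[:c], feat[c:2 * c]
    return merge_and_run(res, dense)

res_out, dense_out = channel_splitting_sketch(np.ones((4, 2, 2)))
```

Splitting halves the per-branch channel width, which is what eases the representational burden relative to one monolithic deep path.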

75 citations

Journal ArticleDOI
TL;DR: A convolutional neural network based on complex networks (CNNBCN) with a modified activation function achieves satisfactory results in magnetic resonance imaging classification of brain tumors and enriches the methodology of neural network design.
Abstract: The diagnosis of brain tumor types generally depends on the clinical experience of doctors, and computer-assisted diagnosis improves the accuracy of diagnosing tumor types. Therefore, a convolutional neural network based on complex networks (CNNBCN) with a modified activation function for the magnetic resonance imaging classification of brain tumors is presented. The network structure is not manually designed and optimized, but is generated by randomly generated graph algorithms. These randomly generated graphs are mapped into a computable neural network by a network generator. The accuracy of the modified CNNBCN model for brain tumor classification reaches 95.49%, which is higher than several models presented by other works. In addition, the test loss of brain tumor classification of the modified CNNBCN model is lower than those of the ResNet, DenseNet and MobileNet models in the experiments. The modified CNNBCN model not only achieves satisfactory results in brain tumor image classification, but also enriches the methodology of neural network design.
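The "randomly generated graph" step can be sketched with a simple generator. This Erdős–Rényi-style construction is illustrative, not the paper's exact algorithm; the key property is that restricting edges to i → j with i < j yields an acyclic graph, which a network generator can map onto a feed-forward computation order:

```python
import random

def random_dag_edges(n_nodes, p=0.3, seed=0):
    # each candidate edge i -> j (i < j) is kept with probability p; the
    # ordering constraint guarantees acyclicity, so node index order is a
    # valid computation order for the mapped neural network
    rng = random.Random(seed)
    return [(i, j)
            for i in range(n_nodes)
            for j in range(i + 1, n_nodes)
            if rng.random() < p]

edges = random_dag_edges(10)
```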

59 citations

Journal ArticleDOI
TL;DR: The purpose of this work was to develop and train a conditional generative adversarial network to predict artifact‐free brain images from motion‐corrupted data.
Abstract: Purpose Subject motion in MRI remains an unsolved problem; motion during image acquisition may cause blurring and artifacts that severely degrade image quality. In this work, we approach motion correction as an image-to-image translation problem, which refers to the approach of training a deep neural network to predict an image in 1 domain from an image in another domain. Specifically, the purpose of this work was to develop and train a conditional generative adversarial network to predict artifact-free brain images from motion-corrupted data. Methods An open source MRI data set comprising T2*-weighted, FLASH magnitude, and phase brain images for 53 patients was used to generate complex image data for motion simulation. To simulate rigid motion, rotations and translations were applied to the image data based on randomly generated motion profiles. A conditional generative adversarial network, comprising generator and discriminator networks, was trained using the motion-corrupted and corresponding ground truth (original) images as training pairs. Results The images predicted by the conditional generative adversarial network have improved image quality compared to the motion-corrupted images. The mean absolute error between the motion-corrupted and ground-truth images of the test set was 16.4% of the image mean value, whereas the mean absolute error between the conditional generative adversarial network-predicted and ground-truth images was 10.8%. The network output also demonstrated improved peak SNR and structural similarity index for all test-set images. Conclusion The images predicted by the conditional generative adversarial network have quantitatively and qualitatively improved image quality compared to the motion-corrupted images.
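The motion-simulation and error-reporting steps can be sketched crudely. The circular-shift translation below is a stand-in for applying randomly generated rigid motion profiles to complex image data (rotations, sub-pixel shifts, and k-space mixing are omitted), and the metric mirrors how the abstract reports mean absolute error as a percentage of the image mean:

```python
import numpy as np

def simulate_translation(img, dy, dx):
    # rigid in-plane translation via circular shift (coarse sketch only)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def mae_percent(pred, truth):
    # mean absolute error expressed as a percentage of the image mean value
    return 100.0 * np.mean(np.abs(pred - truth)) / np.mean(truth)
```

Training pairs are then (corrupted, original) images, with the generator learning the corrupted-to-clean translation and the discriminator judging realism.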

49 citations