
Showing papers by "Habib Zaidi published in 2021"


Journal ArticleDOI
TL;DR: In this article, the authors discuss the foremost applications of artificial intelligence (AI), particularly deep learning (DL) algorithms, in single-photon emission computed tomography (SPECT) and positron emission tomography (PET) imaging.

82 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed prognostic models for survival (alive or deceased status) prediction of COVID-19 patients using clinical data (demographics and history, laboratory tests, visual scoring by radiologists) and lung/lesion radiomic features extracted from chest CT images, either separately or in combination.

67 citations


Journal ArticleDOI
TL;DR: The results demonstrated that the deep learning algorithm is capable of predicting standard full-dose CT images with acceptable quality for the clinical diagnosis of COVID-19 positive patients with substantial radiation dose reduction.
Abstract: The current study aimed to design an ultra-low-dose CT examination protocol using a deep learning approach suitable for clinical diagnosis of COVID-19 patients. In this study, 800, 170, and 171 pairs of ultra-low-dose and full-dose CT images were used as input/output pairs for the training, test, and external validation sets, respectively, to implement the full-dose prediction technique. A residual convolutional neural network was applied to generate full-dose from ultra-low-dose CT images. The quality of the predicted CT images was assessed using root mean square error (RMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). Scores ranging from 1 to 5 were assigned to reflect subjective assessment of image quality and related COVID-19 features, including ground glass opacities (GGO), crazy paving (CP), consolidation (CS), nodular infiltrates (NI), bronchovascular thickening (BVT), and pleural effusion (PE). The radiation dose in terms of CT dose index (CTDIvol) was reduced by up to 89%. The RMSE decreased from 0.16 ± 0.05 to 0.09 ± 0.02 and from 0.16 ± 0.06 to 0.08 ± 0.02 for the predicted compared with ultra-low-dose CT images in the test and external validation sets, respectively. The overall scoring assigned by radiologists showed an average score of 4.72 ± 0.57 (out of 5) for reference full-dose CT images, while ultra-low-dose CT images were rated 2.78 ± 0.9. The predicted CT images using the deep learning algorithm achieved a score of 4.42 ± 0.8. The results demonstrated that the deep learning algorithm is capable of predicting standard full-dose CT images with acceptable quality for the clinical diagnosis of COVID-19 positive patients with substantial radiation dose reduction. • Ultra-low-dose CT imaging of COVID-19 patients would result in the loss of critical information about lesion types, which could potentially affect clinical diagnosis. • Deep learning–based prediction of full-dose from ultra-low-dose CT images for the diagnosis of COVID-19 could reduce the radiation dose by up to 89%. • Deep learning algorithms failed to recover the correct lesion structure/density for a number of patients considered outliers, and as such, further research and development is warranted to address these limitations.
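As an illustration of the image-quality metrics reported above, the following is a minimal NumPy sketch (not the authors' code) computing RMSE, PSNR, and a simplified global SSIM on random stand-in volumes; published results typically use a sliding-window SSIM such as skimage.metrics.structural_similarity.

```python
import numpy as np

def rmse(ref, test):
    """Root mean square error between two equally sized images."""
    return np.sqrt(np.mean((ref - test) ** 2))

def psnr(ref, test, data_range=None):
    """Peak signal-to-noise ratio in dB."""
    if data_range is None:
        data_range = ref.max() - ref.min()
    return 20 * np.log10(data_range / rmse(ref, test))

def ssim_global(ref, test, data_range=None):
    """Single-window (global) SSIM; a simplification of the usual
    sliding-window implementation, kept short for illustration."""
    if data_range is None:
        data_range = ref.max() - ref.min()
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_r, mu_t = ref.mean(), test.mean()
    cov = np.mean((ref - mu_r) * (test - mu_t))
    return ((2 * mu_r * mu_t + c1) * (2 * cov + c2)) / (
        (mu_r ** 2 + mu_t ** 2 + c1) * (ref.var() + test.var() + c2))

# Random stand-in volumes in place of full-dose and predicted CT images
full_dose = np.random.rand(64, 64, 64)
predicted = full_dose + 0.05 * np.random.randn(64, 64, 64)
print(rmse(full_dose, predicted), psnr(full_dose, predicted),
      ssim_global(full_dose, predicted))
```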

61 citations


Journal ArticleDOI
TL;DR: A robust radiomics-based classifier capable of accurately predicting overall survival was developed for the prognosis of clear cell renal cell carcinoma (ccRCC) patients; it may help identify high-risk patients who require additional treatment and follow-up regimens.

53 citations


Journal ArticleDOI
TL;DR: In this article, a modified cycle-consistent generative adversarial network (CycleGAN) and residual neural network (ResNET) were implemented to predict full-dose (FD) PET images.
Abstract: The current tendency is to moderate the injected activity and/or reduce the acquisition time in PET examinations to minimize potential radiation hazards and increase patient comfort. This work aims to assess the performance of regular full-dose (FD) synthesis from fast/low-dose (LD) whole-body (WB) PET images using deep learning techniques. Instead of using synthetic LD scans, two separate clinical WB 18F-Fluorodeoxyglucose (18F-FDG) PET/CT studies of 100 patients were acquired: one regular FD (~ 27 min) and one fast or LD (~ 3 min) consisting of 1/8th of the standard acquisition time. Modified cycle-consistent generative adversarial network (CycleGAN) and residual neural network (ResNET) models, denoted CGAN and RNET, respectively, were implemented to predict FD PET images. The quality of the predicted PET images was assessed by two nuclear medicine physicians. Moreover, the diagnostic quality of the predicted PET images was evaluated using a pass/fail scheme for the lesion detectability task. Quantitative analysis using established metrics, including standardized uptake value (SUV) bias, was performed for the liver, left/right lung, brain, and 400 malignant lesions from the test and evaluation datasets. CGAN scored 4.92 and 3.88 (out of 5) (adequate to good) for brain and neck + trunk, respectively. The average SUV bias calculated over normal tissues was 3.39 ± 0.71% and −3.83 ± 1.25% for CGAN and RNET, respectively. Bland-Altman analysis reported the lowest SUV bias (0.01%) and a 95% confidence interval of −0.36, +0.47 for CGAN compared with the reference FD images for malignant lesions. CycleGAN is able to synthesize clinical FD WB PET images from LD images with 1/8th of the standard injected activity or acquisition time. The predicted FD images presented comparable performance in terms of lesion detectability, qualitative scores, and quantification bias and variance.
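A brief sketch of the kind of quantitative evaluation described above (relative SUV bias and Bland-Altman 95% limits of agreement); the SUV values are hypothetical stand-ins, and the computation is a generic illustration rather than the authors' analysis pipeline.

```python
import numpy as np

def suv_bias_percent(suv_ref, suv_pred):
    """Relative SUV bias (%) of predicted vs. reference full-dose values."""
    return 100.0 * (suv_pred - suv_ref) / suv_ref

# Hypothetical lesion SUVmax values from reference and synthesized images
suv_ref  = np.array([5.2, 8.1, 3.9, 12.4, 6.7])
suv_pred = np.array([5.1, 8.3, 3.8, 12.6, 6.5])

bias = suv_bias_percent(suv_ref, suv_pred)
mean_bias = bias.mean()
# Bland-Altman 95% limits of agreement: mean bias +/- 1.96 SD
sd = bias.std(ddof=1)
loa = (mean_bias - 1.96 * sd, mean_bias + 1.96 * sd)
print(f"mean bias {mean_bias:.2f}%, 95% LoA {loa[0]:.2f}% to {loa[1]:.2f}%")
```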

53 citations


Journal ArticleDOI
TL;DR: The proposed DNN-based whole-body (WB) internal dosimetry exhibited comparable performance to the direct Monte Carlo approach while overcoming the limitations of conventional dosimetry techniques in nuclear medicine.
Abstract: In the era of precision medicine, patient-specific dose calculation using Monte Carlo (MC) simulations is deemed the gold standard technique for risk-benefit analysis of radiation hazards and correlation with patient outcome. Hence, we propose a novel method to perform whole-body personalized organ-level dosimetry taking into account the heterogeneity of activity distribution, non-uniformity of the surrounding medium, and patient-specific anatomy using deep learning algorithms. We extended the voxel-scale MIRD approach from a single S-value kernel to specific S-value kernels corresponding to patient-specific anatomy to construct 3D dose maps using hybrid emission/transmission image sets. In this context, we employed a Deep Neural Network (DNN) to predict the distribution of deposited energy, representing specific S-values, from a single source in the center of a 3D kernel composed of human body geometry. The training dataset consists of density maps obtained from CT images and the reference voxelwise S-values generated using Monte Carlo simulations. Accordingly, specific S-value kernels are inferred from the trained model and whole-body dose maps are constructed in a manner analogous to the voxel-based MIRD formalism, i.e., convolving specific voxel S-values with the activity map. The dose map predicted using the DNN was compared with the reference generated using MC simulations and two MIRD-based methods, namely single and multiple S-values (SSV and MSV), as well as the OLINDA/EXM software package. The predicted specific voxel S-value kernels exhibited good agreement with the MC-based kernels serving as reference, with a mean relative absolute error (MRAE) of 4.5 ± 1.8%. Bland-Altman analysis showed the lowest dose bias (2.6%) and smallest variance (CI: −6.6, +1.3) for the DNN. The MRAE of estimated absorbed dose between DNN, MSV, and SSV with respect to the MC simulation reference was 2.6%, 3%, and 49%, respectively. In organ-level dosimetry, the MRAE between the proposed method and MSV, SSV, and OLINDA/EXM was 5.1%, 21.8%, and 23.5%, respectively. The proposed DNN-based WB internal dosimetry exhibited comparable performance to the direct Monte Carlo approach while overcoming the limitations of conventional dosimetry techniques in nuclear medicine.
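The voxel-scale MIRD step described above, constructing a dose map by convolving an S-value kernel with the activity map, can be sketched as follows. The activity map and the radially decaying kernel are toy placeholders; in the paper the kernels are Monte Carlo generated (SSV/MSV) or DNN-predicted patient-specific S-values, in which case a single shift-invariant convolution no longer suffices.

```python
import numpy as np
from scipy.signal import fftconvolve

# Hypothetical cumulated-activity map (MBq*s per voxel); real maps come
# from the emission data.
activity = np.zeros((64, 64, 64))
activity[30:34, 30:34, 30:34] = 1.0e3      # a small hot region

# Toy radially decaying S-value kernel (mGy per MBq*s); in the paper the
# kernels are derived from MC simulations or predicted by the DNN.
zz, yy, xx = np.mgrid[-4:5, -4:5, -4:5]
kernel = 1.0e-4 / (1.0 + zz**2 + yy**2 + xx**2)

# Voxel-scale MIRD with a single S-value kernel: dose = activity (*) S
dose_map = fftconvolve(activity, kernel, mode="same")
print(dose_map.shape, dose_map.max())
```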

42 citations


Journal ArticleDOI
TL;DR: In this article, a deep convolutional neural network-based approach for fast and reproducible auto-contouring of organs at risk (OARs) in high-dose-rate brachytherapy (HDR-BT) was proposed.

34 citations


Journal ArticleDOI
TL;DR: In this paper, the feasibility of treatment response prediction using MRI-based pre-, post-, and delta-radiomic features was evaluated for locally advanced rectal cancer (LARC) patients treated by neoadjuvant chemoradiation therapy (nCRT).
Abstract: Objectives We evaluate the feasibility of treatment response prediction using MRI-based pre-, post-, and delta-radiomic features for locally advanced rectal cancer (LARC) patients treated by neoadjuvant chemoradiation therapy (nCRT). Materials and methods This retrospective study included 53 LARC patients divided into a training set (Center#1, n = 36) and an external validation set (Center#2, n = 17). T2-weighted (T2W) MRI was acquired for all patients, 2 weeks before and 4 weeks after nCRT. Ninety-six radiomic features, including intensity, morphological, and second- and high-order texture features, were extracted from segmented 3D volumes of T2W MRI. All features were harmonized using the ComBat algorithm. The Max-Relevance-Min-Redundancy (MRMR) algorithm was used for feature selection, and k-nearest neighbors (KNN), Naive Bayes (NB), random forest (RF), and eXtreme Gradient Boosting (XGB) algorithms were used as classifiers. The evaluation was performed using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy. Results In univariate analysis, the highest AUCs among pre-, post-, and delta-radiomic features were 0.78, 0.70, and 0.71, achieved by the GLCM_IMC1, shape (surface area and volume), and GLSZM_GLNU features, respectively. In multivariate analysis, RF and KNN achieved the highest AUC (0.85 ± 0.04 and 0.81 ± 0.14, respectively) among pre- and post-treatment features. The highest AUC was achieved by the delta-radiomic-based RF model (0.96 ± 0.01), followed by NB (0.96 ± 0.04). Overall, the delta-radiomics model outperformed both pre- and post-treatment features. Conclusion Multivariate analysis of delta-radiomic T2W MRI features using machine learning algorithms could potentially be used for response prediction in LARC patients undergoing nCRT. We also observed that delta-radiomic features combined with RF classifiers can serve as powerful biomarkers for response prediction in LARC.
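A minimal scikit-learn sketch of the multivariate modeling step described above (feature selection followed by an RF classifier evaluated by AUC), using synthetic data; mutual-information ranking stands in for MRMR, and the ComBat harmonization step is not shown.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(53, 96))       # 53 patients x 96 radiomic features (synthetic)
y = rng.integers(0, 2, size=53)     # responder vs. non-responder labels (synthetic)

# Mutual-information ranking stands in for MRMR here; ComBat harmonization
# would be applied to X beforehand.
model = make_pipeline(
    SelectKBest(mutual_info_classif, k=10),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"AUC {auc.mean():.2f} +/- {auc.std():.2f}")
```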

30 citations


Journal ArticleDOI
TL;DR: The MR-NLM approach exhibited promising performance in terms of noise suppression and signal preservation for PET images, translating into a higher SNR than the conventional NLM approach; the clinical studies confirmed the superior performance of the method.
Abstract: Non-local mean (NLM) filtering has been broadly used for denoising of natural and medical images. The NLM filter relies on the redundant information, in the form of repeated patterns/textures, in the target image to discriminate the underlying structures/signals from noise. In PET (or SPECT) imaging, the raw data could be reconstructed using different parameters and settings, leading to different representations of the target image, which contain highly similar structures/signals to the target image contaminated with different noise levels (or properties). In this light, multiple-reconstruction NLM filtering (MR-NLM) is proposed, which relies on the redundant information provided by the different reconstructions of the same PET data (referred to as auxiliary images) to conduct the denoising process. Implementation of the MR-NLM approach involved the use of twelve auxiliary PET images (in addition to the target image) reconstructed using the same iterative reconstruction algorithm with different numbers of iterations and subsets. For each target voxel, the patches of voxels at the same location are extracted from the auxiliary PET images based on which the NLM denoising process is conducted. Through this, the exhaustive search scheme performed in the conventional NLM method to find similar patches of voxels is bypassed. The performance evaluation of the MR-NLM filter was carried out against the conventional NLM, Gaussian and bilateral post-reconstruction approaches using the experimental Jaszczak phantom and 25 whole-body PET/CT clinical studies. The signal-to-noise ratio (SNR) in the experimental Jaszczak phantom study improved from 25.1 when using Gaussian filtering to 27.9 and 28.8 when the conventional NLM and MR-NLM methods were applied (p value < 0.05), respectively. Conversely, the Gaussian filter led to quantification bias of 35.4%, while NLM and MR-NLM approaches resulted in a bias of 32.0% and 31.1% (p value < 0.05), respectively. The clinical studies further confirm the superior performance of the MR-NLM method, wherein the quantitative bias measured in malignant lesions (hot spots) decreased from − 12.3 ± 2.3% when using the Gaussian filter to − 3.5 ± 1.3% and − 2.2 ± 1.2% when using the NLM and MR-NLM approaches (p value < 0.05), respectively. The MR-NLM approach exhibited promising performance in terms of noise suppression and signal preservation for PET images, thus translating into higher SNR compared to the conventional NLM approach. Despite the promising performance of the MR-NLM approach, the additional computational burden owing to the requirement of multiple PET reconstruction still needs to be addressed.
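A minimal sketch of the core MR-NLM idea for a single voxel, under the assumption (consistent with the description above) that patches at the same location are taken from each auxiliary reconstruction and weighted by their similarity to the target patch; the patch size and smoothing parameter h are illustrative choices, not the paper's settings.

```python
import numpy as np

def mr_nlm_voxel(target, auxiliaries, z, y, x, half=1, h=0.1):
    """Denoise one voxel by weighting co-located patches from auxiliary
    reconstructions by their similarity to the target patch."""
    sl = np.s_[z-half:z+half+1, y-half:y+half+1, x-half:x+half+1]
    p_t = target[sl]
    num = den = 0.0
    for aux in auxiliaries:
        p_a = aux[sl]
        w = np.exp(-np.mean((p_t - p_a) ** 2) / h**2)
        num += w * aux[z, y, x]
        den += w
    # include the target voxel itself with weight 1
    return (num + target[z, y, x]) / (den + 1.0)

# Toy example: one target and three auxiliary reconstructions
rng = np.random.default_rng(1)
vol = rng.random((16, 16, 16))
auxes = [vol + 0.05 * rng.standard_normal(vol.shape) for _ in range(3)]
print(mr_nlm_voxel(vol, auxes, 8, 8, 8))
```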

26 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the potential of deep learning-based metal artefact reduction (MAR) in quantitative PET/CT imaging and proposed an image-domain approach (DLI-MAR) to improve CT-based attenuation and scatter correction.
Abstract: The susceptibility of CT imaging to metallic objects gives rise to strong streak artefacts and skewed information about the attenuation medium around the metallic implants. This metal-induced artefact in CT images leads to inaccurate attenuation correction in PET/CT imaging. This study investigates the potential of deep learning–based metal artefact reduction (MAR) in quantitative PET/CT imaging. Deep learning–based metal artefact reduction approaches were implemented in the image (DLI-MAR) and projection (DLP-MAR) domains. The proposed algorithms were quantitatively compared to the normalized MAR (NMAR) method using simulated and clinical studies. Eighty metal-free CT images were employed for simulation of metal artefacts as well as training and evaluation of the aforementioned MAR approaches. Thirty 18F-FDG PET/CT images affected by the presence of metallic implants were retrospectively employed for clinical assessment of the MAR techniques. The evaluation of MAR techniques on the simulation dataset demonstrated the superior performance of the DLI-MAR approach (structural similarity (SSIM) = 0.95 ± 0.2 compared to 0.94 ± 0.2 and 0.93 ± 0.3 obtained using DLP-MAR and NMAR, respectively) in minimizing metal artefacts in CT images. The presence of metallic artefacts in CT images or PET attenuation correction maps led to quantitative bias, image artefacts, and under- and overestimation of scatter correction of PET images. The DLI-MAR technique led to a quantitative PET bias of 1.3 ± 3% compared to 10.5 ± 6% without MAR and 3.2 ± 0.5% achieved by NMAR. The DLI-MAR technique was able to reduce the adverse effects of metal artefacts on PET images through the generation of accurate attenuation maps from corrupted CT images. • The presence of metallic objects, such as dental implants, gives rise to severe photon starvation, beam hardening, and scattering, thus leading to adverse artefacts in reconstructed CT images. • The aim of this work is to develop and evaluate a deep learning–based MAR to improve CT-based attenuation and scatter correction in PET/CT imaging. • Deep learning–based MAR in the image (DLI-MAR) domain outperformed its counterpart implemented in the projection (DLP-MAR) domain. The DLI-MAR approach minimized the adverse impact of metal artefacts on whole-body PET images through generating accurate attenuation maps from corrupted CT images.

24 citations


Journal ArticleDOI
TL;DR: In this paper, three state-of-the-art deep learning algorithms combined with eight different loss functions for PET image segmentation were evaluated on an external validation set of head and neck cancer (HNC) patients.
Abstract: Purpose The availability of automated, accurate, and robust gross tumor volume (GTV) segmentation algorithms is critical for the management of head and neck cancer (HNC) patients. In this work, we evaluated 3 state-of-the-art deep learning algorithms combined with 8 different loss functions for PET image segmentation using a comprehensive training set and evaluated their performance on an external validation set of HNC patients. Patients and methods 18F-FDG PET/CT images of 470 patients presenting with HNC, with manually defined GTVs serving as the standard of reference, were used for training (340 patients), evaluation (30 patients), and testing (100 patients from different centers) of these algorithms. PET image intensity was converted to SUVs and normalized in the range (0-1) using the SUVmax of the whole data set. PET images were cropped to 12 × 12 × 12 cm3 subvolumes using isotropic voxel spacing of 3 × 3 × 3 mm3 containing the whole tumor and neighboring background, including lymph nodes. We used different approaches for data augmentation, including rotation (-15 degrees, +15 degrees), scaling (-20%, +20%), random flipping (3 axes), and elastic deformation (sigma = 1 and proportion to deform = 0.7) to increase the number of training sets. Three state-of-the-art networks, including Dense-VNet, NN-UNet, and Res-Net, with 8 different loss functions, including Dice, generalized Wasserstein Dice loss, Dice plus XEnt loss, generalized Dice loss, cross-entropy, sensitivity-specificity, and Tversky, were used. Overall, 28 different networks were built. Standard image segmentation metrics, including Dice similarity, image-derived PET metrics, and first-order and shape radiomic features, were used for performance assessment of these algorithms. Results The best results in terms of Dice coefficient (mean ± SD) were achieved by cross-entropy for Res-Net (0.86 ± 0.05; 95% confidence interval [CI], 0.85-0.87) and Dense-VNet (0.85 ± 0.058; 95% CI, 0.84-0.86), and by Dice plus XEnt for NN-UNet (0.87 ± 0.05; 95% CI, 0.86-0.88). The difference between the 3 networks was not statistically significant (P > 0.05). The percent relative error (RE%) of SUVmax quantification was less than 5% in networks with a Dice coefficient of more than 0.84, whereas the lowest RE% (0.41%) was achieved by Res-Net with cross-entropy loss. For maximum 3-dimensional diameter and sphericity shape features, all networks achieved an RE ≤ 5% and ≤ 10%, respectively, reflecting small variability. Conclusions Deep learning algorithms exhibited promising performance for automated GTV delineation on HNC PET images. Different loss functions performed competitively across networks, and cross-entropy for Res-Net and Dense-VNet, and Dice plus XEnt for NN-UNet, emerged as reliable configurations for GTV delineation. Caution should be exercised for clinical deployment owing to the occurrence of outliers in deep learning-based algorithms.
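As an illustration of one of the loss functions compared above, here is a minimal PyTorch sketch of a binary soft Dice plus cross-entropy ("Dice plus XEnt") loss; this is a generic formulation, not the exact implementation used by the authors.

```python
import torch
import torch.nn.functional as F

def dice_plus_xent_loss(logits, target, eps=1e-6):
    """Soft Dice loss combined with cross-entropy (binary case;
    the multi-class extension is analogous)."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    dice = (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    xent = F.binary_cross_entropy_with_logits(logits, target)
    return (1 - dice) + xent

# Toy example on a 12x12x12 cm^3 crop at 3 mm isotropic voxels -> 40^3 grid
logits = torch.randn(1, 1, 40, 40, 40)
target = (torch.rand(1, 1, 40, 40, 40) > 0.9).float()
print(dice_plus_xent_loss(logits, target).item())
```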

Journal ArticleDOI
TL;DR: In this article, the authors focus on AI-based COVID-19 studies as applied to chest X-ray (CXR) and computed tomography (CT) imaging modalities, and discuss the associated challenges.

Journal ArticleDOI
TL;DR: In this article, the applicability of radiomic features alone and in combination with clinical information for the prediction of renal cell carcinoma (RCC) patients' overall survival after partial or radical nephrectomy was investigated.
Abstract: The aim of this work is to investigate the applicability of radiomic features alone and in combination with clinical information for the prediction of renal cell carcinoma (RCC) patients’ overall survival after partial or radical nephrectomy. Clinical studies of 210 RCC patients from The Cancer Imaging Archive (TCIA) who underwent either partial or radical nephrectomy were included in this study. Regions of interest (ROIs) were manually defined on CT images. A total of 225 radiomic features were extracted and analyzed along with the 59 clinical features. An elastic net penalized Cox regression was used for feature selection. Accelerated failure time (AFT) with the shared frailty model was used to determine the effects of the selected features on the overall survival time. Eleven radiomic and twelve clinical features were selected based on their non-zero coefficients. Tumor grade, tumor malignancy, and pathology t-stage were the most significant predictors of overall survival (OS) among the clinical features (p < 0.002, < 0.02, and < 0.018, respectively). The most significant predictors of OS among the selected radiomic features were flatness, area density, and median (p < 0.02, < 0.02, and < 0.05, respectively). Along with important clinical features, such as tumor heterogeneity and tumor grade, imaging biomarkers such as tumor flatness, area density, and median are significantly correlated with OS of RCC patients.
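One possible way to set up the elastic net penalized Cox regression used above for feature selection, sketched here with the lifelines package on synthetic data; the column names and values are hypothetical, and the subsequent accelerated failure time (AFT) shared-frailty stage is not shown.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 210
df = pd.DataFrame({
    "flatness":     rng.normal(size=n),          # synthetic radiomic features
    "area_density": rng.normal(size=n),
    "median":       rng.normal(size=n),
    "tumor_grade":  rng.integers(1, 5, size=n),  # synthetic clinical feature
    "time":         rng.exponential(36, size=n), # follow-up time (months)
    "event":        rng.integers(0, 2, size=n),  # 1 = deceased
})

# Elastic-net penalized Cox model: penalizer sets the overall strength,
# l1_ratio mixes lasso (1.0) and ridge (0.0) penalties; features with
# non-zero coefficients would be retained, as described in the abstract.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "p"]])
```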

Journal ArticleDOI
TL;DR: In this article, multivariable multinomial logistic regression with 1000 bootstrap samples was employed on the selected features to classify the four main histological subtypes of non-small cell lung carcinoma (NSCLC).

Journal ArticleDOI
TL;DR: The feasibility of direct attenuation and scatter correction of whole-body 68Ga-PSMA PET images in the image domain using deep learning with clinically tolerable errors is demonstrated; the technique has the potential to perform attenuation correction on stand-alone PET or PET/MRI systems.
Abstract: Objective This study evaluates the feasibility of direct scatter and attenuation correction of whole-body Ga-68-PSMA PET images in the image domain using deep learning. Methods Whole-body Ga-68-PSMA PET images of 399 subjects were used to train a residual deep learning model, taking PET non-attenuation-corrected images (PET-nonAC) as input and CT-based attenuation-corrected PET images (PET-CTAC) as target (reference). Forty-six whole-body Ga-68-PSMA PET images were used as an independent validation dataset. For validation, synthetic deep learning-based attenuation-corrected PET images were assessed considering the corresponding PET-CTAC images as reference. The evaluation metrics included the mean absolute error (MAE) of the SUV, peak signal-to-noise ratio, and structural similarity index (SSIM) in the whole body, as well as in different regions of the body, namely, head and neck, chest, and abdomen and pelvis. Results The deep learning-guided direct attenuation and scatter correction produced images of comparable visual quality to PET-CTAC images. It achieved an MAE, relative error (RE%), SSIM, and peak signal-to-noise ratio of 0.91 +/- 0.29 (SUV), -2.46% +/- 10.10%, 0.973 +/- 0.034, and 48.171 +/- 2.964, respectively, within whole-body images of the independent external validation dataset. The largest RE% was observed in the head and neck region (-5.62% +/- 11.73%), although this region exhibited the highest value of SSIM metric (0.982 +/- 0.024). The MAE (SUV) and RE% within the different regions of the body were less than 2.0% and 6%, respectively, indicating acceptable performance of the deep learning model. Conclusions This work demonstrated the feasibility of direct attenuation and scatter correction of whole-body Ga-68-PSMA PET images in the image domain using deep learning with clinically tolerable errors. The technique has the potential of performing attenuation correction on stand-alone PET or PET/MRI systems.
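A toy PyTorch sketch of a residual mapping from non-attenuation-corrected to attenuation-corrected PET, in the spirit of the residual model described above; the architecture (channel count, block count) is illustrative and is not the authors' network.

```python
import torch
import torch.nn as nn

class ResidualAC(nn.Module):
    """Toy residual CNN: learns attenuation/scatter correction as a
    residual added to the non-attenuation-corrected PET input."""
    def __init__(self, ch=32, blocks=4):
        super().__init__()
        self.head = nn.Conv3d(1, ch, 3, padding=1)
        self.body = nn.Sequential(*[
            nn.Sequential(nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
                          nn.Conv3d(ch, ch, 3, padding=1))
            for _ in range(blocks)])
        self.tail = nn.Conv3d(ch, 1, 3, padding=1)

    def forward(self, x):
        h = self.head(x)
        for block in self.body:
            h = h + block(h)      # residual connections inside the body
        return x + self.tail(h)   # global residual: output = input + correction

net = ResidualAC()
pet_nonac = torch.randn(1, 1, 32, 32, 32)   # stand-in PET-nonAC patch
print(net(pet_nonac).shape)                 # torch.Size([1, 1, 32, 32, 32])
```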

Journal ArticleDOI
TL;DR: In this article, a guideline curriculum related to Artificial Intelligence (AI) for the education and training of European Medical Physicists (MPs) is presented. The learning outcomes of the training are presented as knowledge, skills, and competences (the KSC approach).

Journal ArticleDOI
TL;DR: In this paper, the authors developed multi-modality radiomic models by integrating information extracted from 18F-FDG PET and CT images using feature- and image-level fusions, toward improved prognosis for non-small cell lung carcinoma (NSCLC) patients.
Abstract: We developed multi-modality radiomic models by integrating information extracted from 18F-FDG PET and CT images using feature- and image-level fusion, toward improved prognosis for non-small cell lung carcinoma (NSCLC) patients. Two independent cohorts of NSCLC patients from two institutions (87 and 95 patients) were cycled as training and testing datasets. Fusion approaches were applied at two levels, namely the feature and image levels. For feature-level fusion, radiomic features were extracted individually from CT and PET images and concatenated. Alternatively, radiomic features extracted separately from CT and PET images were averaged. For image-level fusion, wavelet fusion was utilized and tuned with two parameters, namely the CT weight and the wavelet band-pass filtering ratio. Clinical and combined clinical + radiomic models were developed. Gray-level discretization was performed at 3 different levels (16, 32, and 64), and 225 radiomic features were extracted. Overall survival (OS) was considered as the endpoint. For feature reduction, correlated (redundant) features were excluded using Spearman's correlation, and the best combination of the top ten features with the highest concordance indices (via univariate Cox models) was selected in each model for the subsequent multivariate Cox model. Moreover, the median prognostic score obtained from the training cohort was applied unchanged in the testing cohort as a threshold to classify patients into low- versus high-risk groups, and the log-rank test was applied to assess differences between the Kaplan-Meier curves. Overall, while models based on the feature-level fusion strategy showed limited superiority over single-modality models, the image-level fusion strategy significantly outperformed both single-modality and feature-level fusion strategies. As such, the clinical model (C-index = 0.656) outperformed all models from the single-modality and feature-level strategies, but was outperformed by certain models from the image-level fusion strategy. Our findings indicated that image-level fusion multi-modality radiomics models outperformed single-modality, feature-level fusion, and clinical models for OS prediction of NSCLC patients.
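A minimal PyWavelets sketch of image-level wavelet fusion driven only by a CT-weight parameter; the paper additionally tunes a wavelet band-pass filtering ratio, which is omitted here, so this is a simplified stand-in rather than the published fusion scheme.

```python
import numpy as np
import pywt

def wavelet_fuse(ct, pet, ct_weight=0.6, wavelet="db2", level=2):
    """Fuse two co-registered 2D images by weighted averaging of their
    wavelet coefficients; ct_weight mixes CT vs. PET contributions."""
    c_ct = pywt.wavedec2(ct, wavelet, level=level)
    c_pet = pywt.wavedec2(pet, wavelet, level=level)
    fused = [ct_weight * c_ct[0] + (1 - ct_weight) * c_pet[0]]
    for (hc, vc, dc), (hp, vp, dp) in zip(c_ct[1:], c_pet[1:]):
        fused.append((ct_weight * hc + (1 - ct_weight) * hp,
                      ct_weight * vc + (1 - ct_weight) * vp,
                      ct_weight * dc + (1 - ct_weight) * dp))
    return pywt.waverec2(fused, wavelet)

ct = np.random.rand(128, 128)    # stand-in CT slice
pet = np.random.rand(128, 128)   # stand-in PET slice
print(wavelet_fuse(ct, pet).shape)
```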

Journal ArticleDOI
TL;DR: This review reflects the tremendous interest in quantitative molecular imaging using machine learning and deep learning (ML/DL) techniques during the past decade, ranging from the basic principles of ML/DL techniques to the various steps required for obtaining quantitatively accurate PET data, including algorithms used to denoise or correct for physical degrading factors as well as to quantify tracer uptake and metabolic tumor volume for treatment monitoring or radiation therapy treatment planning and response prediction.
Abstract: The widespread availability of high-performance computing and the popularity of artificial intelligence (AI) with machine learning and deep learning (ML/DL) algorithms at the helm have stimulated the development of many applications involving the use of AI-based techniques in molecular imaging research. Applications reported in the literature encompass various areas, including innovative design concepts in positron emission tomography (PET) instrumentation, quantitative image reconstruction and analysis techniques, computer-aided detection and diagnosis, as well as modeling and prediction of outcomes. This review reflects the tremendous interest in quantitative molecular imaging using ML/DL techniques during the past decade, ranging from the basic principles of ML/DL techniques to the various steps required for obtaining quantitatively accurate PET data, including algorithms used to denoise or correct for physical degrading factors as well as to quantify tracer uptake and metabolic tumor volume for treatment monitoring or radiation therapy treatment planning and response prediction. This review also addresses future opportunities and current challenges facing the adoption of ML/DL approaches and their role in multimodality imaging.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a tensor decomposition and anisotropic total variation regularization model (TDATV) for sparse-view helical CT (SHCT) reconstruction.
Abstract: Sparse-view scanning has great potential for realizing ultra-low-dose computed tomography (CT) examination. However, noise and artifacts in reconstructed images are major obstacles that must be handled to maintain diagnostic accuracy. Existing sparse-view CT reconstruction algorithms were usually designed for circular imaging geometry, whereas helical imaging geometry is commonly adopted in the clinic. In this paper, we show that sparse-view helical CT (SHCT) images contain not only noise and artifacts but also severe anatomical distortions. These troubles reduce the applicability of existing sparse-view CT reconstruction algorithms. To deal with this problem, we analyzed the three-dimensional (3D) anatomical structure sparsity in SHCT images. Based on these analyses, we proposed a tensor decomposition and anisotropic total variation regularization model (TDATV) for SHCT reconstruction. Specifically, the tensor decomposition works on nonlocal cube groups to exploit the anatomical structure redundancy; the anisotropic total variation works on the whole volume to exploit the piecewise-smooth structure. Finally, an alternating direction method of multipliers is developed to solve the TDATV model. To our knowledge, this paper presents the first work investigating the reconstruction of sparse-view helical CT. The TDATV model was validated through digital phantom, physical phantom, and clinical patient studies. The results reveal that SHCT could serve as a potential solution for reducing helical CT radiation dose to ultra-low levels by using the proposed TDATV model.
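For illustration, an anisotropic total variation term of the kind used in the TDATV model can be written as a weighted sum of absolute forward differences along each axis; the per-axis weights below are arbitrary assumptions, and the tensor decomposition and ADMM solver are not shown.

```python
import numpy as np

def anisotropic_tv_3d(vol, weights=(1.0, 1.0, 0.5)):
    """Anisotropic TV: weighted sum of absolute forward differences along
    each axis (z, y, x). The weights are illustrative, not the paper's."""
    return sum(w * np.abs(np.diff(vol, axis=ax)).sum()
               for ax, w in enumerate(weights))

vol = np.random.rand(32, 64, 64)   # a toy reconstruction volume (z, y, x)
print(anisotropic_tv_3d(vol))
```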

Journal ArticleDOI
TL;DR: In this article, a generative adversarial network was implemented to predict non-gated standard-dose images in the projection space at different dose reduction levels, and the results showed that the noise was effectively suppressed by the proposed network.
Abstract: This work set out to investigate the feasibility of dose reduction in SPECT myocardial perfusion imaging (MPI) without sacrificing diagnostic accuracy. A deep learning approach was proposed to synthesize full-dose images from the corresponding low-dose images at different dose reduction levels in the projection space. Clinical SPECT-MPI images of 345 patients acquired on a dedicated cardiac SPECT camera in list-mode format were retrospectively employed to predict standard-dose from low-dose images at half-, quarter-, and one-eighth-dose levels. To simulate realistic low-dose projections, 50%, 25%, and 12.5% of the events were randomly selected from the list-mode data by applying binomial subsampling. A generative adversarial network was implemented to predict non-gated standard-dose SPECT images in the projection space at the different dose reduction levels. Well-established metrics, including peak signal-to-noise ratio (PSNR), root mean square error (RMSE), and the structural similarity index metric (SSIM), in addition to Pearson correlation coefficient analysis and clinical parameters derived from Cedars-Sinai software, were used to quantitatively assess the predicted standard-dose images. For clinical evaluation, the quality of the predicted standard-dose images was evaluated by a nuclear medicine specialist using a seven-point (−3 to +3) grading scheme. The highest PSNR (42.49 ± 2.37) and SSIM (0.99 ± 0.01) and the lowest RMSE (1.99 ± 0.63) were achieved at the half-dose level. Pearson correlation coefficients were 0.997 ± 0.001, 0.994 ± 0.003, and 0.987 ± 0.004 for the predicted standard-dose images at half-, quarter-, and one-eighth-dose levels, respectively. Using the standard-dose images as reference, the Bland–Altman plots sketched for the Cedars-Sinai selected parameters exhibited markedly less bias and variance in the predicted standard-dose images compared with the low-dose images at all reduced dose levels. Overall, considering the clinical assessment performed by a nuclear medicine specialist, 100%, 80%, and 11% of the predicted standard-dose images were clinically acceptable at half-, quarter-, and one-eighth-dose levels, respectively. The noise was effectively suppressed by the proposed network, and the predicted standard-dose images were comparable to reference standard-dose images at half- and quarter-dose levels. However, recovery of the underlying signals/information in low-dose images beyond a quarter of the standard dose would not be feasible (owing to a very poor signal-to-noise ratio), which would adversely affect the clinical interpretation of the resulting images.
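The binomial subsampling used above to simulate low-dose acquisitions can be sketched directly on projection counts: thinning Poisson data by keeping each detected event with probability p yields statistically realistic reduced-dose data. The projection array here is a random stand-in, not actual SPECT data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Random stand-in for SPECT projection counts (Poisson-distributed)
full_dose_proj = rng.poisson(lam=20.0, size=(64, 64, 60))

# Binomial thinning: keep each detected count with probability p to
# emulate half-, quarter-, and one-eighth-dose acquisitions.
for p in (0.5, 0.25, 0.125):
    low_dose_proj = rng.binomial(full_dose_proj, p)
    print(p, int(full_dose_proj.sum()), int(low_dose_proj.sum()))
```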

Journal ArticleDOI
TL;DR: In this paper, a deep learning-based automated whole lung and COVID-19 pneumonia infectious lesions (COLI-Net) detection and segmentation from chest computed tomography (CT) images was presented.
Abstract: We present a deep learning (DL)-based automated whole lung and COVID-19 pneumonia infectious lesions (COLI-Net) detection and segmentation from chest computed tomography (CT) images. This multicenter/multiscanner study involved 2368 (347,259 2D slices) and 190 (17,341 2D slices) volumetric CT exams along with their corresponding manual segmentations of lungs and lesions, respectively. All images were cropped, resized, and the intensity values clipped and normalized. A residual network with a non-square Dice loss function built upon TensorFlow was employed. The accuracy of lung and COVID-19 lesion segmentation was evaluated on an external reverse transcription-polymerase chain reaction (RT-PCR)-positive COVID-19 dataset (7,333 2D slices) collected at five different centers. To evaluate the segmentation performance, we calculated different quantitative metrics, including radiomic features. The mean Dice coefficients were 0.98 ± 0.011 (95% CI, 0.98–0.99) and 0.91 ± 0.038 (95% CI, 0.90–0.91) for lung and lesion segmentation, respectively. The mean relative Hounsfield unit differences were 0.03 ± 0.84% (95% CI, −0.12 to 0.18) and −0.18 ± 3.4% (95% CI, −0.8 to 0.44) for the lungs and lesions, respectively. The relative volume differences for lungs and lesions were 0.38 ± 1.2% (95% CI, 0.16–0.59) and 0.81 ± 6.6% (95% CI, −0.39 to 2), respectively. Most radiomic features had a mean relative error of less than 5%, with the highest mean relative errors observed for the Range first-order feature for the lung (−6.95%) and the least axis length shape feature for lesions (8.68%). We developed an automated DL-guided three-dimensional whole lung and infected region segmentation in COVID-19 patients to provide a fast, consistent, robust, and human-error-immune framework for lung and pneumonia lesion detection and quantification.
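A minimal NumPy sketch of two of the segmentation evaluation metrics reported above, the Dice coefficient and the relative volume difference, computed on toy binary masks rather than real segmentations.

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def relative_volume_difference(ref, pred):
    """Relative volume difference (%) of predicted vs. reference mask."""
    return 100.0 * (pred.sum() - ref.sum()) / ref.sum()

# Toy manual and automated masks, offset by one voxel along one axis
manual = np.zeros((64, 64, 64), dtype=bool)
manual[20:40, 20:40, 20:40] = True
auto = np.zeros_like(manual)
auto[21:41, 20:40, 20:40] = True
print(dice_coefficient(manual, auto), relative_volume_difference(manual, auto))
```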

Journal ArticleDOI
TL;DR: In this paper, an automated deep learning-assisted scan range selection technique was developed to reduce the radiation dose to patients. The proposed DL-based solution outperformed previous automatic methods with acceptable accuracy, even in complicated and challenging cases.
Abstract: Background Despite the prevalence of chest CT in the clinic, concerns about unoptimized protocols delivering high radiation doses to patients still remain. This study aimed to assess the additional radiation dose associated with overscanning in chest CT and to develop an automated deep learning-assisted scan range selection technique to reduce the radiation dose to patients. Results A significant overscanning range of 31 ± 24 mm was observed in the clinical setting for over 95% of the cases. The average Dice coefficient for lung segmentation was 0.96 and 0.97 for anterior-posterior (AP) and lateral projections, respectively. Considering the exact lung coverage as the ground truth, and AP and lateral projections as input, the DL-based approach resulted in errors of 0.08 ± 1.46 and −1.5 ± 4.1 mm in the superior and inferior directions, respectively. In contrast, the error on external scout views was −0.7 ± 4.08 and 0.01 ± 14.97 mm for the superior and inferior directions, respectively. The effective dose (ED) reduction achieved by automated scan range selection was 21% in the test group. The evaluation of a large multi-centric chest CT dataset revealed an unnecessary ED of more than 2 mSv per scan and a 67% increase in the thyroid absorbed dose. Conclusion The proposed DL-based solution outperformed previous automatic methods with acceptable accuracy, even in complicated and challenging cases. The generalizability of the model was demonstrated by fine-tuning the model on AP scout views and achieving acceptable results. The method can reduce the unoptimized dose to patients by excluding unnecessary organs from the field of view.

Posted ContentDOI
13 Apr 2021-medRxiv
TL;DR: In this article, a deep learning-based automated whole lung and COVID-19 pneumonia infectious lesions (COLI-Net) detection and segmentation from chest CT images was presented.
Abstract: Background We present a deep learning (DL)-based automated whole lung and COVID-19 pneumonia infectious lesions (COLI-Net) detection and segmentation from chest CT images. Methods We prepared 2358 (347,259 2D slices) and 180 (17,341 2D slices) volumetric CT images along with their corresponding manual segmentations of lungs and lesions, respectively, in the framework of a multi-center/multi-scanner study. All images were cropped, resized, and the intensity values clipped and normalized. A residual network (ResNet) with a non-square Dice loss function built upon TensorFlow was employed. The accuracy of lung and COVID-19 lesion segmentation was evaluated on an external RT-PCR-positive COVID-19 dataset (7,333 2D slices) collected at five different centers. To evaluate the segmentation performance, we calculated different quantitative metrics, including radiomic features. Results The mean Dice coefficients were 0.98 ± 0.011 (95% CI, 0.98-0.99) and 0.91 ± 0.038 (95% CI, 0.90-0.91) for lung and lesion segmentation, respectively. The mean relative Hounsfield unit differences were 0.03 ± 0.84% (95% CI, −0.12 to 0.18) and −0.18 ± 3.4% (95% CI, −0.8 to 0.44) for the lungs and lesions, respectively. The relative volume differences for lungs and lesions were 0.38 ± 1.2% (95% CI, 0.16-0.59) and 0.81 ± 6.6% (95% CI, −0.39 to 2), respectively. Most radiomic features had a mean relative error of less than 5%, with the highest mean relative errors observed for the Range first-order feature for the lung (−6.95%) and the least axis length shape feature for lesions (8.68%). Conclusion We set out to develop an automated deep learning-guided three-dimensional whole lung and infected region segmentation in COVID-19 patients in order to provide a fast, consistent, robust, and human-error-immune framework for lung and pneumonia lesion detection and quantification.

Journal ArticleDOI
TL;DR: In this article, a deep neural network (DNN) model was developed to synthesize full-dose (FD) time-of-flight (TOF) bin sinograms from their corresponding fast/low-dose (LD) TOF bin sinograms.

Journal ArticleDOI
TL;DR: In this article, a deep neural network was trained to predict personalized dose distributions, with MC simulations serving as ground truth; the paired-channel input used for training is composed of a dose distribution kernel in a water medium along with full-volumetric density maps obtained from CT images reflecting medium heterogeneity.

Journal ArticleDOI
TL;DR: In this article, the potential impact of restaging PET/CT on changes in the management of recurrent prostate cancer after radical prostatectomy (RP) is reviewed, addressing the potential adaptation of prostate bed radiation therapy target volumes and doses, as well as the use of androgen-deprivation therapy (ADT).
Abstract: Biochemical recurrence is a clinical situation experienced by 20 to 40% of prostate cancer patients treated with radical prostatectomy (RP). Prostate bed (PB) radiation therapy (RT) is the mainstay salvage treatment, although it remains non-curative for up to 30% of patients developing further recurrence. Positron emission tomography with computed tomography (PET/CT) using prostate cancer-targeting radiotracers has emerged in the last decade as a new-generation imaging technique characterized by better restaging accuracy compared to conventional imaging. By adapting the targeting of recurrence sites and modulating treatment management, the implementation of restaging PET/CT in clinical practice is challenging the established therapeutic standards born from randomized controlled trials. This article reviews the potential impact of restaging PET/CT on changes in the management of recurrent prostate cancer after RP. Based on PET/CT findings, it addresses the potential adaptation of RT target volumes and doses, as well as the use of androgen-deprivation therapy (ADT). However, the impact of such management changes on the oncological outcomes of PET/CT-based salvage RT strategies is as yet unknown.

Journal ArticleDOI
TL;DR: In this article, the authors proposed an attention-based multi-feature pyramid U-Net (MFP-Unet) for automatic segmentation and measurement of fetal biometric parameters, including biparietal diameter, head circumference, abdominal circumference, and femur length.

Journal ArticleDOI
TL;DR: In this paper, a stochastic adversarial video prediction model was implemented to predict the last 13 frames (25-90 min) from the initial 13 frames, and the predicted dynamic images demonstrated that the model is capable of predicting the trend of change in time-varying tracer biodistribution.
Abstract: Purpose We assess the performance of a recurrent frame generation algorithm for prediction of late frames from initial frames in dynamic brain PET imaging. Methods Clinical dynamic 18F-DOPA brain PET/CT studies of 46 subjects with ten-fold cross-validation were retrospectively employed. A novel stochastic adversarial video prediction model was implemented to predict the last 13 frames (25-90 min) from the initial 13 frames (0-25 min). The quantitative analysis of the predicted dynamic PET frames was performed for the test and validation datasets using established metrics. Results The predicted dynamic images demonstrated that the model is capable of predicting the trend of change in time-varying tracer biodistribution. The Bland-Altman plots reported the lowest tracer uptake bias (−0.04) for the putamen region and the smallest variance (95% CI: −0.38, +0.14) for the cerebellum. The region-wise Patlak graphical analysis in the caudate and putamen regions for 8 subjects from the test and validation datasets showed average biases for Ki of 4.3% and 5.1%, and for distribution volume of 4.4% and 4.2%, respectively. Conclusion We have developed a novel deep learning approach for fast dynamic brain PET imaging capable of generating the last 65 min of time frames from the initial 25 min frames, thus enabling a significant reduction in scanning time.
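A minimal NumPy sketch of the Patlak graphical analysis referenced above: for an irreversibly trapped tracer, Ct(t)/Cp(t) plotted against the normalized integral of Cp becomes linear after some time t*, with slope Ki (influx rate) and intercept the distribution volume V. The input function, tissue curve, and t* below are toy assumptions, not clinical data.

```python
import numpy as np

def patlak_fit(t, cp, ct, t_star=25.0):
    """Fit Ct/Cp vs. int(Cp)dt/Cp for t >= t_star; returns (Ki, V)."""
    int_cp = np.concatenate(
        ([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    x = int_cp / cp
    y = ct / cp
    mask = t >= t_star
    ki, v = np.polyfit(x[mask], y[mask], 1)   # slope = Ki, intercept = V
    return ki, v

t = np.linspace(1, 90, 26)                    # frame mid-times (min)
cp = 10.0 * np.exp(-0.05 * t) + 1.0           # toy plasma input function
dt = t[1] - t[0]
ct = 0.02 * np.cumsum(cp) * dt + 0.5 * cp     # toy tissue curve: Ki=0.02, V=0.5
print(patlak_fit(t, cp, ct))                  # recovers roughly (0.02, 0.5)
```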

Journal ArticleDOI
TL;DR: In this article, Monte Carlo simulations were used to estimate the intrinsic efficiency and energy resolution of different types of solid-state gamma-ray detectors, with the aim of generating a precise dual-energy X-ray beam from a conventional X-ray tube using external X-ray filters.

Journal ArticleDOI
TL;DR: In 2018, the European Federation of Organisations for Medical Physics (EFOMP) published an editorial on Artificial Intelligence (AI) in relation to the medical physics profession. To meet the educational needs of Medical Physicists (MPs) in this new area of AI, EFOMP announced in June 2019 the creation of a two-year Working Group (WG) entitled "Artificial Intelligence (AI)".