Author

C. Cozzini

Bio: C. Cozzini is an academic researcher from GE Healthcare. The author has contributed to research on attenuation correction and image resolution, and has an h-index of 2, having co-authored 3 publications that have received 60 citations.

Papers
Journal ArticleDOI
TL;DR: A method for converting Zero TE MR images into X-ray attenuation information in the form of pseudo-CT images is described, and its performance is demonstrated for attenuation correction in PET/MR and for dose planning in MR-guided radiation therapy planning (RTP).
Abstract: Purpose: To describe a method for converting Zero TE (ZTE) MR images into X-ray attenuation information in the form of pseudo-CT images and demonstrate its performance for (1) attenuation correction ...

80 citations

Journal ArticleDOI
TL;DR: A new method for in-phase zero TE (ipZTE) musculoskeletal MR imaging is introduced.
Abstract: Purpose: To introduce a new method for in-phase zero TE (ipZTE) musculoskeletal MR imaging.
Methods: ZTE is a 3D radial imaging method, which is sensitive to chemical shift off-resonance signal interference, especially around fat-water tissue interfaces. The ipZTE method addresses this fat-water chemical shift artifact by acquiring each 3D radial spoke at least twice with varying readout gradient amplitude and hence varying effective sampling time. Using k-space-based chemical shift decomposition, the acquired data is then reconstructed into an in-phase ZTE image and an out-of-phase disturbance.
Results: The ipZTE method was tested for knee, pelvis, brain, and whole-body imaging. The obtained images demonstrate exceptional soft-tissue uniformity, free from the out-of-phase disturbances apparent in the original ZTE images. The chemical shift decomposition was found to improve SNR at the cost of reduced image resolution.
Conclusion: The ipZTE method can be used as an averaging mechanism to eliminate fat-water chemical shift artifacts and improve SNR. The method is expected to improve ZTE-based musculoskeletal imaging and pseudo-CT conversion as required for PET/MR attenuation correction and MR-guided radiation therapy planning.
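The chemical shift decomposition described above can be illustrated with a simplified two-point Dixon-style model in image space (an assumption for illustration only; the actual ipZTE method performs the decomposition in k-space on 3D radial spokes acquired with varying readout gradient amplitude):

```python
import numpy as np

def two_point_decomposition(s_in, s_out):
    """Separate water and fat from in-phase and opposed-phase signals.

    Simplified image-space model: s_in = W + F, s_out = W - F.
    This is only an illustrative sketch of the separation principle,
    not the k-space-based reconstruction used by ipZTE.
    """
    water = 0.5 * (s_in + s_out)
    fat = 0.5 * (s_in - s_out)
    return water, fat

# Synthetic voxel with water fraction 0.8 and fat fraction 0.2
s_in = np.array([1.0])    # in-phase signal:    W + F
s_out = np.array([0.6])   # opposed-phase signal: W - F
w, f = two_point_decomposition(s_in, s_out)  # w -> 0.8, f -> 0.2
```

Averaging the two acquisitions in this way is also what yields the SNR gain mentioned in the abstract, since each spoke is sampled at least twice.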

12 citations


Cited by
Journal ArticleDOI
P.H. King
01 Jan 1986

98 citations

Journal ArticleDOI
TL;DR: This article is an introductory overview aimed at clinical radiologists with no experience in deep‐learning‐based MR image reconstruction and should enable them to understand the basic concepts and current clinical applications of this rapidly growing area of research across multiple organ systems.
Abstract: Artificial intelligence (AI) shows tremendous promise in the field of medical imaging, with recent breakthroughs applying deep-learning models for data acquisition, classification problems, segmentation, image synthesis, and image reconstruction. With an eye towards clinical applications, we summarize the active field of deep-learning-based MR image reconstruction. We review the basic concepts of how deep-learning algorithms aid in the transformation of raw k-space data to image data, and specifically examine accelerated imaging and artifact suppression. Recent efforts in these areas show that deep-learning-based algorithms can match and, in some cases, eclipse conventional reconstruction methods in terms of image quality and computational efficiency across a host of clinical imaging applications, including musculoskeletal, abdominal, cardiac, and brain imaging. This article is an introductory overview aimed at clinical radiologists with no experience in deep-learning-based MR image reconstruction and should enable them to understand the basic concepts and current clinical applications of this rapidly growing area of research across multiple organ systems.
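The "transformation of raw k-space data to image data" that deep-learning reconstruction methods accelerate can be grounded in the classical baseline they are compared against: for a fully sampled Cartesian acquisition, the image is recovered by an inverse 2D Fourier transform. A minimal sketch with a synthetic phantom (the phantom and array sizes are illustrative assumptions):

```python
import numpy as np

# Classical MR reconstruction baseline: fully sampled Cartesian k-space
# maps to image space via an inverse 2D FFT. Deep-learning methods learn
# to de-alias or complete *under-sampled* k-space; this shows only the
# conventional roundtrip they are benchmarked against.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0                      # simple square phantom
kspace = np.fft.fftshift(np.fft.fft2(image))   # simulate acquisition
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
# recon matches the phantom to numerical precision
```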

89 citations

Journal ArticleDOI
TL;DR: Machine learning and advanced atlas-based methods exhibited promising performance by achieving reliable organ segmentation and synthetic CT generation and the challenge of electron density estimation from MR images can be resolved with a clinically tolerable error.
Abstract: Purpose: Magnetic resonance imaging (MRI)-guided radiation therapy (RT) treatment planning is limited by the fact that the electron density distribution required for dose calculation is not readily provided by MR imaging. We compare a selection of novel synthetic CT generation algorithms recently reported in the literature, including segmentation-based, atlas-based and machine learning techniques, using the same cohort of patients and quantitative evaluation metrics.
Methods: Six MRI-guided synthetic CT generation algorithms were evaluated: one segmentation technique into a single tissue class (water-only), four atlas-based techniques, namely, median value of atlas images (ALMedian), atlas-based local weighted voting (ALWV), bone enhanced atlas-based local weighted voting (ALWV-Bone), and iterative atlas-based local weighted voting (ALWV-Iter), and a machine learning technique using a deep convolutional neural network (DCNN).
Results: Organ auto-contouring from MR images was evaluated for bladder, rectum, bones, and body boundary. Overall, DCNN exhibited higher segmentation accuracy, resulting in Dice indices (DSC) of 0.93 ± 0.17, 0.90 ± 0.04, and 0.93 ± 0.02 for bladder, rectum, and bones, respectively. On the other hand, ALMedian showed the lowest accuracy, with DSC of 0.82 ± 0.20, 0.81 ± 0.08, and 0.88 ± 0.04, respectively. DCNN reached the best performance in terms of accurate derivation of synthetic CT values within each organ, with a mean absolute error within the body contour of 32.7 ± 7.9 HU, followed by the advanced atlas-based methods (ALWV: 40.5 ± 8.2 HU, ALWV-Iter: 42.4 ± 8.1 HU, ALWV-Bone: 44.0 ± 8.9 HU). ALMedian led to the highest error (52.1 ± 11.1 HU). Considering the dosimetric evaluation results, ALWV-Iter, ALWV, DCNN and ALWV-Bone led to similar mean dose estimation within each organ at risk and target volume, with less than 1% dose discrepancy. However, the two-dimensional gamma analysis demonstrated higher pass rates for ALWV-Bone, DCNN, ALMedian and ALWV-Iter at the 1%/1 mm criterion with 94.99 ± 5.15%, 94.59 ± 5.65%, 93.68 ± 5.53% and 93.10 ± 5.99% success, respectively, while ALWV and water-only resulted in 86.91 ± 13.50% and 80.77 ± 12.10%, respectively.
Conclusions: Overall, machine learning and advanced atlas-based methods exhibited promising performance by achieving reliable organ segmentation and synthetic CT generation. DCNN appears to have slightly better performance by achieving accurate automated organ segmentation and relatively small dosimetric errors (followed closely by advanced atlas-based methods, which in some cases achieved similar performance). However, the DCNN approach showed higher vulnerability to anatomical variation, where a greater number of outliers was observed with this method. Considering the dosimetric results obtained from the evaluated methods, the challenge of electron density estimation from MR images can be resolved with a clinically tolerable error.
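The two evaluation metrics this comparison relies on, the Dice similarity coefficient for segmentation overlap and the mean absolute error in HU within the body contour, can be sketched as follows (array sizes and HU values are illustrative assumptions, not data from the study):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def mae_hu(sct, ct, body_mask):
    """Mean absolute error in HU, restricted to the body contour."""
    return np.abs(sct[body_mask] - ct[body_mask]).mean()

ct = np.full((4, 4), 40.0)          # toy reference CT (HU)
sct = ct + 10.0                      # toy synthetic CT with a 10 HU bias
body = np.ones((4, 4), dtype=bool)   # toy body contour
d = dice(body, body)                 # -> 1.0 (perfect overlap)
err = mae_hu(sct, ct, body)          # -> 10.0 HU
```

In the study, these metrics are computed per organ (bladder, rectum, bones) and within the body contour, which is how numbers such as 32.7 ± 7.9 HU for DCNN arise.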

85 citations

Journal ArticleDOI
TL;DR: An automated approach (deepAC) is developed that generates a continuously valued pseudo-CT from a single 18F-FDG non-attenuation-corrected (NAC) PET image; evaluated in PET/CT brain imaging, it provides quantitatively accurate 18F-FDG PET results with average errors of less than 1% in most brain regions.
Abstract: To develop and evaluate the feasibility of a data-driven deep learning approach (deepAC) for positron-emission tomography (PET) image attenuation correction without anatomical imaging.
A PET attenuation correction pipeline was developed utilizing deep learning to generate continuously valued pseudo-computed tomography (CT) images from uncorrected 18F-fluorodeoxyglucose (18F-FDG) PET images. A deep convolutional encoder-decoder network was trained to identify tissue contrast in volumetric uncorrected PET images co-registered to CT data. A set of 100 retrospective 3D FDG PET head images was used to train the model. The model was evaluated in another 28 patients by comparing the generated pseudo-CT to the acquired CT using the Dice coefficient and mean absolute error (MAE), and finally by comparing reconstructed PET images using the pseudo-CT and acquired CT for attenuation correction. Paired-sample t tests were used for statistical analysis to compare PET reconstruction error using deepAC with CT-based attenuation correction.
deepAC produced pseudo-CTs with Dice coefficients of 0.80 ± 0.02 for air, 0.94 ± 0.01 for soft tissue, and 0.75 ± 0.03 for bone, and MAE of 111 ± 16 HU relative to the PET/CT dataset. deepAC provides quantitatively accurate 18F-FDG PET results with average errors of less than 1% in most brain regions.
We have developed an automated approach (deepAC) that allows generation of a continuously valued pseudo-CT from a single 18F-FDG non-attenuation-corrected (NAC) PET image and evaluated it in PET/CT brain imaging.
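The final comparison step, quantifying the regional PET error introduced by pseudo-CT-based attenuation correction relative to CT-based correction, reduces to a mean percent error within each brain region. A minimal sketch (the array sizes, activity values, and 0.5% bias are illustrative assumptions, not results from the paper):

```python
import numpy as np

def regional_percent_error(pet_test, pet_ref, region_mask):
    """Mean percent error of PET activity within one region,
    relative to the reference reconstruction."""
    ref = pet_ref[region_mask].mean()
    return 100.0 * (pet_test[region_mask].mean() - ref) / ref

pet_ct = np.full((8, 8), 100.0)     # toy CT-based AC reconstruction
pet_deep = pet_ct * 1.005           # toy pseudo-CT-based, +0.5% bias
region = np.zeros((8, 8), dtype=bool)
region[2:6, 2:6] = True             # toy brain-region mask
err = regional_percent_error(pet_deep, pet_ct, region)  # ~0.5 %
```

The paper's claim of "average errors of less than 1% in most brain regions" corresponds to this kind of per-region statistic, aggregated over the 28 evaluation patients.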

83 citations

Journal ArticleDOI
TL;DR: The influence of gradient-echo-based contrasts as input channels to a 3D patch-based neural network trained for synthetic CT (sCT) generation is studied in canine and human populations.
Abstract: Purpose: To study the influence of gradient echo-based contrasts as input channels to a 3D patch-based neural network trained for synthetic CT (sCT) generation in canine and human populations.
Methods: Magnetic resonance images and CT scans of human and canine pelvic regions were acquired and paired using nonrigid registration. Magnitude MR images and Dixon reconstructed water, fat, in-phase and opposed-phase images were obtained from a single T1-weighted multi-echo gradient-echo acquisition. From this set, 6 input configurations were defined, each containing 1 to 4 MR images regarded as input channels. For each configuration, a UNet-derived deep learning model was trained for synthetic CT generation. Reconstructed Hounsfield unit maps were evaluated with peak SNR, mean absolute error, and mean error. Dice similarity coefficient and surface distance maps assessed the geometric fidelity of bones. Repeatability was estimated by replicating the training up to 10 times.
Results: Seventeen canines and 23 human subjects were included in the study. Performance and repeatability of single-channel models were dependent on the TE-related water-fat interference, with variations of up to 17% in mean absolute error, and variations of up to 28% specifically in bones. Repeatability, Dice similarity coefficient, and mean absolute error were statistically significantly better in multichannel models, with mean absolute error ranging from 33 to 40 Hounsfield units in humans and from 35 to 47 Hounsfield units in canines.
Conclusion: Significant differences in performance and robustness of deep learning models for synthetic CT generation were observed depending on the input. In-phase images outperformed opposed-phase images, and Dixon reconstructed multichannel inputs outperformed single-channel inputs.
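The Hounsfield-unit map evaluation named above uses peak SNR, mean absolute error, and signed mean error. These can be sketched as follows (the HU values, 2000 HU data range, and uniform +20 HU offset are illustrative assumptions):

```python
import numpy as np

def psnr(sct, ct, data_range=2000.0):
    """Peak SNR of a synthetic CT against the reference CT, in dB.
    data_range is an assumed dynamic range of the HU maps."""
    mse = np.mean((sct - ct) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def mean_error(sct, ct):
    """Signed mean error in HU; nonzero values reveal systematic bias."""
    return np.mean(sct - ct)

ct = np.zeros((4, 4))               # toy reference CT (HU)
sct = ct + 20.0                     # toy sCT with a uniform +20 HU offset
p = psnr(sct, ct)                   # -> 40.0 dB
me = mean_error(sct, ct)            # -> 20.0 HU
```

The signed mean error complements MAE here: a model can have a small MAE yet a consistent bias, which mean error exposes while MAE hides.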

69 citations