Author

Mikael Bylund

Bio: Mikael Bylund is an academic researcher from Umeå University. He has contributed to research on magnetic resonance imaging and Bayesian statistics, has an h-index of 5, and has co-authored 7 publications that have received 126 citations.

Papers
Journal ArticleDOI
TL;DR: A method for converting Zero TE MR images into X‐ray attenuation information in the form of pseudo‐CT images is described, and its performance is demonstrated for attenuation correction in PET/MR and for dose planning in MR‐guided radiation therapy planning (RTP).
Abstract: Purpose: To describe a method for converting Zero TE (ZTE) MR images into X-ray attenuation information in the form of pseudo-CT images and demonstrate its performance for (1) attenuation correction ...

80 citations

Journal ArticleDOI
TL;DR: The StyleGAN is a promising model to use for generating synthetic medical images for MR and CT modalities as well as for 3D volumes.
Abstract: Introduction: This paper explores the potential of the StyleGAN model as a high-resolution image generator for synthetic medical images. The ability to generate sample patient images of different modalities can be helpful for training deep learning algorithms, e.g. as a data augmentation technique. Methods: The StyleGAN model was trained on Computed Tomography (CT) and T2-weighted Magnetic Resonance (MR) images from 100 patients with pelvic malignancies. The resulting model was investigated with regard to three features: Image Modality, Sex, and Longitudinal Slice Position. Further, the style transfer feature of the StyleGAN was used to move images between the modalities. The root-mean-square error (RMSE) and the mean absolute error (MAE) were used to quantify errors for MR and CT, respectively. Results: We demonstrate how these features can be transformed by manipulating the latent style vectors, and attempt to quantify how the errors change as we move through the latent style space. The best results were achieved by using the style transfer feature of the StyleGAN (58.7 HU MAE for MR to CT and 0.339 RMSE for CT to MR). Slices below and above an initial central slice can be predicted with an error below 75 HU MAE and 0.3 RMSE within 4 cm for CT and MR, respectively. Discussion: The StyleGAN is a promising model for generating synthetic medical images for the MR and CT modalities, as well as for 3D volumes.
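The two error metrics used in this abstract, MAE (reported in Hounsfield units for CT) and RMSE (for normalised MR intensities), can be sketched in a few lines. This is a minimal, hypothetical illustration with toy voxel values, not the paper's evaluation code:

```python
import math

def mae(pred, target):
    """Mean absolute error, e.g. in Hounsfield units for synthetic CT."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def rmse(pred, target):
    """Root-mean-square error, e.g. on normalised MR intensities."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred))

# Toy voxel intensities (hypothetical values, not from the paper)
ct_true = [0.0, 40.0, 1000.0, -100.0]
ct_pred = [10.0, 50.0, 980.0, -90.0]
print(mae(ct_pred, ct_true))   # 12.5
print(rmse(ct_pred, ct_true))  # ~13.23
```

In practice these would be computed over flattened 2D slices or 3D volumes, typically restricted to a body mask.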

46 citations

Journal ArticleDOI
TL;DR: Patient-induced susceptibility distortions at high field strengths in closed bore magnetic resonance scanners are larger than residual system distortions after using vendor-supplied 3-dimensional correction for the delineated regions studied.
Abstract: Purpose: To investigate the effect of magnetic resonance system- and patient-induced susceptibility distortions from a 3T scanner on dose distributions for prostate cancers.Methods and Materials: C ...

28 citations

Journal ArticleDOI
TL;DR: The overall effect of MRI geometric distortions on data used for RTP was minimal, but user-defined subvolume shimming introduced significant errors in nearby organs and should probably be avoided.
Abstract: Purpose: To evaluate the effect of magnetic resonance (MR) imaging (MRI) geometric distortions on head and neck radiation therapy treatment planning (RTP) for an MRI-only RTP. We also assessed the ...

23 citations

Journal ArticleDOI
TL;DR: A fundamental requirement for safe use of magnetic resonance imaging (MRI) in radiotherapy is geometrical accuracy, and one factor that can introduce geometric distortion is noise.

12 citations


Cited by
Journal ArticleDOI
TL;DR: Data augmentation aims to generate additional data for training a model and has been shown to improve performance when validated on a separate unseen dataset, as discussed by the authors; it has become a popular method for increasing the size of a training dataset, particularly in fields where large datasets are not typically available.
Abstract: Research in artificial intelligence for radiology and radiotherapy has recently become increasingly reliant on the use of deep learning-based algorithms. While the performance of the models which these algorithms produce can significantly outperform more traditional machine learning methods, they do rely on larger datasets being available for training. To address this issue, data augmentation has become a popular method for increasing the size of a training dataset, particularly in fields where large datasets are not typically available, which is often the case when working with medical images. Data augmentation aims to generate additional data which is used to train the model and has been shown to improve performance when validated on a separate unseen dataset. This approach has become commonplace, so to help understand the types of data augmentation techniques used in state-of-the-art deep learning models, we conducted a systematic review of the literature in which data augmentation was utilised on medical images (limited to CT and MRI) to train a deep learning model. Articles were categorised into basic, deformable, deep learning or other data augmentation techniques. As artificial intelligence models trained using augmented data make their way into the clinic, this review aims to give an insight into these techniques and confidence in the validity of the models produced.
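The "basic" category of augmentation the review mentions typically covers geometric transforms such as flips and rotations. A minimal sketch of such transforms on a 2D slice, using plain Python lists (hypothetical helper names; real pipelines would use a library such as MONAI or torchvision):

```python
import random

def flip_horizontal(img):
    """Mirror a 2D image (list of rows) left-right."""
    return [row[::-1] for row in img]

def rotate_90(img):
    """Rotate a 2D image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def augment(img, rng=random):
    """Randomly compose basic transforms to produce a new training sample."""
    if rng.random() < 0.5:
        img = flip_horizontal(img)
    for _ in range(rng.randrange(4)):   # 0-3 quarter turns
        img = rotate_90(img)
    return img

slice_2d = [[1, 2],
            [3, 4]]
print(flip_horizontal(slice_2d))  # [[2, 1], [4, 3]]
print(rotate_90(slice_2d))        # [[3, 1], [4, 2]]
```

Deformable and deep-learning-based augmentation (e.g. elastic warps or GAN-generated samples, as in the StyleGAN paper above) follow the same idea of expanding the training set, but with learned or spatially varying transforms.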

197 citations

Journal ArticleDOI
P.H. King
01 Jan 1986

98 citations

Journal ArticleDOI
TL;DR: This article is an introductory overview aimed at clinical radiologists with no experience in deep‐learning‐based MR image reconstruction and should enable them to understand the basic concepts and current clinical applications of this rapidly growing area of research across multiple organ systems.
Abstract: Artificial intelligence (AI) shows tremendous promise in the field of medical imaging, with recent breakthroughs applying deep-learning models for data acquisition, classification problems, segmentation, image synthesis, and image reconstruction. With an eye towards clinical applications, we summarize the active field of deep-learning-based MR image reconstruction. We review the basic concepts of how deep-learning algorithms aid in the transformation of raw k-space data to image data, and specifically examine accelerated imaging and artifact suppression. Recent efforts in these areas show that deep-learning-based algorithms can match and, in some cases, eclipse conventional reconstruction methods in terms of image quality and computational efficiency across a host of clinical imaging applications, including musculoskeletal, abdominal, cardiac, and brain imaging. This article is an introductory overview aimed at clinical radiologists with no experience in deep-learning-based MR image reconstruction and should enable them to understand the basic concepts and current clinical applications of this rapidly growing area of research across multiple organ systems.
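The "transformation of raw k-space data to image data" that this overview describes is, in the fully sampled conventional case, an inverse Fourier transform; deep-learning reconstruction methods learn to improve on this when k-space is undersampled. A toy 1-D round trip, using a naive DFT rather than a real scanner pipeline (illustrative only):

```python
import cmath

def dft(signal):
    """Naive 1-D DFT: what a (fully sampled) k-space acquisition measures."""
    n = len(signal)
    return [sum(signal[x] * cmath.exp(-2j * cmath.pi * k * x / n) for x in range(n))
            for k in range(n)]

def inverse_dft(kspace):
    """Naive 1-D inverse DFT: conventional reconstruction back to image space."""
    n = len(kspace)
    return [sum(kspace[k] * cmath.exp(2j * cmath.pi * k * x / n) for k in range(n)) / n
            for x in range(n)]

image = [0.0, 1.0, 2.0, 1.0]   # toy 1-D "image"
kspace = dft(image)            # simulated fully sampled k-space
recon = inverse_dft(kspace)    # conventional reconstruction recovers the image
print([round(v.real, 6) for v in recon])  # [0.0, 1.0, 2.0, 1.0]
```

With undersampled k-space this inverse transform produces aliasing artifacts, which is precisely where the learned reconstruction and artifact-suppression methods reviewed in the article come in.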

89 citations

Journal ArticleDOI
TL;DR: Machine learning and advanced atlas-based methods exhibited promising performance by achieving reliable organ segmentation and synthetic CT generation and the challenge of electron density estimation from MR images can be resolved with a clinically tolerable error.
Abstract: Purpose: Magnetic resonance imaging (MRI)-guided radiation therapy (RT) treatment planning is limited by the fact that the electron density distribution required for dose calculation is not readily provided by MR imaging. We compare a selection of novel synthetic CT generation algorithms recently reported in the literature, including segmentation-based, atlas-based and machine learning techniques, using the same cohort of patients and quantitative evaluation metrics. Methods: Six MRI-guided synthetic CT generation algorithms were evaluated: one segmentation technique into a single tissue class (water-only); four atlas-based techniques, namely median value of atlas images (ALMedian), atlas-based local weighted voting (ALWV), bone-enhanced atlas-based local weighted voting (ALWV-Bone), and iterative atlas-based local weighted voting (ALWV-Iter); and a machine learning technique using a deep convolutional neural network (DCNN). Results: Organ auto-contouring from MR images was evaluated for bladder, rectum, bones, and body boundary. Overall, DCNN exhibited higher segmentation accuracy, with Dice indices (DSC) of 0.93 ± 0.17, 0.90 ± 0.04, and 0.93 ± 0.02 for bladder, rectum, and bones, respectively. On the other hand, ALMedian showed the lowest accuracy, with DSC of 0.82 ± 0.20, 0.81 ± 0.08, and 0.88 ± 0.04, respectively. DCNN reached the best performance in terms of accurate derivation of synthetic CT values within each organ, with a mean absolute error within the body contour of 32.7 ± 7.9 HU, followed by the advanced atlas-based methods (ALWV: 40.5 ± 8.2 HU; ALWV-Iter: 42.4 ± 8.1 HU; ALWV-Bone: 44.0 ± 8.9 HU). ALMedian led to the highest error (52.1 ± 11.1 HU). Considering the dosimetric evaluation results, ALWV-Iter, ALWV, DCNN and ALWV-Bone led to similar mean dose estimation within each organ at risk and target volume, with less than 1% dose discrepancy. However, the two-dimensional gamma analysis demonstrated higher pass rates for ALWV-Bone, DCNN, ALMedian and ALWV-Iter at the 1%/1 mm criterion, with 94.99 ± 5.15%, 94.59 ± 5.65%, 93.68 ± 5.53% and 93.10 ± 5.99% success, respectively, while ALWV and water-only resulted in 86.91 ± 13.50% and 80.77 ± 12.10%, respectively. Conclusions: Overall, machine learning and advanced atlas-based methods exhibited promising performance by achieving reliable organ segmentation and synthetic CT generation. DCNN appears to have slightly better performance, achieving accurate automated organ segmentation and relatively small dosimetric errors (followed closely by the advanced atlas-based methods, which in some cases achieved similar performance). However, the DCNN approach showed higher vulnerability to anatomical variation, where a greater number of outliers was observed with this method. Considering the dosimetric results obtained from the evaluated methods, the challenge of electron density estimation from MR images can be resolved with a clinically tolerable error.
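The Dice similarity coefficient (DSC) used here to score the auto-contoured organs measures the overlap between two binary masks: twice the intersection divided by the sum of the mask sizes. A minimal sketch with hypothetical flat masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat lists of 0/1)."""
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0

auto   = [1, 1, 1, 0, 0, 0]  # hypothetical auto-contoured organ mask
manual = [1, 1, 0, 1, 0, 0]  # hypothetical reference contour
print(dice(auto, manual))    # 2*2 / (3+3) = 0.666...
```

A DSC of 1.0 means perfect overlap and 0.0 means none, so the paper's bladder score of 0.93 for DCNN indicates near-complete agreement with the reference contour.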

85 citations

Journal ArticleDOI
TL;DR: An automated approach is developed that allows generation of a continuously valued pseudo-CT from a single 18F-FDG non-attenuation-corrected (NAC) PET image and evaluated it in PET/CT brain imaging and provides quantitatively accurate 18F -FDG PET results with average errors of less than 1% in most brain regions.
Abstract: To develop and evaluate the feasibility of a data-driven deep learning approach (deepAC) for positron-emission tomography (PET) image attenuation correction without anatomical imaging. A PET attenuation correction pipeline was developed utilizing deep learning to generate continuously valued pseudo-computed tomography (CT) images from uncorrected 18F-fluorodeoxyglucose (18F-FDG) PET images. A deep convolutional encoder-decoder network was trained to identify tissue contrast in volumetric uncorrected PET images co-registered to CT data. A set of 100 retrospective 3D FDG PET head images was used to train the model. The model was evaluated in another 28 patients by comparing the generated pseudo-CT to the acquired CT using Dice coefficient and mean absolute error (MAE) and finally by comparing reconstructed PET images using the pseudo-CT and acquired CT for attenuation correction. Paired-sample t tests were used for statistical analysis to compare PET reconstruction error using deepAC with CT-based attenuation correction. deepAC produced pseudo-CTs with Dice coefficients of 0.80 ± 0.02 for air, 0.94 ± 0.01 for soft tissue, and 0.75 ± 0.03 for bone and MAE of 111 ± 16 HU relative to the PET/CT dataset. deepAC provides quantitatively accurate 18F-FDG PET results with average errors of less than 1% in most brain regions. We have developed an automated approach (deepAC) that allows generation of a continuously valued pseudo-CT from a single 18F-FDG non-attenuation-corrected (NAC) PET image and evaluated it in PET/CT brain imaging.
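The paired-sample t test used here compares the per-patient reconstruction errors under the two attenuation-correction methods. Its statistic is the mean of the paired differences divided by their standard error. A sketch with hypothetical error values (not the study's data); real analyses would use `scipy.stats.ttest_rel`:

```python
import math

def paired_t(x, y):
    """Paired-sample t statistic: t = mean(d) / (sd(d) / sqrt(n)), d = x - y."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical per-patient PET reconstruction errors (%) under two AC methods
err_deepac = [0.8, 1.1, 0.6, 0.9, 1.0]
err_ctac   = [0.7, 1.0, 0.7, 0.8, 0.9]
print(round(paired_t(err_deepac, err_ctac), 3))  # t ≈ 1.5
```

The resulting t value would be compared against a t distribution with n−1 degrees of freedom to obtain a p-value.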

83 citations