
Showing papers in "Medical Physics in 2023"


Journal ArticleDOI
TL;DR: In this article, a new beam angle optimization (BAO) algorithm, namely the angle generation (AG) method, is proposed and demonstrated to provide nearly-exact solutions for BAO in reference to the exhaustive search (ES) solution.
Abstract: BACKGROUND In treatment planning, beam angle optimization (BAO) refers to the selection of a subset with a given number of beam angles from all available angles that provides the best plan quality. BAO is an NP-hard combinatorial problem. Although exhaustive search (ES) can exactly solve BAO by exploring all possible combinations, ES is very time-consuming and practically infeasible. PURPOSE To the best of our knowledge, (1) no optimization method has been demonstrated that can provide the exact solution to BAO, and (2) no study has validated an optimization method for solving BAO by benchmarking with the optimal BAO solution (e.g., via ES), both of which will be addressed by this work. METHODS This work considers BAO for proton therapy, e.g., the selection of 2 to 4 beam angles for IMPT. The optimal BAO solution is obtained via ES and serves as the ground truth. A new BAO algorithm, namely the angle generation (AG) method, is proposed and demonstrated to provide nearly-exact solutions for BAO in reference to the ES solution. AG iteratively optimizes the angular set via group-sparsity (GS) regularization, until the planning objective does not decrease further. RESULTS Since GS alone can also solve BAO, AG was validated and compared with GS for 2-angle brain, 3-angle lung, and 4-angle brain cases, in reference to the optimal BAO solutions obtained by ES: the AG solution had the rank (1/276, 1/2024, 4/10626), while the GS solution had the rank (42/276, 279/2024, 4328/10626). CONCLUSIONS A new BAO algorithm called AG is proposed and shown to provide substantially improved accuracy for BAO over current methods, with nearly-exact solutions to BAO in reference to the ground truth of the optimal BAO solution via ES. This article is protected by copyright. All rights reserved.
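The rank denominators above (276, 2024, 10626) correspond to choosing 2, 3, or 4 angles out of 24 candidates. As a reading aid only, the sketch below shows how an exhaustive-search ranking of a candidate angle set can be computed; the plan-quality function is a hypothetical placeholder, not the paper's dose-based objective.

```python
# Illustrative only: rank a candidate angle set against exhaustive search (ES).
from itertools import combinations
import numpy as np

angles = np.linspace(0, 345, 24)        # 24 candidate gantry angles (deg)
n_select = 2                            # 24-choose-2 = 276 combinations

def plan_objective(angle_set):
    """Hypothetical surrogate for the planning objective (lower is better)."""
    return float(np.sum(np.cos(np.deg2rad(np.asarray(angle_set))) ** 2))

# Exhaustive search: evaluate and sort every combination.
es_ranking = sorted(combinations(angles, n_select), key=plan_objective)

candidate = es_ranking[3]               # pretend this set came from AG or GS
rank = es_ranking.index(candidate) + 1
print(f"candidate {candidate} has rank {rank}/{len(es_ranking)}")
```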

3 citations


Journal ArticleDOI
TL;DR: In this paper, the authors measured the optical absorption associated with hydrated electrons produced by clinical linacs and assessed the suitability of the technique for radiotherapy (⩽ 1 cGy per pulse) applications.
Abstract: BACKGROUND Hydrated electrons, which are short-lived products of radiolysis in water, increase the optical absorption of water, providing a pathway toward near-tissue-equivalent clinical radiation dosimeters. This has been demonstrated in high-dose-per-pulse radiochemistry research, but, owing to the weak absorption signal, its application in existing low-dose-per-pulse radiotherapy provided by clinical linear accelerators (linacs) has yet to be investigated. PURPOSE The aims of this study were to measure the optical absorption associated with hydrated electrons produced by clinical linacs and to assess the suitability of the technique for radiotherapy (⩽ 1 cGy per pulse) applications. METHODS 40 mW of 660-nm laser light was passed five times through deionized water contained in a 10 × 4 × 2 cm³ glass-walled cavity by using four broadband dielectric mirrors, two on each side of the cavity. The light was collected with a biased silicon photodetector. The water cavity was then irradiated by a Varian TrueBeam linac with both photon (10 MV FFF, 6 MV FFF, 6 MV) and electron beams (6 MeV) while monitoring the transmitted laser power for absorption transients. Radiochromic EBT3 film measurements were also performed for comparison. RESULTS Examination of the absorbance profiles showed clear absorption changes in the water when radiation pulses were delivered. Both the amplitude and the decay time of the signal appeared consistent with the absorbed dose and the characteristics of the hydrated electrons. By using the literature value for the hydrated electron radiation chemical yield (3.0±0.3), we inferred doses of 2.1±0.2 mGy (10 MV FFF), 1.3±0.1 mGy (6 MV FFF), 0.45±0.06 mGy (6 MV) for photons, and 0.47±0.05 mGy (6 MeV) for electrons, which differed from EBT3 film measurements by 0.6%, 0.8%, 10%, and 15.7%, respectively. The half-life of the hydrated electrons in the solution was ∼24 μs. CONCLUSIONS By measuring 660-nm laser light transmitted through a cm-scale, multi-pass water cavity, we observed absorption transients consistent with hydrated electrons generated by clinical linac radiation. The agreement between our inferred dose and EBT3 film measurements suggests this proof-of-concept system represents a viable pathway toward tissue-equivalent dosimeters for clinical radiotherapy applications.
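For orientation, the standard radiochemistry relation linking such an absorbance transient to absorbed dose is sketched below. The abstract does not quote this formula; the symbols (molar absorption coefficient, effective multi-pass path length, yield expressed in mol/J) follow the usual conventions rather than the paper's notation.

```latex
\[
  \Delta A \;=\; \varepsilon_{660}\, c_{e^{-}_{aq}}\, \ell_{\mathrm{eff}}
  \qquad\Longrightarrow\qquad
  D \;=\; \frac{c_{e^{-}_{aq}}}{\rho\, G(e^{-}_{aq})}
    \;=\; \frac{\Delta A}{\varepsilon_{660}\,\ell_{\mathrm{eff}}\,\rho\, G(e^{-}_{aq})}
\]
```

Here ΔA is the peak absorbance change, ε660 the molar absorption coefficient of the hydrated electron at 660 nm, ℓeff the effective (multi-pass) optical path length, ρ the density of water, and G(e⁻aq) the radiation chemical yield in mol/J.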

2 citations


Journal ArticleDOI
TL;DR: In this article, the authors evaluate the feasibility and quality of intensity-modulated proton therapy (IMPT) plans generated with four different knowledge-based planning (KBP) pipelines fully integrated into a commercial treatment planning system.
Abstract: PURPOSE Automated treatment planning strategies are being widely implemented in clinical routine to reduce inter-planner variability, speed up the optimization process, and improve plan quality. This study aims to evaluate the feasibility and quality of intensity-modulated proton therapy (IMPT) plans generated with four different knowledge-based planning (KBP) pipelines fully integrated into a commercial treatment planning system (TPS). MATERIALS/METHODS A data set containing 60 oropharyngeal cancer patients was split into 11 folds, each containing 47 patients for training, 5 patients for validation and 5 patients for testing. A dose prediction model was trained on each of the folds, resulting in a total of 11 models. Three patients were left out in order to assess whether the differences introduced between models were significant. From voxel-based dose predictions, we analyze the two steps that follow the dose prediction: post-processing of the predicted dose and dose mimicking (DM). We focused on the effect of post-processing (PP) or no post-processing (NPP) combined with two different DM algorithms for optimization: the one available in the commercial TPS RayStation (RSM) and a simpler isodose-based mimicking (IBM). Using 55 test patients (5 test patients for each model), we evaluated the quality and robustness of the plans generated by the four proposed KBP pipelines (PP-RSM, PP-IBM, NPP-RSM, NPP-IBM). After robust evaluation, dose-volume histogram (DVH) metrics in nominal and worst-case scenarios were compared to those of the manually generated plans. RESULTS Nominal doses from the four KBP pipelines showed promising results, achieving comparable target coverage and improved dose to organs at risk (OARs) compared to the manual plans. However, overly optimistic post-processing applied to the dose prediction (i.e., too large a decrease of the dose to the organs) compromised the robustness of the plans. Even though RSM seemed to partially compensate for the lack of robustness in the PP plans, 65% of the patients still did not achieve the expected robustness levels. NPP-RSM plans seemed to achieve the best trade-off between robustness and OAR sparing. DISCUSSION/CONCLUSIONS PP and DM strategies are crucial steps to generate acceptable, robust, and deliverable IMPT plans from ML-predicted doses. Before the clinical implementation of any KBP pipeline, the PP and DM parameters predefined by the commercial TPS need to be modified accordingly, with a comprehensive feedback loop in which the robustness of the final dose calculations is evaluated. With the right choice of PP and DM parameters, KBP strategies have the potential to generate IMPT plans within clinically acceptable levels comparable to plans manually generated by dosimetrists. This article is protected by copyright. All rights reserved.

2 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated and compared the dose-response curves of the Gafchromic EBT4 film for megavoltage and kilovoltage x-ray beams with different dose levels, scanning spatial resolutions, and sizes of region of interest (ROI).
Abstract: BACKGROUND EBT4 was newly released for radiotherapy quality assurance to improve the signal-to-noise ratio in radiochromic film dosimetry. It is important to know its dose-response characteristics before its use in the clinic. PURPOSE This study aims to investigate and compare the dose-response curves of the Gafchromic EBT4 film for megavoltage and kilovoltage x-ray beams with different dose levels, scanning spatial resolutions, and sizes of region of interest (ROI). METHODS EBT4 film (Lot#07052201) calibration strips (3.5 × 20 cm²) were exposed to a 10 × 10 cm² open field at doses of 0, 63, 125, 500, 750, and 1000 cGy using a 6 MV photon beam. EBT4 film strips from the same lot were then exposed to each x-ray beam (6 MV, 6 MV FFF, 10 MV FFF, 15 MV, and 70 kV) at 6 dose values (50, 100, 300, 600, 800, 1000 cGy). A full sheet (25 × 20 cm²) of EBT4 film was irradiated at each energy with 300 cGy for profile comparison with the treatment planning calculation. At two different spatial resolutions of 72 and 300 dpi, each film piece was scanned three consecutive times in the center of an Epson 10000XL flatbed scanner in 48-bit color. The scanned images were analyzed using FilmQA Pro. For each scanned image, an ROI of 2 × 2 cm² at the field center was selected to obtain the average pixel value with its standard deviation in the ROI. An additional circular ROI of 1 cm diameter was also used to evaluate the impact of ROI shape and size, especially for FFF beams. The dose value, average dose-response value, and associated uncertainty were determined for each energy, and relative responses were analyzed. Student's t-test was performed to evaluate the statistical significance of the dose-response values with different color channels, ROI shapes, and spatial resolutions. RESULTS The dose-response curves for the five x-ray energies were compared in three color channels. Weak energy dependence was found among the megavoltage beams. No significant differences (average ∼1.1%) were observed for all doses in this study among the 6 MV, 6 MV FFF, 10 MV FFF, and 15 MV beams, regardless of spatial resolution and color channel. However, a statistically significant difference in dose-response of up to 12% was observed between the 70 kV and 6 MV beams. CONCLUSIONS The dose-response curves for Gafchromic EBT4 films were nearly independent of the energy of the photon beams among 6 MV, 6 MV FFF, 10 MV FFF, and 15 MV. For very low-energy photons (e.g., 70 kV), a separate calibration with the same low-energy x-ray beam is necessary. This article is protected by copyright. All rights reserved.
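As an illustration of the kind of calibration-curve fitting such a study relies on (the paper itself used FilmQA Pro), the sketch below fits a commonly used rational netOD-to-dose model with scipy. The netOD values are synthetic stand-ins, not the paper's measurements.

```python
# Illustrative film calibration fit; numbers are hypothetical, not from the paper.
import numpy as np
from scipy.optimize import curve_fit

dose_cGy = np.array([0, 63, 125, 500, 750, 1000], dtype=float)   # calibration doses
net_od   = np.array([0.00, 0.05, 0.09, 0.27, 0.36, 0.43])        # hypothetical red-channel netOD

def response(net_od, a, b, n):
    """Rational calibration model often used for radiochromic film: D = a*netOD + b*netOD^n."""
    return a * net_od + b * net_od**n

params, _ = curve_fit(response, net_od, dose_cGy, p0=[1000.0, 5000.0, 2.5])
print("fitted (a, b, n):", params)
print("dose at netOD = 0.20:", response(0.20, *params), "cGy")
```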

2 citations


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a dual network structure consisting of a feature refinement network (FRN) and a dynamic perception network (DPN), which extracts features of different levels through residual dense connections and fuses the two networks' features to assign weights to different regions and better handle the delicate tissues in CT images.
Abstract: Background Because of the potential health risks of the radiation generated by computed tomography (CT), concerns have been expressed about reducing the radiation dose. However, low-dose CT (LDCT) images contain complex noise and artifacts, bringing uncertainty to medical diagnosis. Purpose Existing deep learning (DL)-based denoising methods find it difficult to fully exploit hierarchical features of different levels, limiting the effect of denoising. Moreover, the standard convolution kernel is parameter-sharing and cannot be adjusted dynamically as the input changes. This paper proposes an LDCT denoising network using high-level feature refinement and multiscale dynamic convolution to mitigate these problems. Methods The dual network structure proposed in this paper consists of the feature refinement network (FRN) and the dynamic perception network (DPN). The FRN extracts features of different levels through residual dense connections. The high-level hierarchical information is transmitted to the DPN to improve the low-level representations. In the DPN, the two networks' features are fused by local channel attention (LCA) to assign weights to different regions and better handle the delicate tissues in CT images. Then, dynamic dilated convolution (DDC) with multibranch and multiscale receptive fields is proposed to enhance the expression and processing ability of the denoising network. The experiments were trained and tested on the dataset “NIH-AAPM-Mayo Clinic Low-Dose CT Grand Challenge,” consisting of 10 anonymous patients with normal-dose abdominal CT and LDCT at 25% dose. In addition, external validation was performed on the dataset “Low Dose CT Image and Projection Data,” which included 300 chest CT images at 10% dose and 300 head CT images at 25% dose. Results The proposed method was compared with seven mainstream LDCT denoising algorithms. On the Mayo dataset, it achieved a peak signal-to-noise ratio (PSNR) of 46.3526 dB (95% CI: 46.0121–46.6931 dB) and a structural similarity (SSIM) of 0.9844 (95% CI: 0.9834–0.9854). Compared with LDCT, the average increase was 3.4159 dB and 0.0239, respectively. The results are close to optimal and statistically significant compared with other methods. In external validation, the algorithm copes well with ultra-low-dose chest CT images at 10% dose, obtaining a PSNR of 28.6130 dB (95% CI: 28.1680–29.0580 dB) and an SSIM of 0.7201 (95% CI: 0.7101–0.7301). Compared with LDCT, PSNR/SSIM is increased by 3.6536 dB and 0.2132, respectively. In addition, the quality of LDCT can also be improved in head CT denoising. Conclusions This paper proposes a DL-based LDCT denoising algorithm, which utilizes high-level features and multiscale dynamic convolution to optimize the network's denoising effect. This method can realize speedy denoising and performs well in noise suppression and detail preservation, which can be helpful for diagnosis with LDCT.
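For readers unfamiliar with the PSNR/SSIM figures quoted above, the minimal sketch below shows how such metrics are typically computed; the images are random stand-ins, and this is not the authors' evaluation code.

```python
# Toy PSNR/SSIM evaluation against a normal-dose reference image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ndct = rng.random((512, 512)).astype(np.float32)                       # normal-dose reference
denoised = np.clip(ndct + 0.01 * rng.standard_normal((512, 512)), 0, 1).astype(np.float32)

psnr = peak_signal_noise_ratio(ndct, denoised, data_range=1.0)
ssim = structural_similarity(ndct, denoised, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```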

2 citations


Journal ArticleDOI
TL;DR: In this paper, a progressive cyclical convolutional neural network (PCCNN) was proposed to remove noise from low-dose CT images in latent space, using a noise transfer model that transfers noise from LDCT to NDCT and generates denoised and noisy CT images from unpaired data.
Abstract: BACKGROUND Reducing the radiation dose from computed tomography (CT) can significantly reduce the radiation risk to patients. However, low-dose CT (LDCT) suffers from severe and complex noise interference that affects subsequent diagnosis and analysis. Recently, deep learning-based methods have shown superior performance in LDCT image-denoising tasks. However, most methods require many normal-dose and low-dose CT image pairs, which are difficult to obtain in clinical applications. Unsupervised methods, on the other hand, are more general. PURPOSE Deep learning methods based on GAN networks have been widely used for unsupervised LDCT denoising, but the additional memory requirements of the model also hinder its further clinical application. To this end, we propose a simpler multi-stage denoising framework trained using unpaired data, the Progressive Cyclical Convolutional Neural Network (PCCNN), which can remove the noise from CT images in latent space. METHODS Our proposed PCCNN introduces a noise transfer model that transfers noise from LDCT to NDCT, generating denoised CT images and noisy CT images from unpaired CT images. The denoising framework also contains a progressive module that effectively removes noise through multi-stage wavelet transforms without sacrificing high-frequency components such as edges and details. RESULTS Compared with seven LDCT denoising algorithms, we perform a quantitative and qualitative evaluation of the experimental results and perform ablation experiments on each network module and loss function. On the AAPM dataset, compared with the contrasted unsupervised methods, our denoising framework shows excellent denoising performance, increasing the peak signal-to-noise ratio (PSNR) from 29.622 to 30.671 and the structural similarity index (SSIM) from 0.8544 to 0.9199. The PCCNN denoising results were close to optimal and statistically significant. In the qualitative comparison, PCCNN does not introduce additional blurring or artifacts; the resulting images have higher resolution and complete detail preservation, and the overall structural texture of the images is closer to NDCT. In visual assessments, PCCNN achieves a relatively balanced result in noise suppression, contrast retention, and lesion discrimination. CONCLUSIONS Extensive experimental validation shows that our scheme achieves reconstruction results comparable to supervised learning methods and performs well in image quality and medical diagnostic acceptability. This article is protected by copyright. All rights reserved.

2 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a cooperative labeling method to make use of weakly annotated medical imaging data for the training of a machine learning algorithm for nodule detection in chest CT.
Abstract: PURPOSE Machine learning algorithms are best trained with large quantities of accurately annotated samples. While natural scene images can often be labeled relatively cheaply and at large scale, obtaining accurate annotations for medical images is both time consuming and expensive. In this study, we propose a cooperative labeling method that allows us to make use of weakly annotated medical imaging data for the training of a machine learning algorithm. As most clinically produced data is weakly-annotated - produced for use by humans rather than machines, and lacking information machine learning depends upon - this approach allows us to incorporate a wider range of clinical data and thereby increase the training set size. METHODS Our pseudo-labeling method consists of multiple stages. In the first stage, a previously established network is trained using a limited number of samples with high-quality expert-produced annotations. This network is used to generate annotations for a separate larger dataset that contains only weakly annotated scans. In the second stage, by cross-checking the two types of annotations against each other, we obtain higher-fidelity annotations. In the third stage, we extract training data from the weakly annotated scans, and combine it with the fully annotated data, producing a larger training dataset. We use this larger dataset to develop a computer-aided detection (CADe) system for nodule detection in chest CT. RESULTS We evaluated the proposed approach by presenting the network with different numbers of expert-annotated scans in training and then testing the CADe using an independent expert-annotated dataset. We demonstrate that when availability of expert annotations is severely limited, the inclusion of weakly-labeled data leads to a 5% improvement in the Competitive Performance Metric (CPM), defined as the average of sensitivities at different false positive rates. CONCLUSIONS Our proposed approach can effectively merge a weakly-annotated dataset with a small, well-annotated dataset for algorithm training. This approach can help enlarge limited training data by leveraging the large amount of weakly labeled data typically generated in clinical image interpretation.
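A hedged sketch of the stage-2 cross-checking idea described above is given below: a network-generated nodule candidate is kept only if it is consistent with a weak clinical mark. Matching by centroid distance and the tolerance value are assumptions for illustration; the paper does not specify the exact criterion.

```python
# Illustrative cross-check of pseudo-labels against weak annotations.
import numpy as np

def cross_check(pseudo_candidates, weak_marks, tol_mm=10.0):
    """Keep pseudo-label candidates whose centroid lies within tol_mm of a weak mark."""
    weak = np.asarray(weak_marks, dtype=float)          # shape (n_marks, 3), in mm
    kept = []
    for cand in pseudo_candidates:                       # each cand: (x, y, z) in mm
        d = np.linalg.norm(weak - np.asarray(cand, dtype=float), axis=1)
        if d.min() <= tol_mm:
            kept.append(cand)
    return kept

pseudo = [(10.0, 20.0, 30.0), (100.0, 5.0, 40.0)]
weak = [(12.0, 19.0, 31.0)]
print(cross_check(pseudo, weak))    # -> [(10.0, 20.0, 30.0)]
```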

2 citations


Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors proposed an LDCT image denoising algorithm based on end-to-end training, which can effectively improve the diagnostic performance of CT images by constraining the details of the images and restoring the LDCT image structure.
Abstract: BACKGROUND Low-dose computed tomography (LDCT) can reduce the dose of X-ray radiation, making it increasingly significant for routine clinical diagnosis and treatment planning. However, the noise introduced by low-dose X-ray exposure degrades the quality of CT images, affecting the accuracy of clinical diagnosis. PURPOSE Noise, artifacts, and high-frequency components are similarly distributed in LDCT images. Transformers can capture global context information in an attentional manner, creating long-range dependencies on targets and extracting more powerful features. In this paper, we reduce the impact of image errors on the ability to retain detailed information and improve the noise suppression performance by fully mining the distribution characteristics of image information. METHODS This paper proposed an LDCT noise and artifact suppressing network based on the Swin Transformer. The network includes a noise extraction sub-network and a noise removal sub-network. The noise extraction and removal capabilities are improved using a coarse extraction network of high-frequency features based on full convolution. The noise removal sub-network improves the network's ability to extract relevant image features by using a Swin Transformer with a shifted window as an encoder-decoder and skip connections for global feature fusion. Also, the receptive field is extended by extracting multi-scale features of the images to recover the spatial resolution of the feature maps. The network uses a loss constraint with a combination of L1 and MS-SSIM to improve and ensure the stability and denoising effect of the network. RESULTS The denoising ability and clinical applicability of the methods were tested using clinical datasets. Compared with DnCNN, RED-CNN, CBDNet and TSCNN, the STEDNet method shows a better denoising effect in terms of RMSE and PSNR. The STEDNet method effectively removes image noise and preserves the image structure to the maximum extent, making the reconstructed image closest to the NDCT image. The subjective and objective analysis of several sets of experiments shows that the method in this paper can effectively maintain the structure, edges, and textures of the denoised images while having good noise suppression performance. In the real data evaluation, the RMSE of this method is reduced by 18.82%, 15.15%, 2.25%, and 1.10% on average compared with DnCNN, RED-CNN, CBDNet, and TSCNN, respectively. The average improvement in PSNR is 9.53%, 7.33%, 2.65%, and 3.69%, respectively. CONCLUSIONS This paper proposed an LDCT image denoising algorithm based on end-to-end training. The method in this paper can effectively improve the diagnostic performance of CT images by constraining the details of the images and restoring the LDCT image structure. The problem of increased noise and artifacts in CT images can be solved while maintaining the integrity of CT image tissue structure and pathological information. Compared with other algorithms, this method has better denoising effects both quantitatively and qualitatively. This article is protected by copyright. All rights reserved.
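A minimal sketch of a combined L1 + MS-SSIM loss of the kind mentioned above is shown below. This is not the authors' implementation: it assumes the third-party pytorch_msssim package (its ms_ssim function), and the weighting factor alpha is an assumed value the abstract does not specify.

```python
# Illustrative combined L1 + MS-SSIM loss (PyTorch).
import torch
from pytorch_msssim import ms_ssim   # third-party package assumed available

def l1_msssim_loss(pred, target, alpha=0.84, data_range=1.0):
    """alpha weights (1 - MS-SSIM) against L1; inputs are N x C x H x W tensors."""
    l1 = torch.mean(torch.abs(pred - target))
    msssim_term = 1.0 - ms_ssim(pred, target, data_range=data_range)
    return alpha * msssim_term + (1.0 - alpha) * l1

pred = torch.rand(2, 1, 256, 256)
target = torch.rand(2, 1, 256, 256)
print(l1_msssim_loss(pred, target).item())
```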

2 citations


Journal ArticleDOI
TL;DR: In this paper, a 2D strip ionization chamber array (SICA) with high spatiotemporal resolution was used to measure spot dwell times under various beam currents and to quantify dose rates for various field sizes.
Abstract: BACKGROUND The potential reduction of normal tissue toxicities during FLASH radiotherapy (FLASH-RT) has inspired many efforts to investigate its underlying mechanism and to translate it into the clinic. Such investigations require experimental platforms with FLASH-RT capabilities. PURPOSE To commission and characterize a 250 MeV proton research beamline with a saturated nozzle monitor ionization chamber for proton FLASH-RT small animal experiments. METHODS A 2D strip ionization chamber array (SICA) with high spatiotemporal resolution was used to measure spot dwell times under various beam currents and to quantify dose rates for various field sizes. An Advanced Markus chamber and a Faraday cup were irradiated with spot-scanned uniform fields and nozzle currents from 50 nA to 215 nA to investigate dose scaling relations. The SICA detector was set up upstream to establish a correlation between the SICA signal and the delivered dose at isocenter, to serve as an in vivo dosimeter and monitor the delivered dose rate. Two off-the-shelf brass blocks were used as apertures to shape the dose laterally. Dose profiles in 2D were measured with an amorphous silicon detector array at a low current of 2 nA and validated with Gafchromic EBT-XD films at high currents of up to 215 nA. RESULTS Spot dwell times become asymptotically constant as a function of the requested beam current for nozzle currents greater than 30 nA, due to the saturation of the monitor ionization chamber (MIC). With a saturated nozzle MIC, the delivered dose is always greater than the planned dose, but the desired dose can be achieved by scaling the MU of the field. The delivered doses exhibit excellent linearity (R² > 0.99) with respect to MU, beam current, and the product of MU and beam current. If the total number of spots is less than 100 at a nozzle current of 215 nA, a field-averaged dose rate greater than 40 Gy/s can be achieved. The SICA-based in vivo dosimetry system achieved excellent estimates of the delivered dose with an average (maximum) deviation of 0.02 Gy (0.05 Gy) over a range of delivered doses from 3 Gy to 44 Gy. Using brass aperture blocks reduced the 80%-20% penumbra by 64%, from 7.55 mm to 2.75 mm. The 2D dose profiles measured by the Phoenix detector at 2 nA and the EBT-XD film at 215 nA showed great agreement, with a gamma passing rate of 95.99% using a 1 mm/2% criterion. CONCLUSIONS A 250 MeV proton research beamline was successfully commissioned and characterized. Challenges due to the saturated monitor ionization chamber were mitigated by scaling MU and using an in vivo dosimetry system. A simple aperture system was designed and validated to provide sharp dose fall-off for small animal experiments. This experience can serve as a foundation for other centers interested in implementing FLASH radiotherapy preclinical research, especially those equipped with a similar saturated MIC. This article is protected by copyright. All rights reserved.
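The two corrections described above (MU rescaling under a saturated MIC, which is justified by the reported dose-MU linearity, and field-averaged dose rate as dose over delivery time) amount to simple arithmetic, sketched below with illustrative numbers rather than the paper's measurements.

```python
# Back-of-the-envelope sketch; all numbers are hypothetical.
planned_dose_Gy = 10.0
delivered_dose_Gy = 11.5          # delivered > planned because the nozzle MIC saturates
planned_mu = 500.0

# Linearity of dose with MU justifies a simple rescale of the field MU.
corrected_mu = planned_mu * planned_dose_Gy / delivered_dose_Gy
print(f"corrected MU: {corrected_mu:.1f}")

# Field-averaged dose rate = delivered dose / total delivery time.
delivery_time_s = 0.22            # hypothetical delivery time for a small field
print(f"field-averaged dose rate: {planned_dose_Gy / delivery_time_s:.1f} Gy/s")
```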

2 citations


Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors analyzed the importance of extracted CT radiomics features and developed a model with good generalization performance for precisely distinguishing the major NSCLC subtypes: adenocarcinoma (ADC) and squamous cell carcinoma (SCC).
Abstract: PURPOSE Classifying the subtypes of non-small cell lung cancer (NSCLC) is essential for clinically adopting optimal treatment strategies and improving clinical outcomes, but the histological subtypes are confirmed by invasive biopsy or post-operative examination at present. Based on multi-center data, this study aimed to analyze the importance of extracted CT radiomics features and develop a model with good generalization performance for precisely distinguishing major NSCLC subtypes: adenocarcinoma (ADC) and squamous cell carcinoma (SCC). METHODS We collected a multi-center CT dataset with 868 patients from 8 international databases on The Cancer Imaging Archive (TCIA). Among them, patients from 5 databases were mixed and split into training and test sets (560:140). The remaining 3 databases were used as independent test sets: TCGA set (n = 97) and lung3 set (n = 71). A total of 1409 features containing shape, intensity, and texture information were extracted from the tumor volume of interest (VOI); the ℓ2,1-norm minimization was then used for feature selection, and the importance of the selected features was analyzed. Next, the prediction and generalization performance of 130 radiomics models (10 common algorithms and 120 heterogeneous ensemble combinations) was compared by the average AUC value on three test sets. Finally, predictive results of the optimal model were shown. RESULTS After feature selection, 401 features were obtained. Features of intensity, texture GLCM, GLRLM, and GLSZM had higher classification weight coefficients than other features (shape, texture GLDM, and NGTDM), and the filtered-image features exhibited significantly greater importance than the original-image features (p-value = 0.0210). Moreover, 5 ensemble learning algorithms (Bagging, AdaBoost, RF, XGBoost, GBDT) had better generalization performance (p-value = 0.00418) than the other, non-ensemble algorithms (MLP, LR, GNB, SVM, KNN). The Bagging-AdaBoost-SVM model had the highest AUC value (0.815±0.010) on the three test sets. It obtained AUC values of 0.819, 0.823, and 0.804 on the test set, TCGA set, and lung3 set, respectively. CONCLUSION Our multi-dataset study showed that intensity features, texture features (GLCM, GLRLM, and GLSZM), and filtered-image features were more important for distinguishing ADCs from SCCs. Ensemble learning can improve the prediction and generalization performance on complicated multi-center data. The Bagging-AdaBoost-SVM model had the strongest generalization performance, and it showed promising clinical value for non-invasively predicting the histopathological subtypes of NSCLC. This article is protected by copyright. All rights reserved.
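The abstract does not specify how the Bagging, AdaBoost, and SVM learners are combined, so the sketch below uses a soft-voting ensemble over synthetic radiomics-like feature vectors as an illustrative stand-in; it is not the authors' configuration.

```python
# Illustrative heterogeneous ensemble on synthetic "radiomics" features.
import numpy as np
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 40))                                        # 40 selected features
y = (X[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(300) > 0).astype(int)  # ADC vs SCC labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[("bag", BaggingClassifier(random_state=0)),
                ("ada", AdaBoostClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    voting="soft")
ensemble.fit(X_tr, y_tr)
print(f"AUC = {roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1]):.3f}")
```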

2 citations


Journal ArticleDOI
TL;DR: In this paper, a neural network-based BNCT dose prediction method is proposed to achieve rapid and accurate acquisition of the 3D therapeutic dose distribution for patients with glioblastoma.
Abstract: BACKGROUND Boron neutron capture therapy (BNCT) is a binary radiotherapy based on the ¹⁰B(n,α)⁷Li capture reaction. Nonradioactive ¹⁰B atoms, which are selectively concentrated in tumor cells, react with low-energy neutrons (mainly thermal neutrons) to produce secondary particles with high linear energy transfer, thus depositing dose in tumor cells. In clinical practice, an appropriate treatment plan needs to be set on the basis of the treatment planning system (TPS). Existing BNCT TPSs usually use the Monte Carlo method to determine the three-dimensional (3D) therapeutic dose distribution, which often requires a lot of calculation time due to the complexity of simulating neutron transport. PURPOSE A neural network-based BNCT dose prediction method is proposed to achieve rapid and accurate acquisition of the BNCT 3D therapeutic dose distribution for patients with glioblastoma, to solve the time-consuming problem of BNCT dose calculation in the clinic. METHODS The clinical data of 122 patients with glioblastoma were collected. Eighteen patients were used as a test set, and the rest as a training set. A 3D-UNET was constructed through design optimization of the input and output data sets, based on radiation field information and patient CT information, to enable the prediction of the 3D dose distribution of BNCT. RESULTS The average mean absolute error between the predicted and simulated equivalent doses is less than 1 Gy for each organ. For the dose to 95% of the GTV volume (D95), the relative deviations between predicted and simulated results are all less than 2%. The average 2 mm/2% gamma index is 89.67%, and the average 3 mm/3% gamma index is 96.78%. Simulating the 3D therapeutic dose distribution of a patient with glioblastoma by the Monte Carlo method takes about 6 h using an Intel Xeon E5-2699 v4, whereas the time required by the method proposed in this study is less than 1 s using a Titan V graphics card. CONCLUSIONS This study proposes a 3D dose prediction method based on the 3D-UNET architecture for BNCT, and the feasibility of this method is demonstrated. Results indicate that the method can remarkably reduce the time required for calculation while ensuring the accuracy of the predicted 3D therapeutic dose distribution. This work is expected to promote the clinical development of BNCT in the future. This article is protected by copyright. All rights reserved.

Journal ArticleDOI
TL;DR: In this paper, the authors provide an overview of the technological developments and the evolution of clinical trials over the past 25 years for hypofractionation and SABR, with an outlook to future improvements.
Abstract: As we were invited to write an article celebrating the 50th Anniversary of the Medical Physics journal, on something historically significant, commemorative, and exciting that has happened in the past decades, the first idea that came to our minds was the fascinating radiotherapy paradigm shift from conventional fractionation to hypofractionation and stereotactic ablative radiotherapy (SABR). It is historically and clinically significant since, as we all know, this RT treatment revolution not only reduces treatment duration for patients, but also improves tumor control and cancer treatment outcomes. It is also commemorative and exciting for us medical physicists since technology development in medical physics has been the main driver of the success of this treatment regimen, which requires high precision and accuracy throughout the entire treatment planning and delivery process. This article provides an overview of the technological developments and the evolution of clinical trials over the past 25 years for hypofractionation and SABR, with an outlook to future improvements. This article is protected by copyright. All rights reserved.

Journal ArticleDOI
TL;DR: In this article, the authors presented the typical morphological features and standard dose values according to breast size acquired from a large patient cohort, and established radiation dose estimation models allowing accurate estimation of dose values, including MGD, with an acceptable RSE.
Abstract: BACKGROUND Spiral breast computed tomography (BCT) equipped with a photon-counting detector is a new radiological modality allowing for the compression-free acquisition of high-resolution 3-D datasets of the breast. Optimized dose exposure setups according to breast size were previously proposed but could not effectively be applied in a clinical environment due to ambiguity in measuring breast size. PURPOSE This study aims to report the standard radiation dose values in a large cohort of patients examined with BCT, and to provide a mathematical model to estimate radiation dose based on morphological features of the breast. METHODS This retrospective study was conducted on 1,657 BCT examinations acquired between 2018 and 2021 from 829 participants (57±10 years, all female). Applying a dedicated breast tissue segmentation algorithm and Monte Carlo simulation, mean absorbed dose (MAD), mean glandular dose (MGD), mean skin dose (MSD), maximum glandular dose (maxGD), and maximum skin dose (maxSD) were calculated and related to morphological features such as breast volume, effective diameter, breast length, skin volume, and glandularity. Effective dose (ED) was calculated by applying the corresponding beam and tissue weighting factors, 1 Sv/Gy and 0.12 per breast. Relevant morphological features predicting dose values were identified based on Spearman's rank correlation coefficient. Exponential or bi-exponential models predicting the dose values as a function of morphological features were fitted by using a non-linear least squares method. The models were validated by assessing R² and residual standard error (RSE). RESULTS The most relevant morphological features for radiation dose estimation were the breast volume (correlation coefficient: -0.8), diameter (-0.7), and length (-0.6). The glandularity presented a weak positive correlation (0.4) with MGD and maxGD due to the inhomogeneous distribution of the glandularity and absorbed dose in the 3-D breast volume. The standard MGDs were calculated to be 7.3±0.7, 6.5±0.3, and 5.9±0.3 mGy, MADs to 7.6±0.8, 6.8±0.3, and 6.2±0.3 mGy, maxSDs to 19.9±1.6, 19.5±0.5, and 18.9±0.5 mGy, and EDs to 0.88±0.08, 0.78±0.04, and 0.72±0.04 mSv for small, medium, and large breasts with average breast lengths of 5.9±1.6, 8.7±1.3, and 12.2±2.0 cm, respectively. The estimated glandularity was 23.1±16.9%, 12.5±11.4%, and 6.9±7.3% from small to large breasts. The mathematical models were able to estimate MAD, MGD, MSD, and maxSD as a function of each morphological feature with an RSE of at most 0.5 mGy. CONCLUSION We presented the typical morphological features and standard dose values according to breast size acquired from a large patient cohort. We established radiation dose estimation models allowing accurate estimation of dose values, including MGD, with an acceptable RSE based on each of the easily measured morphological features of the breast. Clinicians could use the breast length as a dosimetric alert on the scanner prior to a BCT scan. Radiation exposure for BCT was lower than for diagnostic mammography and cone-beam breast CT. This article is protected by copyright. All rights reserved.
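As an illustration of the exponential dose models described above, the sketch below fits a single-exponential form MGD(V) = a·exp(-b·V) + c with scipy and reports a residual standard error; the breast volumes, MGD values, and functional form are made up for the example and are not the study's data or fitted coefficients.

```python
# Illustrative exponential dose-model fit; all numbers are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

volume_ml = np.array([200, 400, 600, 900, 1300, 1800], dtype=float)
mgd_mGy = np.array([7.4, 7.0, 6.6, 6.3, 6.1, 5.9])            # hypothetical mean glandular doses

def exp_model(v, a, b, c):
    """Single-exponential variant: MGD(V) = a * exp(-b * V) + c."""
    return a * np.exp(-b * v) + c

params, _ = curve_fit(exp_model, volume_ml, mgd_mGy, p0=[2.0, 1e-3, 6.0])
residuals = mgd_mGy - exp_model(volume_ml, *params)
rse = np.sqrt(np.sum(residuals**2) / (len(volume_ml) - len(params)))
print("fitted (a, b, c):", params, " RSE [mGy]:", round(rse, 3))
```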

Journal ArticleDOI
TL;DR: In this paper, the potential of pMBRT for treating clinical indications that are candidates for the first clinical trials (i.e., brain, lung, and liver metastases) was evaluated.
Abstract: BACKGROUND Proton minibeam radiation therapy (pMBRT) is a new radiotherapy approach that has shown a significant increase in the therapeutic window in glioma-bearing rats compared to conventional proton therapy. Such preclinical results encourage the preparation of clinical trials. PURPOSE In this study, the potential of pMBRT for treating clinical indications that are candidates for the first clinical trials (i.e., brain, lung, and liver metastases) was evaluated. METHODS Four clinical cases, initially treated with stereotactic radiotherapy (SRT), were selected for this study. pMBRT, SRT, and conventional proton therapy (PT) dose distributions were compared by using three main criteria: (i) the tumor coverage, (ii) the mean dose to organs-at-risk, and (iii) the possible adverse effects in normal tissues, by considering valley doses as responsible for tissue sparing. pMBRT plans consisted of one fraction and 1-2 fields. Dose calculations were computed by means of Monte Carlo simulations. RESULTS pMBRT treatments provide similar or superior target coverage to SRT, even using fewer fields. pMBRT also significantly reduces the biologically effective dose (BED) to organs-at-risk. In addition, valley and mean doses to normal tissues remain below tolerance limits when treatments are delivered in a single fraction, contrary to PT treatments. CONCLUSIONS This work provides a first insight into the possibility of treating metastases with pMBRT. More favorable dose distributions and treatment delivery regimes may be expected from this new approach than from SRT. The advantages of pMBRT would need to be confirmed by means of Phase I clinical trials. This article is protected by copyright. All rights reserved.
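For reference, the standard linear-quadratic relation usually behind the BED comparison mentioned above is given below; the abstract does not state it explicitly, and the α/β ratio is tissue-specific rather than a single value.

```latex
\[
  \mathrm{BED} \;=\; n\,d\left(1 + \frac{d}{\alpha/\beta}\right),
\]
```

with n fractions of dose d per fraction.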

Journal ArticleDOI
TL;DR: In this paper, the authors provide guidance on quality control of C-arm Cone Beam Computed Tomography (C-arm CBCT) systems with volumetric imaging capability.
Abstract: This report reviews the image acquisition and reconstruction characteristics of C-arm Cone Beam Computed Tomography (C-arm CBCT) systems and provides guidance on quality control of C-arm systems with this volumetric imaging capability. The concepts of 3D image reconstruction, geometric calibration, image quality, and dosimetry covered in this report are also pertinent to CBCT for Image-Guided Radiation Therapy (IGRT). However, IGRT systems introduce a number of additional considerations, such as geometric alignment of the imaging at treatment isocenter, which are beyond the scope of the charge to the task group and the report. Section 1 provides an introduction to C-arm CBCT systems and reviews a variety of clinical applications. Section 2 briefly presents nomenclature specific or unique to these systems. A short review of C-arm fluoroscopy quality control (QC) in relation to 3D C-arm imaging is given in Section 3. Section 4 discusses system calibration, including geometric calibration and uniformity calibration. A review of the unique approaches and challenges to 3D reconstruction of data sets acquired by C-arm CBCT systems is given in Section 5. Sections 6 and 7 go into greater depth to address the performance assessment of C-arm CBCT units. First, Section 6 describes testing approaches and phantoms that may be used to evaluate image quality (spatial resolution and image noise and artifacts) and identifies several factors that affect image quality. Section 7 describes both free-in-air and in-phantom approaches to evaluating radiation dose indices. The methodologies described for assessing image quality and radiation dose may be used for annual constancy assessment and comparisons among different systems to help medical physicists determine when a system is not operating as expected. Baseline measurements taken either at installation or after a full preventative maintenance service call can also provide valuable data to help determine whether the performance of the system is acceptable. Collecting image quality and radiation dose data on existing phantoms used for CT image quality and radiation dose assessment, or on newly developed phantoms, will inform the development of performance criteria and standards. Phantom images are also useful for identifying and evaluating artifacts. In particular, comparing baseline data with those from current phantom images can reveal the need for system calibration before image artifacts are detected in clinical practice. Examples of artifacts are provided in Sections 4, 5, and 6. This article is protected by copyright. All rights reserved.

Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors proposed a new semi-supervised medical image segmentation network (DRS-Net) based on a dual-regularization scheme to solve the overfitting problem.
Abstract: BACKGROUND Semi-supervised learning is becoming an effective solution for medical image segmentation because of the lack of a large amount of labeled data. PURPOSE Consistency-based strategies are widely used in semi-supervised learning. However, this remains a challenging problem because of the coupling of CNN-based isomorphic models. In this study, we propose a new semi-supervised medical image segmentation network (DRS-Net) based on a dual-regularization scheme to address this challenge. METHODS The proposed model consists of a CNN and a multi-decoder hybrid Transformer, which adopts two regularization schemes to extract more generalized representations for unlabeled data. Considering the difference in learning paradigms, we introduce cross-guidance between the CNN and the hybrid Transformer, which uses the pseudo-label output from one model to supervise the other, so as to better excavate valid representations from unlabeled data. In addition, we use feature-level consistency regularization to effectively improve the feature extraction performance. We apply different perturbations to the feature maps output from the hybrid Transformer encoder and enforce invariance of the predictions to enhance the encoder's representations. RESULTS We have extensively evaluated our approach on three typical medical image datasets, including CT slices from Spleen, MRI slices from the Heart, and FM Nuclei. We compare DRS-Net with state-of-the-art methods, and experimental results show that DRS-Net performs better on the Spleen dataset, where the Dice similarity coefficient increased by about 3.5%. The experimental results on the Heart and Nuclei datasets show that DRS-Net also improves the segmentation effect on the two datasets. CONCLUSIONS The proposed DRS-Net enhances the segmentation performance of the datasets with three different medical modalities, where the dual-regularization scheme extracts more generalized representations and solves the overfitting problem. This article is protected by copyright. All rights reserved.

Journal ArticleDOI
TL;DR: In this article, the performance of PET and CT image fusion for gross tumor volume (GTV) segmentation of head and neck cancers (HNCs) was evaluated utilizing conventional, deep learning (DL), and output-level voting-based fusions.
Abstract: BACKGROUND PET/CT images combining anatomic and metabolic data provide complementary information that can improve clinical task performance. PET image segmentation algorithms exploiting the multi-modal information available are still lacking. PURPOSE Our study aimed to assess the performance of PET and CT image fusion for gross tumor volume (GTV) segmentation of head and neck cancers (HNCs) utilizing conventional, deep learning (DL), and output-level voting-based fusions. METHODS The current study is based on a total of 328 histologically confirmed HNCs from six different centers. The images were automatically cropped to a 200 × 200 head and neck region box, and CT and PET images were normalized for further processing. Eighteen conventional image-level fusions were implemented. In addition, a modified U2-Net architecture was used as the DL fusion model baseline. Three different input-, layer-, and decision-level information fusions were used. Simultaneous truth and performance level estimation (STAPLE) and majority voting were employed to merge different segmentation outputs (from PET and from image-level and network-level fusions), that is, output-level information fusion (voting-based fusions). Different networks were trained in a 2D manner with a batch size of 64. Twenty percent of the dataset, stratified by center (20% in each center), was used for final result reporting. Different standard segmentation metrics and conventional PET metrics, such as SUV, were calculated. RESULTS In single modalities, PET had a reasonable performance with a Dice score of 0.77 ± 0.09, while CT did not perform acceptably and reached a Dice score of only 0.38 ± 0.22. Conventional fusion algorithms obtained Dice scores in the range [0.76-0.81], with guided-filter-based context enhancement (GFCE) at the low end, and anisotropic diffusion and Karhunen-Loeve transform fusion (ADF), multi-resolution singular value decomposition (MSVD), and multi-level image decomposition based on latent low-rank representation (MDLatLRR) at the high end. All DL fusion models achieved Dice scores of 0.80. Output-level voting-based models outperformed all other models, achieving superior results with a Dice score of 0.84 for Majority_ImgFus, Majority_All, and Majority_Fast. A mean error of almost zero was achieved for all fusions using SUVpeak, SUVmean, and SUVmedian. CONCLUSION PET/CT information fusion adds significant value to segmentation tasks, considerably outperforming PET-only and CT-only methods. In addition, both conventional image-level and DL fusions achieve competitive results. Meanwhile, output-level voting-based fusion using majority voting of several algorithms results in statistically significant improvements in the segmentation of HNC.
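A minimal sketch of the output-level (voting-based) fusion described above is shown below: a voxel is labeled foreground when more than half of the candidate segmentations agree. The masks are random placeholders rather than real network outputs.

```python
# Illustrative majority voting over binary segmentation masks.
import numpy as np

rng = np.random.default_rng(0)
masks = rng.integers(0, 2, size=(5, 200, 200)).astype(np.uint8)   # 5 candidate segmentations

majority = (masks.sum(axis=0) > masks.shape[0] / 2).astype(np.uint8)
print("fused foreground voxels:", int(majority.sum()))
```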

Journal ArticleDOI
TL;DR: In this paper, the authors used virtual clinical trials (VCTs) to evaluate DL-based methods for denoising myocardial perfusion SPECT (MPS) images.
Abstract: BACKGROUND Artificial intelligence-based methods have generated substantial interest in nuclear medicine. An area of significant interest has been the use of deep-learning (DL)-based approaches for denoising images acquired with lower doses, shorter acquisition times, or both. Objective evaluation of these approaches is essential for clinical application. PURPOSE DL-based approaches for denoising nuclear-medicine images have typically been evaluated using fidelity-based figures of merit (FoMs) such as root mean squared error (RMSE) and structural similarity index measure (SSIM). However, these images are acquired for clinical tasks and thus should be evaluated based on their performance in these tasks. Our objectives were to: (1) investigate whether evaluation with these FoMs is consistent with objective clinical-task-based evaluation; (2) provide a theoretical analysis for determining the impact of denoising on signal-detection tasks; and (3) demonstrate the utility of virtual clinical trials (VCTs) to evaluate DL-based methods. METHODS A VCT to evaluate a DL-based method for denoising myocardial perfusion SPECT (MPS) images was conducted. To conduct this evaluation study, we followed the recently published best practices for evaluation of AI algorithms for nuclear medicine (the RELAINCE guidelines). An anthropomorphic patient population modeling clinically relevant variability was simulated. Projection data for this patient population at normal and low-dose count levels (20%, 15%, 10%, 5%) were generated using well-validated Monte Carlo-based simulations. The images were reconstructed using a 3-D ordered-subsets expectation maximization-based approach. Next, the low-dose images were denoised using a commonly used convolutional neural network-based approach. The impact of DL-based denoising was evaluated using both fidelity-based FoMs and area under the receiver operating characteristic curve (AUC), which quantified performance on the clinical task of detecting perfusion defects in MPS images as obtained using a model observer with anthropomorphic channels. We then provide a mathematical treatment to probe the impact of post-processing operations on signal-detection tasks and use this treatment to analyze the findings of this study. RESULTS Based on fidelity-based FoMs, denoising using the considered DL-based method led to significantly superior performance. However, based on ROC analysis, denoising did not improve, and in fact, often degraded detection-task performance. This discordance between fidelity-based FoMs and task-based evaluation was observed at all the low-dose levels and for different cardiac-defect types. Our theoretical analysis revealed that the major reason for this degraded performance was that the denoising method reduced the difference in the means of the reconstructed images and the channel operator-extracted feature vectors between the defect-absent and defect-present cases. CONCLUSIONS The results show the discrepancy between the evaluation of DL-based methods with fidelity-based metrics vs. the evaluation on clinical tasks. This motivates the need for objective task-based evaluation of DL-based denoising approaches. Further, this study shows how VCTs provide a mechanism to conduct such evaluations computationally, in a time and resource-efficient setting, and avoid risks such as radiation dose to the patient. Finally, our theoretical treatment reveals insights into the reasons for the limited performance of the denoising approach and may be used to probe the effect of other post-processing operations on signal-detection tasks. This article is protected by copyright. All rights reserved.
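As a toy illustration (not the study's model-observer code) of the task-based figure of merit used above, the AUC can be computed from observer test statistics for defect-present versus defect-absent images; the statistics below are synthetic.

```python
# Synthetic detection-task AUC from observer test statistics.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
t_absent = rng.normal(0.0, 1.0, 500)        # test statistics, defect-absent cases
t_present = rng.normal(1.2, 1.0, 500)       # test statistics, defect-present cases

labels = np.concatenate([np.zeros(500), np.ones(500)])
scores = np.concatenate([t_absent, t_present])
print(f"AUC = {roc_auc_score(labels, scores):.3f}")
```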


Journal ArticleDOI
TL;DR: In this article, a computational patient phantom model was generated from a clinical multi-catheter ¹⁹²Ir HDR breast brachytherapy case and the model was imported into two commercial treatment planning systems (TPSs) currently incorporating an MBDCA.
Abstract: PURPOSE To provide the first clinical test case for commissioning of ¹⁹²Ir brachytherapy model-based dose calculation algorithms (MBDCAs) according to the AAPM TG-186 report workflow. ACQUISITION AND VALIDATION METHODS A computational patient phantom model was generated from a clinical multi-catheter ¹⁹²Ir HDR breast brachytherapy case. Regions of interest (ROIs) were contoured and digitized on the patient CT images and the model was written to a series of DICOM CT images using MATLAB. The model was imported into two commercial treatment planning systems (TPSs) currently incorporating an MBDCA. Identical treatment plans were prepared using a generic ¹⁹²Ir HDR source and the TG-43-based algorithm of each TPS. This was followed by dose to medium in medium calculations using the MBDCA option of each TPS. Monte Carlo (MC) simulation was performed in the model using three different codes and information parsed from the treatment plan exported in DICOM radiation therapy (RT) format. Results were found to agree within statistical uncertainty and the dataset with the lowest uncertainty was assigned as the reference MC dose distribution. DATA FORMAT AND USAGE NOTES The dataset is available online at http://irochouston.mdanderson.org/rpc/BrachySeeds/BrachySeeds/index.html, https://doi.org/10.52519/00005. Files include the treatment plan for each TPS in DICOM RT format, reference MC dose data in RT Dose format, as well as a guide for database users and all files necessary to repeat the MC simulations. POTENTIAL APPLICATIONS The dataset facilitates the commissioning of brachytherapy MBDCAs using TPS-embedded tools and establishes a methodology for the development of future clinical test cases. It is also useful to non-MBDCA adopters for intercomparing MBDCAs and exploring their benefits and limitations, as well as to brachytherapy researchers in need of a dosimetric and/or a DICOM RT information parsing benchmark. Limitations include specificity in terms of radionuclide, source model, clinical scenario, and MBDCA version used for its preparation.

Journal ArticleDOI
TL;DR: In this article, a dedicated applicator to hold a brass aperture for a proton SRS system was designed, and the mechanical precision of the system was tested using a metal ball and film for 11 combinations of gantry and couch angles.
Abstract: BACKGROUND Mechanical accuracy should be verified before implementing a proton stereotactic radiosurgery (SRS) program. Linear accelerator (Linac)-based SRS systems often use electronic portal imaging devices (EPIDs) to verify beam isocentricity. Because proton therapy systems do not have EPIDs, beam isocentricity tests of proton SRS may still rely on films, which are not efficient. PURPOSE To validate that our proton SRS system meets mechanical precision requirements and to present an efficient method to evaluate the couch and gantry rotational isocentricity for our proton SRS system. METHODS A dedicated applicator to hold a brass aperture for the proton SRS system was designed. The mechanical precision of the system was tested using a metal ball and film for 11 combinations of gantry and couch angles. A more efficient quality assurance (QA) procedure was developed, which used a scintillator device to replace the film. The couch rotational isocentricity tests were performed using orthogonal kV x-rays with the couch rotated isocentrically to 5 positions (0°, 315°, 270°, 225°, and 180°). At each couch position, the distance between the metal ball in the kV images and the imaging isocenter was measured. The gantry isocentricity tests were performed using a cone-shaped scintillator and proton beams at 5 gantry angles (0°, 45°, 90°, 135°, and 180°), and the isocenter position and the distance of each beam path to the isocenter were obtained. The daily QA procedure was performed for one month to test the robustness and reproducibility of the procedure. RESULTS The gantry and couch rotational isocentricity exhibited sub-mm precision, with most measurements within ±0.5 mm. The one-month QA results showed that the procedure was robust and highly reproducible, to within ±0.2 mm. The gantry isocentricity test using the cone-shaped scintillator was accurate and sensitive to variations of ±0.2 mm. The QA procedure was efficient enough to be completed within 30 minutes. The one-month isocentricity position variations were within 0.5 mm, demonstrating that the overall proton SRS system was stable and precise. CONCLUSION The proton SRS Winston-Lutz QA procedure using a cone-shaped scintillator was efficient and robust. We were able to verify that radiation delivery could be performed with sub-mm mechanical precision. This article is protected by copyright. All rights reserved.
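A sketch of the Winston-Lutz-style analysis described above is given below (not the clinical QA software): the distance of the detected ball center from the imaging isocenter is evaluated at each couch position and the maximum deviation is reported. The ball positions and tolerance are hypothetical.

```python
# Illustrative couch rotational isocentricity check.
import numpy as np

isocenter = np.array([0.0, 0.0])                       # imaging isocenter in the kV image (mm)
ball_centers = {                                        # hypothetical detected ball positions (mm)
    0:   np.array([ 0.10, -0.05]),
    315: np.array([-0.20,  0.15]),
    270: np.array([ 0.05,  0.30]),
    225: np.array([-0.15, -0.10]),
    180: np.array([ 0.25,  0.05]),
}

deviation = {angle: float(np.linalg.norm(c - isocenter)) for angle, c in ball_centers.items()}
worst = max(deviation, key=deviation.get)
print({a: round(d, 2) for a, d in deviation.items()})
print(f"max deviation {deviation[worst]:.2f} mm at couch {worst} deg (tolerance e.g. 0.5 mm)")
```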

Journal ArticleDOI
TL;DR: In this article, the authors evaluated the role and accuracy of the machine-specific reference correction factors ($k_{Q_{msr},Q_0}^{f_{msr},f_{ref}}$) in determining the absorbed dose rates to water in the reference dosimetry of Gamma Knife.
Abstract: BACKGROUND The machine-specific reference correction factors ($k_{Q_{msr},Q_0}^{f_{msr},f_{ref}}$) were introduced in International Atomic Energy Agency (IAEA) Technical Report Series 483 (TRS-483) for reference dosimetry of small fields. Several correction factor sets exist for a Leksell Gamma Knife® (GK) Perfexion™ or Icon™. Nevertheless, experiments have not rigorously validated the correction factors from different studies. PURPOSE This study aimed to assess the role and accuracy of $k_{Q_{msr},Q_0}^{f_{msr},f_{ref}}$ values in determining the absorbed dose rates to water in the reference dosimetry of Gamma Knife. METHODS The dose rates in the 16 mm collimator field of a GK were determined following the international code of practices with three ionization chambers: PTW T31010, PTW T31016 (PTW Freiberg GmbH, New York, NY, USA), and Exradin A16 (Standard Imaging, Inc., Middleton, WI, USA). A chamber was placed at the center of a solid water phantom (Elekta AB, Stockholm, Sweden) using a detector-specific insert. The reference point of the ionization chamber was confirmed using cone-beam CT images. Consistency checks were repeated five times at a GK site and performed once at seven GK sites. Correction factors from six simulations reported in previous studies were employed. Variations in the dose rates and relative dose rates before and after applying the $k_{Q_{msr},Q_0}^{f_{msr},f_{ref}}$ were statistically compared. RESULTS The standard deviation of the dose rates measured by the three chambers decreased significantly after any correction method was applied (p = 0.000). When the correction factors of all studies were averaged, the standard deviation was reduced significantly more than when any single correction method was applied (p ≤ 0.030), except for the IAEA TRS-483 correction factors (p = 0.148). Before any correction was applied, there were statistically significant differences among the relative dose rates measured by the three chambers (p = 0.000). None of the single correction methods could remove the differences among the ionization chambers (p ≤ 0.038). After TRS-483 correction, the dose rate of Exradin A16 differed from those of the other two chambers (p ≤ 0.025). After the averaged factors were applied, there were no statistically significant differences between any pairs of chambers according to Scheffe's post hoc analyses (p ≥ 0.051); however, PTW T31010 differed from PTW 31016 according to Tukey's HSD analyses (p = 0.040). CONCLUSION The $k_{Q_{msr},Q_0}^{f_{msr},f_{ref}}$ significantly reduced variations in the dose rates measured by the three ionization chambers. The mean correction factors of the six simulations produced the most consistent results, but this finding was not explicitly proven in the statistical analyses. This article is protected by copyright. All rights reserved.
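For readers unfamiliar with the TRS-483 notation, the msr correction factor enters the small-field dose determination schematically as below; this is the standard formalism rather than an equation quoted from the paper, and influence-quantity corrections to the chamber reading M are omitted for brevity.

```latex
\[
  D_{w,Q_{msr}}^{f_{msr}}
    \;=\; M_{Q_{msr}}^{f_{msr}}\; N_{D,w,Q_0}\;
      k_{Q_{msr},Q_0}^{f_{msr},f_{ref}}
\]
```

Here M is the corrected chamber reading in the machine-specific reference field, N_{D,w,Q_0} the absorbed-dose-to-water calibration coefficient at the calibration quality Q_0, and the last factor the machine-specific reference correction discussed in the abstract.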

Journal ArticleDOI
TL;DR: Yu et al. as mentioned in this paper proposed a pulmonary artery segmentation network (PA-Net) to segment the pulmonary artery region from 2D CT images, which used reverse attention and edge attention to enhance the expression ability of the boundary.
Abstract: BACKGROUND Pulmonary embolism is a kind of cardiovascular disease that threatens human life and health. Since pulmonary embolism occurs in the pulmonary artery, improving the segmentation accuracy of the pulmonary artery is the key to the diagnosis of pulmonary embolism. Traditional medical image segmentation methods have limited effectiveness in pulmonary artery segmentation. In recent years, deep learning methods have been gradually adopted to solve complex problems in the field of medical image segmentation. PURPOSE Due to the irregular shape of the pulmonary artery and the complex adjacent tissues, the accuracy of existing deep learning-based pulmonary artery segmentation methods needs to be improved. Therefore, the purpose of this paper is to develop a segmentation network that achieves higher segmentation accuracy and thereby further improves diagnostic performance. METHODS In this study, pulmonary artery segmentation is improved through both the network model and the loss function: we propose a pulmonary artery segmentation network (PA-Net) to segment the pulmonary artery region from 2D CT images. Reverse attention and edge attention are used to enhance the expression ability of the boundary. In addition, to better use feature information, a channel attention module is introduced in the decoder to highlight important channel features and suppress unimportant ones. Because of blurred boundaries, pixels near the boundaries of the pulmonary artery may be difficult to segment. Therefore, a new contour loss function based on the active contour model is proposed in this study to segment the target region by assigning dynamic weights to false positive and false negative regions and to accurately predict the boundary structure. RESULTS The experimental results show that the segmentation accuracy of the proposed method is significantly improved in comparison with state-of-the-art segmentation methods, and the Dice coefficient is 0.938±0.035, which is also confirmed by the 3D reconstruction results. CONCLUSIONS Our proposed method can accurately segment the pulmonary artery structure. This new development will provide the possibility for further rapid diagnosis of pulmonary artery diseases such as pulmonary embolism. Code is available at https://github.com/Yuanyan19/PA-Net. This article is protected by copyright. All rights reserved.
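
A minimal sketch of a region loss with separate false-positive/false-negative weighting, of the kind described above, is shown below. The weights, tensor shapes, and function name are illustrative assumptions; this is not the paper's exact contour loss.

```python
import torch

def weighted_region_loss(pred, target, w_fp=1.0, w_fn=2.0, eps=1e-6):
    """Region term with separate false-positive / false-negative weighting
    (illustrative sketch, not PA-Net's exact active-contour formulation).

    pred:   sigmoid probabilities, shape (N, 1, H, W)
    target: binary ground-truth mask, same shape
    """
    fp = pred * (1.0 - target)        # predicted foreground outside the mask
    fn = (1.0 - pred) * target        # missed foreground inside the mask
    region = w_fp * fp.sum() + w_fn * fn.sum()
    return region / (target.numel() + eps)
```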

Journal ArticleDOI
TL;DR: In this paper, a Monte Carlo (MC) based continuous aperture optimization (MCCAO) algorithm was proposed for volumetric modulated arc therapy (VMAT), including applications to VMAT on MR-linacs and trajectory-based VMAT.
Abstract: BACKGROUND Currently, the commercial treatment planning systems for magnetic-resonance guided linear accelerators (MR-linacs) only support step-and-shoot intensity-modulated radiation therapy (IMRT). However, recent studies have shown the feasibility of delivering arc therapy on MR-linacs, which is expected to improve dose distributions and delivery speed. By accurately accounting for the electron return effect in the presence of a magnetic field, a Monte Carlo (MC) algorithm is ideally suited for the inverse treatment planning of this technique. PURPOSE We propose a novel MC-based continuous aperture optimization (MCCAO) algorithm for volumetric modulated arc therapy (VMAT), including applications to VMAT on MR-linacs and trajectory-based VMAT. A unique feature of MCCAO is that the continuous character of gantry rotation and multi-leaf collimator (MLC) motion is accounted for at every stage of the optimization. METHODS The optimization process uses a multi-stage simulation of 4D dose distribution. A phase space is scored at the top surface of the MLC and the energy deposition of each particle history is mapped to its position in this phase space. A progressive sampling method is used, where both MLC leaf positions and monitor unit (MU) weights are randomly changed, while respecting the linac mechanical limits. Due to the continuous nature of the leaf motion, such changes affect not only a single control point, but propagate to the adjacent ones as well, and the corresponding dose distribution changes are accounted for. A dose-volume cost function is used, which includes the MC statistical uncertainty. RESULTS We applied our optimization technique to various treatment sites, using standard and flattening-filter-free (FFF) 6 MV beam models, with and without a 1.5 T magnetic field. MCCAO generates deliverable plans, whose dose distributions are in good agreement with measurements on ArcCHECK and stereotactic radiosurgery End-To-End Phantom. CONCLUSIONS We show that the novel MCCAO method generates VMAT plans that meet clinical objectives for both conventional and MR-linacs.
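
The progressive-sampling idea can be pictured as a simple accept/reject loop over random leaf and MU perturbations. The sketch below abstracts the MC dose recomputation, the propagation of dose changes to adjacent control points, and the detailed mechanical-limit checks into a user-supplied cost_fn, so it is only an outline of the optimization flow under those assumptions, not the MCCAO implementation.

```python
import random

def progressive_sampling(leaf_pos, mu_weights, cost_fn, n_iter=10000,
                         max_leaf_step=5.0, max_mu_step=0.05):
    """Illustrative skeleton of an aperture-optimization loop that randomly
    perturbs MLC leaf positions or MU weights and keeps changes that lower
    the cost. leaf_pos is a list (per control point) of leaf positions in mm;
    mu_weights is a list of MU weights per control point."""
    best = cost_fn(leaf_pos, mu_weights)
    for _ in range(n_iter):
        cp = random.randrange(len(mu_weights))        # pick a control point
        if random.random() < 0.5:
            leaf = random.randrange(len(leaf_pos[cp]))
            old = leaf_pos[cp][leaf]
            leaf_pos[cp][leaf] += random.uniform(-max_leaf_step, max_leaf_step)
            if cost_fn(leaf_pos, mu_weights) < best:
                best = cost_fn(leaf_pos, mu_weights)  # accept the change
            else:
                leaf_pos[cp][leaf] = old              # reject and revert
        else:
            old = mu_weights[cp]
            mu_weights[cp] = max(0.0, old + random.uniform(-max_mu_step, max_mu_step))
            if cost_fn(leaf_pos, mu_weights) < best:
                best = cost_fn(leaf_pos, mu_weights)
            else:
                mu_weights[cp] = old
    return leaf_pos, mu_weights, best
```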

Journal ArticleDOI
Yang Yang, Xiaoqin Li, Jipeng Fu, Zhenbo Han, Bin Gao 
TL;DR: Wang et al. as mentioned in this paper proposed a three-dimensional multi-view convolutional neural network (3D MVCNN) framework and embedded the squeeze-and-excitation (SE) module in it to further address the variability of each view in the multi-view framework.
Abstract: PURPOSE Early screening is crucial to improve the survival rate and recovery rate of lung cancer patients. Computer-aided diagnosis (CAD) systems are a powerful tool to assist clinicians in early diagnosis. Lung nodules are characterized by spatial heterogeneity. However, many attempts use a two-dimensional multi-view framework to learn and simply integrate features from multiple views. These methods suffer from the problems of not capturing spatial characteristics effectively and ignoring the variability among views. In this paper, we propose a three-dimensional multi-view convolutional neural network (3D MVCNN) framework and embed the squeeze-and-excitation (SE) module in it to further address the variability of each view in the multi-view framework. METHODS First, 3D multi-view samples of lung nodules are extracted by a spatial sampling method, and a 3D CNN is established to extract 3D abstract features. Second, a 3D MVCNN framework is built from the 3D multi-view samples and the 3D CNN. This framework can learn more features from the different views of a lung nodule, taking into account its spatial heterogeneity. Finally, to further address the variability of each view in the multi-view framework, a 3D MVSECNN model is constructed by introducing a SE module in the feature fusion stage. For training and testing purposes we used independent subsets of the public LIDC-IDRI dataset. RESULTS For the LIDC-IDRI dataset, this study achieved 96.04% accuracy and 98.59% sensitivity in the binary classification, and 87.76% accuracy in the ternary classification, which is higher than the results reported in other state-of-the-art studies. The consistency score of 0.948 between the model predictions and pathological diagnosis was significantly higher than that between the clinician's annotations and pathological diagnosis. CONCLUSIONS The results show that our proposed method can effectively learn the spatial heterogeneity of nodules and solve the problem of multi-view variability. Moreover, the consistency analysis indicates that our method can provide clinicians with more accurate benign-malignant lung nodule classification results for auxiliary diagnosis, which is important for assisting clinicians in clinical diagnosis. This article is protected by copyright. All rights reserved.
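
For readers unfamiliar with squeeze-and-excitation, a generic 3D SE block following the standard Hu et al. design is sketched below. The channel count and reduction ratio are placeholders; this illustrates the kind of channel-attention module inserted at the feature-fusion stage, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Generic 3D squeeze-and-excitation block: global average pooling
    (squeeze) followed by a two-layer bottleneck producing per-channel
    weights (excite) that rescale the input feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)          # squeeze: global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _, _ = x.shape
        w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1, 1)
        return x * w                                  # excite: reweight channels
```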

Journal ArticleDOI
Ingrid Niesman
TL;DR: In this paper, a neural ODE model was proposed for visualizing deep neural network behavior during multi-parametric MRI-based glioma segmentation as a method to enhance deep learning explainability.
Abstract: Purpose To develop a neural ordinary differential equation (ODE) model for visualizing deep neural network behavior during multi-parametric MRI-based glioma segmentation as a method to enhance deep learning explainability. Methods By hypothesizing that deep feature extraction can be modeled as a spatiotemporally continuous process, we implemented a novel deep learning model, Neural ODE, in which deep feature extraction was governed by an ODE parameterized by a neural network. The dynamics of (1) MR images after interactions with the deep neural network and (2) segmentation formation can thus be visualized after solving the ODE. An accumulative contribution curve (ACC) was designed to quantitatively evaluate each MR image's utilization by the deep neural network toward the final segmentation results. The proposed Neural ODE model was demonstrated using 369 glioma patients with a 4-modality multi-parametric MRI protocol: T1, contrast-enhanced T1 (T1-Ce), T2, and FLAIR. Three Neural ODE models were trained to segment enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The key MRI modalities with significant utilization by the deep neural networks were identified based on the ACC analysis. Segmentation results by deep neural networks using only the key MRI modalities were compared to those using all four MRI modalities in terms of Dice coefficient, accuracy, sensitivity, and specificity. Results All Neural ODE models successfully illustrated image dynamics as expected. ACC analysis identified T1-Ce as the only key modality in ET and TC segmentations, while both FLAIR and T2 were key modalities in WT segmentation. Compared to the U-Net results using all four MRI modalities, the Dice coefficients of ET (0.784→0.775), TC (0.760→0.758), and WT (0.841→0.837) obtained using only the key modalities showed minimal, non-significant differences. Accuracy, sensitivity, and specificity results demonstrated the same patterns. Conclusion The Neural ODE model offers a new tool for optimizing the deep learning model inputs with enhanced explainability. The presented methodology can be generalized to other medical image-related deep-learning applications.
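
To make the "feature extraction as an ODE" idea concrete, the sketch below integrates dh/dt = f(h, t) with a plain fixed-step Euler solver and keeps the intermediate states, which is the kind of trajectory one would visualize. The toy network and step count are assumptions, not the paper's architecture or solver.

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Toy right-hand side f(h, t) parameterized by a small conv net;
    purely illustrative of the dh/dt = f(h, t) formulation."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Tanh(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, h, t):
        return self.net(h)  # t is unused in this toy example

def euler_integrate(func, h0, t0=0.0, t1=1.0, steps=10):
    """Fixed-step Euler solve of dh/dt = f(h, t); the intermediate states
    can be kept to visualize how the features evolve over 'time'."""
    h, dt = h0, (t1 - t0) / steps
    states = [h]
    for i in range(steps):
        h = h + dt * func(h, t0 + i * dt)
        states.append(h)
    return states

# Example: evolve a random 4-channel feature map and inspect the trajectory.
func = ODEFunc(channels=4)
trajectory = euler_integrate(func, torch.randn(1, 4, 64, 64))
print(len(trajectory), trajectory[-1].shape)
```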

Journal ArticleDOI
TL;DR: In this paper, the authors studied the effect of the presence of the two second-generation TOF-PET insert detectors on parameters that affect MR image quality and evaluated the PET detector performance under different MRI pulse sequence conditions.
Abstract: BACKGROUND Simultaneous positron emission tomography/magnetic resonance imaging (PET/MRI) has shown promise in acquiring complementary multiparametric information of disease. However, designing these hybrid imaging systems is challenging due to the propensity for mutual interference between the PET and MRI sub-systems. Currently, there are integrated PET/MRI systems for clinical applications. For neurologic imaging, a brain-dedicated PET insert provides superior spatial resolution and sensitivity compared to body PET scanners. PURPOSE Our first-generation prototype brain PET insert ("PETcoil") demonstrated RF-penetrability and MR-compatibility. In the second-generation PETcoil system, all analog silicon photomultiplier (SiPM) signal digitization is moved inside the detectors, which results in substantially better PET detector performance but presents a greater technical challenge for achieving MR-compatibility. In this paper, we report results from MR-compatibility studies of two fully assembled second-generation PET insert detector modules. METHODS We studied the effect of the presence of the two second-generation TOF-PET insert detectors on parameters that affect MR image quality and evaluated TOF-PET detector performance under different MRI pulse sequence conditions. RESULTS With the operating PET detectors present, no RF noise peaks were induced in the MR images, but the relative average noise level increased by 15%, which led to a 3.1 dB to 4.2 dB degradation in MR image signal-to-noise ratio (SNR). The relative homogeneity of the MR images degraded by less than 1.5% with the two operating TOF-PET detectors present. The results also indicated that ghosting artifacts (percent signal ghosting (PSG) ⩽ 1%) and MR susceptibility artifacts (0.044 ppm) were insignificant. The PET detector data showed a relative change of less than 5% in detector module performance between operation outside and inside the MR bore under the different MRI pulse sequences, except for the energy resolution during the EPI sequence (13% relative difference). CONCLUSIONS The PET detector operation did not cause any significant artifacts in the MR images, and the PET detectors' performance and TOF capability were preserved under the different MR conditions tested.
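
For reference, the two MR quality metrics quoted above can be computed along the following lines. The PSG formula follows the common ACR-style ROI definition and the numerical inputs are made up for illustration, so neither function reproduces the paper's exact analysis.

```python
import math

def psg_percent(top, bottom, left, right, phantom_mean):
    """Percent signal ghosting from mean ROI signals placed outside the phantom
    (top/bottom/left/right) and inside it, following the common ACR-style
    definition; a generic formula, not necessarily the paper's exact one."""
    return 100.0 * abs((top + bottom) - (left + right)) / (2.0 * phantom_mean)

def snr_degradation_db(snr_reference, snr_with_pet_on):
    """SNR loss in dB between a reference acquisition and one with the
    PET detectors operating."""
    return 20.0 * math.log10(snr_reference / snr_with_pet_on)

# Illustrative values only:
print(psg_percent(2.1, 1.9, 2.0, 2.3, 480.0))   # ≈ 0.03 %
print(snr_degradation_db(210.0, 145.0))          # ≈ 3.2 dB
```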

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a method, namely HCEs-Net, for classification of five HCE subtypes using ultrasound images, which first uses a snapshot strategy to obtain sub-models from the pre-trained VGG19, ResNet18, ViT-Base, and ConvNeXt-T models, and then a stacking process to ensemble those sub-models.
Abstract: BACKGROUND Hepatic cystic echinococcosis (HCE) still has a high misdiagnosis rate, and misdiagnosis may lead to incorrect treatments that are seriously harmful to patients. Precise diagnosis of HCE relies heavily on the experience of clinical experts with auxiliary diagnostic tools using medical images. PURPOSE This paper aims to improve the diagnostic accuracy for HCE by employing a method that combines deep learning with an ensemble method. METHODS We propose a method, namely HCEs-Net, for classification of five HCE subtypes using ultrasound images. It first uses a snapshot strategy to obtain sub-models from the pre-trained VGG19, ResNet18, ViT-Base, and ConvNeXt-T models, and then a stacking process to ensemble those sub-models. Afterwards, it uses the tree-structured Parzen estimator (TPE) to optimize the hyperparameters. The experiments were evaluated using five-fold cross-validation. RESULTS A total of 3083 abdominal ultrasound images from 972 patients covering five subtypes of HCE were utilized in this study. The experiments were conducted to predict the HCE subtype, and modeling performance was reported in terms of precision, recall, F1-score, and AUC. The stacking model based on three ConvNeXt-T sub-models showed the best performance, with precision 85.9%, recall 85.5%, F1-score 85.7%, and AUC 0.971, which are higher than those of the compared state-of-the-art models. CONCLUSION The stacking model of three ConvNeXt-T sub-models shows comparable or superior performance to the other methods, including VGG19, ResNet18, and ViT-Base. It has the potential to enhance clinical diagnosis of HCE.
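
As an illustration of TPE-based hyperparameter optimization of the kind mentioned above, a hyperopt-style sketch is shown below. The search space, parameter names, and the train_and_stack placeholder are assumptions for illustration, not the paper's actual settings or pipeline.

```python
from hyperopt import fmin, tpe, hp, Trials

# Hypothetical search space for the ensemble's training hyperparameters.
space = {
    "lr": hp.loguniform("lr", -9, -4),                      # roughly 1e-4 to 2e-2
    "weight_decay": hp.loguniform("weight_decay", -10, -4),
    "snapshots": hp.quniform("snapshots", 3, 8, 1),         # number of snapshot sub-models
}

def train_and_stack(params):
    # Placeholder for the actual pipeline: train the snapshot sub-models with
    # `params`, stack them, and return the validation F1-score. A dummy value
    # is returned here so that the sketch runs end to end.
    return 0.85

def objective(params):
    # TPE minimizes the objective, so return 1 - validation F1 (lower is better).
    return 1.0 - train_and_stack(params)

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=50, trials=trials)
print(best)
```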

Journal ArticleDOI
TL;DR: In this paper, Monte Carlo simulations were used to calculate nucleus and cytoplasm Dose Enhancement Factors (n,cDEFs), considering a broad parameter space including GNP concentration, GNP intracellular distribution, cell size, and incident photon energy.
Abstract: BACKGROUND The introduction of Gold NanoParticles (GNPs) in radiotherapy treatments necessitates considerations such as GNP size, location, and quantity, as well as patient geometry and beam quality. Physics considerations span length scales across many orders of magnitude (nanometer-to-centimeter), presenting challenges that often limit the scope of dosimetric studies to either micro- or macroscopic scales. PURPOSE To investigate GNP dose-enhanced radiation Therapy (GNPT) through Monte Carlo (MC) simulations that bridge micro-to-macroscopic scales. The work is presented in two parts, with Part I (this work) investigating accurate and efficient MC modeling at the single cell level to calculate nucleus and cytoplasm Dose Enhancement Factors (n,cDEFs), considering a broad parameter space including GNP concentration, GNP intracellular distribution, cell size, and incident photon energy. Part II then evaluates cell dose enhancement factors across macroscopic (tumor) length scales. METHODS Different methods of modeling gold within cells are compared, from a contiguous volume of either pure gold or gold-tissue mixture to discrete GNPs in a hexagonal close-packed lattice. MC simulations with EGSnrc are performed to calculate n,cDEF for a cell with radius $r_{\rm cell}=7.35$ µm and nucleus radius $r_{\rm nuc} = 5$ µm, considering 10 to 370 keV incident photons, gold concentrations from 4 to 24 $\rm mg_{Au}/g_{tissue}$, and three different GNP configurations within the cell: GNPs distributed around the surface of the nucleus (perinuclear) or GNPs packed into one (or four) endosome(s). Select simulations are extended to cells with different cell (and nucleus) sizes: 5 µm (2, 3, and 4 µm), 7.35 µm (4 and 6 µm), and 10 µm (7, 8, and 9 µm). RESULTS n,cDEFs are sensitive to the method of modeling gold in the cell, with differences of up to 17% observed; the hexagonal lattice of GNPs is chosen (as the most realistic model) for all subsequent simulations. Across cell/nucleus radii, source energies, and gold concentrations, both nDEF and cDEF are highest for GNPs in the perinuclear configuration, compared with GNPs in one (or four) endosome(s). Across all simulations of the ($r_{\rm cell}$, $r_{\rm nuc}$) = (7.35, 5) µm cell, nDEFs and cDEFs range from unity to 6.83 and 3.87, respectively. Including different cell sizes, nDEFs and cDEFs as high as 21.5 and 5.5, respectively, are observed. Both nDEF and cDEF are maximized at photon energies 10 to 20 keV above the K- or L-edges of gold. CONCLUSIONS Considering 5000 unique simulation scenarios, this work comprehensively investigates many physics trends in DEFs at the cellular level, including demonstrating that cellular DEFs are sensitive to the gold modeling approach, intracellular GNP configuration, cell/nucleus size, gold concentration, and incident source energy. These data should prove especially useful in research as well as treatment planning, allowing one to optimize or estimate DEF using not only GNP uptake but also average tumor cell size, incident photon energy, and intracellular configuration of GNPs. Part II will expand the investigation, taking the Part I cell model and applying it in cm-scale phantoms.
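
As a back-of-the-envelope companion to the concentrations quoted above, the sketch below estimates how many spherical GNPs of a given diameter correspond to a given mass concentration in a single cell, assuming unit-density tissue and uniform loading. The 50-nm GNP diameter is an assumption, and the calculation is not taken from the paper.

```python
import math

RHO_AU = 19.32e3          # kg/m^3, density of gold
RHO_TISSUE = 1.0e3        # kg/m^3, assuming unit-density tissue

def gnps_per_cell(conc_mg_per_g, r_cell_um, d_gnp_nm):
    """Rough estimate of how many spherical GNPs of diameter d_gnp_nm yield a
    mass concentration conc_mg_per_g (mg Au per g tissue) in a cell of radius
    r_cell_um; illustrative arithmetic only."""
    v_cell = 4.0 / 3.0 * math.pi * (r_cell_um * 1e-6) ** 3
    v_gnp = 4.0 / 3.0 * math.pi * (d_gnp_nm * 1e-9 / 2.0) ** 3
    m_gold = conc_mg_per_g * 1e-3 * RHO_TISSUE * v_cell   # kg of gold in the cell
    return m_gold / (RHO_AU * v_gnp)

# e.g. 20 mgAu/gtissue in a 7.35-um cell loaded with 50-nm GNPs:
print(f"{gnps_per_cell(20, 7.35, 50):.2e} GNPs")   # ≈ 2.6e4
```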