
Showing papers in "Medical Physics in 2015"


Journal ArticleDOI
TL;DR: This article reviews The Physics of Radiation Therapy by Faiz M. Khan, John P. Gibbons, Fifth Edition, Baltimore and Philadelphia, 2014.
Abstract: This article reviews The Physics of Radiation Therapy by Faiz M. Khan and John P. Gibbons, Fifth Edition, Baltimore and Philadelphia, 2014. 624 pp. Price: $190.39. ISBN: 9781451182453 (hardcover).

516 citations


Journal ArticleDOI
TL;DR: Two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation result, are embedded in the IBEX workflow: image data, feature algorithms, and model validation including newly developed ones from different users can be easily and consistently shared so that results can be more easily reproduced between institutions.
Abstract: Purpose: Radiomics, which is the high-throughput extraction and analysis of quantitative image features, has been shown to have considerable potential to quantify the tumor phenotype. However, at present, a lack of software infrastructure has impeded the development of radiomics and its applications. Therefore, the authors developed the imaging biomarker explorer (ibex), an open infrastructure software platform that flexibly supports common radiomics workflow tasks such as multimodality image data import and review, development of feature extraction algorithms, model validation, and consistent data sharing among multiple institutions. Methods: The ibex software package was developed using the matlab and c/c++ programming languages. The software architecture deploys the modern model-view-controller, unit testing, and function handle programming concepts to isolate each quantitative imaging analysis task, to validate if their relevant data and algorithms are fit for use, and to plug in new modules. On one hand, ibex is self-contained and ready to use: it has implemented common data importers, common image filters, and common feature extraction algorithms. On the other hand, ibex provides an integrated development environment on top of matlab and c/c++, so users are not limited to its built-in functions. In the ibex developer studio, users can plug in, debug, and test new algorithms, extending ibex’s functionality. ibex also supports quality assurance for data and feature algorithms: image data, regions of interest, and feature algorithm-related data can be reviewed, validated, and/or modified. More importantly, two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation result, are embedded in the ibex workflow: image data, feature algorithms, and model validation including newly developed ones from different users can be easily and consistently shared so that results can be more easily reproduced between institutions. Results: Researchers with a variety of technical skill levels, including radiation oncologists, physicists, and computer scientists, have found the ibex software to be intuitive, powerful, and easy to use. ibex can be run at any computer with the windows operating system and 1GB RAM. The authors fully validated the implementation of all importers, preprocessing algorithms, and feature extraction algorithms. Windows version 1.0 beta of stand-alone ibex and ibex’s source code can be downloaded. Conclusions: The authors successfully implemented ibex, an open infrastructure software platform that streamlines common radiomics workflow tasks. Its transparency, flexibility, and portability can greatly accelerate the pace of radiomics research and pave the way toward successful clinical translation.

264 citations


Journal ArticleDOI
TL;DR: In this paper, an artificial neural network (ANN) was trained to predict a dose matrix based on patient-specific geometric and planning parameters, such as the closest distance (r) to the planning target volume (PTV) and organs at risk (OARs).
Abstract: Purpose: To demonstrate knowledge-based 3D dose prediction for external beam radiotherapy. Methods: Using previously treated plans as training data, an artificial neural network (ANN) was trained to predict a dose matrix based on patient-specific geometric and planning parameters, such as the closest distance (r) to planning target volume (PTV) and organ-at-risks (OARs). Twenty-three prostate and 43 stereotactic radiosurgery/radiotherapy (SRS/SRT) cases with at least one nearby OAR were studied. All were planned with volumetric-modulated arc therapy to prescription doses of 81 Gy for prostate and 12–30 Gy for SRS. Using these clinically approved plans, ANNs were trained to predict dose matrix and the predictive accuracy was evaluated using the dose difference between the clinical plan and prediction, δD = Dclin − Dpred. The mean (〈δDr〉), standard deviation (σδDr), and their interquartile range (IQR) for the training plans were evaluated at a 2–3 mm interval from the PTV boundary (rPTV) to assess prediction bias and precision. Initially, unfiltered models which were trained using all plans in the cohorts were created for each treatment site. The models predict approximately the average quality of OAR sparing. Emphasizing a subset of plans that exhibited superior to the average OAR sparing during training, refined models were created to predict high-quality rectum sparing for prostate and brainstem sparing for SRS. Using the refined model, potentially suboptimal plans were identified where the model predicted further sparing of the OARs was achievable. Replans were performed to test if the OAR sparing could be improved as predicted by the model. Results: The refined models demonstrated highly accurate dose distribution prediction. For prostate cases, the average prediction bias for all voxels irrespective of organ delineation ranged from −1% to 0% with maximum IQR of 3% over rPTV ∈ [− 6, 30] mm. The average prediction error was less than 10% for the same rPTV range. For SRS cases, the average prediction bias ranged from −0.7% to 1.5% with maximum IQR of 5% over rPTV ∈ [− 4, 32] mm. The average prediction error was less than 8%. Four potentially suboptimal plans were identified for each site and subsequent replanning demonstrated improved sparing of rectum and brainstem. Conclusions: The study demonstrates highly accurate knowledge-based 3D dose predictions for radiotherapy plans.
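The voxel-by-voxel prediction idea above lends itself to a compact illustration. The following Python sketch (assuming scikit-learn; the feature names, toy dose fall-off, and array sizes are invented for the example and are not the authors' implementation) trains a small ANN on per-voxel geometric features such as the signed distance to the PTV boundary and evaluates the bias of δD = Dclin − Dpred:

```python
# Minimal sketch of knowledge-based voxel dose prediction, assuming scikit-learn.
# Feature names, the toy dose model, and array shapes are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

def voxel_features(signed_dist_to_ptv, dist_to_oar, prescription_dose):
    """Stack per-voxel geometric/planning parameters into a feature matrix."""
    return np.column_stack([signed_dist_to_ptv.ravel(),
                            dist_to_oar.ravel(),
                            np.full(signed_dist_to_ptv.size, prescription_dose)])

# Training data: voxels pooled from previously approved plans (synthetic here).
rng = np.random.default_rng(0)
r_ptv = rng.uniform(-6, 30, 5000)          # mm, signed distance to PTV boundary
r_oar = rng.uniform(0, 50, 5000)           # mm, distance to a nearby OAR
dose = 81.0 * np.exp(-np.clip(r_ptv, 0, None) / 10.0)   # toy dose fall-off, Gy

X = voxel_features(r_ptv, r_oar, 81.0)
ann = MLPRegressor(hidden_layer_sizes=(30, 30), max_iter=2000, random_state=0)
ann.fit(X, dose)

# Prediction bias/precision following the paper's delta-D = D_clin - D_pred convention.
d_pred = ann.predict(X)
delta_d = dose - d_pred
print(f"mean bias = {delta_d.mean():.2f} Gy, sigma = {delta_d.std():.2f} Gy")
```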

193 citations


Journal ArticleDOI
TL;DR: Comparing DECT with future MECT, the authors found noticeable material image quality improvements for an ideal photon counting detector; however, a realistic detector model with multiple energy bins predicts a performance on the level of dual source DECT at 100 kV/Sn 140 kV.
Abstract: Purpose: To study the performance of different dual energy computed tomography (DECT) techniques, which are available today, and future multi energy CT (MECT) employing novel photon counting detectors in an image-based material decomposition task. Methods: The material decomposition performance of different energy-resolved CT acquisition techniques is assessed and compared in a simulation study of virtual non-contrast imaging and iodine quantification. The material-specific images are obtained via a statistically optimal image-based material decomposition. A projection-based maximum likelihood approach was used for comparison with the authors’ image-based method. The different dedicated dual energy CT techniques are simulated employing realistic noise models and x-ray spectra. The authors compare dual source DECT with fast kV switching DECT and the dual layer sandwich detector DECT approach. Subsequent scanning and a subtraction method are studied as well. Further, the authors benchmark future MECT with novel photon counting detectors in a dedicated DECT application against the performance of today’s DECT using a realistic model. Additionally, possible dual source concepts employing photon counting detectors are studied. Results: The DECT comparison study shows that dual source DECT has the best performance, followed by the fast kV switching technique and the sandwich detector approach. Comparing DECT with future MECT, the authors found noticeable material image quality improvements for an ideal photon counting detector; however, a realistic detector model with multiple energy bins predicts a performance on the level of dual source DECT at 100 kV/Sn 140 kV. Employing photon counting detectors in dual source concepts can improve the performance again above the level of a single realistic photon counting detector and also above the level of dual source DECT. Conclusions: Substantial differences in the performance of today’s DECT approaches were found for the application of virtual non-contrast and iodine imaging. Future MECT with realistic photon counting detectors currently can only perform comparably to dual source DECT at 100 kV/Sn 140 kV. Dual source concepts with photon counting detectors could be a solution to this problem, promising a better performance.
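A minimal sketch of the image-based two-material decomposition underlying virtual non-contrast and iodine imaging: each voxel's low-/high-energy attenuation pair is solved against a 2 × 2 basis-material matrix. The attenuation coefficients and images below are placeholders, not values from the study.

```python
# Illustrative image-based two-material (water/iodine) decomposition from a
# low- and high-energy image pair; basis attenuation values are placeholders.
import numpy as np

# Effective linear attenuation coefficients (1/cm) of the basis materials at
# the two effective energies -- hypothetical numbers for this sketch only.
mu = np.array([[0.227, 0.60],    # low-energy:  [water, iodine]
               [0.183, 0.30]])   # high-energy: [water, iodine]

def decompose(img_low, img_high, mu):
    """Solve mu @ [a_water, a_iodine] = [mu_low, mu_high] for every voxel."""
    y = np.stack([img_low.ravel(), img_high.ravel()])   # shape (2, Nvox)
    a = np.linalg.solve(mu, y)                          # shape (2, Nvox)
    return a[0].reshape(img_low.shape), a[1].reshape(img_low.shape)

img_low = np.full((4, 4), 0.25)     # toy attenuation images (1/cm)
img_high = np.full((4, 4), 0.19)
water_map, iodine_map = decompose(img_low, img_high, mu)
virtual_non_contrast = water_map * mu[1, 0]   # water component at the high energy
```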

184 citations


Journal ArticleDOI
TL;DR: Some radiomics features are robust to the noise and poor image quality of CBCT images when the imaging protocol is consistent, relative changes in the features are used, and patients are limited to those with less than 1 cm of motion.
Abstract: Purpose: Increasing evidence suggests radiomics features extracted from computed tomography (CT) images may be useful in prognostic models for patients with nonsmall cell lung cancer (NSCLC). This study was designed to determine whether such features can be reproducibly obtained from cone-beam CT (CBCT) images taken using medical Linac onboard-imaging systems in order to track them through treatment. Methods: Test-retest CBCT images of ten patients previously enrolled in a clinical trial were retrospectively obtained and used to determine the concordance correlation coefficient (CCC) for 68 different texture features. The volume dependence of each feature was also measured using the Spearman rank correlation coefficient. Features with a high reproducibility (CCC > 0.9) that were not due to volume dependence in the patient test-retest set were further examined for their sensitivity to differences in imaging protocol, level of scatter, and amount of motion by using two phantoms. The first phantom was a texture phantom composed of rectangular cartridges to represent different textures. Features were measured from two cartridges, shredded rubber and dense cork, in this study. The texture phantom was scanned with 19 different CBCT imagers to establish the features’ interscanner variability. The effect of scatter on these features was studied by surrounding the same texture phantom with scattering material (rice and solid water). The effect of respiratory motion on these features was studied using a dynamic-motion thoracic phantom and a specially designed tumor texture insert of the shredded rubber material. The differences between scans acquired with different Linacs and protocols, varying amounts of scatter, and with different levels of motion were compared to the mean intrapatient difference from the test-retest image set. Results: Of the original 68 features, 37 had a CCC >0.9 that was not due to volume dependence. When the Linac manufacturer and imaging protocol were kept consistent, 4–13 of these 37 features passed our criteria for reproducibility more than 50% of the time, depending on the manufacturer-protocol combination. Almost all of the features changed substantially when scatter material was added around the phantom. For the dense cork, 23 features passed in the thoracic scans and 11 features passed in the head scans when the differences between one and two layers of scatter were compared. Using the same test for the shredded rubber, five features passed the thoracic scans and eight features passed the head scans. Motion substantially impacted the reproducibility of the features. With 4 mm of motion, 12 features from the entire volume and 14 features from the center slice measurements were reproducible. With 6–8 mm of motion, three features (Laplacian of Gaussian filtered kurtosis, gray-level nonuniformity, and entropy), from the entire volume and seven features (coarseness, high gray-level run emphasis, gray-level nonuniformity, sum-average, information measure correlation, scaled mean, and entropy) from the center-slice measurements were considered reproducible. Conclusions: Some radiomics features are robust to the noise and poor image quality of CBCT images when the imaging protocol is consistent, relative changes in the features are used, and patients are limited to those with less than 1 cm of motion.
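The test-retest criterion used above is Lin's concordance correlation coefficient; a plain NumPy implementation (with made-up paired measurements) looks like this:

```python
# Lin's concordance correlation coefficient (CCC), used in the study as the
# test-retest reproducibility criterion (CCC > 0.9); plain NumPy sketch.
import numpy as np

def concordance_cc(x, y):
    """CCC between paired measurements x (test) and y (retest)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                  # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

test   = [4.1, 3.8, 5.0, 2.2, 4.7]   # feature values, scan 1 (arbitrary units)
retest = [4.0, 3.9, 5.2, 2.1, 4.6]   # feature values, scan 2
print(f"CCC = {concordance_cc(test, retest):.3f}")
```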

144 citations


Journal ArticleDOI
TL;DR: The authors conclude that the proposed dedicated CAD system for large pulmonary nodules can identify the vast majority of highly suspicious lesions in thoracic CT scans with a small number of false positives.
Abstract: Purpose: Current computer-aided detection (CAD) systems for pulmonary nodules in computed tomography (CT) scans have a good performance for relatively small nodules, but often fail to detect the much rarer larger nodules, which are more likely to be cancerous. We present a novel CAD system specifically designed to detect solid nodules larger than 10 mm. Methods: The proposed detection pipeline is initiated by a three-dimensional lung segmentation algorithm optimized to include large nodules attached to the pleural wall via morphological processing. An additional preprocessing step is used to mask out structures outside the pleural space to ensure that pleural and parenchymal nodules have a similar appearance. Next, nodule candidates are obtained via a multistage process of thresholding and morphological operations, to detect both larger and smaller candidates. After segmenting each candidate, a set of 24 features based on intensity, shape, blobness, and spatial context are computed. A radial basis function support vector machine (SVM) classifier was used to classify nodule candidates, and performance was evaluated using ten-fold cross-validation on the full publicly available Lung Image Database Consortium database. Results: The proposed CAD system reaches a sensitivity of 98.3% (234/238) and 94.1% (224/238) for large nodules at an average of 4.0 and 1.0 false positives per scan, respectively. Conclusions: The authors conclude that the proposed dedicated CAD system for large pulmonary nodules can identify the vast majority of highly suspicious lesions in thoracic CT scans with a small number of false positives.
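As a rough sketch of the candidate-classification stage, the following Python/scikit-learn snippet trains an RBF-kernel SVM and scores it with ten-fold cross-validation; the 24-column feature matrix is random stand-in data rather than the LIDC features used in the paper.

```python
# Sketch of the candidate-classification stage: an RBF support vector machine
# evaluated with ten-fold cross-validation. The feature matrix is synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 24))       # intensity/shape/blobness/context features
y = rng.integers(0, 2, size=500)     # 1 = nodule, 0 = false-positive candidate

clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print(f"10-fold AUC = {scores.mean():.2f} +/- {scores.std():.2f}")
```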

138 citations


Journal ArticleDOI
TL;DR: It is shown that a patch-based method could generate an accurate pCT based on conventional T1-weighted MRI sequences and without deformable registrations and showed a promising potential for RT of the brain based only on MRI.
Abstract: Purpose: In radiotherapy (RT) based on magnetic resonance imaging (MRI) as the only modality, the information on electron density must be derived from the MRI scan by creating a so-called pseudo computed tomography (pCT). This is a nontrivial task, since the voxel-intensities in an MRI scan are not uniquely related to electron density. To solve the task, voxel-based or atlas-based models have typically been used. The voxel-based models require a specialized dual ultrashort echo time MRI sequence for bone visualization and the atlas-based models require deformable registrations of conventional MRI scans. In this study, we investigate the potential of a patch-based method for creating a pCT based on conventional T 1-weighted MRI scans without using deformable registrations. We compare this method against two state-of-the-art methods within the voxel-based and atlas-based categories. Methods: The data consisted of CT and MRI scans of five cranial RT patients. To compare the performance of the different methods, a nested cross validation was done to find optimal model parameters for all the methods. Voxel-wise and geometric evaluations of the pCTs were done. Furthermore, a radiologic evaluation based on water equivalent path lengths was carried out, comparing the upper hemisphere of the head in the pCT and the real CT. Finally, the dosimetric accuracy was tested and compared for a photon treatment plan. Results: The pCTs produced with the patch-based method had the best voxel-wise, geometric, and radiologic agreement with the real CT, closely followed by the atlas-based method. In terms of the dosimetric accuracy, the patch-based method had average deviations of less than 0.5% in measures related to target coverage. Conclusions: We showed that a patch-based method could generate an accurate pCT based on conventional T 1-weighted MRI sequences and without deformable registrations. In our evaluations, the method performed better than existing voxel-based and atlas-based methods and showed a promising potential for RT of the brain based only on MRI.
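To make the patch-based idea concrete, here is a toy Python sketch in which MRI patches from an "atlas" image with known CT values are matched to target patches by nearest neighbours, and the matched CT values are regressed into a pseudo-CT. The patch size, neighbour count, 2D setting, and synthetic images are illustrative assumptions only, not the authors' method.

```python
# Toy illustration of patch-based pseudo-CT generation: MRI patches with known
# paired CT values are matched to target patches by nearest neighbours and the
# corresponding CT values are averaged into a pseudo-CT.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def extract_patches(img, size=5):
    """Return (n_patches, size*size) patch vectors and their centre indices."""
    half = size // 2
    patches, centres = [], []
    for i in range(half, img.shape[0] - half):
        for j in range(half, img.shape[1] - half):
            patches.append(img[i - half:i + half + 1, j - half:j + half + 1].ravel())
            centres.append((i, j))
    return np.array(patches), centres

rng = np.random.default_rng(0)
mri_atlas = rng.normal(size=(40, 40))                              # training MRI
ct_atlas = 200.0 * mri_atlas + rng.normal(size=(40, 40)) * 5.0     # paired CT (HU)
mri_target = rng.normal(size=(40, 40))                             # new patient MRI

X_train, centres = extract_patches(mri_atlas)
y_train = np.array([ct_atlas[i, j] for i, j in centres])
X_test, centres_t = extract_patches(mri_target)

knn = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(X_train, y_train)
pseudo_ct = np.zeros_like(mri_target)
for hu, (i, j) in zip(knn.predict(X_test), centres_t):
    pseudo_ct[i, j] = hu
```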

133 citations


Journal ArticleDOI
TL;DR: The results show that texture features are predictive of molecular subtypes and survival status in GBM, and indicate the feasibility of using tumor-derived imaging features to guide genomically informed interventions without the need for invasive biopsies.
Abstract: Purpose: Glioblastoma multiforme (GBM) is the most common and aggressive primary brain cancer. Four molecular subtypes of GBM have been described but can only be determined by an invasive brain biopsy. The goal of this study is to evaluate the utility of texture features extracted from magnetic resonance imaging (MRI) scans as a potential noninvasive method to characterize molecular subtypes of GBM and to predict 12-month overall survival status for GBM patients. Methods: The authors manually segmented the tumor regions from postcontrast T1 weighted and T2 fluid-attenuated inversion recovery (FLAIR) MRI scans of 82 patients with de novo GBM. For each patient, the authors extracted five sets of computer-extracted texture features, namely, 48 segmentation-based fractal texture analysis (SFTA) features, 576 histogram of oriented gradients (HOGs) features, 44 run-length matrix (RLM) features, 256 local binary patterns features, and 52 Haralick features, from the tumor slice corresponding to the maximum tumor area in axial, sagittal, and coronal planes, respectively. The authors used an ensemble classifier called random forest on each feature family to predict GBM molecular subtypes and 12-month survival status (a dichotomized version of overall survival at the 12-month time point indicating if the patient was alive or not at 12 months). The performance of the prediction was quantified and compared using receiver operating characteristic (ROC) curves. Results: With the appropriate combination of texture feature set, image plane (axial, coronal, or sagittal), and MRI sequence, the area under ROC curve values for predicting different molecular subtypes and 12-month survival status are 0.72 for classical (with Haralick features on T1 postcontrast axial scan), 0.70 for mesenchymal (with HOG features on T2 FLAIR axial scan), 0.75 for neural (with RLM features on T2 FLAIR axial scan), 0.82 for proneural (with SFTA features on T1 postcontrast coronal scan), and 0.69 for 12-month survival status (with SFTA features on T1 postcontrast coronal scan). Conclusions: The authors evaluated the performance of five types of texture features in predicting GBM molecular subtypes and 12-month survival status. The authors’ results show that texture features are predictive of molecular subtypes and survival status in GBM. These results indicate the feasibility of using tumor-derived imaging features to guide genomically informed interventions without the need for invasive biopsies.

130 citations


Journal ArticleDOI
TL;DR: The Statistical Decomposition Algorithm enables a highly accurate MRI only workflow in prostate radiotherapy planning and the dosimetric uncertainties originating from the SDA appear negligible and are notably lower than the uncertainties introduced by variations in patient geometry between imaging sessions.
Abstract: Purpose: In order to enable a magnetic resonance imaging (MRI) only workflow in radiotherapy treatment planning, methods are required for generating Hounsfield unit (HU) maps (i.e., synthetic compu ...

129 citations


Journal ArticleDOI
TL;DR: Ionoacoustics is suggested as a technique for range verification in particle therapy at locations where the tumor can be localized by ultrasound imaging; it could offer the possibility of combining anatomical ultrasound and Bragg peak imaging, but further studies are required for translation to clinical application.
Abstract: Purpose: Range verification in ion beam therapy relies to date on nuclear imaging techniques which require complex and costly detector systems. A different approach is the detection of thermoacoustic signals that are generated due to localized energy loss of ion beams in tissue (ionoacoustics). The aim of this work was to study experimentally the achievable position resolution of ionoacoustics under idealized conditions using high frequency ultrasonic transducers and a specifically selected probing beam. Methods: A water phantom was irradiated by a pulsed 20 MeV proton beam with varying pulse intensity and length. The acoustic signal of single proton pulses was measured by different PZT-based ultrasound detectors (3.5 and 10 MHz central frequencies). The proton dose distribution in water was calculated by Geant4 and used as input for simulation of the generated acoustic wave by the MATLAB toolbox k-Wave. Results: In measurements from this study, a clear signal of the Bragg peak was observed for an energy deposition as low as 10¹² eV. The signal amplitude showed a linear increase with particle number per pulse and thus, dose. Bragg peak position measurements were reproducible within ±30 μm and agreed with Geant4 simulations to better than 100 μm. The ionoacoustic signal pattern allowed for a detailed analysis of the Bragg peak and could be well reproduced by k-Wave simulations. Conclusions: The authors have studied the ionoacoustic signal of the Bragg peak in experiments using a 20 MeV proton beam with its correspondingly localized energy deposition, demonstrating submillimeter position resolution and providing deep insight into the correlation between the acoustic signal and Bragg peak shape. These results, together with earlier experiments and new simulations (including the results in this study) at higher energies, suggest ionoacoustics as a technique for range verification in particle therapy at locations where the tumor can be localized by ultrasound imaging. This acoustic range verification approach could offer the possibility of combining anatomical ultrasound and Bragg peak imaging, but further studies are required for translation of these findings to clinical application.

127 citations


Journal ArticleDOI
TL;DR: This work investigated deformable image registration (DIR) of the planning CT (pCT) to the CBCT to generate a virtual CT (vCT) to be used for proton dose recalculation and generated CBCT-based stopping power distributions using DIR of the pCT to a CBCT scan.
Abstract: Purpose: Intensity modulated proton therapy (IMPT) of head and neck (H&N) cancer patients may be improved by plan adaptation. The decision to adapt the treatment plan based on a dose recalculation on the current anatomy requires a diagnostic quality computed tomography (CT) scan of the patient. As gantry-mounted cone beam CT (CBCT) scanners are currently being offered by vendors, they may offer daily or weekly updates of patient anatomy. CBCT image quality may not be sufficient for accurate proton dose calculation and it is likely necessary to perform CBCT CT number correction. In this work, the authors investigated deformable image registration (dir) of the planning CT (pCT) to the CBCT to generate a virtual CT (vCT) to be used for proton dose recalculation. Methods: Datasets of six H&N cancer patients undergoing photon intensity modulated radiation therapy were used in this study to validate the vCT approach. Each dataset contained a CBCT acquired within 3 days of a replanning CT (rpCT), in addition to a pCT. The pCT and rpCT were delineated by a physician. A Morphons algorithm was employed in this work to perform dir of the pCT to CBCT following a rigid registration of the two images. The contours from the pCT were deformed using the vector field resulting from dir to yield a contoured vCT. The dir accuracy was evaluated with a scale invariant feature transform (SIFT) algorithm comparing automatically identified matching features between vCT and CBCT. The rpCT was used as reference for evaluation of the vCT. The vCT and rpCT CT numbers were converted to stopping power ratio and the water equivalent thickness (WET) was calculated. IMPT dose distributions from treatment plans optimized on the pCT were recalculated with a Monte Carlo algorithm on the rpCT and vCT for comparison in terms of gamma index, dose volume histogram (DVH) statistics as well as proton range. The dir generated contours on the vCT were compared to physician-drawn contours on the rpCT. Results: The dir accuracy was better than 1.4 mm according to the SIFT evaluation. The mean WET differences between vCT (pCT) and rpCT were below 1 mm (2.6 mm). The amount of voxels passing 3%/3 mm gamma criteria were above 95% for the vCT vs rpCT. When using the rpCT contour set to derive DVH statistics from dose distributions calculated on the rpCT and vCT the differences, expressed in terms of 30 fractions of 2 Gy, were within [−4, 2 Gy] for parotid glands (D mean), spinal cord (D 2%), brainstem (D 2%), and CTV (D 95%). When using dir generated contours for the vCT, those differences ranged within [−8, 11 Gy]. Conclusions: In this work, the authors generated CBCT based stopping power distributions using dir of the pCT to a CBCT scan. dir accuracy was below 1.4 mm as evaluated by the SIFT algorithm. Dose distributions calculated on the vCT agreed well to those calculated on the rpCT when using gamma index evaluation as well as DVH statistics based on the same contours. The use of dir generated contours introduced variability in DVH statistics.
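The water equivalent thickness comparison reported above reduces, per ray, to summing stopping-power ratios along the beam path; a minimal NumPy sketch with invented SPR samples:

```python
# Water-equivalent thickness (WET) along a beam ray: sum of stopping-power
# ratios times the geometric step length. 1D sketch with made-up numbers.
import numpy as np

def water_equivalent_thickness(spr_along_ray, step_mm):
    """WET (mm) of a ray sampled at a fixed geometric step (mm)."""
    return float(np.sum(spr_along_ray) * step_mm)

spr_vct  = np.array([0.0, 1.00, 1.05, 1.60, 1.02, 0.98])   # vCT-derived SPR samples
spr_rpct = np.array([0.0, 1.00, 1.06, 1.58, 1.03, 0.99])   # rpCT-derived SPR samples
step = 2.0  # mm

wet_diff = water_equivalent_thickness(spr_vct, step) - \
           water_equivalent_thickness(spr_rpct, step)
print(f"WET difference vCT vs rpCT: {wet_diff:.2f} mm")
```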

Journal ArticleDOI
TL;DR: After successful dosimetric beam commissioning, quality assurance measurements performed during a 24-month period show very stable beam characteristics, which are therefore suitable for performing safe and accurate patient treatments.
Abstract: Purpose: To describe the dosimetric commissioning and quality assurance (QA) of the actively scanned proton and carbon ion beams at the Italian National Center for Oncological Hadrontherapy. Methods: The laterally integrated depth-dose-distributions (IDDs) were acquired with the PTW Peakfinder, a variable depth water column, equipped with two Bragg peak ionization chambers. fluka Monte Carlo code was used to generate the energy libraries, the IDDs in water, and the fragment spectra for carbon beams. EBT3 films were used for spot size measurements, beam position over the scan field, and homogeneity in 2D-fields. Beam monitor calibration was performed in terms of number of particles per monitor unit using both a Farmer-type and an Advanced Markus ionization chamber. The beam position at the isocenter, beam monitor calibration curve, dose constancy in the center of the spread-out-Bragg-peak, dose homogeneity in 2D-fields, beam energy, spot size, and spot position over the scan field are all checked on a daily basis for both protons and carbon ions and on all beam lines. Results: The simulated IDDs showed an excellent agreement with the measured experimental curves. The measured full width at half maximum (FWHM) of the pencil beam in air at the isocenter was energy-dependent for both particle species: in particular, for protons, the spot size ranged from 0.7 to 2.2 cm. For carbon ions, two sets of spot size are available: FWHM ranged from 0.4 to 0.8 cm (for the smaller spot size) and from 0.8 to 1.1 cm (for the larger one). The spot position was accurate to within ±1 mm over the whole 20 × 20 cm2 scan field; homogeneity in a uniform squared field was within ±5% for both particle types at any energy. QA results exceeding tolerance levels were rarely found. In the reporting period, the machine downtime was around 6%, of which 4.5% was due to planned maintenance shutdowns. Conclusions: After successful dosimetric beam commissioning, quality assurance measurements performed during a 24-month period show very stable beam characteristics, which are therefore suitable for performing safe and accurate patient treatments.

Journal ArticleDOI
TL;DR: This paper reviews methods for production, sensitivity maximization, and task-based optimization of collimation for both clinical and preclinical imaging applications and discusses concepts like septal penetration, high-resolution applications, multiplexing, sampling completeness, and adaptive systems.
Abstract: In single photon emission computed tomography, the choice of the collimator has a major impact on the sensitivity and resolution of the system. Traditional parallel-hole and fan-beam collimators used in clinical practice, for example, have a relatively poor sensitivity and subcentimeter spatial resolution, while in small-animal imaging, pinhole collimators are used to obtain submillimeter resolution and multiple pinholes are often combined to increase sensitivity. This paper reviews methods for production, sensitivity maximization, and task-based optimization of collimation for both clinical and preclinical imaging applications. New opportunities for improved collimation are now arising primarily because of (i) new collimator-production techniques and (ii) detectors with improved intrinsic spatial resolution that have recently become available. These new technologies are expected to impact the design of collimators in the future. The authors also discuss concepts like septal penetration, high-resolution applications, multiplexing, sampling completeness, and adaptive systems, and the authors conclude with an example of an optimization study for a parallel-hole, fan-beam, cone-beam, and multiple-pinhole collimator for different applications.

Journal ArticleDOI
TL;DR: A particle-tracking-step-based strategy is presented to calculate the average LET quantities using Geant4 for different tracking step size limits; for small step limits, the use of LETt is recommended in the dose plateau region and LETd around the Bragg peak, while for a large step limit, i.e., 500 μm, LETd is recommended along the whole Bragg curve.
Abstract: Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the geant 4 Monte Carlo code. Furthermore, the purpose was also to provide a recommendation to select an appropriate LET quantity from geant 4 simulations to correlate with biological effectiveness of therapeutic protons. Methods: The authors developed a particle tracking step based strategy to calculate the average LET quantities (track-averaged LET, LET t and dose-averaged LET, LET d ) using geant 4 for different tracking step size limits. A step size limit refers to the maximally allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LET t and LET d of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LET t but significant for LET d . This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in geant 4 can result in incorrect LET d calculation results in the dose plateau region for small step limits. The erroneous LET d results can be attributed to the algorithm to determine fluctuations in energy deposition along the tracking step in geant 4. The incorrect LET d values lead to substantial differences in the calculated RBE. Conclusions: When the geant 4 particle tracking method is used to calculate the average LET values within targets with a small step limit, such as smaller than 500 μm, the authors recommend the use of LET t in the dose plateau region and LET d around the Bragg peak. For a large step limit, i.e., 500 μm, LET d is recommended along the whole Bragg curve. The transition point depends on beam parameters and can be found by determining the location where the gradient of the ratio of LET d and LET t becomes positive.
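The distinction between track-averaged and dose-averaged LET drives the step-limit effect described above; a small NumPy sketch of both averages over per-step energy deposits (the step values are arbitrary illustrative numbers):

```python
# Track-averaged vs dose-averaged LET from per-step energy deposits:
#   LET_t = mean(L_i)                        (track/fluence weighted)
#   LET_d = sum(e_i * L_i) / sum(e_i)        (weighted by energy deposited per step)
import numpy as np

def let_track_and_dose(energy_per_step_keV, step_length_um):
    let_i = energy_per_step_keV / step_length_um       # keV/um for each step
    let_t = let_i.mean()
    let_d = np.sum(energy_per_step_keV * let_i) / np.sum(energy_per_step_keV)
    return let_t, let_d

e_steps = np.array([0.5, 0.6, 0.7, 2.0, 8.0])   # keV deposited in each step (toy)
lengths = np.array([1.0, 1.0, 1.0, 1.0, 1.0])   # um step lengths (the step limit)
let_t, let_d = let_track_and_dose(e_steps, lengths)
print(f"LET_t = {let_t:.2f} keV/um, LET_d = {let_d:.2f} keV/um")
```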

Journal ArticleDOI
TL;DR: Pseudo-monochromatic imaging is able to reduce beam hardening, scatter, and metal artifacts in some cases but it cannot remove them, and raw data-based dual energy decomposition methods should be preferred, in particular, because the CNR penalty is almost negligible.
Abstract: Purpose: Dual Energy CT (DECT) provides so-called monoenergetic images based on a linear combination of the original polychromatic images. At certain patient-specific energy levels, corresponding to certain patient- and slice-dependent linear combination weights, e.g., E = 160 keV corresponds to α = 1.57, a significant reduction of metal artifacts may be observed. The authors aimed at analyzing the method for its artifact reduction capabilities to identify its limitations. The results are compared with raw data-based processing. Methods: Clinical DECT uses a simplified version of monochromatic imaging by linearly combining the low and the high kV images and by assigning an energy to that linear combination. Those pseudo-monochromatic images can be used by radiologists to obtain images with reduced metal artifacts. The authors analyzed the underlying physics and carried out a series expansion of the polychromatic attenuation equations. The resulting nonlinear terms are responsible for the artifacts, but they are not linearly related between the low and the high kV scan: A linear combination of both images cannot eliminate the nonlinearities, it can only reduce their impact. Scattered radiation yields additional noncanceling nonlinearities. This method is compared to raw data-based artifact correction methods. To quantify the artifact reduction potential of pseudo-monochromatic images, they simulated the FORBILD abdomen phantom with metal implants, and they assessed patient data sets of a clinical dual source CT system (100, 140 kV Sn) containing artifacts induced by a highly concentrated contrast agent bolus and by metal. In each case, they manually selected an optimal α and compared it to a raw data-based material decomposition in case of simulation, to raw data-based material decomposition of inconsistent rays in case of the patient data set containing contrast agent, and to the frequency split normalized metal artifact reduction in case of the metal implant. For each case, the contrast-to-noise ratio (CNR) was assessed. Results: In the simulation, the pseudo-monochromatic images yielded acceptable artifact reduction results. However, the CNR in the artifact-reduced images was more than 60% lower than in the original polychromatic images. In contrast, the raw data-based material decomposition did not significantly reduce the CNR in the virtual monochromatic images. Regarding the patient data with beam hardening artifacts and with metal artifacts from small implants the pseudo-monochromatic method was able to reduce the artifacts, again with the downside of a significant CNR reduction. More intense metal artifacts, e.g., as those caused by an artificial hip joint, could not be suppressed. Conclusions: Pseudo-monochromatic imaging is able to reduce beam hardening, scatter, and metal artifacts in some cases but it cannot remove them. In all cases, the CNR is significantly reduced, thereby rendering the method questionable, unless special post processing algorithms are implemented to restore the high CNR from the original images (e.g., by using a frequency split technique). Raw data-based dual energy decomposition methods should be preferred, in particular, because the CNR penalty is almost negligible.
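The pseudo-monochromatic image analysed above is simply a weighted combination of the two polychromatic images; the sketch below forms I_mono = α·I_high + (1 − α)·I_low and picks α by brute force to flatten a reference ROI. The weighting convention, the α search, and the images are assumptions made for illustration, not the clinical implementation.

```python
# Pseudo-monochromatic image as a weighted combination of the low- and high-kV
# images; alpha is chosen here by brute force to minimise variance in a
# water-equivalent reference region (illustrative criterion and data only).
import numpy as np

def pseudo_mono(img_low, img_high, alpha):
    return alpha * img_high + (1.0 - alpha) * img_low

def best_alpha(img_low, img_high, roi_mask, alphas=np.linspace(-1, 3, 401)):
    """Pick the alpha that flattens (minimises the variance of) the ROI."""
    variances = [pseudo_mono(img_low, img_high, a)[roi_mask].var() for a in alphas]
    return alphas[int(np.argmin(variances))]

rng = np.random.default_rng(0)
img_low = 40.0 + rng.normal(0, 12, (64, 64))     # HU, stronger artifacts/noise
img_high = 35.0 + rng.normal(0, 8, (64, 64))     # HU
roi = np.zeros((64, 64), bool)
roi[20:40, 20:40] = True

alpha = best_alpha(img_low, img_high, roi)
mono = pseudo_mono(img_low, img_high, alpha)
```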

Journal ArticleDOI
TL;DR: M5L results do not deteriorate when increasing the dataset size, making it a candidate for supporting radiologists on large scale screenings and clinical programs, and the development of a dedicated module for GGOs detection could further improve it.
Abstract: Purpose: M5L, a fully automated computer-aided detection (CAD) system for the detection and segmentation of lung nodules in thoracic computed tomography (CT), is presented and validated on several image datasets. Methods: M5L is the combination of two independent subsystems, based on the Channeler Ant Model as a segmentation tool [lung channeler ant model (lungCAM)] and on the voxel-based neural approach. The lungCAM was upgraded with a scan equalization module and a new procedure to recover the nodules connected to other lung structures; its classification module, which makes use of a feed-forward neural network, is based on a small number of features (13), so as to minimize the risk of lacking generalization, which could be possible given the large difference between the size of the training and testing datasets, which contain 94 and 1019 CTs, respectively. The lungCAM (standalone) and M5L (combined) performance was extensively tested on 1043 CT scans from three independent datasets, including a detailed analysis of the full Lung Image Database Consortium/Image Database Resource Initiative database, which has not previously been reported in the literature. Results: The lungCAM and M5L performance is consistent across the databases, with a sensitivity of about 70% and 80%, respectively, at eight false positive findings per scan, despite the variable annotation criteria and acquisition and reconstruction conditions. A reduced sensitivity is found for subtle nodules and ground glass opacity (GGO) structures. A comparison with other CAD systems is also presented. Conclusions: The M5L performance on a large and heterogeneous dataset is stable and satisfactory, although the development of a dedicated module for GGO detection, as well as an iterative optimization of the training procedure, could further improve it. The main aim of the present study was accomplished: M5L results do not deteriorate when increasing the dataset size, making it a candidate for supporting radiologists on large scale screenings and clinical programs.

Journal ArticleDOI
TL;DR: A comprehensive knowledge-based methodology is developed for predicting achievable dose-volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans and for identifying suboptimal plans.
Abstract: Purpose: The objective of this work was to develop a comprehensive knowledge-based methodology for predicting achievable dose–volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans. Accurate QM estimation can identify suboptimal treatment plans and provide target optimization objectives to standardize and improve treatment planning. Methods: Correlating observed dose as it relates to the geometric relationship of organs-at-risk (OARs) to planning target volumes (PTVs) yields mathematical models to predict achievable DVHs. In SRS, DVH-based QMs such as brain V10Gy (volume receiving 10 Gy or more), gradient measure (GM), and conformity index (CI) are used to evaluate plan quality. This study encompasses 223 linear accelerator-based SRS/SRT treatment plans (SRS plans) using volumetric-modulated arc therapy (VMAT), representing 95% of the institution’s VMAT radiosurgery load from the past four and a half years. Unfiltered models that use all available plans for the model training were built for each category, with a stratification scheme based on target and OAR characteristics determined through the initial modeling process. Model predictive accuracy is measured by the mean and standard deviation of the difference between clinical and predicted QMs, δQM = QMclin − QMpred, and a coefficient of determination, R2. For categories with a large number of plans, refined models are constructed by automatic elimination of suspected suboptimal plans from the training set. Using the refined model as a presumed achievable standard, potentially suboptimal plans are identified. Predictions of QM improvement are validated via standardized replanning of 20 suspected suboptimal plans based on dosimetric predictions. The significance of the QM improvement is evaluated using the Wilcoxon signed rank test. Results: The most accurate predictions are obtained when plans are stratified based on proximity to OARs and their PTV volume sizes. Volumes are categorized into small (VPTV < 2 cm3), medium (2 cm3 < VPTV < 25 cm3), and large (VPTV > 25 cm3). The unfiltered models demonstrate the ability to predict GMs to ∼1 mm and fractional brain V10Gy to ∼25% for plans with large VPTV and critical OAR involvement. Increased accuracy and precision of QM predictions are obtained when high quality plans are selected for the model training. For the small and medium VPTV plans without critical OAR involvement, predictive ability was evaluated using the refined model. For training plans, the model predicted GM to an accuracy of 0.2 ± 0.3 mm and fractional brain V10Gy to 0.04 ± 0.12, suggesting highly accurate predictive ability. For excluded plans, the average δGM was 1.1 mm and fractional brain V10Gy was 0.20; these δQM are significantly greater than those of the model training plans. Plans with δGM exceeding 1.35 mm were identified as potentially suboptimal, and replanning these cases using predicted target objectives demonstrates significant improvements in QMs: on average, a 1.1 mm reduction in GM (p < 0.001) and a 23% reduction in brain V10Gy (p < 0.001). After replanning, the difference in the δGM distribution between the 20 replans and the refined model training plans was marginal. Conclusions: The results demonstrate the ability to predict SRS QMs precisely and to identify suboptimal plans.
Furthermore, the knowledge-based DVH predictions were directly used as target optimization objectives and allowed a standardized planning process that improved upon the clinically approved plans. Full clinical application of this methodology can improve the consistency of SRS plan quality across a wide range of PTV volumes and proximities to OARs and facilitate automated treatment planning for this critical treatment site.
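For readers unfamiliar with the SRS quality metrics named above, the following NumPy sketch computes a gradient measure (from equivalent-sphere radii of the 50% and 100% prescription isodose volumes), an RTOG-style conformity index, and brain V10Gy from a voxelised dose grid. The specific GM definition and the toy dose/structure data are assumptions of the sketch, not taken from the paper.

```python
# DVH-based SRS quality metrics: gradient measure (GM), conformity index (CI),
# and brain V10Gy, computed from a voxelised dose grid and structure masks.
import numpy as np

def eq_sphere_radius(volume_cc):
    """Radius (cm) of a sphere with the given volume (cc)."""
    return (3.0 * volume_cc / (4.0 * np.pi)) ** (1.0 / 3.0)

def srs_metrics(dose, ptv_mask, brain_mask, rx_dose, voxel_cc):
    v100 = np.count_nonzero(dose >= rx_dose) * voxel_cc          # cc at 100% Rx
    v50 = np.count_nonzero(dose >= 0.5 * rx_dose) * voxel_cc     # cc at 50% Rx
    gm_cm = eq_sphere_radius(v50) - eq_sphere_radius(v100)       # gradient measure
    ci = v100 / (np.count_nonzero(ptv_mask) * voxel_cc)          # RTOG-style CI
    v10 = np.count_nonzero((dose >= 10.0) & brain_mask) * voxel_cc   # brain V10Gy
    return gm_cm, ci, v10

rng = np.random.default_rng(0)
dose = rng.uniform(0, 22, (30, 30, 30))           # Gy, toy dose grid (not a plan)
ptv = np.zeros_like(dose, bool)
ptv[12:18, 12:18, 12:18] = True
brain = np.ones_like(dose, bool)
gm, ci, v10 = srs_metrics(dose, ptv, brain, rx_dose=20.0, voxel_cc=0.001)
print(f"GM = {gm * 10:.1f} mm, CI = {ci:.2f}, brain V10Gy = {v10:.1f} cc")
```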

Journal ArticleDOI
TL;DR: An evaluation of the research efforts carried out in the invention, development, and improvement of BCT with dedicated scanners with state-of-the-art technology, including initial steps toward commercialization, after more than a decade of R&D in the laboratory and/or in the clinic is presented.
Abstract: X-ray mammography of the compressed breast is well recognized as the "gold standard" for early detection of breast cancer, but its performance is not ideal. One limitation of screening mammography is tissue superposition, particularly for dense breasts. Since 2001, several research groups in the USA and in the European Union have developed computed tomography (CT) systems with digital detector technology dedicated to x-ray imaging of the uncompressed breast (breast CT or BCT) for breast cancer screening and diagnosis. This CT technology--tracing back to initial studies in the 1970s--allows some of the limitations of mammography to be overcome, keeping the levels of radiation dose to the radiosensitive breast glandular tissue similar to that of two-view mammography for the same breast size and composition. This paper presents an evaluation of the research efforts carried out in the invention, development, and improvement of BCT with dedicated scanners with state-of-the-art technology, including initial steps toward commercialization, after more than a decade of R&D in the laboratory and/or in the clinic. The intended focus here is on the technological/engineering aspects of BCT and on outlining advantages and limitations as reported in the related literature. Prospects for future research in this field are discussed.

Journal ArticleDOI
TL;DR: An improved energy-loss model for the excitation and ionization of liquid water by low-energy electrons is implemented, based on the Emfietzoglou model dielectric response function used in the existing GEANT4-DNA model.
Abstract: Purpose: The GEANT4-DNA physics models are upgraded by a more accurate set of electron cross sections for ionization and excitation in liquid water. The impact of the new developments on low-energy electron transport simulations by the GEANT4 Monte Carlo toolkit is examined for improving its performance in dosimetry applications at the subcellular and nanometer level. Methods: The authors provide an algorithm for an improved implementation of the Emfietzoglou model dielectric response function of liquid water used in the existing GEANT4-DNA model. The algorithm redistributes the imaginary part of the dielectric function to ensure a physically motivated behavior at the binding energies, while retaining all the advantages of the original formulation, e.g., the analytic properties and the fulfillment of the f-sum-rule. In addition, refinements in the exchange and perturbation corrections to the Born approximation used in the existing GEANT4-DNA model are also made. Results: The new ionization and excitation cross sections are significantly different from those of the existing GEANT4-DNA model. In particular, excitations are strongly enhanced relative to ionizations, resulting in higher W-values and less diffusive dose-point-kernels at sub-keV electron energies. Conclusions: An improved energy-loss model for the excitation and ionization of liquid water by low-energy electrons has been implemented in GEANT4-DNA. The suspiciously low W-values and the unphysical long tail in the dose-point-kernel have been corrected owing to a different partitioning of the dielectric function.

Journal ArticleDOI
TL;DR: The proposed lattice-based strategy for mammographic texture analysis enables characterization of the parenchymal pattern over the entire breast, provides richer information than currently used descriptors, and may ultimately improve breast cancer risk assessment.
Abstract: Purpose: Mammographic percent density (PD%) is known to be a strong risk factor for breast cancer. Recent studies also suggest that parenchymal texture features, which are more granular descriptors of the parenchymal pattern, can provide additional information about breast cancer risk. To date, most studies have measured mammographic texture within selected regions of interest (ROIs) in the breast, which cannot adequately capture the complexity of the parenchymal pattern throughout the whole breast. To better characterize patterns of the parenchymal tissue, the authors have developed a fully automated software pipeline based on a novel lattice-based strategy to extract a range of parenchymal texture features from the entire breast region. Methods: Digital mammograms from 106 cases with 318 age-matched controls were retrospectively analyzed. The lattice-based approach is based on a regular grid virtually overlaid on each mammographic image. Texture features are computed from the intersection (i.e., lattice) points of the grid lines within the breast, using a local window centered at each lattice point. Using this strategy, a range of statistical (gray-level histogram, co-occurrence, and run-length) and structural (edge-enhancing, local binary pattern, and fractal dimension) features are extracted. To cover the entire breast, the size of the local window for feature extraction is set equal to the lattice grid spacing and optimized experimentally by evaluating different window sizes. The association between the lattice-based texture features and breast cancer was evaluated using logistic regression with leave-one-out cross validation and further compared to that of breast PD% and commonly used single-ROI texture features extracted from the retroareolar or the central breast region. Classification performance was evaluated using the area under the curve (AUC) of the receiver operating characteristic (ROC). DeLong’s test was used to compare the different ROCs in terms of AUC performance. Results: The average univariate performance of the lattice-based features is higher when they are extracted from smaller rather than larger window sizes. While not every individual texture feature is superior to breast PD% (AUC: 0.59, STD: 0.03), their combination in multivariate analysis has significantly better performance (AUC: 0.85, STD: 0.02, p < 0.05). Conclusions: The proposed lattice-based strategy for mammographic texture analysis enables characterization of the parenchymal pattern over the entire breast. As such, these features provide richer information compared to currently used descriptors and may ultimately improve breast cancer risk assessment. Larger studies are warranted to validate these findings and also compare to standard demographic and reproductive risk factors.
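The lattice-based sampling scheme can be illustrated in a few lines: a regular grid is swept over the breast mask and simple gray-level statistics are collected in a window centred on each lattice point. The window size, grid spacing, and reduced feature set below are illustrative, not the authors' full pipeline.

```python
# Lattice-based texture sampling: a regular grid is laid over the breast region
# and basic gray-level statistics are computed in a local window centred on
# every lattice point inside the mask. Feature set and sizes are illustrative.
import numpy as np

def lattice_features(img, breast_mask, spacing=16):
    half = spacing // 2
    rows, cols = img.shape
    feats = []   # one (mean, std, skewness) tuple per lattice point
    for i in range(half, rows - half, spacing):
        for j in range(half, cols - half, spacing):
            if not breast_mask[i, j]:
                continue
            w = img[i - half:i + half, j - half:j + half].astype(float)
            m, s = w.mean(), w.std()
            skew = 0.0 if s == 0 else ((w - m) ** 3).mean() / s ** 3
            feats.append((m, s, skew))
    return np.array(feats)

rng = np.random.default_rng(0)
mammo = rng.integers(0, 4096, size=(256, 256)).astype(float)   # toy 12-bit image
mask = np.zeros_like(mammo, bool)
mask[32:224, 32:224] = True
features = lattice_features(mammo, mask)
print(features.shape)   # (n_lattice_points, 3); pooled/summarised for a classifier
```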

Journal ArticleDOI
TL;DR: This GPU-based proton transport MC is the first of its kind to include a detailed nuclear model to handle nonelastic interactions of protons with any nucleus and is being integrated into a framework to perform fast routine clinical QA of pencil-beam based treatment plans.
Abstract: Purpose: Very fast Monte Carlo (MC) simulations of proton transport have been implemented recently on graphics processing units (GPUs). However, these MCs usually use simplified models for nonelastic proton–nucleus interactions. Our primary goal is to build a GPU-based proton transport MC with detailed modeling of elastic and nonelastic proton–nucleus collisions. Methods: Using the cuda framework, the authors implemented GPU kernels for the following tasks: (1) simulation of beam spots from our possible scanning nozzle configurations, (2) proton propagation through CT geometry, taking into account nuclear elastic scattering, multiple scattering, and energy loss straggling, (3) modeling of the intranuclear cascade stage of nonelastic interactions when they occur, (4) simulation of nuclear evaporation, and (5) statistical error estimates on the dose. To validate our MC, the authors performed (1) secondary particle yield calculations in proton collisions with therapeutically relevant nuclei, (2) dose calculations in homogeneous phantoms, (3) recalculations of complex head and neck treatment plans from a commercially available treatment planning system, and compared with geant 4.9.6p2/TOPAS. Results: Yields, energy, and angular distributions of secondaries from nonelastic collisions on various nuclei are in good agreement with the geant 4.9.6p2 Bertini and Binary cascade models. The 3D-gamma pass rate at 2%-2 mm for treatment plan simulations is typically 98%. The net computational time on a NVIDIA GTX680 card, including all CPU–GPU data transfers, is ∼20 s for 1 × 107 proton histories. Conclusions: Our GPU-based MC is the first of its kind to include a detailed nuclear model to handle nonelastic interactions of protons with any nucleus. Dosimetric calculations are in very good agreement with geant 4.9.6p2/TOPAS. Our MC is being integrated into a framework to perform fast routine clinical QA of pencil-beam based treatment plans, and is being used as the dose calculation engine in a clinically applicable MC-based IMPT treatment planning system. The detailed nuclear modeling will allow us to perform very fast linear energy transfer and neutron dose estimates on the GPU.

Journal ArticleDOI
TL;DR: It is demonstrated that the failure to meet classical cavity theory requirements, such as CPE, is not the reason for significant quality correction factors and that what matters most, apart from volume averaging effects, is the relationship between the lack of CPE in the small field itself and the density of the detector cavity.
Abstract: Purpose: To explain the reasons for significant quality correction factors in megavoltage small photon fields and clarify the underlying concepts relevant to dosimetry under such conditions. Methods: The validity of cavity theory and the requirement of charged particle equilibrium (CPE) are addressed from a theoretical point of view in the context of nonstandard beams. Perturbation effects are described into four main subeffects, explaining their nature and pointing out their relative importance in small photon fields. Results: It is demonstrated that the failure to meet classical cavity theory requirements, such as CPE, is not the reason for significant quality correction factors. On the contrary, it is shown that the lack of CPE alone cannot explain these corrections and that what matters most, apart from volume averaging effects, is the relationship between the lack of CPE in the small field itself and the density of the detector cavity. The density perturbation effect is explained based on Fano’s theorem, describing the compensating effect of two main contributions to cavity absorbed dose. Using the same approach, perturbation effects arising from the difference in atomic properties of the cavity medium and the presence of extracameral components are explained. Volume averaging effects are also discussed in detail. Conclusions: Quality correction factors of small megavoltage photon fields are mainly due to differences in electron density between water and the detector medium and to volume averaging over the detector cavity. Other effects, such as the presence of extracameral components and differences in atomic properties of the detection medium with respect to water, can also play an accentuated role in small photon fields compared to standard beams.

Journal ArticleDOI
TL;DR: A new suite of physical breast phantoms based on human data is developed; the phantoms offer realistic breast anatomy, patient variability, and ease of use, making them a potential candidate for performing both system quality control testing and virtual clinical trials.
Abstract: Purpose: Physical phantoms are essential for the development, optimization, and evaluation of x-ray breast imaging systems. Recognizing the major effect of anatomy on image quality and clinical performance, such phantoms should ideally reflect the three-dimensional structure of the human breast. Currently, there is no commercially available three-dimensional physical breast phantom that is anthropomorphic. The authors present the development of a new suite of physical breast phantoms based on human data. Methods: The phantoms were designed to match the extended cardiac-torso virtual breast phantoms that were based on dedicated breast computed tomography images of human subjects. The phantoms were fabricated by high-resolution multimaterial additive manufacturing (3D printing) technology. The glandular equivalency of the photopolymer materials was measured relative to breast tissue-equivalent plastic materials. Based on the current state-of-the-art in the technology and available materials, two variations were fabricated. The first was a dual-material phantom, the Doublet. Fibroglandular tissue and skin were represented by the most radiographically dense material available; adipose tissue was represented by the least radiographically dense material. The second variation, the Singlet, was fabricated with a single material to represent fibroglandular tissue and skin. It was subsequently filled with adipose-equivalent materials including oil, beeswax, and permanent urethane-based polymer. Simulated microcalcification clusters were further included in the phantoms via crushed eggshells. The phantoms were imaged and characterized visually and quantitatively. Results: The mammographic projections and tomosynthesis reconstructed images of the fabricated phantoms yielded realistic breast background. The mammograms of the phantoms demonstrated close correlation with simulated mammographic projection images of the corresponding virtual phantoms. Furthermore, power-law descriptions of the phantom images were in general agreement with real human images. The Singlet approach offered more realistic contrast as compared to the Doublet approach, but at the expense of air bubbles and air pockets that formed during the filling process. Conclusions: The presented physical breast phantoms and their matching virtual breast phantoms offer realistic breast anatomy, patient variability, and ease of use, making them a potential candidate for performing both system quality control testing and virtual clinical trials.

Journal ArticleDOI
TL;DR: Image quality increased with increasing dose and decreasing phantom size; detectability varied less with phantom size for modulated scans than for fixed tube current scans, and the ADMIRE algorithm could offer comparable image quality at reduced dose or improved image quality at the same dose.
Abstract: Purpose: The purpose of this work was to assess the inherent image quality characteristics of a new multidetector computed tomography system in terms of noise, resolution, and detectability index as a function of image acquisition and reconstruction for a range of clinically relevant settings. Methods: A multisized image quality phantom (37, 30, 23, 18.5, and 12 cm physical diameter) was imaged on a SOMATOM Force scanner (Siemens Medical Solutions) under variable dose, kVp, and tube current modulation settings. Images were reconstructed with filtered back projection (FBP) and with advanced modeled iterative reconstruction (ADMIRE) at iterative strengths of 3, 4, and 5. Image quality was assessed in terms of the noise power spectrum (NPS), task transfer function (TTF), and detectability index for a range of detection tasks (contrasts of approximately 45, 90, 300, −900, and 1000 HU, and 2–20 mm diameter) based on a non-prewhitening matched filter model observer with eye filter. Results: Image noise magnitude decreased with decreasing phantom size, increasing dose, and increasing ADMIRE strength, offering up to 64% noise reduction relative to FBP. Noise texture in terms of the NPS was similar between FBP and ADMIRE (<5% shift in peak frequency). The resolution, based on the TTF, improved with increasing ADMIRE strength, by an average of 15% in the TTF 50% frequency for ADMIRE-5. The detectability index increased with increasing dose and ADMIRE strength, by an average of 55%, 90%, and 163% for ADMIRE 3, 4, and 5, respectively. When the impact of mA modulation was assessed at a fixed average dose over the length of the phantom, detectability was up to 49% lower in smaller phantom sections and up to 26% higher in larger phantom sections for the modulated scan compared to a fixed tube current scan. Overall, the detectability exhibited less variability with phantom size for modulated scans than for fixed tube current scans. Conclusions: Image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, for large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose. The use of tube current modulation resulted in more consistent image quality with changing phantom size.
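
For readers unfamiliar with the detectability index, the sketch below evaluates the commonly used non-prewhitening matched filter model observer with eye filter (NPWE) from radially sampled task, TTF, NPS, and eye-filter curves. The disc task, Gaussian TTF, flat NPS, and eye-filter parameters in the example are toy assumptions, not the measured data of this study.

```python
import numpy as np
from scipy.special import j1

def npwe_detectability(f, task_w2, ttf, nps, eye):
    """Detectability index d' of a non-prewhitening matched filter observer
    with eye filter, for radially symmetric functions on radial frequency f (1/mm)."""
    num = np.trapz(task_w2 * ttf**2 * eye**2 * 2 * np.pi * f, f) ** 2
    den = np.trapz(task_w2 * ttf**2 * eye**4 * nps * 2 * np.pi * f, f)
    return np.sqrt(num / den)

# Toy inputs: a 5 mm diameter, 90 HU disc task, assumed Gaussian TTF,
# flat NPS, and a simple band-pass eye filter (all illustrative values).
f = np.linspace(1e-3, 1.0, 500)                       # cycles/mm
radius, contrast = 2.5, 90.0                          # mm, HU
task_w2 = (contrast * radius * j1(2 * np.pi * f * radius) / f) ** 2  # |FT of a disc|^2
ttf = np.exp(-(f / 0.4) ** 2)
nps = np.full_like(f, 50.0)                           # HU^2 mm^2
eye = (f / 0.2) * np.exp(-f / 0.2)
print(npwe_detectability(f, task_w2, ttf, nps, eye))
```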

Journal ArticleDOI
TL;DR: The IMAR correction algorithm could be readily implemented in an existing clinical workflow upon commercial release; corrected images present better overall conspicuity of the patient/phantom geometry and offer more accurate CT numbers for improved local dosimetry.
Abstract: Purpose: To clinically evaluate an iterative metal artifact reduction (IMAR) algorithm prototype in the radiation oncology clinic setting by testing for accuracy in CT number retrieval, relative dosimetric changes in regions affected by artifacts, and improvements in anatomical and shape conspicuity of corrected images. Methods: A phantom with known material inserts was scanned with and without metal, with different configurations of placement and sizes. The relative change in CT numbers from the reference data (CT with no metal) was analyzed. The CT studies were also used for dosimetric tests in which dose distributions from both photon and proton beams were calculated. Dose differences and gamma analyses were calculated to quantify the relative changes between doses calculated on the different CT studies. Data from eight patients (all different treatment sites) were also used to quantify the differences between dose distributions before and after correction with IMAR, with no reference standard. A ranking experiment was also conducted to analyze the relative confidence of physicians delineating anatomy in the near vicinity of the metal implants. Results: IMAR corrected images accurately recovered CT numbers in the phantom study, independent of metal insert configuration, size of the metal, and acquisition energy. For plastic water, the mean difference between corrected images and reference images was −1.3 HU across all scenarios (N = 37) with a 90% confidence interval of [−2.4, −0.2] HU. While deviations were relatively higher in images with more metal content, IMAR was able to effectively correct the CT numbers independent of the quantity of metal. Residual errors in the CT numbers, as well as some induced by the correction algorithm, were found in the IMAR corrected images. However, the dose distributions calculated on IMAR corrected images were closer to the reference data in phantom studies. Relative spatial differences in the dose distributions in the regions affected by the metal artifacts were also observed in patient data. However, in the absence of a reference ground truth (CT set without metal inserts), these differences should not be interpreted as an improvement or deterioration of the accuracy of the calculated dose. With the limited data presented, it was observed that proton dosimetry was affected more than photon dosimetry, as expected. Physicians were significantly more confident contouring anatomy in the regions affected by artifacts when using IMAR corrected images. While site-specific preferences were detected, all indicated that they would consistently use IMAR corrected images. Conclusions: The IMAR correction algorithm could be readily implemented in an existing clinical workflow upon commercial release. While residual errors still exist in IMAR corrected images, these images present better overall conspicuity of the patient/phantom geometry and offer more accurate CT numbers for improved local dosimetry. The variety of different scenarios included herein attests to the utility of the evaluated IMAR for a wide range of radiotherapy clinical scenarios.
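
Gamma analysis, mentioned above, scores each reference dose point by the best combination of dose difference and distance to agreement found in the evaluated distribution. The following simplified 2D global-gamma sketch (exhaustive search, wrap-around edge handling, hypothetical criteria) illustrates the idea but is not the clinical tool used by the authors.

```python
import numpy as np

def gamma_pass_rate(ref, evl, spacing_mm, dd_pct=3.0, dta_mm=3.0, cutoff_pct=10.0):
    """Simplified 2D global gamma pass rate (exhaustive search within the DTA window).

    ref, evl   : reference and evaluated dose grids (same shape and geometry)
    spacing_mm : isotropic pixel spacing in mm
    dd_pct     : dose-difference criterion as a % of the reference maximum
    dta_mm     : distance-to-agreement criterion in mm
    cutoff_pct : reference points below this % of the maximum are ignored
    Note: np.roll wraps around the image edges; real implementations handle
    borders explicitly and usually interpolate the evaluated dose.
    """
    dd = dd_pct / 100.0 * ref.max()
    search = int(np.ceil(2 * dta_mm / spacing_mm))
    gamma_sq = np.full(ref.shape, np.inf)
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            dist_sq = (di**2 + dj**2) * spacing_mm**2
            shifted = np.roll(np.roll(evl, di, axis=0), dj, axis=1)
            cand = (shifted - ref) ** 2 / dd**2 + dist_sq / dta_mm**2
            gamma_sq = np.minimum(gamma_sq, cand)
    mask = ref >= cutoff_pct / 100.0 * ref.max()
    return float((np.sqrt(gamma_sq[mask]) <= 1.0).mean())
```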

Journal ArticleDOI
TL;DR: The novel CAD system is able to identify discriminative texture features for cancer detection and localization and is a promising tool for improving the quality and efficiency of prostate cancer diagnosis.
Abstract: Purpose: The authors propose a computer-aided diagnosis (CAD) system for prostate cancer to aid in improving the accuracy, reproducibility, and standardization of multiparametric magnetic resonance imaging (MRI). Methods: The proposed system utilizes two MRI sequences [T2-weighted MRI and high-b-value (b = 2000 s/mm2) diffusion-weighted imaging (DWI)] and texture features based on local binary patterns. A three-stage feature selection method is employed to provide the most discriminative features. The authors included a total of 244 patients. The CAD system was trained on 108 patients (78 MR-positive prostate cancers and 105 benign MR-positive lesions), and two validation studies were retrospectively performed on 136 patients (68 MR-positive prostate cancers, 111 benign MR-positive lesions, and 117 MR-negative benign lesions). Results: In distinguishing cancer from MR-positive benign lesions, an area under the receiver operating characteristic curve (AUC) of 0.83 [95% confidence interval (CI): 0.76–0.89] was achieved. For cancer vs MR-positive or MR-negative benign lesions, the authors obtained an AUC of 0.89 (95% CI: 0.84–0.93). The performance of the CAD system did not depend on the specific region of the prostate, e.g., peripheral zone or transition zone. Moreover, the CAD system outperformed other combinations of MRI sequences: T2W MRI, high-b-value DWI, and the standard apparent diffusion coefficient (ADC) map of DWI. Conclusions: The novel CAD system is able to identify discriminative texture features for cancer detection and localization and is a promising tool for improving the quality and efficiency of prostate cancer diagnosis.
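
Texture features based on local binary patterns (LBPs) can be computed with scikit-image; the sketch below builds a rotation-invariant uniform LBP histogram for a region of interest. The neighborhood parameters and the idea of concatenating T2-weighted and DWI histograms are illustrative assumptions, not the authors' exact feature set or three-stage selection.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(roi, P=8, R=1.0):
    """Rotation-invariant uniform LBP histogram for a 2D region of interest.

    Returns a normalised histogram with P + 2 bins (uniform patterns plus one
    bin for non-uniform codes), a common texture feature vector for classifiers.
    """
    codes = local_binary_pattern(roi, P, R, method="uniform")
    n_bins = P + 2
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Hypothetical usage on a T2-weighted ROI and a high-b-value DWI ROI:
# features = np.concatenate([lbp_histogram(t2_roi), lbp_histogram(dwi_roi)])
```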

Journal ArticleDOI
TL;DR: This work provides an investigator with the necessary information to benchmark his or her Monte Carlo simulation software against the reference cases included here before performing his or her own novel research.
Abstract: The use of Monte Carlo simulations in diagnostic medical imaging research is widespread due to its flexibility and ability to estimate quantities that are challenging to measure empirically. However, any new Monte Carlo simulation code needs to be validated before it can be used reliably. The type and degree of validation required depends on the goals of the research project, but, typically, such validation involves either comparison of simulation results to physical measurements or to previously published results obtained with established Monte Carlo codes. The former is complicated due to nuances of experimental conditions and uncertainty, while the latter is challenging due to typical graphical presentation and lack of simulation details in previous publications. In addition, entering the field of Monte Carlo simulations in general involves a steep learning curve. It is not a simple task to learn how to program and interpret a Monte Carlo simulation, even when using one of the publicly available code packages. This Task Group report provides a common reference for benchmarking Monte Carlo simulations across a range of Monte Carlo codes and simulation scenarios. In the report, all simulation conditions are provided for six different Monte Carlo simulation cases that involve common x-ray based imaging research areas. The results obtained for the six cases using four publicly available Monte Carlo software packages are included in tabular form. In addition to a full description of all simulation conditions and results, a discussion and comparison of results among the Monte Carlo packages and the lessons learned during the compilation of these results are included. This abridged version of the report includes only an introductory description of the six cases and a brief example of the results of one of the cases. This work provides an investigator the necessary information to benchmark his/her Monte Carlo simulation software against the reference cases included here before performing his/her own novel research. In addition, an investigator entering the field of Monte Carlo simulations can use these descriptions and results as a self-teaching tool to ensure that he/she is able to perform a specific simulation correctly. Finally, educators can assign these cases as learning projects as part of course objectives or training programs.
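
As a flavor of what benchmarking a Monte Carlo result against a known reference looks like (far simpler than the report's six cases), the toy sketch below estimates narrow-beam photon transmission through a homogeneous slab and compares it with the analytic value exp(−μt). The attenuation coefficient used is an assumed round number, not a reference datum from the report.

```python
import numpy as np

def transmission_mc(mu_per_cm, thickness_cm, n_photons=1_000_000, seed=1):
    """Monte Carlo estimate of narrow-beam (unscattered) photon transmission
    through a homogeneous slab; the analytic reference is exp(-mu * t)."""
    rng = np.random.default_rng(seed)
    # Sample free path lengths from the exponential distribution: -ln(xi) / mu
    path = -np.log(rng.random(n_photons)) / mu_per_cm
    transmitted = np.count_nonzero(path > thickness_cm)
    estimate = transmitted / n_photons
    sigma = np.sqrt(estimate * (1 - estimate) / n_photons)   # binomial uncertainty
    return estimate, sigma

# Example: an assumed mu of 0.206 cm^-1 in water, 10 cm slab
est, sig = transmission_mc(0.206, 10.0)
print(est, sig, np.exp(-0.206 * 10.0))
```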

Journal ArticleDOI
TL;DR: In this paper, a longitudinal adaptive compressed sensing MRI (LACS-MRI) scheme was proposed to accelerate the recovery of 2D and 3D MRI scans of patients with brain tumors.
Abstract: Purpose: Repeated brain MRI scans are performed in many clinical scenarios, such as follow-up of patients with tumors and therapy response assessment. In this paper, the authors present an approach that utilizes a patient's former scans to accelerate repeated MRI scans. Methods: The proposed approach utilizes the possible similarity of the repeated scans in longitudinal MRI studies. Since similarity is not guaranteed, sampling and reconstruction are adjusted during acquisition to match the actual similarity between the scans. The baseline MR scan is utilized both in the sampling stage, via adaptive sampling, and in the reconstruction stage, with weighted reconstruction. In adaptive sampling, k-space sampling locations are optimized during acquisition. Weighted reconstruction uses the locations of the nonzero coefficients in the sparse domains as a prior in the recovery process. The approach was tested on 2D and 3D MRI scans of patients with brain tumors. Results: The longitudinal adaptive compressed sensing MRI (LACS-MRI) scheme provides reconstruction quality that outperforms other CS-based approaches for rapid MRI. Examples are shown on patients with brain tumors and demonstrate improved spatial resolution. Compared with data sampled at the Nyquist rate, LACS-MRI exhibits a signal-to-error ratio (SER) of 24.8 dB at an undersampling factor of 16.6 in 3D MRI. Conclusions: The authors presented an adaptive method for image reconstruction that utilizes the similarity of scans in longitudinal MRI studies, where possible. The proposed approach can significantly reduce scanning time in many applications that consist of disease follow-up and monitoring of longitudinal changes in brain MRI.
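
The weighted-reconstruction idea, using the support of the baseline scan as a prior, can be illustrated with a 1D toy problem: iterative soft thresholding of undersampled Fourier data in which coefficients on the baseline support receive a reduced threshold. The sampling pattern, weights, and sparsity-in-image-domain assumption below are illustrative and do not reproduce the LACS-MRI implementation.

```python
import numpy as np

def weighted_ist(y, mask, prior_support, n_iter=200, lam=0.05, w_prior=0.1):
    """Weighted iterative soft thresholding for undersampled Fourier data.

    y             : measured k-space samples (zeros outside the mask)
    mask          : boolean sampling mask in k-space
    prior_support : boolean mask of coefficients that were nonzero in the
                    baseline scan; these receive a reduced threshold (w_prior * lam)
    Sparsity is assumed directly in the image domain for simplicity.
    """
    x = np.zeros(y.size, dtype=complex)
    thresh = np.where(prior_support, w_prior * lam, lam)
    for _ in range(n_iter):
        # Gradient step toward data consistency (orthonormal FFT, projection mask)
        r = mask * (np.fft.fft(x, norm="ortho") - y)
        x = x - np.fft.ifft(r, norm="ortho")
        # Weighted soft thresholding
        mag = np.abs(x)
        x = np.where(mag > thresh, (1 - thresh / np.maximum(mag, 1e-12)) * x, 0)
    return x

# Toy usage: a sparse "follow-up" signal close to the baseline, 4x undersampling
rng = np.random.default_rng(0)
n = 256
baseline = np.zeros(n)
baseline[rng.choice(n, 10, replace=False)] = rng.normal(size=10)
follow_up = baseline.copy()
follow_up[5] += 0.5                                   # small longitudinal change
mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, n // 4, replace=False)] = True
y = mask * np.fft.fft(follow_up, norm="ortho")
recon = weighted_ist(y, mask, prior_support=baseline != 0)
```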

Journal ArticleDOI
TL;DR: Although all methods resulted in comparable geometrical matching, the choice of DIR implementation leads to uncertainties in the warped dose, particularly in regions of high dose gradient and/or poor imaging quality.
Abstract: Purpose: The aims of this work were to evaluate the performance of several deformable image registration (DIR) algorithms implemented in the authors’ in-house software (NiftyReg) and the uncertainties inherent in using different algorithms for dose warping. Methods: The authors describe a DIR-based adaptive radiotherapy workflow, using CT and cone-beam CT (CBCT) imaging. The transformations that mapped the anatomy between the two time points were obtained using four different DIR approaches available in NiftyReg. These included a standard unidirectional algorithm and more sophisticated bidirectional ones that encourage or ensure inverse consistency. The forward (CT-to-CBCT) deformation vector fields (DVFs) were used to propagate the CT Hounsfield units and structures to the daily geometry for “dose of the day” calculations, while the backward (CBCT-to-CT) DVFs were used to remap the dose of the day onto the planning CT (pCT). Data from five head and neck patients were used to evaluate the performance of each implementation based on geometrical matching, physical properties of the DVFs, and similarity between warped dose distributions. Geometrical matching was verified in terms of the Dice similarity coefficient (DSC), distance transform, false positives, and false negatives. The physical properties of the DVFs were assessed by calculating the harmonic energy, determinant of the Jacobian, and inverse consistency error of the transformations. Dose distributions were displayed on the pCT dose space and compared using dose difference (DD), distance to dose difference, and dose volume histograms. Results: All the DIR algorithms gave similar results in terms of geometrical matching, with an average DSC of 0.85 ± 0.08, but the underlying properties of the DVFs varied in terms of smoothness and inverse consistency. When comparing the doses warped by different algorithms, the authors found a root mean square DD of 1.9% ± 0.8% of the prescribed dose (pD) and that an average of 9% ± 4% of voxels within the treated volume failed a 2%pD DD-test (DD2%-pp). Larger DD2%-pp was found within high dose gradient regions (21% ± 6%) and regions where the CBCT quality was poorer (28% ± 9%). The differences when estimating the mean and maximum dose delivered to organs at risk were up to 2.0%pD and 2.8%pD, respectively. Conclusions: The authors evaluated several DIR algorithms for CT-to-CBCT registrations. Although all methods resulted in comparable geometrical matching, the choice of DIR implementation leads to uncertainties in the warped dose, particularly in regions of high dose gradient and/or poor imaging quality.
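
Two of the checks described, the Dice similarity coefficient and the determinant of the Jacobian of a deformation vector field (values ≤ 0 flag local folding), can be approximated with a few lines of NumPy. The finite-difference scheme and the (z, y, x) component ordering below are assumptions made for the sketch, not NiftyReg's internals.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def jacobian_determinant(dvf, spacing=(1.0, 1.0, 1.0)):
    """Determinant of the Jacobian of a 3D displacement field.

    dvf     : array of shape (3, nz, ny, nx), displacement in mm, (z, y, x) order
    spacing : voxel spacing in mm for the (z, y, x) axes
    Values <= 0 indicate folding (locally non-invertible transformation).
    """
    grads = np.empty((3, 3) + dvf.shape[1:])
    for i in range(3):            # displacement component
        for j in range(3):        # derivative direction
            grads[i, j] = np.gradient(dvf[i], spacing[j], axis=j)
    jac = grads.copy()
    for k in range(3):
        jac[k, k] += 1.0          # J = I + du/dx
    # Determinant of the 3x3 matrix at every voxel
    return np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))
```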

Journal ArticleDOI
TL;DR: With the proposed atlas ranking algorithm and joint label fusion, the multiatlas segmentation scheme generates accurate segmentations within a practically acceptable computation time and can be useful for the development of new clinical applications of cardiac CT.
Abstract: Purpose: Cardiac computed tomography (CT) is widely used in the clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance is limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and the size of the atlas database on segmentation performance. Methods: Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases was selected for label fusion, according to the authors’ proposed atlas ranking criterion, which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. Results: The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276, p < 0.01), locally weighted voting (0.905 ± 0.0247, p < 0.01), and probabilistic patch-based fusion (0.909 ± 0.0249, p < 0.01). In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve with higher WHS Dice scores compared to the conventional schemes (p < 0.03). In the atlas database study, the authors showed that the MAS using larger atlas databases generated better performance curves than the MAS using smaller ones, indicating that larger atlas databases could produce more accurate segmentations. Conclusions: The authors have developed a new MAS framework for automatic WHS of CTA and investigated alternative implementations of MAS. With the proposed atlas ranking algorithm and joint label fusion, the MAS scheme is able to generate accurate segmentations within a practically acceptable computation time. This method can be useful for the development of new clinical applications of cardiac CT.
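
The atlas ranking criterion can be illustrated as follows: estimate the conditional entropy H(I | L) of the target intensities given the propagated atlas labels from a joint histogram, and prefer atlases with lower entropy. The intensity binning and normalization below are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def conditional_entropy(target, propagated_labels, n_bins=64):
    """Conditional entropy H(I | L) of target intensities given propagated atlas labels.

    A lower value suggests the propagated atlas labeling explains the target image
    well, and can be used to rank atlases before label fusion.
    """
    intensities = target.ravel().astype(float)
    labels = propagated_labels.ravel()
    # Discretise intensities into n_bins bins (indices 0 .. n_bins-1)
    edges = np.linspace(intensities.min(), intensities.max(), n_bins + 1)[1:-1]
    bins = np.digitize(intensities, edges)
    h = 0.0
    n_total = labels.size
    for lab in np.unique(labels):
        sel = labels == lab
        p_l = sel.sum() / n_total
        counts = np.bincount(bins[sel], minlength=n_bins)
        p_i_given_l = counts / counts.sum()
        nz = p_i_given_l > 0
        h -= p_l * np.sum(p_i_given_l[nz] * np.log2(p_i_given_l[nz]))
    return h

# Hypothetical ranking: keep the atlases with the lowest H(I | L) for joint label fusion
# order = np.argsort([conditional_entropy(target_img, prop) for prop in propagated_atlases])
```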