
Showing papers in "Medical Physics in 2016"


Journal ArticleDOI
TL;DR: This report describes the methodology and nomenclature developed, presents the process maps, FMEAs, fault trees, and QM programs developed, and makes suggestions on how this information could be used in the clinic.
Abstract: The increasing complexity of modern radiation therapy planning and delivery challenges traditional prescriptive quality management (QM) methods, such as many of those included in guidelines published by organizations such as the AAPM, ASTRO, ACR, ESTRO, and IAEA. These prescriptive guidelines have traditionally focused on monitoring all aspects of the functional performance of radiotherapy (RT) equipment by comparing parameters against tolerances set at strict but achievable values. Many errors that occur in radiation oncology are not due to failures in devices and software; rather they are failures in workflow and process. A systematic understanding of the likelihood and clinical impact of possible failures throughout a course of radiotherapy is needed to direct limited QM resources efficiently to produce maximum safety and quality of patient care. Task Group 100 of the AAPM has taken a broad view of these issues and has developed a framework for designing QM activities, based on estimates of the probability of identified failures and their clinical outcome through the RT planning and delivery process. The Task Group has chosen a specific radiotherapy process required for “intensity modulated radiation therapy (IMRT)” as a case study. The goal of this work is to apply modern risk-based analysis techniques to this complex RT process in order to demonstrate to the RT community that such techniques may help identify more effective and efficient ways to enhance the safety and quality of our treatment processes. The task group generated by consensus an example quality management program strategy for the IMRT process performed at the institution of one of the authors. This report describes the methodology and nomenclature developed, presents the process maps, FMEAs, fault trees, and QM programs developed, and makes suggestions on how this information could be used in the clinic. The development and implementation of risk-assessment techniques will make radiation therapy safer and more efficient.
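The FMEA component of such a risk-based program ranks each potential failure mode by a risk priority number (RPN), the product of occurrence, severity, and lack-of-detectability scores. Below is a minimal illustrative sketch of how such a ranking could be computed; the failure modes and scores are placeholders, not TG-100 values.

```python
# Minimal FMEA ranking sketch: RPN = occurrence x severity x detectability,
# each scored 1-10 following the usual FMEA convention. Failure modes and
# scores below are illustrative placeholders only.
from dataclasses import dataclass

@dataclass
class FailureMode:
    step: str
    description: str
    occurrence: int     # O: how often the failure happens (1 = rare, 10 = frequent)
    severity: int       # S: clinical impact if undetected (1 = none, 10 = catastrophic)
    detectability: int  # D: likelihood it escapes detection (1 = always caught, 10 = never)

    @property
    def rpn(self) -> int:
        return self.occurrence * self.severity * self.detectability

failure_modes = [
    FailureMode("contouring", "wrong CT data set loaded", 2, 9, 4),
    FailureMode("planning", "incorrect density override", 4, 6, 5),
    FailureMode("delivery", "wrong patient setup shift applied", 3, 8, 3),
]

# Rank failure modes so QM effort is directed at the highest-risk steps first.
for fm in sorted(failure_modes, key=lambda f: f.rpn, reverse=True):
    print(f"RPN={fm.rpn:4d}  {fm.step}: {fm.description}")
```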

327 citations


Journal ArticleDOI
TL;DR: Large data sets collected from mammography are useful for developing new CAD systems for DBT, alleviating the problem and effort of collecting entirely new large data sets for the new modality.
Abstract: Purpose: Develop a computer-aided detection (CAD) system for masses in digital breast tomosynthesis (DBT) volume using a deep convolutional neural network (DCNN) with transfer learning from mammograms. Methods: A data set containing 2282 digitized film and digital mammograms and 324 DBT volumes were collected with IRB approval. The mass of interest on the images was marked by an experienced breast radiologist as reference standard. The data set was partitioned into a training set (2282 mammograms with 2461 masses and 230 DBT views with 228 masses) and an independent test set (94 DBT views with 89 masses). For DCNN training, the region of interest (ROI) containing the mass (true positive) was extracted from each image. False positive (FP) ROIs were identified at prescreening by their previously developed CAD systems. After data augmentation, a total of 45 072 mammographic ROIs and 37 450 DBT ROIs were obtained. Data normalization and reduction of non-uniformity in the ROIs across heterogeneous data was achieved using a background correction method applied to each ROI. A DCNN with four convolutional layers and three fully connected (FC) layers was first trained on the mammography data. Jittering and dropout techniques were used to reduce overfitting. After training with the mammographic ROIs, all weights in the first three convolutional layers were frozen, and only the last convolution layer and the FC layers were randomly initialized again and trained using the DBT training ROIs. The authors compared the performances of two CAD systems for mass detection in DBT: one used the DCNN-based approach and the other used their previously developed feature-based approach for FP reduction. The prescreening stage was identical in both systems, passing the same set of mass candidates to the FP reduction stage. For the feature-based CAD system, 3D clustering and active contour method was used for segmentation; morphological, gray level, and texture features were extracted and merged with a linear discriminant classifier to score the detected masses. For the DCNN-based CAD system, ROIs from five consecutive slices centered at each candidate were passed through the trained DCNN and a mass likelihood score was generated. The performances of the CAD systems were evaluated using free-response ROC curves and the performance difference was analyzed using a non-parametric method. Results: Before transfer learning, the DCNN trained only on mammograms with an AUC of 0.99 classified DBT masses with an AUC of 0.81 in the DBT training set. After transfer learning with DBT, the AUC improved to 0.90. For breast-based CAD detection in the test set, the sensitivity for the feature-based and the DCNN-based CAD systems was 83% and 91%, respectively, at 1 FP/DBT volume. The difference between the performances for the two systems was statistically significant (p-value < 0.05). Conclusions: The image patterns learned from the mammograms were transferred to the mass detection on DBT slices through the DCNN. This study demonstrated that large data sets collected from mammography are useful for developing new CAD systems for DBT, alleviating the problem and effort of collecting entirely new large data sets for the new modality.
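The transfer-learning step described above (freeze the early convolutional layers trained on mammograms, then re-initialize and retrain only the last convolutional layer and the fully connected layers on DBT ROIs) can be sketched in PyTorch as below. The architecture and hyperparameters are placeholders, not the authors' network.

```python
# Hedged sketch of the described transfer-learning scheme: a small CNN with four
# conv layers and three FC layers is first trained on mammographic ROIs, then the
# first three conv layers are frozen and the remaining layers are re-initialized
# and fine-tuned on DBT ROIs. All architecture details are illustrative.
import torch
import torch.nn as nn

class MassCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # conv1
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # conv2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # conv3
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),  # conv4
        )
        self.fc = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 4 * 4, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1),  # mass likelihood score
        )

    def forward(self, x):
        return self.fc(self.conv(x))

model = MassCNN()
# ... train on mammographic ROIs here ...

# Transfer to DBT: freeze conv1-conv3, re-initialize conv4 and the FC layers.
for layer_idx, layer in enumerate(model.conv):
    layer.requires_grad_(layer_idx >= 9)  # conv4 starts at index 9 of the Sequential
for module in list(model.conv[9:]) + list(model.fc):
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        module.reset_parameters()

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
# ... fine-tune on DBT ROIs with this optimizer ...
```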

244 citations


Journal ArticleDOI
TL;DR: The authors demonstrated that the DL-CNN can overcome the strong boundary between two regions that have a large difference in gray levels, which has been a problem for many gradient-based segmentation methods, and provide a seamless mask to guide level set segmentation.
Abstract: Purpose: The authors are developing a computerized system for bladder segmentation in CT urography (CTU) as a critical component for computer-aided detection of bladder cancer. Methods: A deep-learning convolutional neural network (DL-CNN) was trained to distinguish between the inside and the outside of the bladder using 160 000 regions of interest (ROI) from CTU images. The trained DL-CNN was used to estimate the likelihood of an ROI being inside the bladder for ROIs centered at each voxel in a CTU case, resulting in a likelihood map. Thresholding and hole-filling were applied to the map to generate the initial contour for the bladder, which was then refined by 3D and 2D level sets. The segmentation performance was evaluated using 173 cases: 81 cases in the training set (42 lesions, 21 wall thickenings, and 18 normal bladders) and 92 cases in the test set (43 lesions, 36 wall thickenings, and 13 normal bladders). The computerized segmentation accuracy using the DL likelihood map was compared to that using a likelihood map generated by Haar features and a random forest classifier, and that using our previous conjoint level set analysis and segmentation system (CLASS) without using a likelihood map. All methods were evaluated relative to the 3D hand-segmented reference contours. Results: With DL-CNN-based likelihood map and level sets, the average volume intersection ratio, average percent volume error, average absolute volume error, average minimum distance, and the Jaccard index for the test set were 81.9% ± 12.1%, 10.2% ± 16.2%, 14.0% ± 13.0%, 3.6 ± 2.0 mm, and 76.2% ± 11.8%, respectively. With the Haar-feature-based likelihood map and level sets, the corresponding values were 74.3% ± 12.7%, 13.0% ± 22.3%, 20.5% ± 15.7%, 5.7 ± 2.6 mm, and 66.7% ± 12.6%, respectively. With our previous CLASS with local contour refinement (LCR) method, the corresponding values were 78.0% ± 14.7%, 16.5% ± 16.8%, 18.2% ± 15.0%, 3.8 ± 2.3 mm, and 73.9% ± 13.5%, respectively. Conclusions: The authors demonstrated that the DL-CNN can overcome the strong boundary between two regions that have large difference in gray levels and provides a seamless mask to guide level set segmentation, which has been a problem for many gradient-based segmentation methods. Compared to our previous CLASS with LCR method, which required two user inputs to initialize the segmentation, DL-CNN with level sets achieved better segmentation performance while using a single user input. Compared to the Haar-feature-based likelihood map, the DL-CNN-based likelihood map could guide the level sets to achieve better segmentation. The results demonstrate the feasibility of our new approach of using DL-CNN in combination with level sets for segmentation of the bladder.
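A minimal sketch (with assumed thresholds and array shapes, not the authors' code) of how a voxel-wise likelihood map could be turned into an initial bladder mask and then scored against a reference contour with the Jaccard index and volume intersection ratio:

```python
# Sketch: threshold a DL-CNN likelihood map, fill holes to get an initial mask,
# and compare a segmentation against hand-outlined reference contours.
# Threshold value and volumes are illustrative.
import numpy as np
from scipy.ndimage import binary_fill_holes

def initial_mask(likelihood_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Likelihood map (one value per voxel) -> initial binary contour for level sets."""
    mask = likelihood_map > threshold
    return binary_fill_holes(mask)

def jaccard_index(seg: np.ndarray, ref: np.ndarray) -> float:
    intersection = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    return intersection / union

def volume_intersection_ratio(seg: np.ndarray, ref: np.ndarray) -> float:
    """Fraction of the reference volume covered by the segmentation."""
    return np.logical_and(seg, ref).sum() / ref.sum()

# Example with a random likelihood map standing in for the DL-CNN output.
rng = np.random.default_rng(0)
likelihood = rng.random((64, 64, 32))
mask = initial_mask(likelihood)
print(jaccard_index(mask, mask), volume_intersection_ratio(mask, mask))  # both 1.0
```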

216 citations


Journal ArticleDOI
TL;DR: An improved FP-reduction scheme based on convolutional neural networks (CNNs) has been developed for the detection of pulmonary nodules in PET/CT images, and results indicate that the method may be useful in the computer-aided detection of pulmonary nodules using PET/CT images.
Abstract: Purpose: Automated detection of solitary pulmonary nodules using positron emission tomography (PET) and computed tomography (CT) images shows good sensitivity; however, it is difficult to detect nodules in contact with normal organs, and additional efforts are needed so that the number of false positives (FPs) can be further reduced. In this paper, the authors propose an improved FP-reduction method for the detection of pulmonary nodules in PET/CT images by means of convolutional neural networks (CNNs). Methods: The overall scheme detects pulmonary nodules using both CT and PET images. In the CT images, a massive region is first detected using an active contour filter, which is a type of contrast enhancement filter that has a deformable kernel shape. Subsequently, high-uptake regions detected by the PET images are merged with the regions detected by the CT images. FP candidates are eliminated using an ensemble method; it consists of two feature extractions, one by shape/metabolic feature analysis and the other by a CNN, followed by a two-step classifier, one step being rule based and the other being based on support vector machines. Results: The authors evaluated the detection performance using 104 PET/CT images collected by a cancer-screening program. The sensitivity in detecting candidates at an initial stage was 97.2%, with 72.8 FPs/case. After performing the proposed FP-reduction method, the sensitivity of detection was 90.1%, with 4.9 FPs/case; the proposed method eliminated approximately half the FPs existing in the previous study. Conclusions: An improved FP-reduction scheme using CNN technique has been developed for the detection of pulmonary nodules in PET/CT images. The authors’ ensemble FP-reduction method eliminated 93% of the FPs; their proposed method using CNN technique eliminates approximately half the FPs existing in the previous study. These results indicate that their method may be useful in the computer-aided detection of pulmonary nodules using PET/CT images.

208 citations


Journal ArticleDOI
TL;DR: Performance measurements of the ToF-PET whole body GE SIGNA PET/MR system indicate that it is a promising new simultaneous imaging platform.
Abstract: Purpose: The GE SIGNA PET/MR is a new whole body integrated time-of-flight (ToF)-PET/MR scanner from GE Healthcare. The system is capable of simultaneous PET and MR image acquisition with sub-400 ps coincidence time resolution. Simultaneous PET/MR holds great potential as a method of interrogating molecular, functional, and anatomical parameters in clinical disease in one study. Despite the complementary imaging capabilities of PET and MRI, their respective hardware tends to be incompatible due to mutual interference. In this work, the GE SIGNA PET/MR is evaluated in terms of PET performance and the potential effects of interference from MRI operation. Methods: The NEMA NU 2-2012 protocol was followed to measure PET performance parameters including spatial resolution, noise equivalent count rate, sensitivity, accuracy, and image quality. Each of these tests was performed both with the MR subsystem idle and with continuous MR pulsing for the duration of the PET data acquisition. Most measurements were repeated at three separate test sites where the system is installed. Results: The scanner has achieved an average of 4.4, 4.1, and 5.3 mm full width at half maximum radial, tangential, and axial spatial resolutions, respectively, at 1 cm from the transaxial FOV center. The peak noise equivalent count rate (NECR) of 218 kcps and a scatter fraction of 43.6% are reached at an activity concentration of 17.8 kBq/ml. Sensitivity at the center position is 23.3 cps/kBq. The maximum relative slice count rate error below peak NECR was 3.3%, and the residual error from attenuation and scatter corrections was 3.6%. Continuous MR pulsing had either no effect or a minor effect on each measurement. Conclusions: Performance measurements of the ToF-PET whole body GE SIGNA PET/MR system indicate that it is a promising new simultaneous imaging platform.

199 citations


Journal ArticleDOI
TL;DR: A computational toolkit has been developed to calculate x-ray spectra based on the tungsten anode spectral model using interpolating cubic splines (TASMICS), updating previous work based on the TASMIP spectral model; the toolkit includes a MATLAB function library and improved user interface (UI) along with an optimization algorithm to match calculated beam quality with measurements.
Abstract: Purpose: A computational toolkit (spektr 3.0) has been developed to calculate x-ray spectra based on the tungsten anode spectral model using interpolating cubic splines (TASMICS) algorithm, updating previous work based on the tungsten anode spectral model using interpolating polynomials (TASMIP) spectral model. The toolkit includes a matlab (The Mathworks, Natick, MA) function library and improved user interface (UI) along with an optimization algorithm to match calculated beam quality with measurements. Methods: The spektr code generates x-ray spectra (photons/mm²/mAs at 100 cm from the source) using TASMICS as default (with TASMIP as an option) in 1 keV energy bins over beam energies 20–150 kV, extensible to 640 kV using the TASMICS spectra. An optimization tool was implemented to compute the added filtration (Al and W) that provides a best match between calculated and measured x-ray tube output (mGy/mAs or mR/mAs) for individual x-ray tubes that may differ from that assumed in TASMICS or TASMIP and to account for factors such as anode angle. Results: The median percent difference in photon counts for a TASMICS and TASMIP spectrum was 4.15% for tube potentials in the range 30–140 kV with the largest percentage difference arising in the low and high energy bins due to measurement errors in the empirically based TASMIP model and inaccurate polynomial fitting. The optimization tool reported a close agreement between measured and calculated spectra with a Pearson coefficient of 0.98. Conclusions: The computational toolkit, spektr, has been updated to version 3.0, validated against measurements and existing models, and made available as open source code. Video tutorials for the spektr function library, UI, and optimization tool are available.
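The optimization step described above, adjusting added Al (and W) filtration so that the calculated tube output matches a measurement, can be sketched as a simple least-squares fit over Beer-Lambert attenuation of a candidate spectrum. The spectrum, attenuation model, and "measured" output below are placeholders, not spektr data.

```python
# Sketch of matching calculated beam output to measurement by optimizing added
# Al filtration (Beer-Lambert attenuation of each energy bin). The unfiltered
# spectrum, attenuation values, and measured output are placeholders.
import numpy as np
from scipy.optimize import minimize_scalar

energies_keV = np.arange(20, 121)                      # 1 keV bins
spectrum = np.exp(-((energies_keV - 60) / 25.0) ** 2)  # placeholder unfiltered spectrum
mu_al_per_mm = 0.05 * (60.0 / energies_keV) ** 3       # crude E^-3 attenuation model
response = energies_keV.astype(float)                  # energy-weighted output proxy

def calculated_output(al_mm: float) -> float:
    filtered = spectrum * np.exp(-mu_al_per_mm * al_mm)
    return float(np.sum(filtered * response))

measured_output = calculated_output(2.7)   # pretend the measurement corresponds to 2.7 mm Al

result = minimize_scalar(
    lambda t: (calculated_output(t) - measured_output) ** 2,
    bounds=(0.0, 10.0), method="bounded",
)
print(f"best-fit added filtration ~ {result.x:.2f} mm Al")
```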

164 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a formalism for reference dosimetry with integrated MRIgRT devices by using magnetic field correction factors, but care must be taken with the choice of beam quality specifier and chamber orientation.
Abstract: Purpose: Magnetic resonance imaging–guided radiotherapy (MRIgRT) provides superior soft-tissue contrast and real-time imaging compared with standard image-guided RT, which uses x-ray based imaging. Several groups are developing integrated MRIgRT machines. Reference dosimetry with these new machines requires accounting for the effects of the magnetic field on the response of the ionization chambers used for dose calibration. Here, the authors propose a formalism for reference dosimetry with integrated MRIgRT devices. The authors also examined the suitability of the TPR20,10 and %dd(10)x beam quality specifiers in the presence of magnetic fields and calculated detector correction factors to account for the effects of the magnetic field for a range of detectors. Methods: The authors used full-head and point-source Monte Carlo models of an MR-linac along with detailed detector models of an Exradin A19, an NE2571, and several PTW Farmer chambers to calculate magnetic field correction factors for six commercial ionization chambers in three chamber configurations. Calculations of ionization chamber response (performed with geant4) were validated with specialized Fano cavity tests. %dd(10)x values, TPR20,10 values, and Spencer-Attix water-to-air restricted stopping power ratios were also calculated. The results were further validated against measurements made with a preclinical functioning MR-linac. Results: The TPR20,10 was found to be insensitive to the presence of the magnetic field, whereas the relative change in %dd(10)x was 2.4% when a transverse 1.5 T field was applied. The parameters chosen for the ionization chamber calculations passed the Fano cavity test to within ∼0.1%. Magnetic field correction factors varied in magnitude with detector orientation with the smallest corrections found when the chamber was parallel to the magnetic field. Conclusions: Reference dosimetry can be performed with integrated MRIgRT devices by using magnetic field correction factors, but care must be taken with the choice of beam quality specifier and chamber orientation. The uncertainties achievable under this formalism should be similar to those of conventional formalisms, although this must be further quantified.
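In a formalism of this kind, the magnetic field enters the familiar TG-51/TRS-398-style dose equation as an additional multiplicative correction: dose to water is the corrected chamber reading times the calibration coefficient, the beam quality correction k_Q, and a magnetic field correction k_B. A minimal sketch follows; all numerical values are illustrative, not taken from the paper.

```python
# Hedged sketch of reference dosimetry with a magnetic field correction factor:
# D_w = M * N_Dw * kQ * kB, where kB accounts for the change in chamber
# response in the magnetic field. All numbers below are illustrative.
def dose_to_water(M_corrected_nC: float,
                  N_Dw_Gy_per_nC: float,
                  k_Q: float,
                  k_B: float) -> float:
    """M is the fully corrected chamber reading (P_TP, P_ion, P_pol, P_elec applied)."""
    return M_corrected_nC * N_Dw_Gy_per_nC * k_Q * k_B

# Illustrative values: a Farmer-type chamber oriented parallel to a 1.5 T field,
# the orientation for which corrections tend to be smallest.
reading_nC = 20.15
N_Dw = 0.0535          # Gy/nC, typical order of magnitude for a Farmer chamber
k_Q = 0.991            # beam quality correction (illustrative)
k_B = 0.998            # magnetic field correction (illustrative)
print(f"D_w = {dose_to_water(reading_nC, N_Dw, k_Q, k_B):.3f} Gy")
```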

132 citations


Journal ArticleDOI
TL;DR: Results show that supervised machine learning of MRI texture features can predict MGMT methylation status in preoperative GBM tumors, thus providing a new noninvasive imaging biomarker.
Abstract: Purpose: Imaging biomarker research focuses on discovering relationships between radiological features and histological findings. In glioblastoma patients, methylation of the O6-methylguanine methyltransferase (MGMT) gene promoter is positively correlated with an increased effectiveness of current standard of care. In this paper, the authors investigate texture features as potential imaging biomarkers for capturing the MGMT methylation status of glioblastoma multiforme (GBM) tumors when combined with supervised classification schemes. Methods: A retrospective study of 155 GBM patients with known MGMT methylation status was conducted. Co-occurrence and run length texture features were calculated, and both support vector machines (SVMs) and random forest classifiers were used to predict MGMT methylation status. Results: The best classification system (an SVM-based classifier) had a maximum area under the receiver-operating characteristic (ROC) curve of 0.85 (95% CI: 0.78–0.91) using four texture features (correlation, energy, entropy, and local intensity) originating from the T2-weighted images, yielding at the optimal threshold of the ROC curve, a sensitivity of 0.803 and a specificity of 0.813. Conclusions: Results show that supervised machine learning of MRI texture features can predict MGMT methylation status in preoperative GBM tumors, thus providing a new noninvasive imaging biomarker.
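A minimal sketch (using scikit-image and scikit-learn, with random patches standing in for the T2-weighted tumor regions) of computing co-occurrence texture features and estimating an ROC AUC with an SVM classifier, in the spirit of the pipeline described above:

```python
# Sketch: gray-level co-occurrence (GLCM) texture features + SVM with ROC AUC.
# Random 2D patches stand in for T2-weighted tumor regions; labels are random.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def glcm_features(patch_uint8: np.ndarray) -> np.ndarray:
    glcm = graycomatrix(patch_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(60, 32, 32), dtype=np.uint8)
labels = rng.integers(0, 2, size=60)  # placeholder MGMT methylation status

X = np.vstack([glcm_features(p) for p in patches])
auc = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC (random data, expect ~0.5): {auc:.2f}")
```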

131 citations


Journal ArticleDOI
TL;DR: This session will review the outcomes and lessons learned from the 2015 SPIE-AAPM-NCI Lung Nodule Classification Challenge, and turn to the 2016 NIH-AAPM-Mayo Clinic Low Dose CT Grand Challenge, providing an overview of denoising and iterative reconstruction approaches and a description of the Challenge.
Abstract: Peer-reviewed journals and conference proceedings publish hundreds of papers that describe new medical imaging algorithms, including, for example, techniques for computer-aided diagnosis or characterization, segmentation, image registration, image reconstruction, and radiomics. It is difficult, if not impossible, to fairly compare the performance of these algorithms as investigators must either use different data sets, or if using the same data, use different implementations of competing algorithms. Grand Challenges facilitate the fair comparison of algorithms by providing a common data set to all participants and by having each participant be responsible for implementation of their own algorithm. The dissemination of findings from Grand Challenges provides important information to the scientific community and helps to determine which approaches have the greatest promise for successful translation to clinical practice. In this session we will review the outcomes and lessons learned from the 2015 SPIE-AAPM-NCI Lung Nodule Classification Challenge. We will then turn to the 2016 NIH-AAPM-Mayo Clinic Low Dose CT Grand Challenge, providing an overview of denoising and iterative reconstruction approaches and a description of the Challenge. The top 3 performing participants will be announced, and each will give a short presentation on their technique. Learning Objectives: 1. Understand the role of Grand Challenges in the field of medical imaging. 2. Be able to summarize the outcomes of the 2015 lung nodule classification challenge. 3. Be able to review the primary types of noise reduction techniques used in CT. 4. Be familiar with a library of patient CT projection data available to researchers. 5. Learn which techniques performed best in the Low Dose CT Grand Challenge. Disclosures: Pelc: GE Healthcare, Philips Healthcare; McCollough: Research grant, Siemens Healthcare; Low Dose CT Grand Challenge supported by the AAPM Science Council and NIH (grant EB 017185), and hosted by the Mayo Clinic; Giger: stockholder R2 technology/Hologic, royalties from Hologic, GE Medical Systems, MEDIAN Technologies, Riverain Medical, Mitsubishi/Toshiba. Cofounder/stockholder Quantitative Insights.

128 citations


Journal ArticleDOI
TL;DR: The advantages and drawbacks of each approach for addressing the challenges of MR-based attenuation correction are comprehensively described and the opportunities brought by both MRI and PET imaging modalities for deriving accurate attenuation maps and improving PET quantification will be elaborated.
Abstract: Attenuation correction is an essential component of the long chain of data correction techniques required to achieve the full potential of quantitative positron emission tomography (PET) imaging. The development of combined PET/magnetic resonance imaging (MRI) systems mandated the widespread interest in developing novel strategies for deriving accurate attenuation maps with the aim to improve the quantitative accuracy of these emerging hybrid imaging systems. The attenuation map in PET/MRI should ideally be derived from anatomical MR images; however, MRI intensities reflect proton density and relaxation time properties of biological tissues rather than their electron density and photon attenuation properties. Therefore, in contrast to PET/computed tomography, there is a lack of standardized global mapping between the intensities of MRI signal and linear attenuation coefficients at 511 keV. Moreover, in standard MRI sequences, bones and lung tissues do not produce measurable signals owing to their low proton density and short transverse relaxation times. MR images are also inevitably subject to artifacts that degrade their quality, thus compromising their applicability for the task of attenuation correction in PET/MRI. MRI-guided attenuation correction strategies can be classified in three broad categories: (i) segmentation-based approaches, (ii) atlas-registration and machine learning methods, and (iii) emission/transmission-based approaches. This paper summarizes past and current state-of-the-art developments and latest advances in PET/MRI attenuation correction. The advantages and drawbacks of each approach for addressing the challenges of MR-based attenuation correction are comprehensively described. The opportunities brought by both MRI and PET imaging modalities for deriving accurate attenuation maps and improving PET quantification will be elaborated. Future prospects and potential clinical applications of these techniques and their integration in commercial systems will also be discussed.

125 citations


Journal ArticleDOI
TL;DR: The Virtual QA process predicts IMRT passing rates with a high likelihood, allows the detection of failures due to setup errors, and it is sensitive enough to detect small differences between matched Linacs.
Abstract: Purpose: It is common practice to perform patient-specific pretreatment verifications prior to the clinical delivery of IMRT. This process can be time-consuming and not altogether instructive due to the myriad sources that may produce a failing result. The purpose of this study was to develop an algorithm capable of predicting IMRT QA passing rates a priori. Methods: In total, 498 IMRT plans from all treatment sites were planned in Eclipse version 11 and delivered using a dynamic sliding window technique on Clinac iX or TrueBeam Linacs. Passing rates for a 3%/3 mm local dose/distance-to-agreement (DTA) criterion were recorded using a commercial 2D diode array. Each plan was characterized by 78 metrics that describe different aspects of their complexity that could lead to disagreements between the calculated and measured dose. A Poisson regression with Lasso regularization was trained to learn the relation between the plan characteristics and each passing rate. Results: Passing rates for 3%/3 mm local dose/DTA can be predicted with an error smaller than 3% for all plans analyzed. The most important metrics to describe the passing rates were determined to be the MU factor (MU per Gy), small aperture score, irregularity factor, and fraction of the plan delivered at the corners of a 40 × 40 cm field. The higher the value of these metrics, the worse the passing rates. Conclusions: The Virtual QA process predicts IMRT passing rates with a high likelihood, allows the detection of failures due to setup errors, and is sensitive enough to detect small differences between matched Linacs.
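The learning step, mapping plan-complexity metrics to QA results, can be sketched with scikit-learn as below. Note that the paper uses Poisson regression with Lasso (L1) regularization, whereas scikit-learn's PoissonRegressor uses an L2 penalty, so this is only a stand-in; all metrics and targets are synthetic placeholders.

```python
# Sketch of learning a relation between plan-complexity metrics and IMRT QA
# results. A ridge-penalized PoissonRegressor stands in for the paper's
# Lasso-regularized Poisson model; all data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_plans, n_metrics = 200, 10          # e.g. MU factor, small-aperture score, ...
X = rng.normal(size=(n_plans, n_metrics))
# Synthetic target: number of failing diode points, driven by two of the metrics.
failing_points = rng.poisson(lam=np.exp(0.5 + 0.8 * X[:, 0] + 0.4 * X[:, 1]))

model = make_pipeline(StandardScaler(), PoissonRegressor(alpha=0.1, max_iter=500))
model.fit(X, failing_points)

total_points = 1000                    # detector points evaluated per plan (illustrative)
predicted_pass_rate = 100.0 * (1.0 - model.predict(X[:5]) / total_points)
print(np.round(predicted_pass_rate, 1))
```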

Journal ArticleDOI
TL;DR: An automated method to identify the best-quality coronary arterial segment from multiple-phase coronary CT angiography (cCTA) acquisitions is developed, enabling interpreting physicians to fully utilize the best available information in cCTA for diagnosis of coronary disease, without requiring manual search through the multiple phases.
Abstract: Purpose: The authors are developing an automated method to identify the best-quality coronary arterial segment from multiple-phase coronary CT angiography (cCTA) acquisitions, which may be used by either interpreting physicians or computer-aided detection systems to optimally and efficiently utilize the diagnostic information available in multiple-phase cCTA for the detection of coronary artery disease. Methods: After initialization with a manually identified seed point, each coronary artery tree is automatically extracted from multiple cCTA phases using our multiscale coronary artery response enhancement and 3D rolling balloon region growing vessel segmentation and tracking method. The coronary artery trees from multiple phases are then aligned by a global registration using an affine transformation with quadratic terms and nonlinear simplex optimization, followed by a local registration using a cubic B-spline method with fast localized optimization. The corresponding coronary arteries among the available phases are identified using a recursive coronary segment matching method. Each of the identified vessel segments is transformed by the curved planar reformation (CPR) method. Four features are extracted from each corresponding segment as quality indicators in the original computed tomography volume and the straightened CPR volume, and each quality indicator is used as a voting classifier for the arterial segment. A weighted voting ensemble (WVE) classifier is designed to combine the votes of the four voting classifiers for each corresponding segment. The segment with the highest WVE vote is then selected as the best-quality segment. In this study, the training and test sets consisted of 6 and 20 cCTA cases, respectively, each with 6 phases, containing a total of 156 cCTA volumes and 312 coronary artery trees. An observer preference study was also conducted with one expert cardiothoracic radiologist and four nonradiologist readers to visually rank vessel segment quality. The performance of our automated method was evaluated by comparing the automatically identified best-quality (AI-BQ) segments to those selected by the observers. Results: For the 20 test cases, 254 groups of corresponding vessel segments were identified after multiple phase registration and recursive matching. The AI-BQ segments agreed with the radiologist’s top 2 ranked segments in 78.3% of the 254 groups (Cohen’s kappa 0.60), and with the 4 nonradiologist observers in 76.8%, 84.3%, 83.9%, and 85.8% of the 254 groups. In addition, 89.4% of the AI-BQ segments agreed with at least two observers’ top 2 rankings, and 96.5% agreed with at least one observer’s top 2 rankings. In comparison, agreement between the four observers’ top ranked segment and the radiologist’s top 2 ranked segments was 79.9%, 80.7%, 82.3%, and 76.8%, respectively, with kappa values ranging from 0.56 to 0.68. Conclusions: The performance of our automated method for selecting the best-quality coronary segments from a multiple-phase cCTA acquisition was comparable to the selection made by human observers.
This study demonstrates the potential usefulness of the automated method in clinical practice, enabling interpreting physicians to fully utilize the best available information in cCTA for diagnosis of coronary disease without requiring a manual search through the multiple phases, while reducing the variability in image phase selection across readers with differing expertise.
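The weighted voting ensemble (WVE) that combines the four per-segment quality indicators can be sketched as below; the indicator values, weights, and number of candidate phases are illustrative, not taken from the study.

```python
# Sketch of a weighted voting ensemble over per-segment quality indicators:
# each indicator "votes" for the phase whose segment it ranks best, and votes
# are combined with indicator weights. All values and weights are illustrative.
import numpy as np

# quality[i, j] = value of quality indicator j for the segment from phase i
quality = np.array([
    [0.72, 0.60, 0.81, 0.55],   # phase 1
    [0.88, 0.71, 0.79, 0.69],   # phase 2
    [0.65, 0.58, 0.62, 0.50],   # phase 3
])
weights = np.array([0.35, 0.25, 0.25, 0.15])   # placeholder indicator weights

votes = np.zeros(quality.shape[0])
for j, w in enumerate(weights):
    best_phase = int(np.argmax(quality[:, j]))  # indicator j votes for its best phase
    votes[best_phase] += w

best_quality_phase = int(np.argmax(votes))
print(f"selected phase: {best_quality_phase + 1}, votes = {votes}")
```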

Journal ArticleDOI
TL;DR: Focused ultrasound (FUS) neuromodulation in the megahertz range achieved superior targeting specificity in the murine brain and modulated both motor and sensory responses, demonstrating the capability of FUS to perform functional brain mapping.
Abstract: Purpose: Ultrasound neuromodulation is a promising noninvasive technique for controlling neural activity. Previous small animal studies suffered from low targeting specificity because of the low ultrasound frequencies (<690 kHz) used. In this study, the authors demonstrated the capability of focused ultrasound (FUS) neuromodulation in the megahertz range to achieve superior targeting specificity in the murine brain as well as demonstrate modulation of both motor and sensory responses. Methods: FUS sonications were carried out at 1.9 MHz with 50% duty cycle, pulse repetition frequency of 1 kHz, and duration of 1 s. The robustness of the FUS neuromodulation was assessed first in the sensorimotor cortex, where elicited motor activities were observed and recorded on video and electromyography. Deeper brain regions were then targeted, where pupillary dilation served as an indicator of successful modulation of subcortical brain structures. Results: Contralateral and ipsilateral movements of the hind limbs were repeatedly observed when the FUS was targeted at the sensorimotor cortex. Induced trunk and tail movements were also observed at different coordinates inside the sensorimotor cortex. At deeper targeted structures, FUS induced eyeball movements (superior colliculus) and pupillary dilation (pretectal nucleus, locus coeruleus, and hippocampus). Histological analysis revealed no tissue damage associated with the FUS sonications. Conclusions: The motor movements and pupillary dilation observed in this study demonstrate the capability of FUS to modulate cortical and subcortical brain structures without inducing any damage. The variety of responses observed here demonstrates the capability of FUS to perform functional brain mapping.

Journal ArticleDOI
TL;DR: The use of thyroid CAD to differentiate malignant from benign lesions shows accuracy similar to that obtained via visual inspection by radiologists, and might be considered a viable way to generate a second opinion for radiologists in clinical practice.
Abstract: Purpose: To develop a semiautomated computer-aided diagnosis (CAD) system for thyroid cancer using two-dimensional ultrasound images that can be used to yield a second opinion in the clinic to differentiate malignant and benign lesions. Methods: A total of 118 ultrasound images that included axial and longitudinal images from patients with biopsy-confirmed malignant (n = 30) and benign (n = 29) nodules were collected. Thyroid CAD software was developed to extract quantitative features from these images based on thyroid nodule segmentation in which adaptive diffusion flow for active contours was used. Various features, including histogram, intensity differences, elliptical fit, gray-level co-occurrence matrices, and gray-level run-length matrices, were evaluated for each region imaged. Based on these imaging features, a support vector machine (SVM) classifier was used to differentiate benign and malignant nodules. Leave-one-out cross-validation with sequential forward feature selection was performed to evaluate the overall accuracy of this method. Additionally, analyses with contingency tables and receiver operating characteristic (ROC) curves were performed to compare the performance of CAD with visual inspection by expert radiologists based on established gold standards. Results: Most univariate features for this proposed CAD system attained accuracies that ranged from 78.0% to 83.1%. When optimal SVM parameters that were established using a grid search method with features that radiologists use for visual inspection were employed, the authors could attain rates of accuracy that ranged from 72.9% to 84.7%. Using leave-one-out cross-validation results in a multivariate analysis of various features, the highest accuracy achieved using the proposed CAD system was 98.3%, whereas visual inspection by radiologists reached 94.9% accuracy. To obtain the highest accuracies, “axial ratio” and “max probability” in axial images were most frequently included in the optimal feature sets for the authors' proposed CAD system, while “shape” and “calcification” in longitudinal images were most frequently included in the optimal feature sets for visual inspection by radiologists. The computed areas under curves in the ROC analysis were 0.986 and 0.979 for the proposed CAD system and visual inspection by radiologists, respectively; no significant difference was detected between these groups. Conclusions: The use of thyroid CAD to differentiate malignant from benign lesions shows accuracy similar to that obtained via visual inspection by radiologists. Thyroid CAD might be considered a viable way to generate a second opinion for radiologists in clinical practice.
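The evaluation scheme (leave-one-out cross-validation combined with sequential forward feature selection and an SVM) can be sketched with scikit-learn as below; the feature matrix and labels are synthetic stand-ins for the extracted nodule features.

```python
# Sketch: leave-one-out cross-validation of an SVM with sequential forward
# feature selection, as in the described thyroid CAD evaluation. Data are
# synthetic placeholders for the extracted ultrasound features.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(59, 20))            # 59 nodules x 20 candidate features
y = rng.integers(0, 2, size=59)          # 0 = benign, 1 = malignant (placeholder)

svm = SVC(kernel="rbf", C=1.0, gamma="scale")
selector = SequentialFeatureSelector(svm, n_features_to_select=4,
                                     direction="forward", cv=5)
pipeline = make_pipeline(StandardScaler(), selector, svm)

# Feature selection runs inside each fold, so no information leaks from the held-out case.
accuracy = cross_val_score(pipeline, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy (random data, expect ~0.5): {accuracy:.2f}")
```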

Journal ArticleDOI
TL;DR: A simple simulation model for therapeutic 4He beams is presented and validated experimentally by means of physical and biological dosimetry; it is now possible to perform detailed treatment planning studies with 4He beams, either exclusively or in combination with other ion modalities.
Abstract: Purpose: Modern facilities for actively scanned ion beam radiotherapy allow in principle the use of helium beams, which could present specific advantages, especially for pediatric tumors. In order to assess the potential use of these beams for radiotherapy, i.e., to create realistic treatment plans, the authors set up a dedicated 4He beam model, providing base data for their treatment planning system TRiP98, which is reported in this work together with its physical and biological validation. Methods: A semiempirical beam model for the physical depth dose deposition and the production of nuclear fragments was developed and introduced in TRiP98. For the biological effect calculations the latest version of the local effect model was used. The model predictions were experimentally verified at the HIT facility. The primary beam attenuation and the characteristics of secondary charged particles at various depths in water were investigated using 4He ion beams of 200 MeV/u. The nuclear charge of secondary fragments was identified using a ΔE/E telescope. 3D absorbed dose distributions were measured with pinpoint ionization chambers, and the biological dosimetry experiments were performed by irradiating a stack of Chinese hamster ovary cells arranged in an extended target. Results: The few experimental data available on basic physical processes are reproduced by their beam model. The experimental verification of absorbed dose distributions in extended target volumes yields an overall agreement, with a slight underestimation of the lateral spread. Cell survival along a 4 cm extended target is reproduced with remarkable accuracy. Conclusions: The authors presented a simple simulation model for therapeutic 4He beams, which they introduced in TRiP98 and validated experimentally by means of physical and biological dosimetry. Thus, it is now possible to perform detailed treatment planning studies with 4He beams, either exclusively or in combination with other ion modalities.

Journal ArticleDOI
TL;DR: This study aims at performing proton therapy TP on SECT and DECT head images of the same patients and evaluating whether the reported improved DECT SPR accuracy translates into clinically relevant range shifts in clinical head treatment scenarios.
Abstract: Purpose: Dual energy CT (DECT) has recently been proposed as an improvement over single energy CT (SECT) for stopping power ratio (SPR) estimation for proton therapy treatment planning (TP), thereby potentially reducing range uncertainties. Published literature investigated phantoms. This study aims at performing proton therapy TP on SECT and DECT head images of the same patients and at evaluating whether the reported improved DECT SPR accuracy translates into clinically relevant range shifts in clinical head treatment scenarios. Methods: Two phantoms were scanned at a last generation dual source DECT scanner at 90 and 150 kVp with Sn filtration. The first phantom (Gammex phantom) was used to calibrate the scanner in terms of SPR while the second served as evaluation (CIRS phantom). DECT images of five head trauma patients were used as surrogate cancer patient images for TP of proton therapy. Pencil beam algorithm based TP was performed on SECT and DECT images and the dose distributions corresponding to the optimized proton plans were calculated using a Monte Carlo (MC) simulation platform using the same patient geometry for both plans obtained from conversion of the 150 kVp images. Range shifts between the MC dose distributions from SECT and DECT plans were assessed using 2D range maps. Results: SPR root mean square errors (RMSEs) for the inserts of the Gammex phantom were 1.9%, 1.8%, and 1.2% for SECT phantom calibration (SECT_phantom), SECT stoichiometric calibration (SECT_stoichiometric), and DECT calibration, respectively. For the CIRS phantom, these were 3.6%, 1.6%, and 1.0%. When investigating patient anatomy, group median range differences of up to -1.4% were observed for head cases when comparing SECT_stoichiometric with DECT. For this calibration the 25th and 75th percentiles varied from -2% to 0% across the five patients. The group median was found to be limited to 0.5% when using SECT_phantom and the 25th and 75th percentiles varied from -1% to 2%. Conclusions: Proton therapy TP using a pencil beam algorithm and DECT images was performed for the first time. Given that the DECT accuracy as evaluated by two phantoms was 1.2% and 1.0% RMSE, it is questionable whether the range differences reported here are significant. (C) 2016 American Association of Physicists in Medicine.

Journal ArticleDOI
TL;DR: Using the vCT as prior, errors can be overcome and images suitable for accurate delineation and dose calculation in CBCT-based adaptive IMPT can be retrieved from scatter correction of the CBCT projections.
Abstract: Purpose: This work aims at investigating intensity corrected cone-beam x-ray computed tomography (CBCT) images for accurate dose calculation in adaptive intensity modulated proton therapy (IMPT) for prostate and head and neck (H&N) cancer. A deformable image registration (DIR)-based method and a scatter correction approach using the image data obtained from DIR as prior are characterized and compared on the basis of the same clinical patient cohort for the first time. Methods: Planning CT (pCT) and daily CBCT data (reconstructed images and measured projections) of four H&N and four prostate cancer patients have been considered in this study. A previously validated Morphons algorithm was used for DIR of the planning CT to the current CBCT image, yielding a so-called virtual CT (vCT). For the first time, this approach was translated from H&N to prostate cancer cases in the scope of proton therapy. The warped pCT images were also used as prior for scatter correction of the CBCT projections for both tumor sites. Single field uniform dose and IMPT (only for H&N cases) treatment plans have been generated with a research version of a commercial planning system. Dose calculations on vCT and scatter corrected CBCT (CBCTcor) were compared by means of the proton range and a gamma-index analysis. For the H&N cases, an additional diagnostic replanning CT (rpCT) acquired within three days of the CBCT served as additional reference. For the prostate patients, a comprehensive contour comparison of CBCT and vCT, using a trained physician's delineation, was performed. Results: A high agreement of vCT and CBCTcor was found in terms of the proton range and gamma-index analysis. For all patients and indications between 95% and 100% of the proton dose profiles in beam's eye view showed a range agreement of better than 3 mm. The pass rate in a (2%, 2 mm) gamma-comparison was between 96% and 100%. For H&N patients, an equivalent agreement of vCT and CBCTcor to the reference rpCT was observed. However, for the prostate cases, an insufficient accuracy of the vCT contours retrieved from DIR was found, while the CBCTcor contours showed very high agreement to the contours delineated on the raw CBCT. Conclusions: For H&N patients, no considerable differences of vCT and CBCTcor were found. For prostate cases, despite the high dosimetric agreement, the DIR yields incorrect contours, probably due to the more pronounced anatomical changes in the abdomen and the reduced soft-tissue contrast in the CBCT. Using the vCT as prior, these inaccuracies can be overcome and images suitable for accurate delineation and dose calculation in CBCT-based adaptive IMPT can be retrieved from scatter correction of the CBCT projections.

Journal ArticleDOI
TL;DR: A fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures and optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice.
Abstract: Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the gate/geant4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with gate/geant4 for various geometries show deviations within 2%/1 mm. In spite of the limited memory bandwidth of the coprocessor, the simulation time is below 25 s for 10⁷ primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.

Journal ArticleDOI
TL;DR: This pilot study demonstrated that longitudinal diffusion MRI is feasible using the 0.35 T ViewRay MRI and may enable response-guided adaptive radiotherapy.
Abstract: Purpose: To demonstrate the preliminary feasibility of a longitudinal diffusion magnetic resonance imaging (MRI) strategy for assessing patient response to radiotherapy at 0.35 T using an MRI-guided radiotherapy system (ViewRay). Methods: Six patients (three head and neck cancer, three sarcoma) who underwent fractionated radiotherapy were enrolled in this study. A 2D multislice spin echo single-shot echo planar imaging diffusion pulse sequence was implemented on the ViewRay system and tested in phantom studies. The same pulse sequence was used to acquire longitudinal diffusion data (every 2–5 fractions) on the six patients throughout the entire course of radiotherapy. The reproducibility of the apparent diffusion coefficient (ADC) measurements was assessed using reference regions and the temporal variations of the tumor ADC values were evaluated. Results: In diffusion phantom studies, the ADC values measured on the ViewRay system matched well with reference ADC values with <5% error for a range of ground truth diffusion coefficients of 0.4–1.1 × 10⁻³ mm²/s. The remote reference regions (i.e., brainstem in head and neck patients) had consistent ADC values throughout the therapy for all three head and neck patients, indicating acceptable reproducibility of the diffusion imaging sequence. The tumor ADC values changed throughout therapy, with the change differing between patients, ranging from a 40% drop in ADC within the first week of therapy to gradually increasing throughout therapy. For larger tumors, intratumoral heterogeneity was observed. For one sarcoma patient, postradiotherapy biopsy showed less than 10% necrosis score, which correlated with the observed 40% decrease in ADC from the fifth fraction to the eighth treatment fraction. Conclusions: This pilot study demonstrated that longitudinal diffusion MRI is feasible using the 0.35 T ViewRay MRI. Larger patient cohort studies are warranted to correlate the longitudinal diffusion measurements to patient outcomes. Such an approach may enable response-guided adaptive radiotherapy.
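The ADC values discussed above follow from the monoexponential diffusion model S(b) = S0·exp(-b·ADC); a minimal sketch of fitting ADC from two or more b-value acquisitions (the b-values and signals below are illustrative, not patient data):

```python
# Sketch: estimate the apparent diffusion coefficient (ADC) from diffusion-
# weighted signals using the monoexponential model S(b) = S0 * exp(-b * ADC).
import numpy as np

def fit_adc(b_values_s_per_mm2: np.ndarray, signals: np.ndarray) -> float:
    """Log-linear least-squares fit; returns ADC in mm^2/s."""
    slope, _intercept = np.polyfit(b_values_s_per_mm2, np.log(signals), deg=1)
    return -slope

b_values = np.array([0.0, 500.0, 1000.0])           # s/mm^2
signals = 1200.0 * np.exp(-b_values * 0.9e-3)       # simulated tissue with ADC = 0.9e-3
adc = fit_adc(b_values, signals)
print(f"fitted ADC = {adc * 1e3:.2f} x 10^-3 mm^2/s")
```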

Journal ArticleDOI
TL;DR: The phantoms described in this work simulate the mechanical, optical, and acoustic properties of human skin tissues, vessel tissue, and blood and are uniquely suited to serve as test models for multimodal imaging techniques and image-guided interventions.
Abstract: Purpose: This paper describes the design, fabrication, and characterization of multilayered tissue mimicking skin and vessel phantoms with tunable mechanical, optical, and acoustic properties. The phantoms comprise epidermis, dermis, and hypodermis skin layers, blood vessels, and blood mimicking fluid. Each tissue component may be individually tailored to a range of physiological and demographic conditions. Methods: The skin layers were constructed from varying concentrations of gelatin and agar. Synthetic melanin, India ink, absorbing dyes, and Intralipid were added to provide optical absorption and scattering in the skin layers. Bovine serum albumin was used to increase acoustic attenuation, and 40 μm diameter silica microspheres were used to induce acoustic backscatter. Phantom vessels consisting of thin-walled polydimethylsiloxane tubing were embedded at depths of 2–6 mm beneath the skin, and blood mimicking fluid was passed through the vessels. The phantoms were characterized through uniaxial compression and tension experiments, rheological frequency sweep studies, diffuse reflectance spectroscopy, and ultrasonic pulse-echo measurements. Results were then compared to in vivo and ex vivo literature data. Results: The elastic and dynamic shear behavior of the phantom skin layers and vessel wall closely approximated the behavior of porcine skin tissues and human vessels. Similarly, the optical properties of the phantom tissue components in the wavelength range of 400–1100 nm, as well as the acoustic properties in the frequency range of 2–9 MHz, were comparable to human tissue data. Normalized root mean square percent errors between the phantom results and the literature reference values ranged from 1.06% to 9.82%, which for many measurements were less than the sample variability. Finally, the mechanical and imaging characteristics of the phantoms were found to remain stable after 30 days of storage at 21 °C. Conclusions: The phantoms described in this work simulate the mechanical, optical, and acoustic properties of human skin tissues, vessel tissue, and blood. In this way, the phantoms are uniquely suited to serve as test models for multimodal imaging techniques and image-guided interventions.

Journal ArticleDOI
TL;DR: Developments using time-of-flight (TOF) PET emission data for AC have shown promising advances and open a wide range of applications that may both remedy deficiencies of purely MRI-based AC approaches in PET/MRI and improve standalone PET imaging.
Abstract: The problem of attenuation correction (AC) for quantitative positron emission tomography (PET) had been considered solved to a large extent after the commercial availability of devices combining PET with computed tomography (CT) in 2001; single photon emission computed tomography (SPECT) has seen a similar development. However, stimulated in particular by technical advances toward clinical systems combining PET and magnetic resonance imaging (MRI), research interest in alternative approaches for PET AC has grown substantially in the last years. In this comprehensive literature review, the authors first present theoretical results with relevance to simultaneous reconstruction of attenuation and activity. The authors then look back at the early history of this research area especially in PET; since this history is closely interwoven with that of similar approaches in SPECT, these will also be covered. We then review algorithmic advances in PET, including analytic and iterative algorithms. The analytic approaches are either based on the Helgason-Ludwig data consistency conditions of the Radon transform, or generalizations of John's partial differential equation; with respect to iterative methods, we discuss maximum likelihood reconstruction of attenuation and activity (MLAA), the maximum likelihood attenuation correction factors (MLACF) algorithm, and their offspring. The description of methods is followed by a structured account of applications for simultaneous reconstruction techniques: this discussion covers organ-specific applications, applications specific to PET/MRI, applications using supplemental transmission information, and motion-aware applications. After briefly summarizing SPECT applications, we consider recent developments using emission data other than unscattered photons. In summary, developments using time-of-flight (TOF) PET emission data for AC have shown promising advances and open a wide range of applications. These techniques may both remedy deficiencies of purely MRI-based AC approaches in PET/MRI and improve standalone PET imaging.

Journal ArticleDOI
TL;DR: A comparison between a fast, commercial, in-patient Monte Carlo dose calculation algorithm (GPUMCD) and geant4 is provided and the dosimetric impact of the application of an external 1.5 T magnetic field is evaluated.
Abstract: Purpose: This paper provides a comparison between a fast, commercial, in-patient Monte Carlo dose calculation algorithm (GPUMCD) and geant4. It also evaluates the dosimetric impact of the application of an external 1.5 T magnetic field. Methods: A stand-alone version of the Elekta™ GPUMCD algorithm, to be used within the Monaco treatment planning system to model dose for the Elekta™ magnetic resonance imaging (MRI) Linac, was compared against geant4 (v10.1). This was done in the presence or absence of a 1.5 T static magnetic field directed orthogonally to the radiation beam axis. Phantoms with material compositions of water, ICRU lung, ICRU compact-bone, and titanium were used for this purpose. Beams with 2 MeV monoenergetic photons as well as a 7 MV histogrammed spectrum representing the MRI-Linac spectrum were emitted from a point source using a nominal source-to-surface distance of 142.5 cm. Field sizes ranged from 1.5 × 1.5 to 10 × 10 cm². Dose scoring was performed using a 3D grid comprising 1 mm³ voxels. The production thresholds were equivalent for both codes. Results were analyzed based upon a voxel by voxel dose difference between the two codes and also using a volumetric gamma analysis. Results: Comparisons were drawn from central axis depth doses, cross beam profiles, and isodose contours. Both in the presence and absence of a 1.5 T static magnetic field the relative differences in doses scored along the beam central axis were less than 1% for the homogeneous water phantom and all results matched within a maximum of ±2% for heterogeneous phantoms. Volumetric gamma analysis indicated that more than 99% of the examined volume passed gamma criteria of 2%/2 mm (dose difference and distance to agreement, respectively). These criteria were chosen because the minimum primary statistical uncertainty in dose scoring voxels was 0.5%. The presence of the magnetic field affects the dose at the interface depending upon the density of the material on either side of the interface. This effect varies with the field size. For example, at the water-lung interface a 33.94% increase in dose was observed (relative to the Dmax), by both GPUMCD and geant4 for the field size of 2 × 2 cm² (compared to no B-field case), which increased to 47.83% for the field size of 5 × 5 cm² in the presence of the magnetic field. Similarly, at the lung-water interface, the dose decreased by 19.21% (relative to Dmax) for a field size of 2 × 2 cm² and by 30.01% for a 5 × 5 cm² field size. For more complex combinations of materials the dose deposition also becomes more complex. Conclusions: The GPUMCD algorithm showed good agreement against geant4 both in the presence and absence of a 1.5 T external magnetic field. The application of a 1.5 T magnetic field significantly alters the dose at the interfaces by either increasing or decreasing the dose depending upon the density of the material on either side of the interfaces.
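A minimal 1D illustration of the gamma analysis used for such comparisons (dose difference and distance to agreement combined into a single index, with a point passing when gamma is at most 1). The criteria and dose profiles below are illustrative, and the study's analysis was volumetric rather than 1D.

```python
# Sketch of a 1D gamma-index computation with 2% dose difference and 2 mm
# distance-to-agreement criteria. Real analyses are 3D; profiles are illustrative.
import numpy as np

def gamma_1d(x_mm, dose_ref, dose_eval, dd=0.02, dta_mm=2.0):
    """Global gamma: for each reference point, minimize the combined dose/distance metric."""
    d_max = dose_ref.max()
    gamma = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x_mm, dose_ref)):
        dist_term = ((x_mm - xi) / dta_mm) ** 2
        dose_term = ((dose_eval - di) / (dd * d_max)) ** 2
        gamma[i] = np.sqrt(np.min(dist_term + dose_term))
    return gamma

x = np.linspace(-50, 50, 201)                        # mm, 0.5 mm spacing
reference = np.exp(-(x / 20.0) ** 2)                 # illustrative profile
evaluated = np.exp(-((x - 0.5) / 20.0) ** 2) * 1.01  # 0.5 mm shift, 1% scaling
g = gamma_1d(x, reference, evaluated)
print(f"gamma pass rate (<= 1): {np.mean(g <= 1.0):.1%}")
```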

Journal ArticleDOI
TL;DR: Strong results are obtained using transfer learning to characterize ultrasound breast cancer images, which allows us to directly classify a small dataset of lesions in a computationally inexpensive fashion without any manual input.
Abstract: Purpose: To assess the performance of using transferred features from pre-trained deep convolutional networks (CNNs) in the task of classifying cancer in breast ultrasound images, and to compare this method of transfer learning with previous methods involving human-designed features. Methods: A breast ultrasound dataset consisting of 1125 cases and 2393 regions of interest (ROIs) was used. Each ROI was labeled as cystic, benign, or malignant. Features were extracted from each ROI using pre-trained CNNs and used to train support vector machine (SVM) classifiers in the tasks of distinguishing non-malignant (benign+cystic) vs malignant lesions and benign vs malignant lesions. For a baseline comparison, classifiers were also trained on prior analytically-extracted tumor features. Five-fold cross-validation (by case) was conducted with the area under the receiver operating characteristic curve (AUC) as the performance metric. Results: Classifiers trained on CNN-extracted features were comparable to classifiers trained on human-designed features. In the non-malignant vs malignant task, both the SVM trained on CNN-extracted features and the SVM trained on human-designed features obtained an AUC of 0.90. In the task of determining benign vs malignant, the SVM trained on CNN-extracted features obtained an AUC of 0.88, compared to the AUC of 0.85 obtained by the SVM trained on human-designed features. Conclusion: We obtained strong results using transfer learning to characterize ultrasound breast cancer images. This method allows us to directly classify a small dataset of lesions in a computationally inexpensive fashion without any manual input. Modern deep learning methods in computer vision are contingent on large datasets and vast computational resources, which are often inaccessible for clinical applications. Consequently, we believe transfer learning methods will be important for computer-aided diagnosis schemes in order to utilize advancements in deep learning and computer vision without the associated costs. This work was partially funded by NIH grant U01 CA195564 and the University of Chicago Metcalf program. M.L.G. is a stockholder in R2/Hologic, co-founder and equity holder in Quantitative Insights, and receives royalties from Hologic, GE Medical Systems, MEDIAN Technologies, Riverain Medical, Mitsubishi, and Toshiba. K.D. received royalties from Hologic.
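The transfer-learning flavor here differs from fine-tuning: features are simply extracted from a CNN pre-trained on natural images and fed to an SVM. A sketch with torchvision follows; the backbone (ResNet-18), extraction layer, and weights API are assumptions for illustration, not necessarily what the authors used.

```python
# Sketch: extract features from a pre-trained CNN and train an SVM on them,
# the "off-the-shelf features" flavor of transfer learning described above.
# Backbone choice and data are placeholders; weights API follows recent torchvision.
import numpy as np
import torch
import torch.nn as nn
import torchvision.models as models
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])  # drop final FC layer
feature_extractor.eval()

def extract_features(roi_batch: torch.Tensor) -> np.ndarray:
    """roi_batch: (N, 3, 224, 224) tensor of ultrasound ROIs replicated to 3 channels."""
    with torch.no_grad():
        feats = feature_extractor(roi_batch).flatten(1)   # (N, 512)
    return feats.numpy()

# Placeholder ROIs and labels standing in for the breast ultrasound dataset.
rois = torch.rand(40, 3, 224, 224)
labels = np.random.randint(0, 2, size=40)                 # 0 = benign, 1 = malignant

X = extract_features(rois)
auc = cross_val_score(SVC(kernel="linear"), X, labels, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC (random data, expect ~0.5): {auc:.2f}")
```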

Journal ArticleDOI
TL;DR: It is demonstrated that proton therapy dose calculations on heterogeneous sCTs are in good agreement with plans generated with standard planning CT, and an MRI-only based RTP workflow is feasible in IMPT for brain tumors and prostate cancers.
Abstract: Purpose: Magnetic resonance imaging (MRI) is increasingly used for radiotherapy target delineation, image guidance, and treatment response monitoring. Recent studies have shown that an entire external x-ray radiotherapy treatment planning (RTP) workflow for brain tumor or prostate cancer patients based only on MRI reference images is feasible. This study aims to show that an MRI-only based RTP workflow is also feasible for proton beam therapy plans generated in MRI-based substitute computed tomography (sCT) images of the head and the pelvis. Methods: The sCTs were constructed for ten prostate cancer and ten brain tumor patients primarily by transforming the intensity values of in-phase MR images to Hounsfield units (HUs) with a dual-model HU conversion technique to enable heterogeneous tissue representation. HU conversion models for the pelvis were adopted from previous studies and were extended in this study to head MRI by generating anatomical site-specific conversion models (using a new training data set of ten other brain patients). This study also evaluated two other types of simplified sCT: dual bulk density (for bone and water) and homogeneous (water only). For every clinical case, intensity modulated proton therapy (IMPT) plans robustly optimized in standard planning CTs were calculated in sCT for evaluation, and vice versa. Overall dose agreement was evaluated using dose–volume histogram parameters and 3D gamma criteria. Results: In heterogeneous sCTs, the mean absolute errors in HUs were 34 (soft tissues: 13, bones: 92) and 42 (soft tissues: 9, bones: 97) in the head and in the pelvis, respectively. The maximum absolute dose differences relative to CT in the brain tumor clinical target volume (CTV) were 1.4% for heterogeneous sCT, 1.8% for dual bulk sCT, and 8.9% for homogeneous sCT. The corresponding maximum differences in the prostate CTV were 0.6%, 1.2%, and 3.6%, respectively. The percentages of dose points in the head and pelvis passing 1%/1 mm gamma index criteria were over 91%, 85%, and 38% with heterogeneous, dual bulk, and homogeneous sCTs, respectively. There were no significant changes to gamma index pass rates for IMPT plans first optimized in CT and then calculated in heterogeneous sCT versus IMPT plans first optimized in heterogeneous sCT and then calculated on standard CT. Conclusions: This study demonstrates that proton therapy dose calculations on heterogeneous sCTs are in good agreement with plans generated with standard planning CT. An MRI-only based RTP workflow is feasible in IMPT for brain tumors and prostate cancers.
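To make the idea of a heterogeneous sCT concrete, the sketch below shows one hypothetical form a dual-model intensity-to-HU conversion could take: separate conversion curves for soft tissue and bone, selected by a bone mask. The slopes, intercepts, and mask source are placeholders for illustration only and are not the site-specific models fitted in the study.

```python
import numpy as np

def dual_model_sct(mri, bone_mask):
    """Toy dual-model MRI-to-HU conversion (separate curves for soft tissue and bone).

    mri:       in-phase MR intensity volume (arbitrary units)
    bone_mask: boolean volume marking voxels treated as bone
    All coefficients below are illustrative assumptions, not published model parameters.
    """
    hu = np.empty_like(mri, dtype=np.float32)
    soft = ~bone_mask
    hu[soft] = -120.0 + 0.15 * mri[soft]            # soft tissue: roughly fat-to-muscle HU range
    hu[bone_mask] = 1600.0 - 1.3 * mri[bone_mask]   # bone: lower MR signal mapped to denser bone
    return np.clip(hu, -1000.0, 3000.0)             # keep HUs in a physically plausible range

# Toy volume: uniform soft tissue with a small "bone" block.
mri = np.full((32, 32, 32), 900.0)
bone = np.zeros_like(mri, dtype=bool)
bone[10:20, 10:20, 10:20] = True
sct = dual_model_sct(mri, bone)
print(sct[16, 16, 16], sct[2, 2, 2])                # bone-like voxel vs soft-tissue voxel
```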

Journal ArticleDOI
TL;DR: An evaluation of five (semi)automatic methods within this framework shows that automatic per-patient CVD risk categorization is feasible; CAC lesions at ambiguous locations such as the coronary ostia remain challenging, but their detection had limited impact on CVD risk determination.
Abstract: Purpose: The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular disease (CVD) events. In clinical practice, CAC is manually identified and automatically quantified in cardiac CT using commercially available software. This is a tedious and time-consuming process in large-scale studies. Therefore, a number of automatic methods that require no interaction and semiautomatic methods that require very limited interaction for the identification of CAC in cardiac CT have been proposed. Thus far, a comparison of their performance has been lacking. The objective of this study was to perform an independent evaluation of (semi)automatic methods for CAC scoring in cardiac CT using a publicly available standardized framework. Methods: Cardiac CT exams of 72 patients distributed over four CVD risk categories were provided for (semi)automatic CAC scoring. Each exam consisted of a noncontrast-enhanced calcium scoring CT (CSCT) and a corresponding coronary CT angiography (CCTA) scan. The exams were acquired in four different hospitals using state-of-the-art equipment from four major CT scanner vendors. The data were divided into 32 training exams and 40 test exams. A reference standard for CAC in CSCT was defined by consensus of two experts following a clinical protocol. The framework organizers evaluated the performance of (semi)automatic methods on the test CSCT scans, per lesion, artery, and patient. Results: Five (semi)automatic methods were evaluated. Four methods used both CSCT and CCTA to identify CAC, and one method used only CSCT. The evaluated methods correctly detected between 52% and 94% of CAC lesions with positive predictive values between 65% and 96%. Lesions in distal coronary arteries were most commonly missed and aortic calcifications close to the coronary ostia were the most common false positive errors. The majority (between 88% and 98%) of correctly identified CAC lesions were assigned to the correct artery. Linearly weighted Cohen's kappa for patient CVD risk categorization by the evaluated methods ranged from 0.80 to 1.00. Conclusions: A publicly available standardized framework for the evaluation of (semi)automatic methods for CAC identification in cardiac CT is described. An evaluation of five (semi)automatic methods within this framework shows that automatic per-patient CVD risk categorization is feasible. CAC lesions at ambiguous locations such as the coronary ostia remain challenging, but their detection had limited impact on CVD risk determination.
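For readers unfamiliar with how CAC is quantified and rolled up into a per-patient risk category, the sketch below implements simplified per-slice Agatston scoring (130 HU threshold, lesion area weighted by peak HU) and a linearly weighted Cohen's kappa for category agreement. It is a generic illustration of standard scoring conventions, not the evaluation framework or reference protocol used in the study, and the risk bins shown are only one common convention.

```python
import numpy as np
from scipy import ndimage
from sklearn.metrics import cohen_kappa_score

def agatston_slice(hu_slice, pixel_area_mm2, min_area_mm2=1.0):
    """Simplified per-slice Agatston score: connected regions >= 130 HU, area weighted by peak HU."""
    labels, n = ndimage.label(hu_slice >= 130)
    score = 0.0
    for lesion in range(1, n + 1):
        region = labels == lesion
        area = region.sum() * pixel_area_mm2
        if area < min_area_mm2:
            continue                                   # ignore sub-millimetre specks (likely noise)
        peak = hu_slice[region].max()
        weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
        score += area * weight
    return score

def risk_category(total_score):
    """One common set of CVD risk bins for Agatston scores (exact bins vary between studies)."""
    return int(np.digitize(total_score, [0.5, 100.0, 400.0]))   # categories 0, I, II, III

# Agreement between automatic and reference per-patient categories (linearly weighted kappa).
ref_categories = [0, 1, 2, 3, 1, 2]
auto_categories = [0, 1, 2, 3, 2, 2]
print(cohen_kappa_score(ref_categories, auto_categories, weights="linear"))
```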

Journal ArticleDOI
TL;DR: The proton stopping power estimation accuracy of the proposed linear, separable BVM model is comparable to or better than that of the nonseparable tPFM models proposed by other groups.
Abstract: Purpose: To evaluate the accuracy and robustness of a simple, linear, separable, two-parameter model (basis vector model, BVM) in mapping proton stopping powers via dual-energy computed tomography (DECT) imaging. Methods: The BVM assumes that photon cross sections (attenuation coefficients) of unknown materials are linear combinations of the corresponding radiological quantities of dissimilar basis substances (i.e., polystyrene, CaCl2 aqueous solution, and water). The authors have extended this approach to the estimation of electron density and mean excitation energy, which are required parameters for computing proton stopping powers via the Bethe–Bloch equation. The authors compared the stopping power estimation accuracy of the BVM with that of a nonlinear, nonseparable photon cross section Torikoshi parametric fit model (VCU tPFM) as implemented by the authors and by Yang et al. ["Theoretical variance analysis of single- and dual-energy computed tomography methods for calculating proton stopping power ratios of biological tissues," Phys. Med. Biol. 55, 1343–1362 (2010)]. Using an idealized monoenergetic DECT imaging model, proton ranges estimated by the BVM, VCU tPFM, and Yang tPFM were compared to International Commission on Radiation Units and Measurements (ICRU) published reference values. The robustness of the stopping power prediction accuracy to tissue composition variations was assessed for both the BVM and VCU tPFM. The sensitivity of accuracy to CT image uncertainty was also evaluated. Results: Based on the authors' idealized, error-free DECT imaging model, the root-mean-square error of BVM proton stopping power estimation for 175 MeV protons relative to ICRU reference values for 34 ICRU standard tissues is 0.20%, compared to 0.23% and 0.68% for the Yang and VCU tPFM models, respectively. The range estimation errors were less than 1 mm for both the BVM and Yang tPFM models. The BVM estimation accuracy does not depend on tissue type or proton energy range. The BVM is slightly more vulnerable to CT image intensity uncertainties than the tPFM models. Both the BVM and tPFM prediction accuracies were robust to uncertainties in tissue composition and independent of the choice of reference values. This reported accuracy does not include the impacts of I-value uncertainties and imaging artifacts and may not be achievable on current clinical CT scanners. Conclusions: The proton stopping power estimation accuracy of the proposed linear, separable BVM model is comparable to or better than that of the nonseparable tPFM models proposed by other groups. In contrast to the tPFM, the BVM does not require iteratively solving for effective atomic number and electron density at every voxel; this improves the computational efficiency of DECT imaging when iterative, model-based image reconstruction algorithms are used to minimize noise and systematic imaging artifacts of CT images.
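The link between the estimated electron density, mean excitation energy (I-value), and proton stopping power mentioned above is the Bethe equation. The sketch below evaluates the stopping power ratio relative to water from a relative electron density and an I-value, omitting shell, density-effect, and Barkas corrections; the example material values are illustrative and not taken from the paper.

```python
import numpy as np

M_P_MEV = 938.272    # proton rest energy [MeV]
M_E_EV = 0.511e6     # electron rest energy [eV]
I_WATER_EV = 75.0    # mean excitation energy of water [eV] (ICRU 49 value; other values are in use)

def stopping_power_ratio(rho_e_rel, i_medium_ev, t_mev=175.0):
    """Proton SPR relative to water from the Bethe equation (no shell/density corrections)."""
    gamma = 1.0 + t_mev / M_P_MEV
    beta2 = 1.0 - 1.0 / gamma**2                       # (v/c)^2 for the given kinetic energy
    bethe = lambda i_ev: np.log(2.0 * M_E_EV * beta2 / (i_ev * (1.0 - beta2))) - beta2
    return rho_e_rel * bethe(i_medium_ev) / bethe(I_WATER_EV)

# Illustrative cortical-bone-like medium: relative electron density ~1.78, I ~112 eV.
print(stopping_power_ratio(1.78, 112.0))               # roughly 1.7, as expected for dense bone
```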

Journal ArticleDOI
TL;DR: The authors showed that a patch-based method based on affine registrations and T1-weighted MRI could generate accurate pCTs of the pelvis, performing significantly better than the baseline water pCT in almost all metrics.
Abstract: Purpose: In radiotherapy based only on magnetic resonance imaging (MRI), knowledge about tissue electron densities must be derived from the MRI. This can be achieved by converting the MRI scan to the so-called pseudo-computed tomography (pCT). An obstacle is that the voxel intensities in conventional MRI scans are not uniquely related to electron density. The authors previously demonstrated that a patch-based method could produce accurate pCTs of the brain using conventional T1-weighted MRI scans. The method was driven mainly by local patch similarities and relied on simple affine registrations between an atlas database of the co-registered MRI/CT scan pairs and the MRI scan to be converted. In this study, the authors investigate the applicability of the patch-based approach in the pelvis. This region is challenging for a method based on local similarities due to the greater inter-patient variation. The authors benchmark the method against a baseline pCT strategy where all voxels inside the body contour are assigned a water-equivalent bulk density. Furthermore, the authors implement a parallelized approximate patch search strategy to speed up the pCT generation time to a more clinically relevant level. Methods: The data consisted of CT and T1-weighted MRI scans of 10 prostate patients. pCTs were generated using an approximate patch search algorithm in a leave-one-out fashion and compared with the CT using frequently described metrics such as the voxel-wise mean absolute error (MAEvox) and the deviation in water-equivalent path lengths. Furthermore, the dosimetric accuracy was tested for a volumetric modulated arc therapy plan using dose–volume histogram (DVH) point deviations and γ-index analysis. Results: The patch-based approach had an average MAEvox of 54 HU, median deviations of less than 0.4% in relevant DVH points, and a γ-index pass rate of 0.97 using a 1%/1 mm criterion. The patch-based approach showed a significantly better performance than the baseline water pCT in almost all metrics. The approximate patch search strategy was 70x faster than a brute-force search, with an average prediction time of 20.8 min. Conclusions: The authors showed that a patch-based method based on affine registrations and T1-weighted MRI could generate accurate pCTs of the pelvis. The main source of differences between pCT and CT was positional changes of air pockets and body outline.
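As a simplified illustration of the kind of patch-based prediction and approximate nearest-neighbour search described above, the sketch below matches every MRI patch of a target volume to its most similar patches in a single co-registered atlas and predicts the HU at the patch centre as a similarity-weighted average. The toy random volumes, patch size, neighbour count, and weighting scheme are assumptions; the authors' actual multi-atlas, parallelized implementation differs.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from sklearn.neighbors import NearestNeighbors

def patch_features(vol, size=3):
    """Flatten all overlapping size^3 patches of a 3D volume into row vectors."""
    return sliding_window_view(vol, (size, size, size)).reshape(-1, size**3)

# Toy co-registered atlas MRI/CT pair and a target MRI (real, preprocessed images assumed).
rng = np.random.default_rng(0)
atlas_mri = rng.normal(size=(20, 20, 20))
atlas_ct = 1000.0 * atlas_mri + rng.normal(scale=20.0, size=(20, 20, 20))   # fake HU volume
target_mri = rng.normal(size=(20, 20, 20))

size = 3
atlas_patches = patch_features(atlas_mri, size)
target_patches = patch_features(target_mri, size)
atlas_centre_hu = patch_features(atlas_ct, size)[:, size**3 // 2]           # HU at each patch centre

# Approximate patch search: k most similar atlas patches per target patch.
knn = NearestNeighbors(n_neighbors=5, algorithm="ball_tree").fit(atlas_patches)
dist, idx = knn.kneighbors(target_patches)
weights = np.exp(-dist**2 / (dist[:, :1]**2 + 1e-6))                        # similarity weights
pct_hu = (weights * atlas_centre_hu[idx]).sum(axis=1) / weights.sum(axis=1) # weighted HU prediction
print(pct_hu.shape)                                                         # one value per patch centre
```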

Journal ArticleDOI
TL;DR: The authors have developed a CAD system with all of its ingredients optimized for better detection of WMHs of all sizes, which shows performance close to that of an independent reader.
Abstract: Purpose: White matter hyperintensities (WMH) are seen on FLAIR-MRI in several neurological disorders, including multiple sclerosis, dementia, Parkinsonism, stroke, and cerebral small vessel disease (SVD). WMHs are often used as biomarkers for prognosis or disease progression in these diseases, and longitudinal quantification of WMHs is additionally used to evaluate therapeutic strategies. Human readers show considerable disagreement and inconsistency in the detection of small lesions. A multitude of automated detection algorithms for WMHs exists, but since most current automated approaches are tuned to optimize segmentation performance according to Jaccard or Dice scores, smaller WMHs often go undetected in these approaches. In this paper, the authors propose a method to accurately detect all WMHs, large as well as small. Methods: A two-stage learning approach was used to discriminate WMHs from normal brain tissue. Since small and larger WMHs have quite different appearances, the authors trained two probabilistic classifiers: one for small WMHs (⩽3 mm effective diameter) and one for larger WMHs (>3 mm in-plane effective diameter). For each size-specific classifier, an AdaBoost classifier is trained for five iterations, with random forests as the base classifier. The feature sets consist of 22 features including intensities, location information, blob detectors, and second-order derivatives. The outcomes of the two first-stage classifiers were combined into a single WMH likelihood by a second-stage classifier. The method was trained and evaluated on a dataset with MRI scans of 362 SVD patients (312 subjects for training and validation, annotated by one trained rater, and 50 for testing, annotated by two trained raters). To analyze performance on the separate test set, the authors performed a free-response receiver operating characteristic (FROC) analysis, instead of using segmentation-based evaluation methods that tend to ignore the contribution of small WMHs. Results: Experimental results based on FROC analysis demonstrated performance of the proposed computer-aided detection (CAD) system close to that of human readers. While an independent reader had a sensitivity of 0.78 with 28 false positives per volume on average, the proposed CAD system reached a sensitivity of 0.73 with the same number of false positives. Conclusions: The authors have developed a CAD system with all of its ingredients optimized for better detection of WMHs of all sizes, and it shows performance close to that of an independent reader.
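A minimal sketch of one of the size-specific stages described above: AdaBoost run for five iterations with a random forest as the base classifier. The synthetic, class-imbalanced feature matrix stands in for the 22 voxel-level features; the forest hyperparameters and the second-stage combination are not specified in the abstract and are assumptions here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier

# Synthetic, class-imbalanced stand-in for the 22 voxel-level features
# (intensities, location, blob responses, second-order derivatives).
X, y = make_classification(n_samples=2000, n_features=22, weights=[0.95, 0.05], random_state=0)

# One size-specific stage: AdaBoost for five iterations with a random forest as the base learner.
clf = AdaBoostClassifier(
    RandomForestClassifier(n_estimators=25, max_depth=8, random_state=0),
    n_estimators=5,
    random_state=0,
)
clf.fit(X, y)
wmh_likelihood = clf.predict_proba(X)[:, 1]   # per-voxel WMH likelihood from this stage
print(wmh_likelihood[:5])
```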

Journal ArticleDOI
TL;DR: Compared to FFDM, c-view offers a better depiction of objects of certain size and contrast, but provides poorer overall resolution and noise properties, and the utilization of c-view images in the clinical setting requires careful consideration.
Abstract: Purpose: The FDA approved the use of digital breast tomosynthesis (DBT) in 2011 as an adjunct to 2D full field digital mammography (FFDM) with the constraint that all DBT acquisitions must be paired with a 2D image to assure adequate interpretative information is provided. Recently, manufacturers have developed methods to provide a synthesized 2D image generated from the DBT data with the hope of sparing patients the radiation exposure from the FFDM acquisition. While this much needed alternative effectively reduces the total radiation burden, differences in image quality must also be considered. The goal of this study was to compare the intrinsic image quality of synthesized 2D c-view and 2D FFDM images in terms of resolution, contrast, and noise. Methods: Two phantoms were utilized in this study: the American College of Radiology mammography accreditation phantom (ACR phantom) and a novel 3D printed anthropomorphic breast phantom. Both phantoms were imaged using a Hologic Selenia Dimensions 3D system. Analysis of the ACR phantom included both visual inspection and objective automated analysis using in-house software. Analysis of the 3D anthropomorphic phantom included visual assessment of resolution and Fourier analysis of the noise. Results: Using ACR-defined scoring criteria for the ACR phantom, the FFDM images scored statistically higher than c-view according to both the average observer and automated scores. In addition, between 50% and 70% of c-view images failed to meet the nominal minimum ACR accreditation requirements, primarily due to fiber breaks. Software analysis demonstrated that c-view provided enhanced visualization of medium and large microcalcification objects; however, the benefits diminished for smaller high contrast objects and all low contrast objects. Visual analysis of the anthropomorphic phantom showed a measurable loss of resolution in the c-view image (11 lp/mm FFDM, 5 lp/mm c-view) and a loss in detection of small microcalcification objects. Spectral analysis of the anthropomorphic phantom showed a higher total noise magnitude in the FFDM image compared with c-view. Whereas the FFDM image contained an approximately white noise texture, the c-view image exhibited marked noise reduction at mid and high frequencies, with far less noise suppression at low frequencies, resulting in a mottled noise appearance. Conclusions: Their analysis demonstrates many instances where c-view image quality differs from FFDM. Compared to FFDM, c-view offers a better depiction of objects of certain size and contrast, but provides poorer overall resolution and noise properties. Based on these findings, the utilization of c-view images in the clinical setting requires careful consideration, especially if considering the discontinuation of FFDM imaging. Not explicitly explored in this study is how the combination of DBT + c-view performs relative to DBT + FFDM or FFDM alone.
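The spectral noise comparison referred to above is typically performed with a noise power spectrum (NPS). The sketch below computes a radially averaged 2D NPS from a uniform, mean-subtracted ROI; the white-noise test image, pixel pitch, and binning are illustrative only, and a rigorous measurement would average many ROIs and detrend low-frequency background trends.

```python
import numpy as np

def radial_nps(noise_roi, pixel_mm, n_bins=30):
    """Radially averaged 2D noise power spectrum of a square, uniform, mean-subtracted ROI."""
    roi = noise_roi - noise_roi.mean()
    nps2d = np.abs(np.fft.fftshift(np.fft.fft2(roi)))**2 * pixel_mm**2 / roi.size
    f = np.fft.fftshift(np.fft.fftfreq(roi.shape[0], d=pixel_mm))       # spatial frequency, cycles/mm
    fx, fy = np.meshgrid(f, f)
    fr = np.sqrt(fx**2 + fy**2).ravel()
    bins = np.linspace(0.0, f.max(), n_bins)
    which = np.digitize(fr, bins)
    nps1d = np.array([nps2d.ravel()[which == b].mean() for b in range(1, n_bins)])
    return bins[1:], nps1d

# Toy flat-field ROI with white noise (a real uniform phantom region is assumed).
roi = np.random.default_rng(1).normal(scale=5.0, size=(128, 128))
freq, nps = radial_nps(roi, pixel_mm=0.07)
print(freq[:3], nps[:3])
```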

Journal ArticleDOI
TL;DR: DECT-based proton treatment planning in a commercial treatment planning system was successfully demonstrated for the first time and DECT is an attractive imaging modality for proton therapy treatment planning owing to its ability to characterize density and chemical composition of patient tissues.
Abstract: Purpose: The accuracy of proton dose calculation is dependent on the ability to correctly characterize patient tissues with medical imaging. The most common method is to correlate computed tomography (CT) numbers obtained via single-energy CT (SECT) with proton stopping power ratio (SPR). CT numbers, however, cannot discriminate between a change in mass density and a change in chemical composition of patient tissues. This limitation can have consequences for SPR calibration accuracy. Dual-energy CT (DECT) is receiving increasing interest as an alternative imaging modality for proton therapy treatment planning due to its ability to discriminate between changes in patient density and chemical composition. In the current work, we use a phantom of known composition to demonstrate the dosimetric advantages of proton therapy treatment planning with DECT over SECT. Methods: A phantom of known composition was scanned with a clinical SECT radiotherapy CT-simulator. The phantom was rescanned at a lower X-ray tube potential to generate a complementary DECT image set. A set of reference materials similar in composition to the phantom was used to perform a stoichiometric calibration of SECT CT numbers to proton SPRs. The same set of reference materials was used to perform a DECT stoichiometric calibration based on effective atomic number. The known composition of the phantom was used to assess the accuracy of SPR calibration with SECT and DECT. Intensity modulated proton therapy (IMPT) treatment plans were generated with the SECT and DECT image sets to assess the dosimetric effect of the imaging modality. Isodose difference maps and root mean square (RMS) error calculations were used to assess dose calculation accuracy. Results: SPR calculation accuracy was found to be superior, on average, with DECT relative to SECT. Maximum errors of 12.8% and 2.2% were found for SECT and DECT, respectively. Qualitative examination of dose difference maps clearly showed the dosimetric advantages of DECT imaging compared to SECT imaging for IMPT dose calculation in the case investigated. Quantitatively, the maximum dose calculation error in the SECT plan was 7.8%, compared to a value of 1.4% in the DECT plan. When considering the high dose target region, the RMS error in dose calculation was 2.1% and 0.4% for SECT and DECT, respectively. Conclusions: DECT-based proton treatment planning in a commercial treatment planning system was successfully demonstrated for the first time. DECT is an attractive imaging modality for proton therapy treatment planning owing to its ability to characterize density and chemical composition of patient tissues. SECT and DECT scans of a phantom of known composition have been used to demonstrate the dosimetric advantages obtainable in proton therapy treatment planning with DECT over the current approach based on SECT.
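The effective atomic number at the core of the DECT stoichiometric calibration mentioned above can be computed from an elemental composition with a power-law average. The sketch below is a generic illustration of that quantity, shown for water; the exponent and exact parameterization vary between publications and are assumptions here.

```python
import numpy as np

def effective_z(mass_fractions, z, a, n=3.1):
    """Power-law effective atomic number from elemental mass fractions.

    The exponent n (~3.1-3.5) is a modelling choice; published DECT calibrations
    differ in both the exponent and the exact averaging scheme.
    """
    w, z, a = map(np.asarray, (mass_fractions, z, a))
    lam = w * z / a                                  # electron contribution per element (unnormalised)
    return float((np.sum(lam * z**n) / np.sum(lam)) ** (1.0 / n))

# Water: 11.2% H and 88.8% O by mass gives a Z_eff of about 7.4-7.5.
print(effective_z([0.112, 0.888], [1, 8], [1.008, 15.999]))
```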