scispace - formally typeset

Showing papers on "Imaging phantom published in 2020"


Journal ArticleDOI
TL;DR: On DLR images, the image noise was lower, and high-contrast spatial resolution and task-based detectability were better, than on images reconstructed with other state-of-the-art techniques.

137 citations


Journal ArticleDOI
TL;DR: The deep learning-based CT reconstruction demonstrated a strong noise magnitude reduction compared to FBP while maintaining similar noise texture and high-contrast spatial resolution; however, the algorithm resulted in images with locally non-stationary noise in lung-textured backgrounds and had somewhat degraded low-contrast spatial resolution, similar to what has been observed in currently available iterative reconstruction techniques.
Abstract: PURPOSE To characterize the noise and spatial resolution properties of a commercially available deep learning-based computed tomography (CT) reconstruction algorithm. METHODS Two phantom experiments were performed. The first used a multisized image quality phantom (Mercury v3.0, Duke University) imaged at five radiation dose levels (CTDIvol: 0.9, 1.2, 3.6, 7.0, and 22.3 mGy) with a fixed tube current technique on a commercial CT scanner (GE Revolution CT). Images were reconstructed with conventional (FBP), iterative (GE ASiR-V), and deep learning-based (GE True Fidelity) reconstruction algorithms. Noise power spectrum (NPS), high-contrast (air-polyethylene interface), and intermediate-contrast (water-polyethylene interface) task transfer functions (TTF) were measured for each dose level and phantom size and summarized in terms of average noise frequency (f_av) and the frequency at which the TTF was reduced to 50% (f_50%), respectively. The second experiment used a custom phantom with low-contrast rods and lung texture sections for the assessment of low-contrast TTF and noise spatial distribution. The phantom was imaged at five dose levels (CTDIvol: 1.0, 2.1, 3.0, 6.0, and 10.0 mGy) with 20 repeated scans at each dose, and images were reconstructed with the same reconstruction algorithms. Local noise stationarity was assessed by generating spatial noise maps from the ensemble of repeated images and computing a noise inhomogeneity index, η, following AAPM TG233 methods. All measurements were compared among the algorithms. RESULTS Compared to FBP, noise magnitude was reduced on average (± one standard deviation) by 74 ± 6% and 68 ± 4% for ASiR-V (at the "100%" setting) and True Fidelity (at the "High" setting), respectively. The noise texture from ASiR-V had substantially lower noise frequency content, with 55 ± 4% lower NPS f_av compared to FBP, while True Fidelity had only marginally different noise frequency content, with 9 ± 5% lower NPS f_av compared to FBP.
Both ASiR-V and True Fidelity demonstrated locally nonstationary noise in a lung texture background at all radiation dose levels, with higher noise near high-contrast edges of vessels and lower noise in uniform regions. At the 1.0 mGy dose level, η values were 314% and 271% higher for ASiR-V and True Fidelity compared to FBP, respectively. High-contrast spatial resolution was similar between all algorithms for all dose levels and phantom sizes (<3% difference in TTF f_50%). Compared to FBP, low-contrast spatial resolution was lower for ASiR-V and True Fidelity, with a reduction in TTF f_50% of up to 42% and 36%, respectively. CONCLUSIONS The deep learning-based CT reconstruction demonstrated a strong noise magnitude reduction compared to FBP while maintaining similar noise texture and high-contrast spatial resolution. However, the algorithm resulted in images with locally nonstationary noise in lung-textured backgrounds and had somewhat degraded low-contrast spatial resolution, similar to what has been observed in currently available iterative reconstruction techniques.
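The NPS summary metric above (average noise frequency, f_av) can be sketched from an ensemble of noise-only ROIs. A minimal illustration assuming zero-mean square ROIs and a uniform pixel pitch; it is not the authors' exact AAPM TG233 pipeline:

```python
import numpy as np

def nps_average_frequency(noise_rois, pixel_mm=0.5):
    """Noise-weighted mean spatial frequency f_av (cycles/mm) of the
    2D NPS estimated from an ensemble of zero-mean noise ROIs
    (shape: n_rois x N x N)."""
    n_rois, N, _ = noise_rois.shape
    # 2D NPS: ensemble-averaged squared DFT magnitude, normalized by ROI area
    nps2d = (np.abs(np.fft.fft2(noise_rois, axes=(1, 2))) ** 2).mean(axis=0)
    nps2d *= pixel_mm ** 2 / (N * N)
    # radial spatial-frequency coordinate of every DFT sample
    f = np.fft.fftfreq(N, d=pixel_mm)
    fx, fy = np.meshgrid(f, f, indexing="ij")
    f_r = np.hypot(fx, fy)
    # f_av: NPS-weighted mean frequency
    return float((f_r * nps2d).sum() / nps2d.sum())
```

For white noise, f_av sits near three quarters of the Nyquist frequency; reconstructions that smooth high frequencies pull f_av down, which is what the 55% (ASiR-V) versus 9% (True Fidelity) shifts quantify.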

99 citations


Journal ArticleDOI
TL;DR: Overall, the DNNs successfully translated feature representations learned from simulated data to phantom and in vivo data, which is promising for this novel approach to simultaneous ultrasound image formation and segmentation.
Abstract: Single plane wave transmissions are promising for automated imaging tasks requiring high ultrasound frame rates over an extended field of view. However, a single plane wave insonification typically produces suboptimal image quality. To address this limitation, we are exploring the use of deep neural networks (DNNs) as an alternative to delay-and-sum (DAS) beamforming. The objectives of this work are to obtain information directly from raw channel data and to simultaneously generate both a segmentation map for automated ultrasound tasks and a corresponding ultrasound B-mode image for interpretable supervision of the automation. We focus on visualizing and segmenting anechoic targets surrounded by tissue and ignoring or deemphasizing less important surrounding structures. DNNs trained with Field II simulations were tested with simulated, experimental phantom, and in vivo data sets that were not included during training. With unfocused input channel data (i.e., prior to the application of receive time delays), simulated, experimental phantom, and in vivo test data sets achieved mean ± standard deviation Dice similarity coefficients of 0.92 ± 0.13, 0.92 ± 0.03, and 0.77 ± 0.07, respectively, and generalized contrast-to-noise ratios (gCNRs) of 0.95 ± 0.08, 0.93 ± 0.08, and 0.75 ± 0.14, respectively. With subaperture beamformed channel data and a modification to the input layer of the DNN architecture to accept these data, the fidelity of image reconstruction increased (e.g., mean gCNR of multiple acquisitions of two in vivo breast cysts ranged 0.89–0.96), but DNN display frame rates were reduced from 395 to 287 Hz. Overall, the DNNs successfully translated feature representations learned from simulated data to phantom and in vivo data, which is promising for this novel approach to simultaneous ultrasound image formation and segmentation.
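The Dice similarity coefficient reported above is a straightforward overlap metric between a predicted and a reference segmentation mask; a minimal sketch (mask shapes and dtypes are assumptions):

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask):
    """Dice similarity coefficient between two binary segmentation
    masks: 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1."""
    a = np.asarray(pred_mask, dtype=bool)
    b = np.asarray(true_mask, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

For example, two masks sharing one of three total foreground pixels score 2·1/(2+1) ≈ 0.67.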

86 citations


Journal ArticleDOI
TL;DR: In this paper, a flexible high-permittivity dielectric substrate is developed using a silicon-based poly-di-methyl-siloxane (PDMS) matrix and microscale aluminium oxide (Al2O3) and graphite (G) powders.
Abstract: An approach toward designing and building a compact, low-profile, wideband, unidirectional, and conformal imaging antenna for electromagnetic (EM) head imaging systems is presented. The approach includes the realization of a custom-made flexible high-permittivity dielectric substrate to achieve a compact sensing antenna. The developed composite substrate is built using a silicon-based poly-di-methyl-siloxane (PDMS) matrix and microscale aluminium oxide (Al2O3) and graphite (G) powders. The Al2O3 and G powders are used as fillers in different weight ratios to manipulate and control the dielectric properties of the substrate, attaining a better match with the human head and reducing the antenna's physical size while keeping the PDMS flexibility feature. Using the custom-made substrate, a compact, wideband, and unidirectional on-body matched antenna for a wearable EM head imaging system is realized. The antenna is configured as a multi-slot planar structure with four shorting pins, working as electric and magnetic dipoles in different frequency bands. The measured reflection coefficient (S11) shows an operating frequency band of 1–4.3 GHz. The time-average power density and the amplitude of the received signal inside the MRI-based realistic head phantom demonstrate unidirectional propagation and a high fidelity factor (FF) of more than 90%. An array of 13 antennas is fabricated and tested on a realistic 3-D head phantom to verify the imaging capability of the proposed antenna. The reconstructed images of different targets inside the head phantom demonstrate the possibility of utilizing conformal antenna arrays to detect and locate abnormalities inside the brain using a multistatic delay-multiply-and-sum beamforming algorithm.

80 citations


Journal ArticleDOI
TL;DR: To quantitatively demonstrate radiation dose reduction for sinus and temporal bone examinations using high-resolution photon-counting detector (PCD) computed tomography (CT) with an additional tin (Sn) filter, a multienergy CT phantom, an anthropomorphic head phantom, and a cadaver head were scanned.
Abstract: Objective: The aim of this study was to quantitatively demonstrate radiation dose reduction for sinus and temporal bone examinations using high-resolution photon-counting detector (PCD) computed tomography (CT) with an additional tin (Sn) filter. Materials and Methods: A multienergy CT phantom, an anthropomorphic head phantom, and a cadaver head were scanned.

74 citations


Journal ArticleDOI
03 May 2020-Sensors
TL;DR: The open issue of monitoring patients after stroke onset is addressed here in order to provide clinicians with a tool to control the effectiveness of administered therapies during the follow-up period and a novel prototype is presented and characterized.
Abstract: This work focuses on brain stroke imaging via microwave technology. In particular, the open issue of monitoring patients after stroke onset is addressed here in order to provide clinicians with a tool to control the effectiveness of administered therapies during the follow-up period. In this paper, a novel prototype is presented and characterized. The device is based on a low-complexity architecture which makes use of a minimum number of properly positioned and designed antennas placed on a helmet. It exploits a differential imaging approach and provides 3D images of the stroke. Preliminary experiments involving a 3D phantom filled with brain tissue-mimicking liquid confirm the potential of the technology in imaging a spherical target of 1.25 cm radius that mimics a stroke.

70 citations


Journal ArticleDOI
TL;DR: This review investigates the specifications typically used in the development of the latest TMMs; the imaging modalities covered focus on CT, mammography, SPECT, PET, MRI and ultrasound.
Abstract: Tissue mimicking materials (TMMs), typically contained within phantoms, have been used for many decades in both imaging and therapeutic applications. This review investigates the specifications that are typically used in the development of the latest TMMs. The imaging modalities investigated focus on CT, mammography, SPECT, PET, MRI and ultrasound. Therapeutic applications discussed within the review include radiotherapy, thermal therapy and surgical applications. A number of modalities were not reviewed, including optical spectroscopy, optical imaging and planar x-rays. The emergence of image-guided interventions and multimodality imaging has placed increasing demands on the specifications of the latest TMMs. Material specification standards are available in some imaging areas such as ultrasound; it is recommended that this be replicated for other imaging and therapeutic modalities. Materials used within phantoms have been reviewed for a series of imaging and therapeutic applications, with the potential to become a testbed for cross-fertilization of materials across modalities. Deformation, texture, multimodality imaging and perfusion are common themes that are currently under development.

64 citations


Journal ArticleDOI
TL;DR: A deep neural network-based model with a scaled root-mean-squared-error loss was proposed for super-resolution, denoising, and bandwidth (BW) enhancement of the PA signals collected at the boundary of the domain, and is shown to improve the collected boundary data, in turn providing a superior-quality reconstructed PA image.
Abstract: Photoacoustic tomography (PAT) is a noninvasive imaging modality combining the benefits of optical contrast at ultrasonic resolution. Analytical reconstruction algorithms for photoacoustic (PA) signals require a large number of data points for accurate image reconstruction. However, in practical scenarios, data are collected using a limited number of transducers and are often corrupted with noise, resulting in only qualitative images. Furthermore, the collected boundary data are band-limited due to the limited bandwidth (BW) of the transducer, which also renders limited-data PA imaging qualitative. In this work, a deep neural network-based model with a scaled root-mean-squared-error loss function was proposed for super-resolution, denoising, as well as BW enhancement of the PA signals collected at the boundary of the domain. The proposed network has been compared with traditional as well as other popular deep-learning methods in numerical and experimental cases and is shown to improve the collected boundary data, in turn providing a superior-quality reconstructed PA image. The improvement obtained in the Pearson correlation, structural similarity index metric, and root-mean-square error was as high as 35.62%, 33.81%, and 41.07%, respectively, for phantom cases, and the signal-to-noise ratio improvement in the reconstructed PA images was as high as 11.65 dB for in vivo cases compared with the reconstructed image obtained using the original limited-BW data. Code is available at https://sites.google.com/site/sercmig/home/dnnpat .
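The loss named above is a scaled root-mean-squared error; the abstract does not spell out the scaling, so the following is one plausible reading (peak-normalizing both signals before the RMSE), labeled as an assumption:

```python
import numpy as np

def scaled_rmse(predicted, target, eps=1e-12):
    """RMSE after scaling both signals to unit peak amplitude.
    NOTE: the exact scaling used in the paper may differ; this peak
    normalization is an illustrative assumption."""
    p = predicted / (np.abs(predicted).max() + eps)
    t = target / (np.abs(target).max() + eps)
    return float(np.sqrt(np.mean((p - t) ** 2)))
```

With this choice the loss is insensitive to a global gain mismatch between the predicted and reference boundary signals.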

64 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a Computational Miniature Mesoscope (CM2) that enables single-shot 3D imaging across an 8 mm by 7 mm field of view and 2.5-mm DOF, achieving better than 200-μm axial resolution.
Abstract: Fluorescence microscopes are indispensable to biology and neuroscience. The need for recording in freely behaving animals has further driven the development in miniaturized microscopes (miniscopes). However, conventional microscopes/miniscopes are inherently constrained by their limited space-bandwidth product, shallow depth of field (DOF), and inability to resolve three-dimensional (3D) distributed emitters. Here, we present a Computational Miniature Mesoscope (CM2) that overcomes these bottlenecks and enables single-shot 3D imaging across an 8 mm by 7 mm field of view and 2.5-mm DOF, achieving 7-μm lateral resolution and better than 200-μm axial resolution. The CM2 features a compact lightweight design that integrates a microlens array for imaging and a light-emitting diode array for excitation. Its expanded imaging capability is enabled by computational imaging that augments the optics by algorithms. We experimentally validate the mesoscopic imaging capability on 3D fluorescent samples. We further quantify the effects of scattering and background fluorescence on phantom experiments.

61 citations


Journal ArticleDOI
TL;DR: The generalized contrast-to-noise ratio (gCNR) is a relatively new image quality metric designed to assess the probability of lesion detectability in ultrasound images and has promising potential to provide additional insight, particularly when designing new beamformers and image formation techniques.
Abstract: The generalized contrast-to-noise ratio (gCNR) is a relatively new image quality metric designed to assess the probability of lesion detectability in ultrasound images. Although gCNR was initially demonstrated with ultrasound images, the metric is theoretically applicable to multiple types of medical images. In this paper, the applicability of gCNR to photoacoustic images is investigated. The gCNR was computed for both simulated and experimental photoacoustic images generated by amplitude-based (i.e., delay-and-sum) and coherence-based (i.e., short-lag spatial coherence) beamformers. These gCNR measurements were compared to three more traditional image quality metrics (i.e., contrast, contrast-to-noise ratio, and signal-to-noise ratio) applied to the same datasets. An increase in qualitative target visibility generally corresponded with increased gCNR. In addition, gCNR magnitude was more directly related to the separability of photoacoustic signals from their background, which degraded with the presence of limited bandwidth artifacts and increased levels of channel noise. At high gCNR values (i.e., 0.95-1), contrast, contrast-to-noise ratio, and signal-to-noise ratio varied by up to 23.7-56.2 dB, 2.0-3.4, and 26.5-7.6×10^20, respectively, for simulated, experimental phantom, and in vivo data. Therefore, these traditional metrics can experience large variations when a target is fully detectable, and additional increases in these values would have no impact on photoacoustic target detectability. In addition, gCNR is robust to changes in traditional metrics introduced by applying a minimum threshold to image amplitudes.
In tandem with other photoacoustic image quality metrics and with a defined range of 0 to 1, gCNR has promising potential to provide additional insight, particularly when designing new beamformers and image formation techniques and when reporting quantitative performance without an opportunity to qualitatively assess corresponding images (e.g., in text-only abstracts).
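For reference, the gCNR discussed above reduces to one minus the overlap of the target and background amplitude histograms; a minimal sketch (bin count and ROI selection are assumptions):

```python
import numpy as np

def gcnr(target_pixels, background_pixels, bins=256):
    """Generalized contrast-to-noise ratio: 1 - overlap of the target
    and background pixel-amplitude histograms. 0 = indistinguishable,
    1 = fully separable (detectable) target."""
    lo = min(target_pixels.min(), background_pixels.min())
    hi = max(target_pixels.max(), background_pixels.max())
    h_t, _ = np.histogram(target_pixels, bins=bins, range=(lo, hi))
    h_b, _ = np.histogram(background_pixels, bins=bins, range=(lo, hi))
    p_t = h_t / h_t.sum()
    p_b = h_b / h_b.sum()
    return float(1.0 - np.minimum(p_t, p_b).sum())
```

Because it is bounded by 1, further contrast gains on an already fully separable target cannot inflate it, which is the saturation behavior contrasted above with contrast, CNR, and SNR.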

61 citations


Journal ArticleDOI
TL;DR: In this article, the phase shift is detected between pairs of Tx and Rx angles centred around a set of common mid-angles, together with an additional phase shift induced by the offset of the reconstructed position of echoes.

Journal ArticleDOI
TL;DR: In this article, the authors used computational fluid dynamics simulations to generate synthetic 4D-flow MRI data and trained a neural network to produce super-resolution 4D-flow phase images with an upsample factor of 2, achieving absolute relative errors in flow rate of 0.6 to 5.8% on phantom data.
Abstract: 4D-flow magnetic resonance imaging (MRI) is an emerging imaging technique where spatiotemporal 3D blood velocity can be captured with full volumetric coverage in a single non-invasive examination. This enables qualitative and quantitative analysis of hemodynamic flow parameters of the heart and great vessels. An increase in the image resolution would provide more accuracy and allow better assessment of the blood flow, especially for patients with abnormal flows. However, this must be balanced against increasing imaging time. The recent success of deep learning in generating super-resolution images shows promise for implementation in medical imaging. We utilized computational fluid dynamics simulations to generate fluid flow simulations and represent them as synthetic 4D-flow MRI data. We built our training dataset to mimic actual 4D-flow MRI data with its corresponding noise distribution. Our novel 4DFlowNet network was trained on this synthetic 4D-flow data and was capable of producing noise-free super-resolution 4D-flow phase images with an upsample factor of 2. We also tested 4DFlowNet on actual 4D-flow MR images of a phantom and normal volunteer data, and demonstrated results comparable to the actual flow rate measurements, giving absolute relative errors of 0.6 to 5.8% and 1.1 to 3.8% in the phantom data and normal volunteer data, respectively.

Journal ArticleDOI
TL;DR: The VERITON CzT camera has superior sensitivity, higher energy resolution and better image contrast than the conventional SPECT camera, whereas spatial resolution remains similar.
Abstract: To evaluate the physical performance of the VERITON CzT camera (Spectrum Dynamics, Caesarea, Israel), which benefits from a new detection architecture enabling whole-body imaging, compared to that of a conventional dual-head Anger camera. Different line source and phantom measurements were performed on each system to evaluate spatial resolution, sensitivity, energy resolution and image quality, with acquisition and reconstruction parameters similar to those used in clinical routine. Extrinsic resolution was assessed using 99mTc capillary sources placed successively in air, in a head phantom and in a body phantom filled with background activity. Spectral acquisitions for various radioelements used in nuclear medicine (99mTc, 123I, 201Tl, 111In) were performed to evaluate energy resolution by computing the FWHM of the measured photoelectric peak. Tomographic sensitivity was calculated by recording the total number of counts detected during tomographic acquisition for a set of source geometries representative of different clinical situations. Sensitivity was also evaluated in focus mode for the CzT camera, which consists of forcing the detectors to collect data in a reduced field of view. Image quality was assessed with a Jaszczak phantom filled with 350 MBq of 99mTc and scanned on each system with 30-, 20-, 10- and 5-min acquisition times. Extrinsic and tomographic resolution in the brain and body phantoms at the centre of the FOV was estimated at 3.55, 7.72 and 6.66 mm for the CzT system and 2.47, 7.75 and 7.72 mm for the conventional system, respectively. The energy resolution measured at 140 keV was 5.46% versus 9.21% for the Anger camera, and was better to a similar degree for all energy peaks tested. Tomographic sensitivity for a point source in air was estimated at 236 counts·s−1·MBq−1 and increased to 1159 counts·s−1·MBq−1 using focus mode, which was 1.6 times and 8 times greater than the sensitivity measured on the scintillation camera (144 counts·s−1·MBq−1).
Head and body measurements also showed higher sensitivity for the CzT camera, in particular with focus mode. The Jaszczak phantom showed high image contrast uniformity and a high signal-to-noise ratio on the CzT system, even when decreasing the acquisition time 6-fold. Representative clinical cases are shown to illustrate these results. The CzT camera has superior sensitivity, higher energy resolution and better image contrast than the conventional SPECT camera, whereas spatial resolution remains similar. The introduction of this new technology may change current practices in nuclear medicine, such as decreasing the acquisition time and the activity injected into the patient.
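The energy-resolution figures above (%FWHM of the photopeak) can be computed by locating the half-maximum crossings of the measured spectrum; a minimal sketch assuming a single, well-sampled peak that is interior to the energy window:

```python
import numpy as np

def energy_resolution_fwhm(energies, counts):
    """Energy resolution (%FWHM) of a photopeak: linearly interpolate
    the two half-maximum crossings of the spectrum around the peak."""
    counts = np.asarray(counts, dtype=float)
    i_pk = int(np.argmax(counts))
    half = counts[i_pk] / 2.0
    # walk left from the peak to the first sub-half-maximum sample
    i = i_pk
    while counts[i] > half:
        i -= 1
    e_lo = np.interp(half, [counts[i], counts[i + 1]], [energies[i], energies[i + 1]])
    # walk right from the peak likewise
    j = i_pk
    while counts[j] > half:
        j += 1
    e_hi = np.interp(half, [counts[j], counts[j - 1]], [energies[j], energies[j - 1]])
    return 100.0 * (e_hi - e_lo) / energies[i_pk]
```

A Gaussian 99mTc photopeak at 140 keV with a 5.46% resolution, as reported for the CzT system, has an FWHM of about 7.6 keV.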

Journal ArticleDOI
TL;DR: Improved overall performance of the Vision provides a factor of 4–6 reduction in imaging time (or injected dose) over the mCT Flow when using the ALROC metric for lesions at least 9.89 mm in diameter.
Abstract: The latest digital whole-body PET scanners provide a combination of higher sensitivity and improved spatial and timing resolution. We performed a lesion detectability study on two generations of Biograph PET/CT scanners, the mCT Flow and the Vision, to study the impact of improved physical performance on clinical performance. Our hypothesis was that the improved performance of the Vision would result in improved lesion detectability, allowing shorter imaging times or, equivalently, a lower injected dose. Methods: Data were acquired with the Society of Nuclear Medicine and Molecular Imaging Clinical Trials Network torso phantom combined with a 20-cm-diameter cylindrical phantom. Spherical lesions were emulated by acquiring sphere-in-air data and combining them with the phantom data to generate combined datasets with embedded lesions of known contrast. Two sphere sizes and uptakes were used: 9.89-mm-diameter spheres with 6:1 (lung) and 3:1 (cylinder) local activity concentration uptakes and 4.95-mm-diameter spheres with 9.6:1 (lung) and 4.5:1 (cylinder) local activity concentration uptakes. Standard image reconstruction was performed: an ordinary Poisson ordered-subsets expectation maximization algorithm with point-spread function and time-of-flight modeling and postreconstruction smoothing with a 5-mm gaussian filter. The Vision images were also generated without any postreconstruction smoothing. Generalized scan statistics methodology was used to estimate the area under the localized receiver-operating-characteristic curve (ALROC). Results: The higher sensitivity and improved time-of-flight performance of the Vision leads to reduced contrast in the background noise nodule distribution. Measured lesion contrast is also higher on the Vision because of its improved spatial resolution. Hence, the ALROC is noticeably higher for the Vision than for the mCT Flow. 
Conclusion: Improved overall performance of the Vision provides a factor of 4-6 reduction in imaging time (or injected dose) over the mCT Flow when using the ALROC metric for lesions at least 9.89 mm in diameter. Smaller lesions are barely detected in the mCT Flow, leading to even higher ALROC gains with the Vision. The improved spatial resolution of the Vision also leads to a higher measured contrast that is closer to the real uptake, implying improved quantification. Postreconstruction smoothing, however, reduces this improvement in measured contrast, thereby reducing the ALROC for small, high-uptake lesions.

Journal ArticleDOI
TL;DR: Experiments on an advanced breast phantom, designed based on an MRI of a real patient, fabricated using 3D printing technology, and filled with liquids that emulate normal and cancerous tissues, demonstrate the suitability of the UWB-CSAR method for breast tumor imaging.
Abstract: This paper explores the competency of the time-domain ultra-wideband (UWB) circular synthetic aperture radar (CSAR) to image the breast and detect tumors. The image reconstruction is performed using a time-domain global back projection technique adapted to the circular-trajectory data acquisition. This paper also proposes a sectional image reconstruction method to compensate for the group velocity changes in the different layers of a multilayer medium. Experiments on an advanced breast phantom examine the suitability of this technique for breast tumor imaging. The advanced breast phantom is designed based on an MRI of a real patient, fabricated using 3D printing technology, and filled with liquids that emulate normal and cancerous tissues. The measurement results, compared with MR imaging of the phantom, demonstrate the suitability of the UWB-CSAR method for breast tumor imaging. This method can be a tool for early diagnosis as well as for treatment monitoring during chemotherapy or radiotherapy.
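The time-domain back projection at the core of this method can be sketched in a few lines: each pixel accumulates every channel's echo sampled at that pixel's round-trip delay. A monostatic toy version (the geometry, pulse shape, and single propagation speed are assumptions; the paper's sectional, layered-medium velocity compensation is omitted):

```python
import numpy as np

def backproject(echoes, t, antenna_xz, grid_x, grid_z, c=3e8):
    """Monostatic time-domain back projection: image(z, x) = sum over
    antennas of the echo amplitude at the round-trip delay to (x, z)."""
    img = np.zeros((len(grid_z), len(grid_x)))
    for echo, (ax, az) in zip(echoes, antenna_xz):
        for iz, z in enumerate(grid_z):
            for ix, x in enumerate(grid_x):
                tau = 2.0 * np.hypot(x - ax, z - az) / c  # round-trip delay
                img[iz, ix] += np.interp(tau, t, echo, left=0.0, right=0.0)
    return img
```

Summing coherently focuses energy at true scatterer positions; the sectional variant in the paper additionally adapts the group velocity per tissue layer.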

Journal ArticleDOI
TL;DR: FBW discretization and resampling to isotropic voxels enhance the benefits of EARL-compliant reconstructions, which harmonize a wide range of radiomic features.
Abstract: The sensitivity of radiomic features to several confounding factors, such as reconstruction settings, makes clinical use challenging. To investigate the impact of harmonized image reconstructions on feature consistency, a multicenter phantom study was performed using 3-dimensionally printed phantom inserts reflecting realistic tumor shapes and heterogeneity uptakes. Methods: Tumors extracted from real PET/CT scans of patients with non-small cell lung cancer served as models for three 3-dimensionally printed inserts. Different heterogeneity patterns were realized by printing separate compartments that could be filled with different activity solutions. The inserts were placed in the National Electrical Manufacturers Association image-quality phantom and scanned multiple times. First, a list-mode scan was acquired and 5 statistically equal replicates were reconstructed. Second, the phantom was scanned 4 times on the same scanner. Third, the phantom was scanned on 6 PET/CT systems. All images were reconstructed using EANM Research Ltd. (EARL)-compliant and locally clinically preferred reconstructions. EARL-compliant reconstructions were performed without (EARL1) or with (EARL2) point-spread function. Images were analyzed with and without resampling to 2-mm cubic voxels. Images were discretized with a fixed bin width (FBW) of 0.25 and a fixed bin number (FBN) of 64. The intraclass correlation coefficient (ICC) of each scan setup was calculated and compared across reconstruction settings. An ICC above 0.75 was regarded as high. Results: The percentage of features yielding a high ICC was largest for the statistically equal replicates (70%-91% for FBN; 90%-96% for FBW discretization). For scans acquired on the same system, the percentage decreased, but most features still resulted in a high ICC (FBN, 52%-63%; FBW, 75%-85%). The percentage of features yielding a high ICC decreased more in the multicenter setting.
In this case, the percentage of features yielding a high ICC was larger for images reconstructed with EARL-compliant reconstructions: for example, 40% for EARL1 and 60% for EARL2, versus 21% for the clinically preferred setting with FBW discretization. When images were discretized with FBW and resampled to isotropic voxels, this benefit was more pronounced. Conclusion: EARL-compliant reconstructions harmonize a wide range of radiomic features. FBW discretization and resampling to isotropic voxels enhance the benefits of EARL-compliant reconstructions.
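The ICC used as the consistency criterion above can be sketched as follows; the abstract does not state which ICC form was used, so a one-way random, single-measure ICC(1,1) is assumed here (rows = measured regions for one feature, columns = replicate scans):

```python
import numpy as np

def icc_one_way(x):
    """One-way random, single-measure ICC(1,1) for an
    (n_subjects, k_repeats) matrix of one radiomic feature."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)         # between-subject
    msw = ((x - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subject
    return (msb - msw) / (msb + (k - 1) * msw)
```

An ICC of 1 means replicate scans reproduce the feature exactly; the 0.75 threshold above separates "high" from unstable features.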

Journal ArticleDOI
TL;DR: It is demonstrated that ComBat harmonization is an effective means to harmonize radiomic features extracted from different imaging protocols to allow comparisons in large multi-institution datasets.
Abstract: This work seeks to evaluate the combatting batch effect (ComBat) harmonization algorithm's ability to reduce the variation in radiomic features arising from different imaging protocols and independently verify published results. The Gammex computed tomography (CT) electron density phantom and Quasar body phantom were imaged using 32 different chest imaging protocols. 107 radiomic features were extracted from 15 spatially varying spherical contours between 1.5 cm and 3 cm in each of the lung300 density, lung450 density, and wood inserts. The Kolmogorov-Smirnov test was used to determine significant differences in the distribution of the features and the concordance correlation coefficient (CCC) was used to measure the repeatability of the features from each protocol variation class (kVp, pitch, etc) before and after ComBat harmonization. P-values were corrected for multiple comparisons using the Benjamini-Hochberg-Yekutieli procedure. Finally, the ComBat algorithm was applied to human subject data using six different thorax imaging protocols with 135 patients. Spherical contours of un-irradiated lung (2 cm) and vertebral bone (1 cm) were used for radiomic feature extraction. ComBat harmonization reduced the percentage of features from significantly different distributions to 0%-2% or preserved 0% across all protocol variations for the lung300, lung450 and wood inserts. For the human subject data, ComBat harmonization reduced the percentage of significantly different features from 0%-59% for bone and 0%-19% for lung to 0% for both. This work verifies previously published results and demonstrates that ComBat harmonization is an effective means to harmonize radiomic features extracted from different imaging protocols to allow comparisons in large multi-institution datasets. Biological variation can be explicitly preserved by providing the ComBat algorithm with clinical or biological variables to protect. 
ComBat harmonization should be tested for its effect on predictive models.
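The core of ComBat is a per-batch location-scale adjustment of each feature. A deliberately simplified sketch, which omits the empirical Bayes shrinkage of the batch parameters and the covariate model used to preserve biological variables, both of which the full algorithm includes:

```python
import numpy as np

def combat_location_scale(features, batches):
    """Simplified location-scale harmonization in the spirit of ComBat:
    each batch's per-feature mean and variance are aligned to the
    pooled values. NOT the full algorithm (no empirical Bayes
    shrinkage, no protected clinical covariates).
    features: (n_samples, n_features); batches: (n_samples,) labels."""
    features = np.asarray(features, dtype=float)
    out = features.copy()
    grand_mu = features.mean(axis=0)
    grand_sd = features.std(axis=0, ddof=1)
    for b in np.unique(batches):
        idx = batches == b
        mu = features[idx].mean(axis=0)
        sd = features[idx].std(axis=0, ddof=1)
        out[idx] = (features[idx] - mu) / sd * grand_sd + grand_mu
    return out
```

After the adjustment, per-protocol (batch) means and variances coincide, which is what drives the drop in significantly different feature distributions reported above.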

Journal ArticleDOI
TL;DR: An image reconstruction approach named STDLR-SPIRiT is proposed to explore the simultaneous two-directional low-rankness (STDLR) of the k-space data and to mine the data correlation from multiple receiver coils with the iterative self-consistent parallel imaging reconstruction (SPIRiT).

Journal ArticleDOI
TL;DR: Results show that a comparable image quality is achievable with a TAP reduction of ~ 40% in digital PET, which could lead to a significant reduction of the administered mass-activity and/or scan time with direct benefits in terms of dose exposure and patient comfort.
Abstract: We assessed and compared image quality obtained with clinical 18F-FDG whole-body oncologic PET protocols used in three different, state-of-the-art digital PET/CT and two conventional PMT-based PET/CT devices. Our goal was to evaluate an improved trade-off between administered activity (patient dose exposure/signal-to-noise ratio) and acquisition time (patient comfort) while preserving diagnostic information achievable with the recently introduced digital detector technology compared to previous analogue PET technology. We performed list-mode (LM) PET acquisitions using a NEMA/IEC NU2 phantom, with activity concentrations of 5 kBq/mL and 25 kBq/mL for the background (9.5 L) and sphere inserts, respectively. For each device, reconstructions were obtained varying the image statistics (10, 30, 60, 90, 120, 180, and 300 s from LM data) and the number of iterations (range 1 to 10) in addition to the employed local clinical protocol setup. For each reconstructed dataset we measured: the quantitative cross-calibration, the image noise on the uniform background assessed by the coefficient of variation (COV), and the recovery coefficients (RCs) evaluated in the hot spheres. Additionally, we compared the characteristic time-activity-product (TAP), i.e., the product of scan time per bed position × administered mass-activity (in min·MBq/kg), across datasets. Good system cross-calibration was obtained for all tested datasets, with < 6% deviation from the expected value. For all clinical protocol settings, image noise was compatible with clinical interpretation (COV < 15%). Digital PET showed an improved background signal-to-noise ratio as compared to conventional PMT-based PET. RCs were comparable between digital and PMT-based PET datasets. Compared to PMT-based PET, digital systems provided comparable image quality with lower TAP (from ~ 40% less and up to 70% less).
This study compared the achievable clinical image quality in three state-of-the-art digital PET/CT devices (from different vendors) as well as in two conventional PMT-based PET/CT devices. Reported results show that comparable image quality is achievable with a TAP reduction of ~ 40% in digital PET. This could lead to a significant reduction of the administered mass-activity and/or scan time, with direct benefits in terms of dose exposure and patient comfort.
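The figures of merit compared in this entry (background COV, hot-sphere recovery coefficients, and the time-activity product) reduce to simple arithmetic. A minimal stdlib-Python sketch, with all numbers invented for illustration:

```python
import statistics

def cov_percent(background_voxels):
    """Image noise: coefficient of variation of the uniform background, in percent."""
    return 100.0 * statistics.stdev(background_voxels) / statistics.fmean(background_voxels)

def recovery_coefficient(measured_kbq_ml, true_kbq_ml):
    """RC: measured vs. true activity concentration in a hot sphere."""
    return measured_kbq_ml / true_kbq_ml

def time_activity_product(scan_min_per_bed, activity_mbq, weight_kg):
    """TAP = scan time per bed position x administered mass-activity (min*MBq/kg)."""
    return scan_min_per_bed * activity_mbq / weight_kg

bg = [5.1, 4.8, 5.3, 4.9, 5.0, 5.2]             # hypothetical background samples, kBq/mL
print(cov_percent(bg))                          # should stay < 15% for clinical reading
print(recovery_coefficient(22.0, 25.0))         # sphere filled at 25 kBq/mL -> RC 0.88
print(time_activity_product(2.0, 240.0, 80.0))  # 2 min/bed x 3 MBq/kg -> TAP 6.0
```

A ~40% TAP reduction then means, for instance, the same image quality at roughly 1.2 min per bed position for the same injected mass-activity.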

Journal ArticleDOI
TL;DR: The proposed BCD-Net significantly improves CNR and RMSE of the reconstructed images compared to MBIR methods using non-trained regularizers, total variation (TV) and non-local means (NLM), and generalizes to test data that differs from the training data.
Abstract: Image reconstruction in low-count PET is particularly challenging because gammas from natural radioactivity in Lu-based crystals cause high random fractions that lower the measurement signal-to-noise ratio (SNR). In model-based image reconstruction (MBIR), using more iterations of an unregularized method may increase the noise, so incorporating regularization into the image reconstruction is desirable to control the noise. New regularization methods based on learned convolutional operators are emerging in MBIR. We modify the architecture of an iterative neural network, BCD-Net, for PET MBIR, and demonstrate the efficacy of the trained BCD-Net using XCAT phantom data that simulates the low true coincidence count-rates with high random fractions typical for Y-90 PET patient imaging after Y-90 microsphere radioembolization. Numerical results show that the proposed BCD-Net significantly improves CNR and RMSE of the reconstructed images compared to MBIR methods using non-trained regularizers, total variation (TV) and non-local means (NLM). Moreover, BCD-Net successfully generalizes to test data that differs from the training data. Improvements were also demonstrated for the clinically relevant phantom measurement data where we used training and testing datasets having very different activity distributions and count-levels.
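The two figures of merit reported for BCD-Net, CNR and RMSE, are easy to state concretely. A toy stdlib-Python sketch on flattened 1-D "images" (values invented):

```python
import math

def rmse(recon, truth):
    """Root-mean-square error against the ground-truth image."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(recon, truth)) / len(truth))

def cnr(roi_mean, bg_mean, bg_std):
    """Contrast-to-noise ratio of a hot ROI relative to the background."""
    return (roi_mean - bg_mean) / bg_std

truth = [0.0, 0.0, 1.0, 1.0]
recon = [0.1, -0.1, 0.9, 1.1]
print(rmse(recon, truth))   # ~0.1: lower is better
print(cnr(1.0, 0.1, 0.05))  # ~18: higher is better
```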

Journal ArticleDOI
TL;DR: The DLA trained with 50% simulated radiation dose showed the best overall image quality and DLAs achieved less noise than FBP and ADMIRE in LD CT images, but did not maintain spatial resolution.
Abstract: OBJECTIVE To compare the image quality of low-dose (LD) computed tomography (CT) obtained using a deep learning-based denoising algorithm (DLA) with LD CT images reconstructed with a filtered back projection (FBP) and advanced modeled iterative reconstruction (ADMIRE). MATERIALS AND METHODS One hundred routine-dose (RD) abdominal CT studies reconstructed using FBP were used to train the DLA. Simulated CT images were made at dose levels of 13%, 25%, and 50% of the RD (DLA-1, -2, and -3) and reconstructed using FBP. We trained DLAs using the simulated CT images as input data and the RD CT images as ground truth. To test the DLA, the American College of Radiology CT phantom was used together with 18 patients who underwent abdominal LD CT. LD CT images of the phantom and patients were processed using FBP, ADMIRE, and DLAs (LD-FBP, LD-ADMIRE, and LD-DLA images, respectively). To compare the image quality, we measured the noise power spectrum and modulation transfer function (MTF) of phantom images. For patient data, we measured the mean image noise and performed qualitative image analysis. We evaluated the presence of additional artifacts in the LD-DLA images. RESULTS LD-DLAs achieved lower noise levels than LD-FBP and LD-ADMIRE for both phantom and patient data (all p < 0.001). LD-DLAs trained with a lower radiation dose showed less image noise. However, the MTFs of the LD-DLAs were lower than those of LD-ADMIRE and LD-FBP (all p < 0.001) and decreased with decreasing training image dose. In the qualitative image analysis, the overall image quality of LD-DLAs was best for DLA-3 (50% simulated radiation dose) and not significantly different from LD-ADMIRE. There were no additional artifacts in LD-DLA images. CONCLUSION DLAs achieved less noise than FBP and ADMIRE in LD CT images, but did not maintain spatial resolution. The DLA trained with 50% simulated radiation dose showed the best overall image quality.
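An edge-based MTF estimate of the kind used in such phantom analyses can be sketched in a few lines: differentiate the edge-spread function to get the line-spread function, then take the magnitude of its DFT and normalize to the DC component. Toy 1-D profiles only, not the actual ACR-phantom procedure:

```python
import cmath

def mtf_from_edge(esf):
    """Edge-spread function -> line-spread function -> normalized MTF magnitudes."""
    lsf = [b - a for a, b in zip(esf, esf[1:])]  # discrete derivative of the edge
    n = len(lsf)
    spectrum = [abs(sum(lsf[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                        for j in range(n))) for k in range(n)]
    return [m / spectrum[0] for m in spectrum]   # normalize to DC

print(mtf_from_edge([0.0, 0.0, 0.0, 1.0, 1.0, 1.0]))  # ideal edge: flat MTF
print(mtf_from_edge([0.0, 0.0, 0.3, 0.7, 1.0, 1.0]))  # blurred edge: MTF falls off
```

A lower MTF at a given frequency, as reported for the DLAs here, means that edge is rendered with less contrast, i.e. reduced spatial resolution.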

Journal ArticleDOI
TL;DR: A full-field approach to wave imaging is introduced, based on the concept of the distortion matrix, which essentially connects any focal point inside the medium with the distortion that a wave front, emitted from that point, experiences due to heterogeneities.
Abstract: Focusing waves inside inhomogeneous media is a fundamental problem for imaging. Spatial variations of wave velocity can strongly distort propagating wave fronts and degrade image quality. Adaptive focusing can compensate for such aberration but is only effective over a restricted field of view. Here, we introduce a full-field approach to wave imaging based on the concept of the distortion matrix. This operator essentially connects any focal point inside the medium with the distortion that a wave front, emitted from that point, experiences due to heterogeneities. A time-reversal analysis of the distortion matrix enables the estimation of the transmission matrix that links each sensor and image voxel. Phase aberrations can then be unscrambled for any point, providing a full-field image of the medium with diffraction-limited resolution. Importantly, this process is particularly efficient in random scattering media, where traditional approaches such as adaptive focusing fail. Here, we first present an experimental proof of concept on a tissue-mimicking phantom and then apply the method to in vivo imaging of human soft tissues. While introduced here in the context of acoustics, this approach can also be extended to optical microscopy, radar, or seismic imaging.
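In the simplest (isoplanatic) limit the idea can be caricatured in a few lines: if every focal point sees the same phase screen, the distortion matrix is rank one, and its dominant right-singular vector, extracted here by power iteration on D^H D, gives the aberration law up to a global phase, so phase conjugation flattens it. A toy complex-valued sketch (sizes and phases invented; the real method works on measured matrices, point by point):

```python
import cmath

def dominant_right_vector(D, iters=20):
    """Power iteration on D^H D: the dominant right-singular vector of D."""
    m, n = len(D), len(D[0])
    v = [complex(1.0, 0.0)] * n
    for _ in range(iters):
        Dv = [sum(D[i][j] * v[j] for j in range(n)) for i in range(m)]
        w = [sum(D[i][j].conjugate() * Dv[i] for i in range(m)) for j in range(n)]
        norm = sum(abs(x) ** 2 for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

phi = [0.3, -1.0, 2.0, 0.7, -0.4]              # phase screen across 5 sensors
u = [cmath.exp(1j * p) for p in phi]
D = [u[:] for _ in range(8)]                   # 8 focal points see the same screen
v = dominant_right_vector(D)                   # v is proportional to conj(u)
corrected = [vj * uj for vj, uj in zip(v, u)]  # so v*u is constant: screen removed
print(max(abs(c - corrected[0]) for c in corrected) < 1e-9)  # -> True
```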

Journal ArticleDOI
TL;DR: The imaging and simulation results demonstrate that the degradation of small-animal PET resolution and quantitative accuracy correlates with increasing positron energy; however, for a specific “benchmark” preclinical PET scanner and reconstruction workflow, these differences were observed to be minimal.
Abstract: The increasing interest and availability of non-standard positron-emitting radionuclides has heightened the relevance of radionuclide choice in the development and optimization of new positron emission tomography (PET) imaging procedures, both in preclinical research and clinical practice. Differences in achievable resolution arising from positron range can largely influence application suitability of each radionuclide, especially in small-ring preclinical PET where system blurring factors due to annihilation photon acollinearity and detector geometry are less significant. Some resolution degradation can be mitigated with appropriate range corrections implemented during image reconstruction, the quality of which is contingent on an accurate characterization of positron range. To address this need, we have characterized the positron range of several standard and non-standard PET radionuclides (As-72, F-18, Ga-68, Mn-52, Y-86, and Zr-89) through imaging of small-animal quality control phantoms on a benchmark preclinical PET scanner. Further, the Particle and Heavy Ion Transport code System (PHITS v3.02) code was utilized for Monte Carlo modeling of positron range-dependent blurring effects. Positron range kernels for each radionuclide were derived from simulation of point sources in ICRP reference tissues. PET resolution and quantitative accuracy afforded by various radionuclides in practicable imaging scenarios were characterized using a convolution-based method based on positron annihilation distributions obtained from PHITS. Our imaging and simulation results demonstrate that the degradation of small-animal PET resolution and quantitative accuracy correlates with increasing positron energy; however, for a specific “benchmark” preclinical PET scanner and reconstruction workflow, these differences were observed to be minimal given radionuclides with average positron energies below ~ 400 keV.
Our measurements and simulations of the influence of positron range on PET resolution compare well with previous efforts documented in the literature and provide new data for several radionuclides in increasing clinical and preclinical use. The results will support current and future improvements in methods for positron range corrections in PET imaging.
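The convolution-based method described above amounts to blurring a range-free image with a radionuclide-specific annihilation-distribution kernel. A 1-D toy version (both profiles invented; the real kernels come from the PHITS simulations in ICRP tissues):

```python
import math

def convolve(signal, kernel):
    """Full discrete 1-D convolution."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

def fwhm_mm(profile, step_mm):
    """Full width at half maximum of a sampled 1-D profile, in mm."""
    half = max(profile) / 2.0
    above = [i for i, v in enumerate(profile) if v >= half]
    return (above[-1] - above[0]) * step_mm

step = 0.1  # mm per sample (hypothetical grid)
detector_psf = [math.exp(-(i - 10) ** 2 / 2.0) for i in range(21)]  # scanner blur
range_kernel = [math.exp(-abs(i - 15) / 4.0) for i in range(31)]    # cusp-like positron range
blurred = convolve(detector_psf, range_kernel)
print(fwhm_mm(detector_psf, step), fwhm_mm(blurred, step))  # range widens the PSF
```

Higher-energy emitters correspond to wider kernels, which is why the observed resolution loss tracks the average positron energy.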

Proceedings ArticleDOI
30 Oct 2020
TL;DR: A countermeasure is proposed which can determine whether a detected object is a phantom or real using just the camera sensor; its effectiveness and robustness to adversarial machine learning attacks are demonstrated.
Abstract: In this paper, we investigate "split-second phantom attacks," a scientific gap that causes two commercial advanced driver-assistance systems (ADASs), Tesla Model X (HW 2.5 and HW 3) and Mobileye 630, to treat a depthless object that appears for a few milliseconds as a real obstacle/object. We discuss the challenge that split-second phantom attacks create for ADASs. We demonstrate how attackers can apply split-second phantom attacks remotely by embedding phantom road signs into an advertisement presented on a digital billboard which causes Tesla's autopilot to suddenly stop the car in the middle of a road and Mobileye 630 to issue false notifications. We also demonstrate how attackers can use a projector in order to cause Tesla's autopilot to apply the brakes in response to a phantom of a pedestrian that was projected on the road and Mobileye 630 to issue false notifications in response to a projected road sign. To counter this threat, we propose a countermeasure which can determine whether a detected object is a phantom or real using just the camera sensor. The countermeasure (GhostBusters) uses a "committee of experts" approach and combines the results obtained from four lightweight deep convolutional neural networks that assess the authenticity of an object based on the object's light, context, surface, and depth. We demonstrate our countermeasure's effectiveness (it obtains a TPR of 0.994 with an FPR of zero) and test its robustness to adversarial machine learning attacks.
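The committee-of-experts decision can be illustrated with a toy combiner; a plain mean stands in for the paper's learned combination of the four per-aspect networks, and all scores below are invented:

```python
def committee_score(light, context, surface, depth):
    """Fuse the four per-aspect authenticity scores (toy: simple mean)."""
    return (light + context + surface + depth) / 4.0

def tpr_fpr(scores, labels, threshold):
    """labels: 1 = real object, 0 = phantom; predict 'real' when score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    return tp / (tp + fn), fp / (fp + tn)

scores = [committee_score(*s) for s in
          [(0.9, 0.8, 0.9, 0.95), (0.85, 0.9, 0.8, 0.9),   # real objects
           (0.2, 0.3, 0.1, 0.15), (0.4, 0.2, 0.3, 0.25)]]  # projected phantoms
labels = [1, 1, 0, 0]
print(tpr_fpr(scores, labels, 0.5))  # -> (1.0, 0.0) on this toy set
```

The reported operating point (TPR 0.994 at zero FPR) corresponds to choosing a threshold on exactly this kind of fused score.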

Posted Content
TL;DR: This paper investigates a new perceptual challenge that causes the ADASs and autopilots of semi/fully autonomous cars to consider depthless objects (phantoms) as real and shows how attackers can exploit this perceptual challenge to apply phantom attacks and change the abovementioned balance.
Abstract: The absence of deployed vehicular communication systems, which prevents the advanced driving assistance systems (ADASs) and autopilots of semi/fully autonomous cars to validate their virtual perception regarding the physical environment surrounding the car with a third party, has been exploited in various attacks suggested by researchers. Since the application of these attacks comes with a cost (exposure of the attacker’s identity), the delicate exposure vs. application balance has held, and attacks of this kind have not yet been encountered in the wild. In this paper, we investigate a new perceptual challenge that causes the ADASs and autopilots of semi/fully autonomous cars to consider depthless objects (phantoms) as real. We show how attackers can exploit this perceptual challenge to apply phantom attacks and change the abovementioned balance, without the need to physically approach the attack scene, by projecting a phantom via a drone equipped with a portable projector or by presenting a phantom on a hacked digital billboard that faces the Internet and is located near roads. We show that the car industry has not considered this type of attack by demonstrating the attack on today’s most advanced ADAS and autopilot technologies: Mobileye 630 PRO and the Tesla Model X, HW 2.5; our experiments show that when presented with various phantoms, a car’s ADAS or autopilot considers the phantoms as real objects, causing these systems to trigger the brakes, steer into the lane of oncoming traffic, and issue notifications about fake road signs. In order to mitigate this attack, we present a model that analyzes a detected object’s context, surface, and reflected light, which is capable of detecting phantoms with 0.99 AUC. Finally, we explain why the deployment of vehicular communication systems might reduce attackers’ opportunities to apply phantom attacks but won’t eliminate them.

Journal ArticleDOI
TL;DR: This work presents a video-rate (20 Hz) dual-modality ultrasound and photoacoustic tomographic platform with high resolution, rich contrasts, deep penetration, and a wide field of view; GPU-based image reconstruction is developed to improve computational speed.
Abstract: Ultrasonography and photoacoustic tomography provide complementary contrasts in preclinical studies, disease diagnoses, and imaging-guided interventional procedures. Here, we present a video-rate (20 Hz) dual-modality ultrasound and photoacoustic tomographic platform that has high resolution, rich contrasts, deep penetration, and a wide field of view. A three-quarter ring-array ultrasonic transducer is used for both ultrasound and photoacoustic imaging. A plane-wave transmission/receive approach is used for ultrasound imaging, which improves the imaging speed nearly twofold and reduces the RF data size compared with the sequential single-channel scanning approach. GPU-based image reconstruction is developed to improve computational speed. We demonstrate fast dual-modality imaging in phantom, mouse, and human finger-joint experiments. The results show respiratory motion, the beating heart, and detailed features of the mouse's internal organs. To our knowledge, this is the first report of fast plane-wave ultrasound imaging and single-shot photoacoustic computed tomography in a ring-array system.
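Ring-array reconstruction of this kind rests on delay-and-sum: for each image pixel, sum every element's trace at the sample corresponding to the acoustic time of flight (one-way for photoacoustics). A toy stdlib-Python sketch with invented geometry and sampling; the paper's GPU implementation parallelizes this per-pixel sum but the principle is the same:

```python
import math

def delay_and_sum(channels, elements, pixel, c, fs):
    """Sum each element's RF trace at the pixel-to-element time of flight."""
    total = 0.0
    for trace, (ex, ey) in zip(channels, elements):
        idx = int(round(math.hypot(pixel[0] - ex, pixel[1] - ey) / c * fs))
        if 0 <= idx < len(trace):
            total += trace[idx]
    return total

c, fs = 1.54, 40.0            # mm/us and samples/us (hypothetical)
n_elem, radius, n_samp = 64, 10.0, 600
elements = [(radius * math.cos(2 * math.pi * k / n_elem),
             radius * math.sin(2 * math.pi * k / n_elem)) for k in range(n_elem)]
src = (1.0, -2.0)             # a point photoacoustic source
channels = []
for ex, ey in elements:
    trace = [0.0] * n_samp    # synthetic trace: a unit spike at the true arrival time
    trace[int(round(math.hypot(src[0] - ex, src[1] - ey) / c * fs))] = 1.0
    channels.append(trace)
on_focus = delay_and_sum(channels, elements, src, c, fs)
off_focus = delay_and_sum(channels, elements, (4.0, 4.0), c, fs)
print(on_focus > off_focus)   # DAS sums coherently only at the true source
```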

Journal ArticleDOI
TL;DR: The reconstructed permittivity images produced by the proposed 3D U-Net show that the network is not only able to remove the artifacts that are typical of CSI reconstructions, but it also enhances the detectability of the tumors.
Abstract: A deep learning technique to enhance 3D images of the complex-valued permittivity of the breast obtained via microwave imaging is investigated. The developed technique is an extension of one created to enhance 2D images. We employ a 3D Convolutional Neural Network, based on the U-Net architecture, that takes in 3D images obtained using the Contrast-Source Inversion (CSI) method and attempts to produce the true 3D image of the permittivity. The training set consists of 3D CSI images, along with the true numerical phantom images from which the microwave scattered field utilized to create the CSI reconstructions was synthetically generated. Each numerical phantom varies with respect to the size, number, and location of tumors within the fibroglandular region. The reconstructed permittivity images produced by the proposed 3D U-Net show that the network is not only able to remove the artifacts that are typical of CSI reconstructions, but it also enhances the detectability of the tumors. We test the trained U-Net with 3D images obtained from experimentally collected microwave data as well as with images obtained synthetically. Significantly, the results illustrate that although the network was trained using only images obtained from synthetic data, it performed well with images obtained from both synthetic and experimental data. Quantitative evaluations are reported using Receiver Operating Characteristics (ROC) curves for the tumor detectability and RMS error for the enhancement of the reconstructions.
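Tumor detectability above is quantified with ROC curves; the area under such a curve equals the probability that a randomly chosen tumor case scores higher than a non-tumor one (the rank-sum identity), which fits in a few lines of stdlib Python (scores below invented):

```python
def auc(scores, labels):
    """ROC AUC via the Mann-Whitney rank-sum identity (1 = tumor, 0 = background)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # -> 1.0 (perfect separation)
print(auc([0.6, 0.4, 0.7, 0.2], [1, 0, 0, 1]))  # overlapping scores give a lower AUC
```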

Journal ArticleDOI
TL;DR: This work introduces an alternative 'farfield' endoscope, capable of imaging macroscopic objects across a large depth of field, and paves the way towards the exploitation of minimally-invasive holographic micro-endoscopes in clinical and diagnostics applications.
Abstract: Holographic wavefront manipulation enables converting hair-thin multimode optical fibres into minimally invasive lensless imaging instruments conveying much higher information densities than conventional endoscopes. Their most prominent applications focus on accessing delicate environments, including deep brain compartments, and recording micrometre-scale resolution images of structures in close proximity to the distal end of the instrument. Here, we introduce an alternative 'farfield' endoscope, capable of imaging macroscopic objects across a large depth of field. The endoscope shaft with dimensions of 0.2 × 0.4 mm² consists of two parallel optical fibres, one for illumination and the second for signal collection. The system is optimized for speed, power efficiency and signal quality, taking into account specific features of light transport through step-index multimode fibres. The characteristics of imaging quality are studied at distances between 20 and 400 mm. As a proof-of-concept, we provide imaging inside the cavities of a sweet pepper commonly used as a phantom for biomedically relevant conditions. Further, we test the performance on a functioning mechanical clock, thus verifying its applicability in dynamically changing environments. With performance reaching the standard definition of video endoscopes, this work paves the way towards the exploitation of minimally-invasive holographic micro-endoscopes in clinical and diagnostics applications.

Journal ArticleDOI
TL;DR: In this article, the authors describe and validate a microwave antenna designed for an imaging device for the diagnosis and monitoring of cerebrovascular pathologies, which consists of a printed monopole immersed in a parallelepipedic block of semiflexible material with custom-permittivity.
Abstract: In this letter, we describe and validate a microwave antenna designed for an imaging device for the diagnosis and monitoring of cerebrovascular pathologies. The antenna consists of a printed monopole immersed in a parallelepipedic block of semiflexible material with custom permittivity, which avoids the use of liquid coupling media and enables a simple array arrangement. The “brick” is built with a mixture of urethane rubber and graphite powder. The −10 dB frequency band of the antenna is 800 MHz to 1.2 GHz, in agreement with the device requirements. The designed brick antenna is assessed in terms of power penetration, reflection, and transmission coefficients. To show the performance of the antenna in the relevant application scenario, an experiment has been carried out on an anthropomorphic head phantom, measuring the differential signals between the healthy state and a hemorrhagic-stroke-mimicking condition for different antenna positions.
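The quoted −10 dB band is the standard matching criterion: the reflection coefficient at the feed is Γ = (Z − Z0)/(Z + Z0), and the antenna counts as matched wherever 20·log10|Γ| stays at or below −10 dB. A small sketch with hypothetical impedances (the real band is read off a measured S11 sweep):

```python
import math

def s11_db(z_antenna, z0=50.0):
    """Return loss |S11| in dB for a (possibly complex) antenna impedance."""
    gamma = (z_antenna - z0) / (z_antenna + z0)  # reflection coefficient at the feed
    return 20.0 * math.log10(abs(gamma))

def matched(z_antenna, threshold_db=-10.0):
    """True where the antenna meets the -10 dB matching criterion."""
    return s11_db(z_antenna) <= threshold_db

print(s11_db(75.0))               # mild mismatch, roughly -14 dB
print(matched(75.0))              # -> True
print(matched(complex(100, 50)))  # -> False, too much power reflected
```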

Journal ArticleDOI
TL;DR: A novel “free-running” (non-ECG triggered) cMRF framework for simultaneous myocardial T1 and T2 mapping and cardiac Cine imaging in a single scan is proposed and evaluated.