
Imaging phantom

About: Imaging phantom is a research topic. Over the lifetime, 28,170 publications have been published within this topic, receiving 510,003 citations. The topic is also known as: phantom.


Papers
Journal Article
TL;DR: This work describes and validates a computationally efficient technique for estimating a noise map directly from CT images, together with an adaptive NLM filter based on this noise map, evaluated on phantom and patient data.
Abstract:

Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that adapts to the local noise level of CT images, and to implement this method in a time frame consistent with clinical workflow.

Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was derived analytically. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). Graphics processing unit (GPU) implementations of the noise map calculation and the adaptive NLM filtering were developed to meet the demands of clinical workflow. Adaptive NLM was piloted on lower-dose scans in clinical practice.

Results: The local noise level estimation matches the noise distribution determined from multiple repeated scans of a phantom, demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the shape and peak frequency of the noise power spectrum better than commercial smoothing kernels, and indicate that the spatial resolution at low contrast levels is not significantly degraded. Both the subjective evaluation using the ACR phantom and the objective evaluation on a low-contrast detection task using a CHO model observer demonstrate an improvement in low-contrast performance. The GPU implementation can process and transfer 300 slice images within 5 min. On patient data, the adaptive NLM algorithm provides more effective denoising of CT data throughout a volume than standard NLM, and may allow significant lowering of radiation dose. After a two-week pilot study of lower-dose CT urography and CT enterography exams, both the GI and GU radiology groups elected to proceed with permanent implementation of adaptive NLM in their CT practices.

Conclusions: This work describes and validates a computationally efficient technique for noise map estimation directly from CT images, and an adaptive NLM filter based on this noise map, on phantom and patient data. Both the noise map calculation and the adaptive NLM filtering can be performed in times that allow integration with clinical workflow. The adaptive NLM algorithm provides effective denoising of CT data throughout a volume and may allow significant lowering of radiation dose.

235 citations
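
The adaptive NLM method above keys the filter's smoothing strength to a per-pixel noise map. The following is a minimal 2D Python sketch of that idea, assuming a precomputed noise map and a plain pixelwise NLM loop; the paper's implementation is 3D, derives the noise map analytically from a fan-beam forward projection, and runs on the GPU, so the function name, the scaling constant k, and the window sizes here are illustrative only.

import numpy as np

def adaptive_nlm_2d(img, sigma_map, patch=3, search=7, k=0.6):
    # Toy noise-adaptive non-local means: the smoothing strength at each
    # pixel scales with the local noise estimate sigma_map[i, j].
    pad_p, pad_s = patch // 2, search // 2
    padded = np.pad(img, pad_p + pad_s, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ci, cj = i + pad_p + pad_s, j + pad_p + pad_s
            ref = padded[ci - pad_p:ci + pad_p + 1, cj - pad_p:cj + pad_p + 1]
            h2 = (k * sigma_map[i, j]) ** 2 + 1e-12  # denoising strength follows local noise
            weights, values = [], []
            for di in range(-pad_s, pad_s + 1):
                for dj in range(-pad_s, pad_s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pad_p:ni + pad_p + 1, nj - pad_p:nj + pad_p + 1]
                    d2 = np.mean((ref - cand) ** 2)
                    weights.append(np.exp(-d2 / h2))
                    values.append(padded[ni, nj])
            weights = np.asarray(weights)
            out[i, j] = np.dot(weights, values) / weights.sum()
    return out

# Usage on synthetic data whose noise level increases from left to right.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
sigma_map = np.tile(np.linspace(0.02, 0.2, 64), (64, 1))
noisy = clean + rng.normal(0.0, sigma_map)
denoised = adaptive_nlm_2d(noisy, sigma_map)
print("noisy RMSE   :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("denoised RMSE:", np.sqrt(np.mean((denoised - clean) ** 2)))

The only adaptive element is the line setting h2 from sigma_map: noisier regions are smoothed more aggressively while low-noise regions are left nearly untouched.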

Journal Article
TL;DR: This review identifies a clear progression of computational phantom complexity, marked by three distinct generations, and reports an unexpected finding that phantom development over the past 50 years has followed a pattern of exponential growth.
Abstract: Radiation dose calculation using models of the human anatomy has been a subject of great interest in radiation protection, medical imaging, and radiotherapy. However, early pioneers of this field did not foresee the exponential growth of research activity observed today. This review article walks the reader through the history of research and development in this field, which began some 50 years ago. The review identifies a clear progression of computational phantom complexity that can be described in terms of three distinct generations. The first generation of stylized phantoms, a group of fewer than a dozen models, was initially developed in the 1960s at Oak Ridge National Laboratory to calculate internal doses from nuclear medicine procedures. Despite their anatomical simplicity, these computational phantoms were the best tools available at the time for internal/external dosimetry, image evaluation, and treatment dose evaluations. A second generation of a large number of voxelized phantoms arose rapidly in the late 1980s as a result of the increased availability of tomographic medical imaging and computers. Surprisingly, the last decade saw the emergence of the third generation of phantoms, which are based on advanced geometries called boundary representation (BREP) in the form of Non-Uniform Rational B-Splines (NURBS) or polygonal meshes. This new class of phantoms now consists of over 287 models, including those used for non-ionizing radiation applications. This review article aims to provide the reader with a general understanding of how the field of computational phantoms came about and the technical challenges it faced at different times. This goal is achieved by defining basic geometry modeling techniques and by analyzing selected phantoms in terms of geometrical features and the dosimetric problems to be solved. The rich historical information is summarized in four tables, aided by highlights in the text on how some of the most well-known phantoms were developed and used in practice. Some of the information covered in this review has not been previously reported, for example, the CAM and CAF phantoms developed in the 1970s for space radiation applications. The author also clarifies confusion about the 'population-average' prospective dosimetry needed for radiological protection under the current ICRP radiation protection system and the 'individualized' retrospective dosimetry often performed for medical physics studies. To illustrate the impact of computational phantoms, a section of this article is devoted to examples from the author's own research group. Finally, the author describes an unexpected finding made during the course of preparing this article: the phantoms from the past 50 years followed a pattern of exponential growth. The review ends with a brief discussion of future research needs (a supplementary file '3DPhantoms.pdf' accompanying figure 15 is available for download and allows the reader to interactively visualize the phantoms in 3D).

235 citations

Journal Article
TL;DR: An X-ray system with a large-area detector has high scatter-to-primary ratios (SPRs), which result in severe artifacts in reconstructed computed tomography (CT) images; a scatter correction algorithm is introduced that suppresses these artifacts without requiring additional patient exposure.
Abstract: An X-ray system with a large-area detector has high scatter-to-primary ratios (SPRs), which result in severe artifacts in reconstructed computed tomography (CT) images. A scatter correction algorithm is introduced that provides effective scatter correction but does not require additional patient exposure. The key hypothesis of the algorithm is that the high-frequency components of the X-ray spatial distribution do not result in strong high-frequency signals in the scatter. A calibration sheet with a checkerboard pattern of semitransparent blockers (a "primary modulator") is inserted between the X-ray source and the object. The primary distribution is partially modulated by a high-frequency function, while the scatter distribution still has dominant low-frequency components, based on the hypothesis. Filtering and demodulation techniques suffice to extract the low-frequency components of the primary and hence obtain the scatter estimate. The hypothesis was validated using Monte Carlo (MC) simulation, and the algorithm was evaluated by both MC simulations and physical experiments. Reconstructions of a software humanoid phantom suggested system parameters for the physical implementation and showed that the proposed method reduced the relative mean square error of the reconstructed image in the central region of interest from 74.2% to below 1%. In preliminary physical experiments on the standard evaluation phantom, this error was reduced from 31.8% to 2.3%, and it was also demonstrated that the algorithm has no noticeable impact on the resolution of the reconstructed image in spite of the filter-based approach. Although the proposed scatter correction technique was implemented for X-ray CT, it can also be used in other X-ray imaging applications, as long as a primary modulator can be inserted between the X-ray source and the imaged object.

233 citations
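
The central trick in the primary-modulator paper above is that multiplying the primary by a high-frequency pattern shifts a copy of its low-frequency content up to the modulation frequency, where filtering and demodulation can separate it from the low-frequency scatter. Below is a minimal 1D Python sketch of that separation, using an idealized cosine modulator in place of the paper's semitransparent checkerboard; the transmission value, modulation frequency, and filter cutoff are arbitrary illustrative choices.

import numpy as np

def lowpass(signal, keep_frac):
    # Brick-wall low-pass via FFT; keep_frac is the fraction of rFFT bins kept.
    spectrum = np.fft.rfft(signal)
    cut = int(len(spectrum) * keep_frac)
    spectrum[cut:] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

n = 1024
x = np.arange(n)
primary = 1.0 + 0.5 * np.exp(-((x - 400) / 120.0) ** 2)   # smooth primary profile
scatter = 0.6 + 0.2 * np.sin(2 * np.pi * x / n)           # smooth scatter background
alpha = 0.7                                                # assumed blocker transmission
w0 = 2 * np.pi * 64 / n                                    # modulation frequency: 64 cycles
c0, c1 = (1 + alpha) / 2, (1 - alpha) / 2
modulation = c0 + c1 * np.cos(w0 * x)                      # idealized cosine modulator

measured = primary * modulation + scatter                  # detector signal

# Demodulation shifts the modulated copy of the primary back to baseband,
# where a low-pass filter isolates it; the scatter is then what remains of
# the low-frequency measurement after removing the estimated primary.
primary_est = 2.0 * lowpass(measured * np.cos(w0 * x), 0.05) / c1
scatter_est = lowpass(measured, 0.05) - c0 * primary_est

print("scatter estimation RMSE:", np.sqrt(np.mean((scatter_est - scatter) ** 2)))

In the paper the modulator is a physical checkerboard and the processing runs on 2D projections, but the baseband/passband bookkeeping is the same.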

Journal Article
TL;DR: A real-time volumetric ultrasound imaging system that uses pulse-echo phased-array principles to steer a 289-element two-dimensional array transducer in a pyramidal scan format has been developed for medical diagnosis.
Abstract: A real-time volumetric ultrasound imaging system has been developed for medical diagnosis. The scanner produces images analogous to those of an optical camera and supplies more information than conventional sonograms. Potential medical applications include improved anatomic visualization, tumor localization, and better assessment of cardiac function. The system uses pulse-echo phased-array principles to steer a two-dimensional array transducer of 289 elements in a pyramidal scan format. Parallel processing in the receive mode produces 4992 scan lines at a rate of approximately 8 frames/second. Echo data for the scanned volume are presented as projection images with depth perspective, stereoscopic pairs, multiple tomographic images, or C-mode scans.

233 citations
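
Steering a 2D phased array through a pyramidal scan amounts to applying per-element time delays so that the transmitted wavefronts add coherently along the chosen direction. The following is a small Python sketch of the standard far-field delay calculation for a flat 17 x 17 (289-element) aperture; the element pitch, steering angles, and sound speed are illustrative assumptions, not values taken from the paper.

import numpy as np

def steering_delays(n_x, n_y, pitch_m, theta_deg, phi_deg, c=1540.0):
    # Far-field transmit delays (seconds) that steer a flat 2D phased array
    # toward azimuth theta and elevation phi; c is the speed of sound in tissue.
    theta, phi = np.radians(theta_deg), np.radians(phi_deg)
    x = (np.arange(n_x) - (n_x - 1) / 2) * pitch_m   # element x positions, centred
    y = (np.arange(n_y) - (n_y - 1) / 2) * pitch_m   # element y positions, centred
    xx, yy = np.meshgrid(x, y, indexing="ij")
    path = xx * np.sin(theta) * np.cos(phi) + yy * np.sin(phi)  # extra path length
    delays = -path / c
    return delays - delays.min()                     # shift so all delays are >= 0

# Example: a 17 x 17 aperture (289 elements) steered 20 degrees off-axis.
d = steering_delays(17, 17, pitch_m=0.3e-3, theta_deg=20.0, phi_deg=0.0)
print(d.shape, "max delay:", d.max() * 1e9, "ns")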

Journal Article
TL;DR: An effective metal artifact-suppression algorithm was implemented to improve the quality of CBCT images and was able to minimize metal artifacts in phantom and patient studies.
Abstract:

Purpose: Computed tomography (CT) streak artifacts caused by metallic implants remain a challenge for the automatic processing of image data. The impact of metal artifacts in the soft-tissue region is magnified in cone-beam CT (CBCT), because soft-tissue contrast is usually lower in CBCT images. The goal of this study was to develop an effective offline processing technique to minimize the effect.

Methods and Materials: The geometry calibration cue of the CBCT system was used to track the position of the metal object in the projection views. The three-dimensional (3D) representation of the object can be established from only two user-selected viewing angles. The position of the shadowed region in other views can then be tracked by projecting the 3D coordinates of the object. Automatic image segmentation was used, followed by a Laplacian diffusion method to replace the pixels inside the metal object with the boundary pixels. The modified projection data were then used to reconstruct a new CBCT image. The procedure was tested in phantoms, in prostate cancer patients with implanted gold markers and metal prostheses, and in a head-and-neck patient with dental amalgam in the teeth.

Results: Both phantom and patient studies demonstrated that the procedure was able to minimize the metal artifacts. Soft-tissue visibility was improved both near and away from the metal object. The processing time was 1–2 s per projection.

Conclusion: We have implemented an effective metal artifact-suppressing algorithm to improve the quality of CBCT images.

232 citations
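
The metal-artifact procedure above replaces the pixels under the tracked metal shadow in each projection with values diffused in from the boundary before reconstruction. Below is a minimal Python sketch of that in-painting step, implemented here as a simple Jacobi iteration toward a harmonic (Laplace) fill; the marker tracking, segmentation, and CBCT reconstruction steps from the paper are not reproduced, and all names and sizes are illustrative.

import numpy as np

def laplacian_inpaint(projection, metal_mask, n_iter=500):
    # Replace pixels under metal_mask with a harmonic (Laplace) interpolation of
    # the surrounding boundary values, using simple Jacobi iterations.
    filled = projection.astype(float).copy()
    filled[metal_mask] = filled[~metal_mask].mean()  # crude initial guess
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0)
                      + np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
        filled[metal_mask] = avg[metal_mask]         # update only the masked pixels
    return filled

# Simulated projection with a bright metal trace in the middle.
truth = np.fromfunction(lambda i, j: np.sin(i / 10.0) + 0.01 * j, (64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[28:36, 30:40] = True
corrupted = truth.copy()
corrupted[mask] = 10.0
repaired = laplacian_inpaint(corrupted, mask)
print("max error inside the repaired region:", np.abs(repaired[mask] - truth[mask]).max())

In the full procedure this fill is applied to the metal shadow in every projection view before the volume is reconstructed.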


Network Information
Related Topics (5)
Iterative reconstruction: 41.2K papers, 841.1K citations (89% related)
Image quality: 52.7K papers, 787.9K citations (88% related)
Positron emission tomography: 19.9K papers, 555.2K citations (82% related)
Image resolution: 38.7K papers, 736.5K citations (82% related)
Detector: 146.5K papers, 1.3M citations (81% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    1,623
2022    3,476
2021    1,221
2020    1,482
2019    1,568
2018    1,503