
Showing papers on "Imaging phantom published in 1998"


Journal ArticleDOI
TL;DR: The authors present a realistic, high-resolution, digital, volumetric phantom of the human brain, which can be used to simulate tomographic images of the head and is the ideal tool to test intermodality registration algorithms.
Abstract: After conception and implementation of any new medical image processing algorithm, validation is an important step to ensure that the procedure fulfils all requirements set forth at the initial design stage. Although the algorithm must be evaluated on real data, a comprehensive validation requires the additional use of simulated data since it is impossible to establish ground truth with in vivo data. Experiments with simulated data permit controlled evaluation over a wide range of conditions (e.g., different levels of noise, contrast, intensity artefacts, or geometric distortion). Such considerations have become increasingly important with the rapid growth of neuroimaging, i.e., computational analysis of brain structure and function using brain scanning methods such as positron emission tomography and magnetic resonance imaging. Since simple objects such as ellipsoids or parallelepipeds do not reflect the complexity of natural brain anatomy, the authors present the design and creation of a realistic, high-resolution, digital, volumetric phantom of the human brain. This three-dimensional digital brain phantom is made up of ten volumetric data sets that define the spatial distribution for different tissues (e.g., grey matter, white matter, muscle, skin, etc.), where voxel intensity is proportional to the fraction of tissue within the voxel. The digital brain phantom can be used to simulate tomographic images of the head. Since the contribution of each tissue type to each voxel in the brain phantom is known, it can be used as the gold standard to test analysis algorithms such as classification procedures which seek to identify the tissue "type" of each image voxel. Furthermore, since the same anatomical phantom may be used to drive simulators for different modalities, it is the ideal tool to test intermodality registration algorithms.
The brain phantom and simulated MR images have been made publicly available on the Internet (http://www.bic.mni.mcgill.ca/brainweb).
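The fraction-weighted voxel model described in the abstract can be sketched in a few lines of NumPy. This is a hypothetical two-tissue illustration with made-up intensities and noise levels, not the BrainWeb implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-tissue fractional phantom (grey matter, white matter):
# each map gives the fraction of that tissue in every voxel.
shape = (64, 64)
gm = rng.random(shape)
wm = 1.0 - gm                  # fractions sum to 1 here for simplicity

# Assumed mean tissue intensities (arbitrary units, illustrative only).
mean_intensity = {"gm": 100.0, "wm": 150.0}

# Simulated image: fraction-weighted sum of tissue intensities,
# plus Gaussian noise at a controllable level.
image = gm * mean_intensity["gm"] + wm * mean_intensity["wm"]
image += rng.normal(0.0, 2.0, shape)
```

Because the tissue fraction of every voxel is known exactly, classification output can be scored against it as ground truth, which is the point made above.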

1,811 citations


Journal ArticleDOI
TL;DR: Results show that the introduction of soft-tissue structures and interventional instruments into the phantom image can have a large effect on the performance of some similarity measures previously applied to 2-D-3-D image registration.
Abstract: A comparison of six similarity measures for use in intensity-based two-dimensional-three-dimensional (2-D-3-D) image registration is presented. The accuracy of the similarity measures is compared to a "gold-standard" registration which has been accurately calculated using fiducial markers. The similarity measures are used to register a computed tomography (CT) scan of a spine phantom to a fluoroscopy image of the phantom. The registration is carried out within a region-of-interest in the fluoroscopy image which is user defined to contain a single vertebra. Many of the problems involved in this type of registration are caused by features which were not modeled by a phantom image alone. More realistic "gold-standard" data sets were simulated using the phantom image with clinical image features overlaid. Results show that the introduction of soft-tissue structures and interventional instruments into the phantom image can have a large effect on the performance of some similarity measures previously applied to 2-D-3-D image registration. Two measures were able to register accurately and robustly even when soft-tissue structures and interventional instruments were present as differences between the images. These measures were pattern intensity and gradient difference. Their registration accuracy, for all the rigid-body parameters except for the source to film translation, was within a root-mean-square (rms) error of 0.53 mm or degrees to the "gold-standard" values. No failures occurred while registering using these measures.
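Of the two robust measures named above, gradient difference is straightforward to sketch. The version below is a simplified illustration: it omits the scale factor the published measure fits between the images, and the constants are set from the reference-image gradient variance, one common choice:

```python
import numpy as np

def gradient_difference(ref, flt):
    """Simplified gradient-difference similarity (higher = more similar).

    ref, flt: 2D float arrays (e.g. a DRR and a fluoroscopy ROI).
    Each term a / (a + g**2) is at most 1, reached where the
    gradient images agree exactly.
    """
    # finite-difference gradient mismatch in x and y
    gx = np.diff(ref, axis=1) - np.diff(flt, axis=1)
    gy = np.diff(ref, axis=0) - np.diff(flt, axis=0)
    # normalization constants from the reference-image gradient variance
    a_x = np.var(np.diff(ref, axis=1))
    a_y = np.var(np.diff(ref, axis=0))
    return (a_x / (a_x + gx**2)).sum() + (a_y / (a_y + gy**2)).sum()
```

The reciprocal form makes the measure insensitive to a few large outlier differences (e.g. an interventional instrument present in only one image), which is consistent with the robustness reported above.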

912 citations


Journal Article
TL;DR: A new algorithm to correct for PVEs by characterizing the geometric interaction between the PET system and the brain activity distribution, which allows the correction for PVEs simultaneously in all identified brain regions, independent of tracer levels.
Abstract: The accuracy of PET for measuring regional radiotracer concentrations in the human brain is limited by the finite resolution capability of the scanner and the resulting partial volume effects (PVEs). We designed a new algorithm to correct for PVEs by characterizing the geometric interaction between the PET system and the brain activity distribution. Methods: The partial volume correction (PVC) algorithm uses high-resolution volumetric MR images correlated with the PET volume. We used a PET simulator to calculate recovery and cross-contamination factors of identified tissue components in the brain model. These geometry-dependent transfer coefficients form a matrix representing the fraction of true activity from each distinct brain region observed in any given set of regions of interest. This matrix can be inverted to correct for PVEs, independent of the tracer concentrations in each tissue component. A sphere phantom was used to validate the simulated point-spread function of the PET scanner. Accuracy and precision of the PVC method were assessed using a human basal ganglia phantom. A constant contrast experiment was performed to explore the recovery capability and statistical error propagation of PVC in various noise conditions. In addition, a dual-isotope experiment was used to evaluate the ability of the PVC algorithm to recover activity concentrations in small structures surrounded by background activity with a different radioactive half-life. This models the time-variable contrast between regions that is often seen in neuroreceptor studies. Results: Data from the three-dimensional brain phantom demonstrated a full recovery capability of PVC with less than 10% root-mean-square error in terms of absolute values, which decreased to less than 2% when results from four PET slices were averaged.
Inaccuracy in the estimation of 18F tracer half-life in the presence of 11C background activity was in the range of 25%-50% before PVC and 0%-6% after PVC, for resolution varying from 6 to 14 mm FWHM. In terms of noise propagation, the degradation of the coefficient of variation after PVC was found to be easily predictable and typically on the order of 25%. Conclusion: The PVC algorithm allows the correction for PVEs simultaneously in all identified brain regions, independent of tracer levels.

897 citations



Journal ArticleDOI
TL;DR: An adaptive filtering approach in Radon space based on the local statistical properties of the CT projections, which is effective in reducing or eliminating quantum noise induced artifacts in CT.
Abstract: The quality of a computed tomography (CT) image is often degraded by streaking artifacts resulting from excessive x-ray quantum noise. Often, a patient has to be rescanned at a higher technique or at a larger slice thickness in order to obtain an acceptable image for diagnosis. This results in a higher dose to the patient, a degraded cross plane resolution, or a reduced patient throughput. In this paper, we propose an adaptive filtering approach in Radon space based on the local statistical properties of the CT projections. We first model the noise characteristics of a projection sample undergoing important preprocessing steps. A filter is then designed such that its parameters are dynamically adjusted to adapt to the local noise characteristics. Because of the adaptive nature of the filter, a proper balance between streak artifact suppression and spatial resolution preservation is achieved. Phantom and clinical studies have been conducted to evaluate the robustness of our approach. Results demonstrate that the adaptive filtering approach is effective in reducing or eliminating quantum noise induced artifacts in CT. At the same time, the impact on the spatial resolution is kept at a low level.
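A minimal sketch of the noise-adaptive idea, not the paper's actual filter design: smooth each projection only where the line integrals are large (heavily attenuated, photon-starved rays), leaving well-measured samples untouched. The threshold and the 3-point kernel are illustrative choices:

```python
import numpy as np

def adaptive_filter_projection(proj, threshold):
    """Noise-adaptive smoothing of one CT projection (sketch).

    proj: 1D array of line integrals; rays with large values carry
    few photons and are smoothed more aggressively.
    """
    smoothed = np.convolve(proj, [0.25, 0.5, 0.25], mode="same")
    # blend weight: 0 well below threshold (keep data), 1 well above it
    w = np.clip((proj - threshold) / threshold, 0.0, 1.0)
    return (1 - w) * proj + w * smoothed
```

Dynamically adjusting the blend per sample is what preserves spatial resolution in low-attenuation regions while suppressing the streak-generating samples.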

353 citations


Journal ArticleDOI
TL;DR: The use of the focus as a virtual element is examined, and the virtual source has been shown to exhibit the same behavior as an actual transducer element in response to synthetic aperture processing techniques.
Abstract: A new imaging technique has been proposed that combines conventional B-mode and synthetic aperture imaging techniques to overcome the limited depth of field for a highly focused transducer. The new technique improves lateral resolution beyond the focus of the transducer by considering the focus a virtual element and applying synthetic aperture focusing techniques. In this paper, the use of the focus as a virtual element is examined, considering the issues that are of concern when imaging with an array of actual elements: the tradeoff between lateral resolution and sidelobe level, the tradeoff between system complexity (channel count/amount of computation) and the appearance of grating lobes, and the issue of signal to noise ratio (SNR) of the processed image. To examine these issues, pulse-echo RF signals were collected for a tungsten wire in degassed water, monofilament nylon wires in a tissue-mimicking phantom, and cyst targets in the phantom. Results show apodization lowers the sidelobes, but only at the expense of lateral resolution, as is the case for classical synthetic aperture imaging. Grating lobes are not significant until spatial sampling is more than one wavelength, when the beam is not steered. Resolution comparable to the resolution at the transducer focus can be achieved beyond the focal region while obtaining an acceptable SNR. Specifically, for a 15-MHz focused transducer, the 6-dB beamwidth at the focus is 157 μm, and with synthetic aperture processing the 6-dB beamwidths at 3, 5, and 7 mm beyond the focus are 189 μm, 184 μm, and 215 μm, respectively. The image SNR is 38.6 dB when the wire is at the focus, and it is 32.8 dB, 35.3 dB, and 38.1 dB after synthetic aperture processing when the wire is 3, 5, and 7 mm beyond the focus, respectively.
With these experiments, the virtual source has been shown to exhibit the same behavior as an actual transducer element in response to synthetic aperture processing techniques.

313 citations


Journal ArticleDOI
01 Nov 1998
TL;DR: The results obtained from phantom imaging experiments and from cardiac studies in nine volunteers indicate that the self-calibrating approach is an effective method to increase the potential and the flexibility of rapid imaging with SMASH.
Abstract: Recently a new fast magnetic resonance imaging strategy, SMASH, has been described, which is based on partially parallel imaging with radiofrequency coil arrays. In this paper, an internal sensitivity calibration technique for the SMASH imaging method using self-calibration signals is described. Coil sensitivity information required for SMASH imaging is obtained during the actual scan using correlations between undersampled SMASH signal data and additionally sampled calibration signals with appropriate offsets in k-space. The advantages of this sensitivity reference method are that no extra coil array sensitivity maps have to be acquired and that it provides coil sensitivity information in areas of highly non-uniform spin-density. This auto-calibrating approach can be easily implemented with only a small sacrifice of the overall time savings afforded by SMASH imaging. The results obtained from phantom imaging experiments and from cardiac studies in nine volunteers indicate that the self-calibrating approach is an effective method to increase the potential and the flexibility of rapid imaging with SMASH.

270 citations


Journal ArticleDOI
08 Nov 1998
TL;DR: In this article, a hybrid of realistic patient-based phantoms and flexible geometry-based phantoms is presented for use in medical imaging research, where the surfaces of heart structures are defined using non-uniform rational B-splines (NURBS), as used in 3D computer graphics.
Abstract: We develop a realistic computerized heart phantom for use in medical imaging research. This phantom is a hybrid of realistic patient-based phantoms and flexible geometry-based phantoms. The surfaces of heart structures are defined using non-uniform rational B-splines (NURBS), as used in 3D computer graphics. The NURBS primitives define continuous surfaces allowing the phantom to be defined at any resolution. Also, by fitting NURBS to patient data, the phantom is more realistic than those based on solid geometry. An important innovation is the extension of NURBS to the fourth dimension, time, to model heart motion. Points on the surfaces of heart structures were selected from a gated MRI study of a normal patient. Polygon surfaces were fit to the points for each time frame, and smoothed. 3D NURBS surfaces were fit to the smooth polygon surfaces and then a 4D NURBS surface was fit through these surfaces. Each of the principal 4D surfaces (atria, ventricles, inner and outer walls) contains approximately 200 control points. We conclude that 4D NURBS are an efficient and flexible way to describe the heart and other anatomical objects for a realistic phantom.

266 citations


Journal ArticleDOI
TL;DR: A theoretical background and experimental method that allows a separation of intrinsic, tissue‐matrix‐specific magnetic‐field inhomogeneity effects from both macroscopic and microscopic inhomogeneities is proposed, offering the potential to assess a variety of tissue parameters.
Abstract: A theoretical background and experimental method that allows a separation of intrinsic, tissue-matrix-specific magnetic-field inhomogeneity effects from both macroscopic (large compared with voxel dimensions) and microscopic (on the order of molecular dimensions) inhomogeneities is proposed. Such separation allows one to take full advantage of these tissue-matrix-specific magnetic field inhomogeneity effects to extract information about tissue structure. A method to measure the volume fraction occupied by the susceptibility-perturbing component in a tissue matrix, the R2' relaxation rate constant, and the susceptibility difference between the bulk component and the susceptibility-perturbing component in a tissue matrix has been developed and tested on phantoms. This method offers the potential to assess a variety of tissue parameters, including cerebral blood volume, blood volume and blood oxygenation-level changes in functional MRI, the structure of trabecular bone, and other physiologically important issues.

245 citations


Journal ArticleDOI
TL;DR: Low absorbing details within breast tissue, invisible with conventional techniques, are detected by means of the proposed phase contrast imaging method, and the use of a bending magnet radiation source relaxes the previously reported requirements on source size.
Abstract: Phase contrast x-ray imaging is a powerful technique for the detection of low-contrast details in weakly absorbing objects. This method is of possible relevance in the field of diagnostic radiology. In fact, imaging low-contrast details within soft tissue does not give satisfactory results in conventional x-ray absorption radiology, mammography being a typical example. Nevertheless, up to now all applications of the phase contrast technique, carried out on thin samples, have required radiation doses substantially higher than those delivered in conventional radiological examinations. To demonstrate the applicability of the method to mammography we produced phase contrast images of objects a few centimetres thick while delivering radiation doses lower than or comparable to doses needed in standard mammographic examinations (typically mean glandular dose (MGD)). We show images of a custom mammographic phantom and of two specimens of human breast tissue obtained at the SYRMEP bending magnet beamline at Elettra, the Trieste synchrotron radiation facility. The introduction of an intensifier screen enabled us to obtain phase contrast images of these thick samples with radiation doses comparable to those used in mammography. Low absorbing details such as thick nylon wires or thin calcium deposits within breast tissue, invisible with conventional techniques, are detected by means of the proposed method. We also find that the use of a bending magnet radiation source relaxes the previously reported requirements on source size for phase contrast imaging. Finally, the consistency of the results has been checked by theoretical simulations carried out for the purposes of this experiment.

242 citations


Journal ArticleDOI
10 Jul 1998-Science
TL;DR: In this paper, a new method for magnetic resonance imaging (MRI) based on the detection of relatively strong signal from intermolecular zero-quantum coherences (iZQCs) is reported.
Abstract: A new method for magnetic resonance imaging (MRI) based on the detection of relatively strong signal from intermolecular zero-quantum coherences (iZQCs) is reported. Such a signal would not be observable in the conventional framework of magnetic resonance; it originates in long-range dipolar couplings (10 micrometers to 1 millimeter) that are traditionally ignored. Unlike conventional MRI, where image contrast is based on variations in spin density and relaxation times (often with injected contrast agents), contrast with iZQC images comes from variations in the susceptibility over a distance dictated by gradient strength. Phantom and in vivo (rat brain) data confirm that iZQC images give contrast enhancement. This contrast might be useful in the detection of small tumors, in that susceptibility correlates with oxygen concentration and in functional MRI.


Journal ArticleDOI
TL;DR: A graphical user interface has been developed that automatically sets up the MCNP4A geometry and radiation source requirements for a three-dimensional Monte Carlo simulation using computed tomography data.
Abstract: The Los Alamos code MCNP4A (Monte Carlo N-Particle version 4A) is currently used to simulate a variety of problems ranging from nuclear reactor analysis to boron neutron capture therapy. A graphical user interface has been developed that automatically sets up the MCNP4A geometry and radiation source requirements for a three-dimensional Monte Carlo simulation using computed tomography data. The major drawback for this dosimetry system is the amount of time to obtain a statistically significant answer. A specialized patch file has been developed that optimizes photon particle transport and dose scoring within the standard MCNP4A lattice geometry. The transport modifications produce a performance increase (number of histories per minute) of approximately 4.7 based upon a 6 MV point source centered within a 30 x 30 x 30 cm3 lattice water phantom and 1 x 1 x 1 mm3 voxels. The dose scoring modifications produce a performance increase of approximately 470 based upon a tally section of greater than 1 x 10^4 lattice elements and a voxel size of 5 mm3. Homogeneous and heterogeneous benchmark calculations produce good agreement with measurements using a standard water phantom and a high- and low-density heterogeneity phantom. The dose distribution from a typical mediastinum treatment planning setup is presented for qualitative analysis and comparison versus a conventional treatment planning system.

Journal ArticleDOI
TL;DR: The chemical reaction kinetics, the dose sensitivity and spatial resolution (< 1 mm3) obtained by optical absorption computed tomography, and the sample dose distributions produced by "cross-field" 6 MV x-ray beams are reported.
Abstract: In recent years, magnetic-resonance imaging of gelatin doped with the Fricke solution has been applied to the direct measurement of three-dimensional (3D) radiation dose distributions. However, the 3D dose distribution can also be imaged more economically and efficiently using the method of optical absorption computed tomography. This is accomplished by first preparing a gelatin matrix containing a radiochromic dye and mapping the radiation-induced local change in the optical absorption coefficient. Ferrous–Benzoic–Xylenol (FBX) was the dye of choice for this investigation. The complex formed by Fe3+ and xylenol orange exhibits a linear change in optical attenuation (cm^-1) with radiation dose in the range between 0 and 1000 cGy, and the local concentration of this complex can be probed using a green laser light (λ=543.5 nm). An optical computed tomography (CT) scanner was constructed analogous to a first-generation x-ray CT scanner, using a He–Ne laser, photodiodes, and rotation–translation stages controlled by a personal computer. The optical CT scanner itself can reconstruct attenuation coefficients to a baseline accuracy of 2% while yielding dose images accurate to within 5% when other uncertainties are taken into account. Optical tomography is complicated by the reflection and refraction of light rays in the phantom materials, producing a blind spot in the transmission profiles which results in a significant dose artifact in the reconstructed images. In this report we develop corrections used to reduce this artifact and yield accurate dosimetric maps. We also report the chemical reaction kinetics, the dose sensitivity and spatial resolution (<1 mm3) obtained by optical absorption computed tomography. The article concludes with sample dose distributions produced by "cross-field" 6 MV x-ray beams, including a radiosurgery example.
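The linear dose response quoted above makes the final dosimetric mapping trivial once the attenuation-change image is reconstructed. A sketch with a made-up calibration slope (the real constant would come from calibrating the FBX gel batch):

```python
import numpy as np

# Hypothetical FBX calibration: change in optical attenuation
# coefficient is proportional to dose over 0-1000 cGy.
SLOPE = 2.0e-3   # cm^-1 per cGy, illustrative value only

def dose_from_attenuation(delta_mu, slope=SLOPE):
    """Map reconstructed attenuation change (cm^-1) to dose (cGy)."""
    return np.asarray(delta_mu) / slope

delta_mu = np.array([0.0, 0.4, 1.0, 2.0])
doses = dose_from_attenuation(delta_mu)
```

Applying this voxel-by-voxel to the reconstructed attenuation-change volume yields the 3D dose map, once the blind-spot artifact corrections described above have been made.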

Journal Article
TL;DR: In this article, the authors used the magnetic field generated by the injected currents, for the purpose of reconstructing the conductivity distribution, and calculated the sensitivity matrix relating the magnetic fields to the element conductivities using the Finite Element Method and Biot-Savart law.
Abstract: In two-dimensional conventional Electrical Impedance Tomography (EIT), the volume conductor is probed by means of injected currents, and peripheral voltage measurements are used as input to the reconstruction algorithm. The current that flows in the 2D object creates magnetic fields that are perpendicular to the plane of imaging. Such magnetic fields can be measured using magnetic resonance tomography. In this study, the magnetic field generated by the injected currents is used to reconstruct the conductivity distribution. The sensitivity matrix relating the magnetic field to the element conductivities is calculated using the Finite Element Method and the Biot-Savart law. Linearization is made during sensitivity matrix formation. This matrix is inverted using singular value decomposition. Simulations for objects placed in different parts of the imaging region were made to examine the spatial dependency of the proposed method, and the method is seen to have uniform sensitivity throughout the imaging region. Finally, images reconstructed using data taken from an experimental phantom are presented.
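The SVD-based inversion step can be sketched generically. Truncating the smallest singular values is one standard way to regularize such an ill-posed linearized system; the cutoff choice here is illustrative, not the paper's:

```python
import numpy as np

def truncated_svd_solve(S, b, k):
    """Invert a sensitivity matrix via truncated SVD (sketch).

    S: (n_measurements, n_elements) linearized sensitivity matrix
    b: measured magnetic-field perturbations
    k: number of singular values kept; discarding the smallest ones
       limits noise amplification at the cost of some resolution.
    """
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    # reciprocal singular values, zeroed beyond the cutoff k
    inv_s = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)
    return Vt.T @ (inv_s * (U.T @ b))
```

With k equal to the full rank this reduces to the ordinary pseudo-inverse; smaller k trades fidelity for stability against measurement noise.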

Journal ArticleDOI
TL;DR: A method is introduced to measure internal mechanical displacement and strain by means of MRI to provide a means for remote palpation and elasticity quantitation in deep tissues otherwise inaccessible to manual palpation.
Abstract: A method is introduced to measure internal mechanical displacement and strain by means of MRI. Such measurements are needed to reconstruct an image of the elastic Young's modulus. A stimulated echo acquisition sequence with additional gradient pulses encodes internal displacements in response to an externally applied differential deformation. The sequence provides an accurate measure of static displacement by limiting the mechanical transitions to the mixing period of the simulated echo. Elasticity reconstruction involves definition of a region of interest having uniform Young's modulus along its boundary and subsequent solution of the discretized elasticity equilibrium equations. Data acquisition and reconstruction were performed on a urethane rubber phantom of known elastic properties and an ex vivo canine kidney phantom using <2% differential deformation. Regional elastic properties are well represented on Young's modulus images. The long-term objective of this work is to provide a means for remote palpation and elasticity quantitation in deep tissues otherwise inaccessible to manual palpation.

Journal ArticleDOI
TL;DR: Calculations of current density in a fine-resolution (2 mm) anatomically realistic voxel model of the human body for uniform magnetic fields incident from the front, side and top of the body for frequencies from 50 Hz to 10 MHz are presented.
Abstract: This paper presents calculations of current density in a fine-resolution (2 mm) anatomically realistic voxel model of the human body for uniform magnetic fields incident from the front, side and top of the body for frequencies from 50 Hz to 10 MHz. The voxel phantom, NORMAN, has a height of 1.76 m and a mass of 73 kg. There are 8.3 million voxels in the body differentiated into 37 tissue types. Both the impedance method and the scalar potential finite difference method were used to provide mutual corroboration. Results are presented for the current density averaged over 1 cm2 in muscle, heart, brain and retina.

Journal ArticleDOI
Jian-yu Lu1
TL;DR: The quality (resolution and contrast) of constructed images is virtually identical for both methods, except that the Fourier method is simpler to implement.
Abstract: Limited diffraction beams have a large depth of field and have many potential applications. Recently, a new method (Fourier method) was developed with limited diffraction beams for image construction. With the method and a single plane wave transmission, both 2D (two-dimensional) and 3D (three-dimensional) images of a very high frame rate (up to 3750 frames/s for a depth of 200 mm in biological soft tissues) and a high signal-to-noise ratio (SNR) can be constructed with relatively simple and inexpensive hardware. If limited diffraction beams of different parameters are used in both transmission and reception and transducer aperture is shaded with a cosine function, high-resolution and low-sidelobe images can be constructed with the new method without montage of multiple frames of images [the image quality is comparable to that obtained with a transmit-receive (two-way) dynamically focused imaging system]. In this paper, the Fourier method was studied with both experiment and computer simulation for 2D B-mode imaging. In the experiment, two commercial broadband 1D array transducers (48 and 64 elements) of different aperture sizes (18.288 and 38.4 mm) and center frequencies (2.25 and 2.5 MHz) were used to construct images of different viewing sizes. An ATS539 tissue-equivalent phantom of an average frequency-dependent attenuation of 0.5 dB/MHz/cm was used as a test object. To obtain high frame rate images, a single plane wave pulse (broadband) was transmitted with the arrays. Echoes received with the arrays were processed with both the Fourier and conventional dynamic focusing (delay-and-sum) methods to construct 2D B-mode images. Results show that the quality (resolution and contrast) of constructed images is virtually identical for both methods, except that the Fourier method is simpler to implement. Both methods also have a similar sensitivity to phase aberration distortions. Excellent agreement among theory, simulation, and experiment was obtained.
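The conventional delay-and-sum reference method used in the comparison can be sketched for a single image point after a plane-wave transmit. This is a simplified illustration with nearest-sample interpolation and no apodization:

```python
import numpy as np

def delay_and_sum(rf, element_x, point, c, fs):
    """Delay-and-sum focusing at one image point (sketch).

    rf: (n_elements, n_samples) echo data after a plane-wave transmit
    element_x: lateral element positions (m); point: (x, z) focus (m)
    c: sound speed (m/s); fs: sampling rate (Hz).
    Two-way delay = plane-wave arrival at depth z (z / c) plus the
    return path from the point back to each element.
    """
    x, z = point
    t_tx = z / c
    t_rx = np.sqrt((element_x - x) ** 2 + z**2) / c
    idx = np.round((t_tx + t_rx) * fs).astype(int)
    idx = np.clip(idx, 0, rf.shape[1] - 1)
    return rf[np.arange(rf.shape[0]), idx].sum()
```

Repeating this per pixel is the O(pixels x elements) cost that the Fourier method avoids, which is why the abstract calls the latter simpler to implement at these frame rates.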

Journal ArticleDOI
TL;DR: A similar image quality to the current single-slice MVCT scanner is achieved with the advantage of providing tens of tomographic slices for a single gantry rotation.

Journal ArticleDOI
TL;DR: A set of backscatter factors was selected and proposed for adoption as a standard set for the calibration of dosimeters to be used to measure diagnostic reference doses.
Abstract: Backscatter factors were determined for x-ray beams relevant to diagnostic radiology using Monte Carlo methods. The phantom size considered most suitable for calibration of dosimeters is a cuboid of 30 x 30 cm2 front surface and 15 cm depth. This phantom size also provides a good approximation to adult patients. Three different media were studied: water, PMMA and ICRU tissue; the source geometry was a point source with varying field size and source-to-phantom distance. The variations of the backscatter factor with phantom medium and field geometry were examined. From the obtained data, a set of backscatter factors was selected and proposed for adoption as a standard set for the calibration of dosimeters to be used to measure diagnostic reference doses.

Journal ArticleDOI
TL;DR: In both phantom and human treatments, temperature measured via corrected phase difference closely tracked measurements obtained with fiberoptic probes during the hyperthermia treatments.
Abstract: Purpose: To determine the feasibility of measuring temperature noninvasively with magnetic resonance imaging during hyperthermia treatment of human tumors. Methods: The proton chemical shift detected using phase-difference magnetic resonance imaging (MRI) was used to measure temperature in phantoms and human tumors during treatment with hyperthermia. Four adult patients having high-grade primary sarcoma tumors of the lower leg received 5 hyperthermia treatments in the MR scanner using an MRI-compatible radiofrequency heating applicator. Prior to each treatment, an average of 3 fiberoptic temperature probes were invasively placed into the tumor (or phantom). Hyperthermia was applied concurrent with MR thermometry. Following completion of the treatment, regions of interest (ROI) were defined on MR phase images at each temperature probe location, in bone marrow, and in gel standards placed outside the heated region. The median phase difference (compared to pretreatment baseline images) was calculated for each ROI. This phase difference was corrected for phase drift observed in standards and bone marrow. The observed phase difference, with and without corrections, was correlated with the fiberoptic temperature measurements. Results: The phase difference observed with MRI was found to correlate with temperature. Phantom measurements demonstrated a linear regression coefficient of 4.70° phase difference per ° Celsius, with an R2 = 0.998. After human images with artifact were excluded, the linear regression demonstrated a correlation coefficient of 5.5° phase difference per ° Celsius, with an R2 = 0.84. In both phantom and human treatments, temperature measured via corrected phase difference closely tracked measurements obtained with fiberoptic probes during the hyperthermia treatments. 
Conclusions: Proton chemical shift imaging with current MRI and hyperthermia technology can be used to monitor and control temperature during treatment of large tumors in the distal lower extremity.
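The phase-to-temperature conversion described in the abstract can be sketched as follows; the calibration constant is the phantom regression coefficient reported above, and the function name and drift-correction interface are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Phantom calibration from the abstract: ~4.70 degrees of phase
# difference per degree Celsius.
PHASE_PER_DEG_C = 4.70

def roi_temperature_change(phase_roi, baseline_roi, drift_phase=0.0):
    """Estimate the temperature change in an ROI from phase-difference MRI.

    phase_roi, baseline_roi : phase values (degrees) in the ROI at
                              treatment time and in the pretreatment baseline
    drift_phase : median phase drift (degrees) observed in unheated
                  standards/bone marrow, subtracted as in the abstract
    """
    # Median phase difference relative to the pretreatment baseline
    dphi = np.median(np.asarray(phase_roi) - np.asarray(baseline_roi))
    # Correct for scanner phase drift, then apply the linear calibration
    return (dphi - drift_phase) / PHASE_PER_DEG_C
```

For example, a median phase difference of 23.5° with no drift corresponds to a temperature rise of 5 °C under this calibration.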

Journal ArticleDOI
TL;DR: Two new methods that can be used in conjunction with existing methods to achieve marked reductions in RBSC reconstruction times are proposed; the first significantly accelerates the scatter model by exploiting the fact that scatter is dominated by low frequencies.
Abstract: Accurate scatter compensation in SPECT can be performed by modelling the scatter response function during the reconstruction process. This method is called reconstruction-based scatter compensation (RBSC). It has been shown that RBSC has a number of advantages over other methods of compensating for scatter, but using RBSC for fully 3D compensation has resulted in prohibitively long reconstruction times. In this work we propose two new methods that can be used in conjunction with existing methods to achieve marked reductions in RBSC reconstruction times. The first method, coarse-grid scatter modelling, significantly accelerates the scatter model by exploiting the fact that scatter is dominated by low-frequency information. The second method, intermittent RBSC, further accelerates the reconstruction process by limiting the number of iterations during which scatter is modelled. The fast implementations were evaluated using a Monte Carlo simulated experiment of the 3D MCAT phantom with tracer, and also using experimentally acquired data with tracer. Results indicated that these fast methods can reconstruct, with fully 3D compensation, images very similar to those obtained using standard RBSC methods, in reconstruction times that are an order of magnitude shorter. Using these methods, fully 3D iterative reconstruction with RBSC can be performed well within the realm of clinically realistic times (under 10 minutes for image reconstruction).
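The coarse-grid idea can be illustrated in one dimension: model scatter on a coarsened grid and interpolate back, which is cheap precisely because scatter is smooth. This is a hypothetical sketch, not the paper's code; the broad Gaussian kernel merely stands in for the real scatter response function:

```python
import numpy as np

def coarse_grid_scatter(activity, factor=4, kernel_sigma=8.0):
    """Sketch of coarse-grid scatter modelling (1-D illustration).

    Because scatter is dominated by low frequencies, the scatter
    estimate is computed on a grid coarsened by `factor`, then linearly
    interpolated back to the fine grid.
    """
    n = activity.size
    # 1) Downsample by block averaging (fine -> coarse)
    coarse = activity[: n - n % factor].reshape(-1, factor).mean(axis=1)
    # 2) Model scatter on the coarse grid; a broad Gaussian kernel
    #    stands in for the true scatter response function here
    x = np.arange(coarse.size)
    kernel = np.exp(-0.5 * ((x - x.mean()) * factor / kernel_sigma) ** 2)
    kernel /= kernel.sum()
    scatter_coarse = np.convolve(coarse, kernel, mode="same")
    # 3) Interpolate the coarse scatter estimate back to the fine grid
    fine_x = np.arange(n) / factor
    return np.interp(fine_x, x, scatter_coarse)
```

Because the expensive convolution runs on a grid reduced by `factor`, the per-iteration cost of the scatter model drops accordingly, which is the source of the speed-up the abstract reports.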

Journal ArticleDOI
TL;DR: Dosimetric verification is a critical step in the quality assurance (QA) of IMRT and Hybrid Verification III is suggested as a preliminary quality standard for IMRT.
Abstract: Purpose: To verify that optimized dose distributions provided by an intensity-modulated radiation therapy (IMRT) system are delivered accurately to human patients. Methods and Materials: Anthropomorphic phantoms are used to measure IMRT doses. Four types of verification are developed: I) system commissioning with beams optimized to irradiate simulated targets in phantoms, II) plans with patient-optimized beams directed to phantoms simulating the patient, III) patient–phantom hybrid plans with patient-optimized beams calculated in phantom without further optimization, and IV) in vivo measurements. Phantoms containing dosimeters are irradiated with patient-optimized beams. Films are scanned and the data are analyzed with software. The percent difference between verified and planned maximum target doses is defined as the "dose discrepancy" (Δvp). The frequency distribution of type II Δvp from 204 verification films of 92 IMRT patients is fit to a Gaussian. Measurements made in vivo yield discrepancies specified as Δivp, also fit to a Gaussian. Results and Discussion: The verification methods revealed three systematic errors in plans, which were corrected prior to treatment. Values of |Δvp| > 5% arise from differences between phantom and patient geometry, and from simulation, calculation, and other errors. Values of |Δvp| for verification III are less than half of the values of |Δvp| for verification II. A Gaussian fit of Δivp from verification IV shows more discrepancy than the fit of Δvp, attributed to dose gradients in the detectors and exacerbated by immobilization uncertainty. Conclusions: Dosimetric verification is a critical step in the quality assurance (QA) of IMRT. Hybrid verification III is suggested as a preliminary quality standard for IMRT.
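The two quantities at the heart of this analysis, the dose discrepancy Δvp and the Gaussian fit to its frequency distribution, are simple to express. A minimal sketch, with function names of our own choosing and a moment-matching fit rather than whatever fitting routine the authors used:

```python
import numpy as np

def dose_discrepancy(measured_max, planned_max):
    """Percent difference between verified (measured) and planned
    maximum target dose: the abstract's dose discrepancy Delta_vp."""
    return 100.0 * (measured_max - planned_max) / planned_max

def fit_gaussian(discrepancies):
    """Fit a Gaussian to a set of discrepancies by moment matching:
    return the sample mean and standard deviation."""
    d = np.asarray(discrepancies, dtype=float)
    return d.mean(), d.std(ddof=1)
```

A measured maximum of 102 cGy against a planned 100 cGy gives Δvp = 2%; collecting Δvp over all verification films and fitting the mean and spread characterizes the delivery accuracy of the whole system.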

Journal ArticleDOI
TL;DR: Examining the conditions that limit the effectiveness of 2-D local companding through a series of experiments using phantoms with tissue-like acoustic and elasticity properties found that strain noise remained relatively unchanged as the applied compression increased to 5% of the phantom height, while target contrast increased in proportion to the compression.
Abstract: Companding may be used as a technique for generating low-noise strain images. It involves warping radio-frequency echo fields in two dimensions and at several spatial scales to minimize decorrelation errors in correlation-based displacement estimates. For the appropriate experimental conditions, companding increases the sensitivity and dynamic range of strain images without degrading contrast or spatial resolution significantly. In this paper, we examine the conditions that limit the effectiveness of 2-D local companding through a series of experiments using phantoms with tissue-like acoustic and elasticity properties. We found that strain noise remained relatively unchanged as the applied compression increased to 5% of the phantom height, while target contrast increased in proportion to the compression. Controlling the image noise at high compressions improves target visibility over the broad range of strains induced in elastically heterogeneous media, such as biological tissues. Compressions greater than 5% introduce large strains and complex motions that reduce the effectiveness of companding. Control of boundary conditions and ultrasonic data sampling rates is critical for a successful implementation of our algorithms.
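The correlation-based displacement estimation that companding is designed to protect can be sketched in one dimension. This baseline (without any companding or subsample interpolation) is our own illustration, not the authors' algorithm:

```python
import numpy as np

def displacement_estimate(pre, post, win=64, step=32, max_lag=8):
    """Correlation-based displacement estimates between pre- and
    post-compression RF echo lines (baseline sketch, no companding).

    Returns window-centre sample indices and the integer-sample
    displacement that maximises the normalised cross-correlation
    of each window.
    """
    centres, shifts = [], []
    for start in range(0, pre.size - win, step):
        ref = pre[start:start + win]
        best_shift, best_cc = 0, -np.inf
        for lag in range(-max_lag, max_lag + 1):
            lo = start + lag
            if lo < 0 or lo + win > post.size:
                continue
            seg = post[lo:lo + win]
            # Normalised cross-correlation at this lag
            cc = np.dot(ref - ref.mean(), seg - seg.mean())
            cc /= (np.std(ref) * np.std(seg) * win + 1e-12)
            if cc > best_cc:
                best_cc, best_shift = cc, lag
        centres.append(start + win // 2)
        shifts.append(best_shift)
    return np.array(centres), np.array(shifts)

# Strain is the axial gradient of the displacement field, e.g.
# strain = np.gradient(shifts, centres)
```

Large compressions decorrelate `pre` and `post` within each window, which is exactly the error source that warping the echo field (companding) before this step is meant to reduce.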

Journal ArticleDOI
Ping He1
TL;DR: A method is proposed to simulate the propagation of a broadband ultrasound pulse in a lossy medium whose attenuation exhibits a power law frequency dependence using a bank of Gaussian filters and a time causal model.
Abstract: A method is proposed to simulate the propagation of a broadband ultrasound pulse in a lossy medium whose attenuation exhibits a power law frequency dependence. Using a bank of Gaussian filters, the broadband pulse is first decomposed into narrowband components. The effects of the attenuation and dispersion are then applied to each component based on the superposition principle. When the bandwidth of each component is narrow enough, these effects can be evaluated at the center frequency of the component, resulting in a magnitude reduction, a constant phase angle lag, and a relative time delay. The accuracy of the proposed method is tested by comparing the model-produced pulses with the experimentally measured pulses using two different phantoms. The first phantom has an attenuation function which exhibits a nearly linear frequency dependence. The second phantom has an attenuation function which exhibits a nearly quadratic frequency dependence. In deriving the dispersion from the measured attenuation, a nearly local model and a time causal model are used. For linear attenuation, the two models converge and both predict accurately the waveform of the transmitted pulse. For nonlinear attenuation, the time causal model is found more accurate than the nearly local model in predicting the waveform of the transmitted pulse.
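The filter-bank decomposition can be sketched in the frequency domain. This simplified version applies only the magnitude reduction evaluated at each component's centre frequency; the paper's phase lag and time delay (dispersion) terms are omitted, and all parameter names are illustrative:

```python
import numpy as np

def propagate(pulse, fs, distance, alpha0, power, centers, sigma):
    """Propagate a broadband pulse through a lossy medium whose
    attenuation follows a power law alpha(f) = alpha0 * f**power.

    The pulse is decomposed with a bank of Gaussian filters; each
    narrowband component is attenuated by the loss evaluated at its
    centre frequency, and the components are summed (superposition).
    Dispersion is deliberately omitted in this sketch.
    """
    n = pulse.size
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)   # MHz if fs is in MHz
    spectrum = np.fft.rfft(pulse)
    out = np.zeros(n)
    for fc in centers:
        # Gaussian band-pass filter centred on fc
        g = np.exp(-0.5 * ((freqs - fc) / sigma) ** 2)
        # Power-law attenuation evaluated at the centre frequency
        loss = np.exp(-alpha0 * fc ** power * distance)
        out += np.fft.irfft(spectrum * g * loss, n)
    return out
```

Note that the filter centres and widths must be chosen so the bank tiles the pulse bandwidth; with narrow enough bands, evaluating the loss at each centre frequency is the approximation the abstract describes.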

Journal ArticleDOI
TL;DR: The conclusion is that this instrument is a useful tool for quick and reliable quality control of proton beams and other dynamic treatment modalities because of the long integration-time capabilities of the system.
Abstract: A quality control system especially designed for dosimetry in scanning proton beams has been designed and tested. The system consists of a scintillating screen (Gd2O2S:Tb), mounted at the beam-exit side of a phantom, and observed by a low-noise CCD camera with a long integration time. The purpose of the instrument is to make a fast and accurate two-dimensional image of the dose distribution at the screen position in the phantom. The linearity of the signal with the dose, the noise in the signal, the influence of the ionization density on the signal, and the influence of the field size on the signal have been investigated. The spatial resolution is 1.3 mm (1 s.d.), which is sufficiently smaller than typical penumbras in dose distributions. The measured yield depends linearly on the dose and agrees within 5% with the calculations. In the images a signal-to-noise ratio (signal/1 s.d.) of 10² has been found, which is of the same order of magnitude as expected from the calculations. At locations in the dose distribution possessing a strong contribution of high ionization densities (i.e., in the Bragg peak), we found some quenching of the light output, which can be described well by existing models if the beam characteristics are known. For clinically used beam characteristics such as a spread-out Bragg peak, there is at most 8% deviation from the NACP ionization chamber measurements. The conclusion is that this instrument is a useful tool for quick and reliable quality control of proton beams. The long integration-time capabilities of the system make it worthwhile to investigate its applicability in scanning proton beams and other dynamic treatment modalities. © 1998 American Association of Physicists in Medicine. [S0094-2405(98)02104-X].

Journal ArticleDOI
TL;DR: A method for the direct estimation of the longitudinal speed of sound in a medium through analysis of pulse-echo data received across a single transducer array following a single transmission, and is analogous to methods used in exploration seismology.
Abstract: A method for the direct estimation of the longitudinal speed of sound in a medium is presented. This estimator derives the speed of sound through analysis of pulse-echo data received across a single transducer array following a single transmission, and is analogous to methods used in exploration seismology. A potential application of this estimator is the dynamic correction of beamforming errors in medical imaging that result from discrepancy between the assumed and actual biological tissue velocities. The theoretical basis of this estimator is described and its function demonstrated in phantom experiments. Using a wire target, sound-speed estimates in water, methanol, ethanol, and n-butanol are compared to published values. Sound-speed estimates in two speckle-generating phantoms are also compared to expected values. The mean relative errors of these estimates are all less than 0.4%, and under the most ideal experimental conditions are less than 0.1%. The relative errors of estimates based on independent regions of speckle-generating phantoms have a standard deviation on the order of 0.5%. Simulation results showing the relative significance of potential sources of estimate error are presented. The impact of sound-speed errors on imaging and the potential of this estimator for phase aberration correction and tissue characterization are also discussed.
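The seismology-style estimator can be illustrated with the arrival-time hyperbola for a point target. The geometry below (centre element transmits, all elements receive) and the brute-force grid search are our own simplifications of the approach the abstract describes:

```python
import numpy as np

def estimate_sound_speed(x, t, c_grid, z_grid):
    """Estimate the longitudinal sound speed from single-transmit
    pulse-echo arrival times across an array.

    Assumed geometry: the centre element (x = 0) transmits, and the echo
    from a point target at depth z reaches the element at lateral
    position x after t(x) = (z + sqrt(z**2 + x**2)) / c.  The (c, z)
    pair minimising the squared arrival-time misfit over the supplied
    grids is returned (a least-squares fit would refine this).
    """
    best = (None, None, np.inf)
    for c in c_grid:
        for z in z_grid:
            model = (z + np.sqrt(z ** 2 + x ** 2)) / c
            err = np.sum((t - model) ** 2)
            if err < best[2]:
                best = (c, z, err)
    return best[0], best[1]
```

The curvature of the arrival-time hyperbola across the aperture encodes the sound speed, which is why a single transmission suffices; a wrong assumed speed leaves a residual misfit that also quantifies the beamforming error the abstract proposes to correct.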

Journal ArticleDOI
TL;DR: Phantom accuracy does not ensure accuracy in vivo: phantoms may have a more homogeneous B1 field and a longer T2 than biological samples do, so any T1 method that relies on accurate flip angles might have a significant systematic error in vivo.

Journal ArticleDOI
TL;DR: Radial dose functions, dose rate constant and anisotropy functions, utilized in the AAPM Task Group 43 dose estimation formalism, have been calculated and found to depend significantly on phantom dimensions at radial distances near phantom edges, indicating that body dimensions should be taken into account in treatment planning when the absorbed dose is calculated near body edges.
Abstract: An analytical Monte Carlo simulation code has been used to perform dosimetry calculations around a 192Ir high-dose-rate brachytherapy source utilized in the widely used microSelectron afterloading system. Radial dose functions, the dose rate constant, and anisotropy functions, utilized in the AAPM Task Group 43 dose estimation formalism, have been calculated. In addition, measurements of anisotropy functions using LiF TLD-100 rods have been performed in a polystyrene phantom to support our Monte Carlo calculations. The energy dependence of the LiF TLD response was investigated over the whole range of measurement distances and angles. TLD measurements and Monte Carlo calculations agree with each other and with published data. The influence of phantom dimensions on the calculations was also investigated. Radial dose functions were found to depend significantly on phantom dimensions at radial distances near phantom edges. Deviations of up to 25% are observed at these distances due to the lack of full scattering conditions, indicating that body dimensions should be taken into account in treatment planning when the absorbed dose is calculated near body edges. On the other hand, anisotropy functions do not demonstrate a strong dependence on phantom dimensions. However, these functions depend on radial distance at angles close to the longitudinal axis of the source, where deviations of up to 20% are observed.
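The quantities computed here plug into the TG-43 formalism; in its point-source approximation the dose rate factors into an inverse-square geometry term, the radial dose function, and an anisotropy factor. A minimal sketch, where the callables stand in for interpolation of the tabulated functions the paper reports:

```python
def tg43_point_dose_rate(r, S_k, Lambda, g, phi_an, r0=1.0):
    """AAPM TG-43 point-source dose-rate estimate (cGy/h).

    r        : radial distance from the source (cm)
    S_k      : air-kerma strength (U)
    Lambda   : dose-rate constant (cGy/h/U)
    g, phi_an: callables returning the radial dose function and the
               anisotropy factor at r (in practice, interpolated from
               tables such as those calculated in this paper)
    r0       : reference distance, conventionally 1 cm
    """
    geometry = (r0 / r) ** 2          # inverse-square point-source term
    return S_k * Lambda * geometry * g(r) * phi_an(r)
```

With unit air-kerma strength, a dose-rate constant of 1.108 cGy/h/U (a typical published value for 192Ir, used here only as an assumption), and g = phi_an = 1, the dose rate at 2 cm is 1.108/4 ≈ 0.277 cGy/h; the paper's phantom-size results indicate g(r) itself depends on how close r is to the phantom edge.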

Journal ArticleDOI
TL;DR: In this paper, a series of diffusion‐weighted fast spin‐echo sequences with a new motion correction scheme based on the navigator echo technique are introduced, which provides information on both inter‐echo and intra‐echo train phase shifts.
Abstract: In this paper, a series of diffusion-weighted fast spin-echo (FSE) sequences with a new motion correction scheme are introduced. This correction scheme is based on the navigator echo technique. Unlike conventional spin-echo imaging, motion correction for FSE is complicated by the phase oscillation between odd-numbered and even-numbered echoes and the complex phase relationship between spin echo and stimulated echo components. In our approach, incoherent phase shifting due to motion is monitored by consecutive acquisition of two navigator echoes, which provide information on both inter-echo and intra-echo train phase shifts. Applications to both phantom and in vivo studies are presented.
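The basic navigator correction (removing the bulk phase a motion-free reference predicts) can be sketched as below. This handles only the zeroth-order phase term; the inter-echo/intra-echo-train bookkeeping the paper describes is omitted, and the function is an illustrative assumption:

```python
import numpy as np

def navigator_correct(echo, nav, nav_ref):
    """Remove motion-induced bulk phase from one FSE echo.

    echo    : complex k-space line to correct
    nav     : navigator echo acquired with this shot
    nav_ref : reference navigator from a motion-free acquisition

    The zeroth-order phase shift between the two navigators is
    estimated and subtracted from the imaging echo; higher-order
    (spatially varying) phase terms are ignored in this sketch.
    """
    # Bulk phase of nav relative to nav_ref, from their inner product
    dphi = np.angle(np.vdot(nav_ref, nav))
    return echo * np.exp(-1j * dphi)
```

Acquiring two navigators per shot, as in the paper, additionally lets the correction distinguish phase accrued between echoes from phase accrued within the echo train.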