Author

Jeffrey Wayne Eberhard

Other affiliations: Metz
Bio: Jeffrey Wayne Eberhard is an academic researcher from General Electric. The author has contributed to research in topics: Tomosynthesis & Detector. The author has an h-index of 31, co-authored 114 publications receiving 2968 citations. Previous affiliations of Jeffrey Wayne Eberhard include Metz.


Papers
Journal ArticleDOI
TL;DR: A method is described for using a limited number of low-dose radiographs to reconstruct the three-dimensional distribution of x-ray attenuation in the breast, using x-ray cone-beam imaging, an electronic digital detector, and constrained nonlinear iterative computational techniques.
Abstract: A method is described for using a limited number (typically 10–50) of low-dose radiographs to reconstruct the three-dimensional (3D) distribution of x-ray attenuation in the breast. The method uses x-ray cone-beam imaging, an electronic digital detector, and constrained nonlinear iterative computational techniques. Images are reconstructed with high resolution in two dimensions and lower resolution in the third dimension. The 3D distribution of attenuation that is projected into one image in conventional mammography can be separated into many layers (typically 30–80 1-mm-thick layers, depending on breast thickness), increasing the conspicuity of features that are often obscured by overlapping structure in a single-projection view. Schemes that record breast images at nonuniform angular increments, nonuniform image exposure, and nonuniform detector resolution are investigated in order to reduce the total x-ray exposure necessary to obtain diagnostically useful 3D reconstructions, and to improve the quality of the reconstructed images for a given exposure. The total patient radiation dose can be comparable to that used for a standard two-view mammogram. The method is illustrated with images from mastectomy specimens, a phantom, and human volunteers. The results show how image quality is affected by various data-collection protocols.
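The constrained nonlinear iterative reconstruction the abstract describes can be illustrated with a toy sketch (not the authors' exact algorithm): solve the projection system `A @ x = b` for nonnegative attenuation values `x` by row-by-row ART-style updates, clamping to the physical constraint after each step. The matrix, values, and iteration count here are illustrative assumptions.

```python
import numpy as np

# Toy constrained iterative reconstruction (illustrative, not the paper's
# algorithm): each row of A is one ray-sum; x holds attenuation values.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([0.2, 0.7, 0.1])           # "true" attenuation values
b = A @ x_true                               # simulated projection data

x = np.zeros(3)
for _ in range(200):                         # ART-style sweeps
    for i in range(A.shape[0]):
        a = A[i]
        x += a * (b[i] - a @ x) / (a @ a)    # project onto row constraint
        x = np.clip(x, 0.0, None)            # enforce nonnegativity

print(np.round(x, 3))                        # converges toward x_true
```

With many views the system is overdetermined and well posed; the paper's limited-angle setting makes it highly incomplete, which is why the constraint (and the nonuniform sampling schemes described above) matter.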

392 citations

Patent
31 Mar 2004
TL;DR: In this paper, projection images of an object of interest are acquired from different locations, either by moving an X-ray source along an arbitrary imaging trajectory between emissions or by individually activating X-ray sources located at different positions relative to the object of interest, and are reconstructed into a 3D dataset representative of the object, from which one or more volumes may be selected for visualization and display.
Abstract: Techniques are provided for generating three-dimensional images, such as may be used in mammography. In accordance with these techniques, projection images of an object of interest are acquired from different locations, such as by moving an X-ray source along an arbitrary imaging trajectory between emissions or by individually activating different X-ray sources located at different locations relative to the object of interest. The projection images may be reconstructed to generate a three-dimensional dataset representative of the object from which one or more volumes may be selected for visualization and display. Additional processing steps may occur throughout the image chain, such as for pre-processing the projection images or post-processing the three-dimensional dataset.

132 citations

Journal ArticleDOI
TL;DR: The algorithm can be shown to give rigorously accurate values for instantaneous frequency and outperform the Fourier transform approach in poor signal-to-noise environments.
Abstract: A new technique for determining the Doppler frequency shift in a phase-coherent pulsed Doppler system is presented. In the new approach, the Doppler frequency shift is given directly in the time domain in terms of the measured I and Q components of the Doppler signal. The algorithm is based on an expression for the instantaneous rate of change of phase which separates rapidly varying from slowly varying terms, permitting noise smoothing in each term separately. Since the technique relies solely on signal processing in the time domain, it is significantly simpler to implement than the classic Fourier transform approach. In addition, the algorithm can be shown to give rigorously accurate values for instantaneous frequency and to outperform the Fourier transform approach in poor signal-to-noise environments. Experimental results are presented which confirm the superiority of the new time-domain technique.
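The time-domain idea can be sketched with the standard instantaneous-frequency identity for an I/Q pair (a minimal illustration, not the paper's exact estimator; sampling rate and Doppler shift are assumed values):

```python
import numpy as np

# Instantaneous frequency from I/Q in the time domain (no FFT):
#   f(t) = (I*dQ/dt - Q*dI/dt) / (2*pi*(I**2 + Q**2))
# i.e. the time derivative of the phase atan2(Q, I).
fs = 10_000.0                     # sampling rate, Hz (assumed)
f_doppler = 250.0                 # true Doppler shift, Hz (assumed)
t = np.arange(2048) / fs
i_sig = np.cos(2 * np.pi * f_doppler * t)
q_sig = np.sin(2 * np.pi * f_doppler * t)

di = np.gradient(i_sig, 1 / fs)   # numerical derivatives
dq = np.gradient(q_sig, 1 / fs)
f_inst = (i_sig * dq - q_sig * di) / (2 * np.pi * (i_sig**2 + q_sig**2))

print(f_inst[10:-10].mean())      # close to f_doppler
```

Because the estimate is purely local in time, smoothing can be applied to the numerator and denominator terms separately, which is the separation of rapidly and slowly varying terms the abstract refers to.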

111 citations

Journal ArticleDOI
TL;DR: This work imaged 0%, 50%, and 100% glandular-equivalent phantoms of varying thicknesses for a number of clinically relevant x-ray techniques on a digital mammography system and extracted mean signal and noise levels and computed calibration curves that can be used for quantitative tissue composition estimation.
Abstract: The healthy breast is almost entirely composed of a mixture of fatty, epithelial, and stromal tissues which can be grouped into two distinctly attenuating tissue types: fatty and glandular. Further, the amount of glandular tissue is linked to breast cancer risk, so an objective quantitative analysis of glandular tissue can aid in risk estimation. Highnam and Brady have measured glandular tissue composition objectively. However, they argue that their work should only be used for “relative” tissue measurements unless a careful calibration has been performed. In this work, we perform such a “careful calibration” on a digital mammography system and use it to estimate breast tissue composition of patient breasts. We imaged 0%, 50%, and 100% glandular-equivalent phantoms of varying thicknesses for a number of clinically relevant x-ray techniques on a digital mammography system. From these images, we extracted mean signal and noise levels and computed calibration curves that can be used for quantitative tissue composition estimation. In this way, we calculate the percent glandular composition of a patient breast on a pixelwise basis. This tissue composition estimation method was applied to 23 digital mammograms. We estimated the quantitative impact of different error sources on the estimates of tissue composition. These error sources include compressed breast height estimation error, residual scattered radiation, quantum noise, and beam hardening. Errors in the compressed breast height estimate contribute the most error in tissue composition, on the order of ±7% for a 4 cm compressed breast height. The spatially varying scattered radiation contributes quantitatively less error overall, but may be significant in regions near the skinline. For a 4 cm compressed breast height, residual scatter signal error is calculated to be attenuated approximately sixfold in the composition estimate. The error in composition due to quantum noise, which is the limiting noise source in the system, is shown to be less than 1% glandular for most breasts.

110 citations

Book ChapterDOI
18 Jun 2006
TL;DR: This work presents a motivation for the generalized filtered backprojection (GFBP) approach to tomosynthesis reconstruction, which results in reconstructions with an image quality that is similar or superior to reconstructions that are mathematically optimal.
Abstract: Tomosynthesis reconstruction that produces high-quality images is a difficult problem, due mainly to the highly incomplete data. In this work we present a motivation for the generalized filtered backprojection (GFBP) approach to tomosynthesis reconstruction. This approach is fast (being non-iterative) and flexible, and yields reconstructions with image quality similar or superior to that of mathematically optimal reconstructions. Results based on synthetic data and patient data are presented.

105 citations


Cited by
Journal ArticleDOI
TL;DR: The various reconstruction algorithms used to produce tomosynthesis images, as well as approaches used to minimize the residual blur from out-of-plane structures are described.
Abstract: Digital x-ray tomosynthesis is a technique for producing slice images using conventional x-ray systems. It is a refinement of conventional geometric tomography, which has been known since the 1930s. In conventional geometric tomography, the x-ray tube and image receptor move in synchrony on opposite sides of the patient to produce a plane of structures in sharp focus at the plane containing the fulcrum of the motion; all other structures above and below the fulcrum plane are blurred and thus less visible in the resulting image. Tomosynthesis improves upon conventional geometric tomography in that it allows an arbitrary number of in-focus planes to be generated retrospectively from a sequence of projection radiographs that are acquired during a single motion of the x-ray tube. By shifting and adding these projection radiographs, specific planes may be reconstructed. This topical review describes the various reconstruction algorithms used to produce tomosynthesis images, as well as approaches used to minimize the residual blur from out-of-plane structures. Historical background and mathematical details are given for the various approaches described. Approaches for optimizing the tomosynthesis image are given. Applications of tomosynthesis to various clinical tasks, including angiography, chest imaging, mammography, dental imaging and orthopaedic imaging, are also described.
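The shift-and-add reconstruction the review describes can be shown in a minimal sketch (assumed parallel-shift geometry with integer-pixel shifts, which is a simplification of real tube motion): projections are shifted so that a chosen plane's structures align, then averaged, so in-plane detail adds coherently while out-of-plane structures smear.

```python
import numpy as np

# Minimal shift-and-add tomosynthesis sketch (assumed geometry):
# shifting each projection by a per-plane offset brings one depth plane
# into register; averaging then keeps that plane sharp and blurs the rest.
def shift_and_add(projections, shifts_per_plane):
    """Average projections after shifting each by its per-plane offset."""
    planes = []
    for shifts in shifts_per_plane:            # one shift list per plane
        aligned = [np.roll(p, s, axis=1) for p, s in zip(projections, shifts)]
        planes.append(np.mean(aligned, axis=0))
    return planes

# Two toy projections of a point feature that moves 2 pixels between views.
p0 = np.zeros((1, 7)); p0[0, 2] = 1.0
p1 = np.zeros((1, 7)); p1[0, 4] = 1.0
# Shifts that align the point (its plane) vs. shifts that do not.
focus, defocus = shift_and_add([p0, p1], [[0, -2], [0, 0]])
print(focus[0, 2], defocus.max())   # -> 1.0 0.5
```

The residual 0.5-amplitude ghost in the defocused plane is exactly the out-of-plane blur that the filtered and iterative algorithms surveyed in this review are designed to suppress.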

962 citations

Patent
30 Jul 1993
TL;DR: In this paper, a low-level model-free dynamic and static hand gesture recognition system utilizes either a 1-D histogram of frequency of occurrence vs. spatial orientation angle for static gestures or a 2-D space-time orientation histogram for dynamic gestures.
Abstract: A low-level model-free dynamic and static hand gesture recognition system utilizes either a 1-D histogram of frequency of occurrence vs. spatial orientation angle for static gestures or a 2-D histogram of frequency of occurrence vs. space-time orientation for dynamic gestures. In each case the histogram constitutes the signature of the gesture which is used for gesture recognition. For moving gesture detection, a 3-D space-time orientation map is merged or converted into the 2-D space-time orientation histogram which graphs frequency of occurrence vs. both orientation and movement. It is against this representation or template that an incoming moving gesture is matched.
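The 1-D static-gesture signature can be sketched as a gradient-orientation histogram (bin count and weighting are assumed parameters, not taken from the patent):

```python
import numpy as np

# Sketch of a 1-D orientation-histogram signature for a static gesture:
# frequency of occurrence vs. spatial orientation angle, with gradient
# magnitude as the weight (bin count is an assumed parameter).
def orientation_histogram(image, bins=36):
    gy, gx = np.gradient(image.astype(float))
    angles = np.arctan2(gy, gx)                      # orientation in [-pi, pi]
    mag = np.hypot(gx, gy)                           # gradient magnitude
    hist, _ = np.histogram(angles, bins=bins,
                           range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)               # normalized signature

rng = np.random.default_rng(0)
img = rng.random((32, 32))                           # stand-in hand image
h = orientation_histogram(img)
print(h.shape)                                       # -> (36,)
```

Recognition then reduces to comparing an incoming histogram against stored templates with any histogram distance; the dynamic case in the patent extends the same idea to a 2-D space-time orientation histogram.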

605 citations

Journal ArticleDOI
O. Bonnefous, P. Pesqué
TL;DR: A new formulation is presented which describes the pulse-Doppler effect on the successive echoes from a cloud of moving targets as a progressive translation in time due to the displacement of the scatterers between two excitations.

406 citations

Journal ArticleDOI
TL;DR: The extensive research performed during the development of breast tomosynthesis is reviewed, with a focus on the research addressing the medical physics aspects of this imaging modality.
Abstract: Mammography is a very well-established imaging modality for the early detection and diagnosis of breast cancer. However, since the introduction of digital imaging to the realm of radiology, more advanced, and especially tomographic, imaging methods have been made possible. One of these methods, breast tomosynthesis, has finally been introduced to the clinic for routine everyday use, with the potential to eventually replace mammography for breast cancer screening. In this two-part paper, the extensive research performed during the development of breast tomosynthesis is reviewed, with a focus on the research addressing the medical physics aspects of this imaging modality. This first paper reviews the research on issues relevant to the image acquisition process, including system design, optimization of geometry and technique, x-ray scatter, and radiation dose. The companion paper reviews all other aspects of breast tomosynthesis imaging, including the reconstruction process.

363 citations

Journal ArticleDOI
TL;DR: It was shown in both phantom imaging and patient imaging that the BP algorithm provided the best SDNR for low-contrast masses but the conspicuity of the feature details was limited by interplane artifacts; the FBP algorithms provided the highest edge sharpness for microcalcifications but the quality of masses was poor.
Abstract: Three algorithms for breast tomosynthesis reconstruction were compared in this paper: (1) a back-projection (BP) algorithm (equivalent to the shift-and-add algorithm), (2) a Feldkamp filtered back-projection (FBP) algorithm, and (3) an iterative Maximum Likelihood (ML) algorithm. Our breast tomosynthesis system acquires 11 low-dose projections over a 50 degree angular range using an a-Si (CsI:Tl) flat-panel detector. The detector was stationary during the acquisition. Quality metrics such as signal difference to noise ratio (SDNR) and artifact spread function (ASF) were used for quantitative evaluation of tomosynthesis reconstructions. The results of the quantitative evaluation were in good agreement with the results of the qualitative assessment. In patient imaging, the superimposed breast tissues observed in two-dimensional (2D) mammograms were separated in tomosynthesis reconstructions by all three algorithms. It was shown in both phantom imaging and patient imaging that the BP algorithm provided the best SDNR for low-contrast masses but the conspicuity of the feature details was limited by interplane artifacts; the FBP algorithm provided the highest edge sharpness for microcalcifications but the quality of masses was poor; the information on both the masses and the microcalcifications was well restored with balanced quality by the ML algorithm, superior to the results from the other two algorithms.
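The SDNR metric used to score the reconstructions has a standard definition that can be sketched directly (the ROI sizes and pixel statistics below are assumed for illustration, not taken from the paper):

```python
import numpy as np

# Signal-difference-to-noise ratio, the standard form:
#   SDNR = |mean(feature ROI) - mean(background ROI)| / std(background ROI)
def sdnr(feature_roi, background_roi):
    return abs(feature_roi.mean() - background_roi.mean()) / background_roi.std()

# Illustrative ROIs: a low-contrast "mass" 10 units above a noisy background.
rng = np.random.default_rng(1)
background = rng.normal(100.0, 5.0, size=(50, 50))
feature = rng.normal(110.0, 5.0, size=(10, 10))
print(sdnr(feature, background))   # close to 10 / 5 = 2
```

A higher SDNR means the feature stands out more against background noise, which is why BP scored well for low-contrast masses even while its interplane artifacts limited detail conspicuity.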

355 citations