Showing papers on "Edge enhancement published in 1992"


Journal ArticleDOI
TL;DR: In this paper, the authors describe a model of nonlinear image filtering for noise reduction and edge enhancement using anisotropic diffusion, which roughly corresponds to a nonlinear diffusion process with backward heat flow across the strong edges.
Abstract: The authors describe a model of nonlinear image filtering for noise reduction and edge enhancement using anisotropic diffusion. The method is designed to enhance not only edges, but corners and T junctions as well. The method roughly corresponds to a nonlinear diffusion process with backward heat flow across the strong edges. Such a process is ill posed, making the results depend strongly on how the algorithm differs from the diffusion process. Two ways of modifying the equations using simulations on a variable grid are studied.
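
The corner- and junction-preserving details and the variable-grid stabilization studied in the paper are not reproduced here, but the core idea of diffusion that is throttled across strong edges can be illustrated with a minimal Perona-Malik-style sketch (the function name, the conductance constant `kappa`, and the periodic border handling are illustrative assumptions, not the authors' scheme):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=20.0, step=0.2):
    """Minimal Perona-Malik-style diffusion: smooths flat regions while
    diffusing little across strong intensity edges. Illustrative sketch only,
    not the variable-grid scheme described in the abstract."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Differences toward the four neighbours (periodic borders via np.roll,
        # kept for brevity).
        dN = np.roll(u, 1, axis=0) - u
        dS = np.roll(u, -1, axis=0) - u
        dW = np.roll(u, 1, axis=1) - u
        dE = np.roll(u, -1, axis=1) - u
        # Conductance falls off with gradient magnitude, so strong edges are
        # preserved while noise in flat areas is averaged out.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += step * (g(dN) * dN + g(dS) * dS + g(dW) * dW + g(dE) * dE)
    return u
```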

341 citations


Patent
12 Jun 1992
TL;DR: Improved spatial, temporal, and spatio-temporal noise reduction apparatus, as discussed by the authors, is proposed for noise reduction in conjunction with edge enhancement, spatial interpolation, magnification adjustment by spatial interpolation, and dynamic range compression.
Abstract: Improved spatial, temporal, and spatio-temporal noise reduction apparatus (10) and also apparatus for noise reduction in conjunction with one or several of the following functions: edge enhancement, spatial interpolation, magnification adjustment by spatial interpolation, and dynamic range compression (DRC). An embodiment of apparatus (10) comprises an image preprocessing unit (12), an estimation gain parameter computation unit (14) and a two-directional processor (16).

216 citations


Journal ArticleDOI
TL;DR: This work demonstrates the existence of well‐defined minima of phase‐encode ghost noise for selected k‐space trajectories, examines the extent of blurring and edge enhancement artifacts, and shows how proper choice of FAISE sequence parameters can lead to proton density brain images which are practically indistinguishable from conventional spin‐echo proton density images.
Abstract: The fast acquisition interleaved spin-echo (FAISE) method is a partial RF echo-planar technique which utilizes a specific phase-encode reordering algorithm to manipulate image contrast (Melki et al., J. Magn. Reson. Imaging 1:319, 1991). The technique can generate "spin-echo" like images up to 16 times faster than conventional spin-echo methods. However, the presence of T2 decay throughout the variable k-space trajectories used to manipulate T2 contrast ensures the presence of image artifacts, especially along the phase-encode direction. In this work, we experimentally and theoretically examine the type and extent of artifacts associated with the FAISE technique. We demonstrate the existence of well-defined minima of phase-encode ghost noise for selected k-space trajectories, examine the extent of blurring and edge enhancement artifacts, demonstrate the influence of matrix size and number of echoes per train on phase-encode artifact, and show how proper choice of FAISE sequence parameters can lead to proton density brain images which are practically indistinguishable from conventional spin-echo proton density images. A comparison of contrast between FAISE and standard spin-echo methods is presented in a companion article referred to as II.
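
The specific FAISE phase-encode reordering is not reproduced here, but the mechanism behind the blurring and edge enhancement artifacts, namely T2 decay weighting different k-space lines by different amounts depending on when they are acquired, can be illustrated with a rough 1-D sketch (the object, T2 value, echo spacing, echoes per train, and the centre-out ordering below are arbitrary assumptions):

```python
import numpy as np

# 1-D illustration: each phase-encode line is acquired at a different
# effective echo time, so k-space is weighted by exp(-TE/T2). Centre-out
# ordering attenuates high spatial frequencies (blurring); reversing the
# ordering relatively boosts them (edge-enhancement-like artifact).
n, t2, esp = 256, 80.0, 10.0              # samples, T2 (ms), echo spacing (ms)
profile = np.zeros(n); profile[96:160] = 1.0   # simple object with two edges

k = np.fft.fftshift(np.fft.fft(profile))
order = np.argsort(np.abs(np.arange(n) - n // 2))    # centre-out acquisition
echo = np.arange(n) // (n // 16) + 1                  # 16 echoes per train
te = np.empty(n); te[order] = esp * echo
blurred = np.fft.ifft(np.fft.ifftshift(k * np.exp(-te / t2))).real

te_rev = np.empty(n); te_rev[order[::-1]] = esp * echo
enhanced = np.fft.ifft(np.fft.ifftshift(k * np.exp(-te_rev / t2))).real
```

Plotting `blurred` and `enhanced` against `profile` shows the smeared edge response in the first case and overshoot at the edges in the second.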

65 citations


Journal ArticleDOI
TL;DR: In this paper, diffusion boundaries impermeable to water on a millisecond time scale distort the lineshape function of the observed frequency spectra from the transverse magnetization in a manner similar to motional narrowing in MR spectroscopy.

51 citations


Patent
05 Jun 1992
TL;DR: In this article, a magnetic resonance imaging system (MRI) is used to generate an image of a slice or other region of an examined subject and a smoothing filter smooths the generated image to create a smoothed or filtered image representation.
Abstract: A magnetic resonance imaging system (A) generates an image (16) of a slice or other region of an examined subject. A smoothing filter (B) smooths the generated image to create a smoothed or filtered image representation (30). The filtering of the image, unfortunately, tends to smooth or blur the edges. An edge detecting means (C 1 ) views the region around each sampled pixel of the filtered image to determine an amount of deviation in the pixel values. A large deviation indicates an edge; whereas, substantial homogeneity indicates the lack of an edge. Analogously, the direction of the maximum deviation is orthogonal to the direction of the edge. A plurality of soft edge directional filters (54) operate on the filtered image data to create a plurality of soft edge directionally filtered image representations (62). A plurality of hard edge directional filters (56) operate on the filtered image data to create a plurality of hard edge directionally filtered image representations (64). Preferably, the directional filtering is done at regular angular increments, e.g. every ten degrees. An edge enhanced final image representation (72) is created in which filtered pixels not adjacent an edge are assembled directly from the filtered image (30). Pixels which are adjacent an edge with a smaller (larger) rate of deviation are replaced by the corresponding pixel of the soft (hard) edge filtered image representation that was directionally filtered along a direction most nearly parallel to the determined edge direction.
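
The patent's banks of soft and hard directional filters at ten-degree increments are not reproduced here; the sketch below only illustrates the underlying idea of estimating the local deviation (gradient) direction and then averaging along the detected edge rather than across it (the threshold, kernel sizes, and function name are assumptions):

```python
import numpy as np

def edge_preserving_smooth(img, edge_thresh=30.0):
    """Smooth flat regions isotropically; near edges, average only along the
    edge direction (orthogonal to the local gradient). Illustrative sketch;
    the threshold assumes intensities roughly in the 0-255 range."""
    u = img.astype(float)
    gy, gx = np.gradient(u)          # central-difference "deviation" estimate
    mag = np.hypot(gx, gy)
    out = u.copy()
    h, w = u.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if mag[y, x] < edge_thresh:
                out[y, x] = u[y-1:y+2, x-1:x+2].mean()   # flat area: 3x3 mean
            else:
                # Unit vector along the edge (perpendicular to the gradient).
                ex, ey = -gy[y, x] / mag[y, x], gx[y, x] / mag[y, x]
                xp, yp = int(round(x + ex)), int(round(y + ey))
                xm, ym = int(round(x - ex)), int(round(y - ey))
                out[y, x] = (u[ym, xm] + u[y, x] + u[yp, xp]) / 3.0
    return out
```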

45 citations


Patent
06 Apr 1992
TL;DR: In this article, sharpness emphasis is carried out by multiplying a sharpness-emphasis signal of an original image by a look-up-table function that is addressed by the output of an edge-detecting filter, so that emphasis is applied only to the edge portions of the original image.
Abstract: According to this invention, sharpness emphasis is carried out by multiplying a sharpness-emphasis signal of an original image by a look-up-table function that is addressed by the output of an edge-detecting filter, so that only the edge portions of the original image are emphasized. Because the edge-detecting filter restricts the emphasis to the edge portions of the image, the image grain is prevented from being sharpness-emphasized. In addition, although the edge-detecting filter may use a mask of the same size as the unsharp mask, both mask sizes can, if required, be modified freely in order to perform a proper process for each image.
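
A minimal sketch of the described arrangement, an unsharp-mask-style emphasis signal multiplied by a gain read from a look-up-table-like function of an edge detector's output, is given below; the blur kernel, gain curve, and constants are assumptions rather than the patent's values:

```python
import numpy as np

def box_blur(img, size=3):
    """Simple mean filter used both to form the unsharp-mask signal and,
    via a gradient, for edge detection in this sketch."""
    pad = size // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def edge_gated_sharpen(img, gain=1.5, edge_scale=20.0):
    """Sharpness emphasis applied only where an edge detector responds:
    the emphasis signal (image minus its blur) is multiplied by a gain taken
    from a LUT-like function of the edge response, so flat, grainy regions
    receive almost no sharpening. Constants are illustrative assumptions."""
    u = img.astype(float)
    emphasis = u - box_blur(u)                       # unsharp-mask signal
    gy, gx = np.gradient(box_blur(u))
    edge = np.hypot(gx, gy)
    lut_gain = gain * (1.0 - np.exp(-edge / edge_scale))  # ~0 in flat areas
    return u + lut_gain * emphasis
```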

44 citations


Proceedings ArticleDOI
Keith T. Knox1
TL;DR: It is shown that the error image contains a linear component of the input image, which induces edge enhancement in the output error diffusion image.
Abstract: The concept of an error image is defined for the error diffusion algorithm. Ordinarily hidden from view, the error image is a visual representation of the internally generated errors from which the algorithm derives its name. In this paper, it is shown that the error image contains a linear component of the input image, which induces edge enhancement in the output error diffusion image. Examples are shown for three different error weight distributions: a 1-D one-ahead distribution, the standard 4-element distribution, and a 12-element error distribution. The amount of edge enhancement in the corresponding algorithm is shown to vary with the amount of input image information present in the error image.
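
For reference, the sketch below runs the standard 4-element (Floyd-Steinberg) error weight distribution and records the per-pixel quantization error, i.e. the "error image" the paper analyses; the serpentine scanning details and the 1-D and 12-element weight sets are not reproduced, and the threshold at mid-grey is an assumption:

```python
import numpy as np

def error_diffusion(img):
    """Floyd-Steinberg (standard 4-element) error diffusion that also returns
    the per-pixel error image. Grey levels are assumed to lie in [0, 255]."""
    u = img.astype(float).copy()
    h, w = u.shape
    out = np.zeros((h, w))
    err_img = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            old = u[y, x]
            new = 255.0 if old >= 128.0 else 0.0
            out[y, x] = new
            e = old - new                  # the internally generated error
            err_img[y, x] = e
            if x + 1 < w:
                u[y, x + 1] += e * 7 / 16
            if y + 1 < h:
                if x - 1 >= 0:
                    u[y + 1, x - 1] += e * 3 / 16
                u[y + 1, x] += e * 5 / 16
                if x + 1 < w:
                    u[y + 1, x + 1] += e * 1 / 16
    return out, err_img
```

Viewing `err_img` makes the paper's point visible: it is not pure noise but carries a (negative) linear component of the input image, which is what sharpens edges in the halftoned output.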

41 citations


Patent
21 Jan 1992
TL;DR: In this article, a method and apparatus are disclosed for processing image data of dot-matrix/ink-jet printed text to perform Optical Character Recognition (OCR) on such image data.
Abstract: Method and apparatus are disclosed for processing image data of dot-matrix/ink-jet printed text to perform Optical Character Recognition (OCR) of such image data. In the method and apparatus, the image data is viewed for detecting if dot-matrix/ink-jet printed text is present. Any detected dot-matrix/ink-jet produced text is then pre-processed by determining the image characteristic thereof by forming a histogram of pixel density values in the image data. A 2-D spatial averaging operation as a second pre-processing step smooths the dots of the characters into strokes and reduces the dynamic range of the image data. The resultant spatially averaged image data is then contrast stretched in a third pre-processing step to darken dark regions of the image data and lighten light regions of the image data. Edge enhancement is then applied to the contrast stretched image data in a fourth pre-processing step to bring out higher frequency line details. The edge enhanced image data is then binarized and applied to a dot-matrix/ink jet neural network classifier for recognizing characters in the binarized image data from a predetermined set of symbols prior to OCR.
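
The neural-network classifier and the dot-matrix detection stage are beyond a short sketch, but the pre-processing chain described above can be outlined as follows (the 3x3 averaging window, the percentile-based contrast stretch, and the Laplacian-style edge enhancement are assumptions, not the patent's exact operations):

```python
import numpy as np

def preprocess_dot_matrix(img):
    """Sketch of the described chain: 2-D spatial averaging to merge dots into
    strokes, contrast stretching, edge enhancement, then binarization."""
    u = img.astype(float)

    # (1) 2-D spatial averaging (3x3 mean) smooths dots into strokes and
    #     reduces dynamic range.
    p = np.pad(u, 1, mode="edge")
    avg = sum(p[dy:dy + u.shape[0], dx:dx + u.shape[1]]
              for dy in range(3) for dx in range(3)) / 9.0

    # (2) Contrast stretch: darken dark regions, lighten light regions.
    lo, hi = np.percentile(avg, (5, 95))
    stretched = np.clip((avg - lo) / max(hi - lo, 1e-6), 0.0, 1.0) * 255.0

    # (3) Edge enhancement to bring back higher-frequency line detail
    #     (here a simple Laplacian-based sharpening).
    pp = np.pad(stretched, 1, mode="edge")
    lap = (pp[:-2, 1:-1] + pp[2:, 1:-1] + pp[1:-1, :-2] + pp[1:-1, 2:]
           - 4.0 * stretched)
    sharp = stretched - 0.5 * lap

    # (4) Binarize before classification: 1 = light background, 0 = strokes.
    return (sharp > sharp.mean()).astype(np.uint8)
```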

39 citations


Journal ArticleDOI
01 Jan 1992
TL;DR: A unified treatment for optimal edge enhancement filters is presented and a new path metric for sequential search based on the linear model is discussed that forms the heart of an edge linking algorithm that combines edge elements enhanced by an optimal filter.
Abstract: Image segmentation techniques can be classified into edge‐based and region‐based approaches. The edge‐based approach is critically examined in this paper. Contours are detected in two stages; edge enhancement followed by edge linking. A unified treatment for optimal edge enhancement filters is presented and a new path metric for sequential search based on the linear model is discussed. This metric forms the heart of an edge linking algorithm (LINK) that combines edge elements enhanced by an optimal filter. From a starting node, transitions are made to the goal nodes by a maximum likelihood metric. Experimental results on test as well as real world scenes are presented to show the effectiveness of the LINK algorithm.

21 citations


Proceedings ArticleDOI
30 Aug 1992
TL;DR: This contribution describes work in progress on MTF restoration with reduced noise enhancement; the purpose of this type of image processing is to facilitate the observer's task of manipulating small catheter tips in a noisy image.
Abstract: In digital diagnostic X-ray imaging the lowpass filtering of the image scene by the system MTF often has to be compensated. Edge enhancement techniques are widely applied; however, in fluoroscopy, where the images are quantum limited, the noise is also enhanced. Classical edge enhancement techniques introduce correlated noise structures. These dynamic artefacts deteriorate the conspicuity of fine detail. This contribution describes work in progress on MTF restoration with reduced noise enhancement. As the purpose of this type of image processing is to facilitate the observers' task of manipulating small catheter tips in a noisy image, the final judgement belongs to this specialized group of experienced observers. The technical comparison, however, has to provide measures such as MTF enhancement, correlation between pixels, and signal-to-noise ratios.

16 citations


Journal ArticleDOI
TL;DR: A real-time image-processing scheme that uses selective erasure of spatial frequencies at the Fourier transform plane in an arrangement employing photorefractive two-beam coupling, which can perform spatial-filtering operations such as edge enhancement, bandpass filtering, and pattern recognition.
Abstract: We describe a real-time image-processing scheme that uses selective erasure of spatial frequencies at the Fourier transform plane in an arrangement employing photorefractive two-beam coupling. The versatility of the device results from the use of the Fourier transform of the erasure beam, which counterpropagates to the image-bearing beam. The technique can perform spatial-filtering operations such as edge enhancement, bandpass filtering, and pattern recognition by controlling the information available at the erasure-beam Fourier plane. An experimental demonstration has been made on edge enhancement, bandpass filtering, and character recognition.
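
The photorefractive two-beam-coupling mechanism itself cannot be captured in a few lines, but the spatial-filtering operation it performs has a simple digital analogue: erase selected frequencies in the Fourier plane and transform back. A hedged sketch for the edge-enhancement case (the disc-shaped stop around DC and its radius are assumptions) is:

```python
import numpy as np

def fourier_edge_enhance(img, block_radius=8):
    """Digital analogue of Fourier-plane spatial filtering: erase the low
    spatial frequencies (a small disc around DC) and transform back, which
    leaves mostly the edges. The optical erasure-beam mechanism is not
    modelled; the radius is an arbitrary assumption."""
    u = img.astype(float)
    F = np.fft.fftshift(np.fft.fft2(u))
    h, w = u.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)
    F[r < block_radius] = 0.0           # "erasure" of selected frequencies
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F)))
```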

Proceedings ArticleDOI
23 Mar 1992
TL;DR: A very- high-resolution image digitizing system that handles both still and motion images is presented, which can generate test sequences for super-high-definition images that have at least 2048*2048 pixels.
Abstract: A very-high-resolution image digitizing system that handles both still and motion images is presented. The system mainly consists of a high-resolution CCD line scanner and high-precision film transport equipment. Resolution characteristics are measured directly using test charts, and resolution of over 2048 pixels is obtained in both the vertical and horizontal direction. Therefore, this system can generate test sequences for super-high-definition images that have at least 2048*2048 pixels. Various corrections, such as shading-, gamma-, and color-correction and edge enhancement, are made, and good reproductions of original photo images are obtained. Sixteen standard images are digitized and prepared for coding simulations and other applications.

01 Jan 1992
TL;DR: In this paper, a scale and orientation adaptive filtering strategy for images is presented, where the size, shape and orientation of the filter are signal controlled and thus locally adapted to each neighbourhood according to an estimated model.
Abstract: This paper contains a presentation of a scale and orientation adaptive filtering strategy for images. The size, shape and orientation of the filter are signal controlled and thus locally adapted to each neighbourhood according to an estimated model. On each scale the filter is constructed as a linear weighting of fixed oriented bandpass filters having the same shape but different orientations. The resulting filter is interpolated from all scale levels, and spans over more than 6 octaves. It is possible to reconstruct an enhanced original image from the filtered images. The performance of the reconstruction algorithm displays two desirable but normally contradictory features, namely edge enhancement and an improvement of the signal-to-noise ratio. The adaptive filtering method has been tested on both real data and synthesized test data. The results are very good on a wide variety of images, from moderate signal-to-noise ratios down to low ones, even below 0 dB.

Proceedings Article
07 Apr 1992
TL;DR: An overview of certain properties of new TV systems is given, and upconversion techniques, including edge enhancement algorithms for high-quality, flicker-free HDTV displays, are discussed.
Abstract: Deals with some important image processing methods for TV display and the corresponding visual impression. To give a better understanding of the TV display requirements that have to be redefined for new TV systems, an overview of certain properties of new TV systems is given. Upconversion techniques, including edge enhancement algorithms for high-quality, flicker-free HDTV displays, are discussed. Some conclusions for processing architectures are given.

Proceedings ArticleDOI
11 Oct 1992
TL;DR: A new method for vessel size measurement that performs deblurring, edge-preserving smoothing, and edge enhancement in one process; its efficacy is demonstrated with results on synthetic images, phantom images, and real cineangiographic images.
Abstract: The vessels in the cineangiogram are degraded by the nonuniform point spread function (PSF) and nonstationary noise from the imaging system. The authors present a new method for vessel size measurement that performs deblurring, edge-preserving smoothing, and edge enhancement in one process. The method is a version of an adaptive edge-preserving smoothing technique, adaptive mean field annealing (AMFA), extended to the blur problem. AMFA with a deblurring technique is an iterative image restoration technique for the restoration of noisy blurred images. With the progress of annealing, the restored image evolves from the maximum likelihood solution to the annealed maximum a posteriori solution, and the restored edges are enhanced. The efficacy of the method is demonstrated with results on synthetic images, phantom images, and real cineangiographic images.

Patent
02 Sep 1992
TL;DR: In this paper, a collection mode/collection speed setting circuit 14 is provided so that the operator can arbitrarily set the collection mode and collection speed; parameters set by the operator, such as the recursive filter coefficient and the degree of edge enhancement, are stored and later read back, whereby the optimum parameters are set automatically.
Abstract: PURPOSE: To reduce the operator's burden in a medical X-ray inspection system by referring to previously set parameters and easily obtaining the optimum parameters. CONSTITUTION: A collection mode/collection speed setting circuit 14 is provided so that the operator can arbitrarily set the collection mode and collection speed. Furthermore, a parameter manual setting circuit 15 is provided so that the operator can arbitrarily set the recursive filter coefficient and the degree of edge enhancement. The parameters set by the operator, such as the recursive filter coefficient and the degree of edge enhancement, are stored; when the same combination of collection mode and collection speed is set again, the stored parameters are read back, whereby the optimum parameters are set automatically.

Dissertation
01 Jan 1992
TL;DR: In this paper, the image recovery problem is transformed to the problem of minimization of an energy function and a local update rule for each pixel point is then developed in a stepwise fashion and is shown to be a gradient descent rule for an associated global energy function.
Abstract: In this study, the principle of competitive learning is used to develop an iterative algorithm for image recovery and segmentation. Within the framework of Markov Random Fields (MRF), the image recovery problem is transformed to the problem of minimization of an energy function. A local update rule for each pixel point is then developed in a stepwise fashion and is shown to be a gradient descent rule for an associated global energy function. Relationship of the update rule to Kohonen's update rule is shown. Quantitative measures of edge preservation and edge enhancement for synthetic images are introduced. Simulation experiments using this algorithm on real and synthetic images show promising results on smoothing within regions and also on enhancing the boundaries. Restoration results are compared with recently published results using the mean field approximation. Segmentation results using the proposed approach are compared with edge detection using fractal dimension, edge detection using mean field approximation, and edge detection using the Sobel operator. Edge points obtained by using these techniques are combined to produce edge maps which include both hard and soft edges.
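
The thesis's competitive-learning update and its relation to Kohonen's rule are not reproduced here; the sketch below only illustrates the general recipe of recovering an image by gradient descent on an energy with a data term plus an edge-preserving smoothness term (the particular energy, its robust penalty, and all constants are assumptions):

```python
import numpy as np

def recover(noisy, lam=1.0, delta=15.0, step=0.1, n_iter=50):
    """Gradient-descent sketch for an energy of the form
        E(u) = sum (u - noisy)^2 + lam * sum rho(u_p - u_q)
    over 4-neighbour pairs, with a saturating rho that penalises small
    (noise) differences but not large (edge) ones, so regions are smoothed
    while boundaries are kept. Illustrative assumptions throughout."""
    u = noisy.astype(float).copy()
    for _ in range(n_iter):
        grad = 2.0 * (u - noisy)                       # data term gradient
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            d = u - np.roll(u, shift, axis=axis)
            # rho(d) = d^2 / (1 + (d/delta)^2)  =>  rho'(d) below
            grad += lam * 2.0 * d / (1.0 + (d / delta) ** 2) ** 2
        u -= step * grad
    return u
```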

Proceedings ArticleDOI
01 Feb 1992-Robotics
TL;DR: This paper develops a linking algorithm, based on sequential search, for combining edge elements enhanced by an optimal filter, and shows that the metric described here is very easy to implement and provides more accurate results.

Journal ArticleDOI
T.I. Cho1, Kyu Ho Park1
TL;DR: In this paper, a new hexagonal edge relaxation method is described; the vertex types of an edge are simple and their classification is reasonable, and therefore the overall enhancement of edges is more reliable than with the Prager square edge relaxation method.
Abstract: It has been found that hexagonal spatial sampling yields smaller quantisation errors. A new hexagonal edge relaxation method is described. In this method, the vertex types of an edge are simple and their classification is reasonable; therefore, the overall enhancement of edges is more reliable than with the Prager square edge relaxation method [1].

Proceedings ArticleDOI
28 Oct 1992
TL;DR: An edge enhancement algorithm is introduced to overcome the third shortcoming of using the error diffusion method to print documents on a medium-resolution printer, namely that the printed text exhibits coarse edges; the other shortcomings addressed are snake-like noise in the white background and images that are too dark to distinguish details.
Abstract: Using the error diffusion method to print documents through a medium resolution (typically between 300 dpi and 600 dpi) printer has three drawbacks: (1) the snake-like noise disturbs the white background; (2) the printed image is too dark to distinguish the details; (3) the printed text exhibits a coarse edge. In this paper, the authors propose some methods to overcome these shortcomings. In order to eliminate the first shortcoming, a white minimum threshold value is set to calibrate the diffused errors. For the second shortcoming, a tone-scale adjustment function is used to lighten the images. Finally, an edge enhancement algorithm is introduced to overcome the third shortcoming.

Journal ArticleDOI
TL;DR: This paper considers a simple two-dimensional IIR filter based on the Fornasini-Marchesini local state-space (LSS) model and shows that if image processing is carried out using such filters from four directions, smoothing, edge detection or edge enhancement can be achieved without any distortion.
Abstract: This paper considers a simple two-dimensional (2-D) IIR filter based on the Fornasini-Marchesini local state-space (LSS) model. It is shown that if image processing is carried out using such filters from four directions, smoothing, edge detection or edge enhancement can be achieved without any distortion. The proposed technique allows one to flexibly perform the aforementioned filtering by choosing three parameters. Moreover, filter analysis including stability is easy due to the use of a well-known 2-D LSS model. Finally, some examples are given to illustrate the utility of the proposed technique.
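
The Fornasini-Marchesini state-space formulation and its three tuning parameters are not reproduced here; the sketch below only illustrates the four-direction idea, with a first-order recursive (IIR) smoother run left-to-right, right-to-left, top-to-bottom and bottom-to-top and then averaged (the coefficient `a` is an assumption):

```python
import numpy as np

def four_direction_smooth(img, a=0.7):
    """Run a first-order recursive (IIR) smoother over the image from four
    directions and average the results so that no direction is favoured.
    Sketch of the four-direction idea only, not the paper's LSS filter."""
    u = img.astype(float)

    def causal(x):                 # y[n] = (1-a)*x[n] + a*y[n-1], row-wise
        y = np.empty_like(x)
        y[:, 0] = x[:, 0]
        for n in range(1, x.shape[1]):
            y[:, n] = (1.0 - a) * x[:, n] + a * y[:, n - 1]
        return y

    left_to_right = causal(u)
    right_to_left = causal(u[:, ::-1])[:, ::-1]
    top_to_bottom = causal(u.T).T
    bottom_to_top = causal(u.T[:, ::-1])[:, ::-1].T
    return (left_to_right + right_to_left +
            top_to_bottom + bottom_to_top) / 4.0
```

Edge enhancement in this sketch would then follow unsharp-mask style, e.g. `img + k * (img - four_direction_smooth(img))` for some gain `k`.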

Journal ArticleDOI
TL;DR: In this article, two amorphous silicon/ferroelectric liquid crystal devices have been used for real-time edge enhancement of images using a flexoelectric effect and a phase conjugation scheme.

Proceedings ArticleDOI
TL;DR: A photoreceptor lateral interaction network, Grossberg's shunting neural network, and a novel modified version of the latter are compared in their effect on spatial nonuniformity noise and edge enhancement.
Abstract: Research of vertebrate and invertebrate vision systems has revealed them to be remarkable assemblies of simple cells performing collectively various image processing and analysis tasks. Among these are counted edge enhancement, noise suppression, dynamic range compression, and motion and object orientation detection. These functions are achieved due to the massively parallel structure of these systems and appropriate non-linear inter-cell interactions, among them lateral inhibition. The high degree of connectivity existent in the vertebrate retina is currently beyond reach of integrated implementations; however, even its approximations applied to focal plane arrays can result in enhanced and more sophisticated performance. These approximations are discussed mathematically by means of methods developed for analysis of neural networks. A photoreceptor lateral interaction network, Grossberg's shunting neural network, and a novel modified version of the latter are compared in their effect on spatial nonuniformity noise and edge enhancement. These two qualities are of special interest in the case of infrared imaging. The modified shunting network combines an adaptive lateral signal spread amongst photodetectors with non-linear, multiplicative lateral inhibition. The first effect serves to reduce the effects of spatial noise, while the second, by its differentiating nature, removes low spatial frequencies and enhances high spatial frequency components inherent to the image.
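
A discrete-time sketch of the basic shunting (on-centre, off-surround) interaction discussed above is given below; the 3x3 surround, the constants, and the function name are assumptions, and the paper's adaptive modified network is not reproduced:

```python
import numpy as np

def shunting_response(stim, A=1.0, B=1.0, D=0.5, dt=0.05, n_iter=200):
    """Discrete-time sketch of a shunting network with on-centre excitation
    and multiplicative off-surround (lateral) inhibition:
        dx/dt = -A*x + (B - x)*I_centre - (x + D)*I_surround
    The steady state compresses the dynamic range and enhances edges, which
    is the behaviour compared in the paper. Constants are illustrative."""
    I = stim.astype(float)
    p = np.pad(I, 1, mode="edge")
    surround = (sum(p[dy:dy + I.shape[0], dx:dx + I.shape[1]]
                    for dy in range(3) for dx in range(3)) - I) / 8.0
    x = np.zeros_like(I)
    for _ in range(n_iter):
        x += dt * (-A * x + (B - x) * I - (x + D) * surround)
    return x
```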

Proceedings ArticleDOI
01 Jun 1992
TL;DR: In this paper, a new filter for image restoration is proposed which shares the advantages of both neural-network deconvolution and advanced nonlinear filtering for noise removal and edge enhancement.
Abstract: A conventional gamma camera is used for the external imaging of bremsstrahlung generated from pure beta-emitters such as phosphorus-32 (32P). Tomographic images of a cylindrical phantom filled with water and containing four cylindrical sources of varying diameter are recorded using two collimators with symmetrical aperture configuration but different bore-lengths. The resolution of the system is comparable to that for single photon emitters for both collimators; FWHM approximately 1.8 cm and FWTM approximately 2.9 cm. An effective linear attenuation coefficient of 0.14 cm-1 for 32P, calculated from isolated spherical sources in water, is used with the post-reconstruction Chang algorithm to correct the tomographic images. The use of a broad energy window and the symmetric apertures of the collimators yields an approximately radially symmetric, shift invariant, and stationary point-spread-function with distance from the collimator face as required for the use of image restoration filters. A new filter is proposed which shares the advantages of both neural-network deconvolution and advanced nonlinear filtering for noise removal and edge enhancement. The new filter compares favorably with the Wiener filter for image restoration and improves the conditions for quantitative measurements with the gamma camera. In addition, its application for image restoration does not require knowledge of the object and noise power spectra, and the serious problems (ring effects and noise overriding) associated with the inverse operation encountered in the Wiener filter are avoided.

Journal ArticleDOI
TL;DR: An optical-scanning-based imaging system, useful for directly acquiring the directional gradient and/or the Laplacian of an image, is proposed; synchronous temporal detection is employed to detect gradients in a direct fashion.
Abstract: We propose an optical-scanning-based imaging system, useful for directly acquiring the directional gradient and/or the Laplacian of an image. Such operations are crucial in edge detection, edge enhancement, and feature extraction. Digital edge-detection techniques tend to be computationally intensive as well as noise sensitive, as these derivative-based operators essentially constitute high-pass filters. Since we employ synchronous temporal detection to detect gradients in a direct fashion, noise suppression is greatly enhanced. Proof-of-principle results demonstrate the feasibility of our technique.
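
The optical synchronous-detection scheme has no short software equivalent, but the quantities it acquires directly correspond to the familiar digital derivative operators that the abstract contrasts as noise sensitive. A sketch of those digital counterparts (Sobel gradients and a 5-point Laplacian, shown for comparison only) is:

```python
import numpy as np

def sobel_and_laplacian(img):
    """Digital counterparts of the quantities the optical system acquires
    directly: Sobel directional gradients and a 5-point Laplacian."""
    u = np.pad(img.astype(float), 1, mode="edge")
    c = u[1:-1, 1:-1]
    # Sobel x: right column minus left column, weights 1, 2, 1.
    gx = (u[:-2, 2:] + 2 * u[1:-1, 2:] + u[2:, 2:]
          - u[:-2, :-2] - 2 * u[1:-1, :-2] - u[2:, :-2])
    # Sobel y: bottom row minus top row, weights 1, 2, 1.
    gy = (u[2:, :-2] + 2 * u[2:, 1:-1] + u[2:, 2:]
          - u[:-2, :-2] - 2 * u[:-2, 1:-1] - u[:-2, 2:])
    # 5-point Laplacian.
    lap = u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:] - 4 * c
    return gx, gy, lap
```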

Proceedings ArticleDOI
TL;DR: It is shown that the integration of a priori information in the LINK algorithms provides faster and more accurate edge linking.
Abstract: This research presents an approach to integrate a priori information into the path metric of the LINK algorithm. The zero-crossing contours of the Laplacian-of-Gaussian (∇²G) are taken as a gross estimate of the boundaries in the image. This estimate of the boundaries is used to define the swath of important information, and to provide a distance measure for edge localization. During the linking process, a priori information plays important roles in (1) dramatically reducing the search space, because the actual path lies within ±2σf of the prototype contours (σf is the standard deviation of the Gaussian kernel used in the edge enhancement step); (2) breaking the ties when the search metrics give uncertain information; and (3) selecting the set of goal nodes for the search algorithm. We show that the integration of a priori information in the LINK algorithm provides faster and more accurate edge linking.

Proceedings ArticleDOI
K.H. Hedengren1
30 Aug 1992
TL;DR: An implementation method for edge enhancement algorithms, or other processes based on convolutions, through the application of image algebra, which is effective for software implementation as algorithms typically execute faster than implementations based on conventional convolutions.
Abstract: Illustrates an implementation method for edge enhancement algorithms, or other processes based on convolutions, through the application of image algebra. Standard convolution kernels are decomposed into orthogonal components to identify a small set of basis images that are combined to perform the desired functions. With this method, algorithms can easily be modified as they are not limited by the summation of fixed constants in convolution kernels. Though results are shown on a pixel basis, there is no effort in this paper to evaluate the relative performances of various algorithms. However, the image algebra approach naturally illuminates the fundamental properties of different algorithms. The approach is effective for software implementation as algorithms typically execute faster than implementations based on conventional convolutions.

Patent
02 Oct 1992
TL;DR: In this paper, a Gaussian filter whose spread depends on variations in image intensity of surrounding pixels is applied to the image data using a variably filled mask of filter values, and the filters may be Gaussians whose spreads are inversely proportional to variations in input image intensity plus a constant.
Abstract: Method and apparatus for image enhancement involving smoothing and thinning input image data, segmenting adjacent characters in the smoothed and thinned data, and identifying a segmented character based on a comparison of the segmented character to a dictionary of characters. Smoothing and thinning may be provided by applying to each pixel of input image data a filter whose spread depends on variations in image intensity of surrounding pixels to obtain filtered image data, deriving inverted image data from first difference data of the filtered image data and multiplying the inverted image data and the filtered image data to obtain smoothed and thinned image data. The filters may be Gaussians whose spreads are inversely proportional to variations in input image intensity plus a constant, and the filters may be applied to the image data using a variably filled mask of filter values.
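
A rough sketch of the smoothing stage, per-pixel Gaussians whose spread shrinks as the local intensity variation grows, is given below; the exact spread law, mask size, and constant are assumptions, and the thinning, segmentation, and dictionary-matching stages are omitted:

```python
import numpy as np

def gaussian_kernel(sigma, size=5):
    """Normalised 2-D Gaussian mask of a given size."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def adaptive_gaussian_smooth(img, c=5.0, size=5):
    """Per-pixel Gaussian smoothing whose spread is inversely related to the
    local intensity variation plus a constant: flat areas get wide Gaussians,
    busy (edge/stroke) areas get narrow ones. Sketch of the idea only; the
    spread law below is an assumption, not the patent's formula."""
    u = img.astype(float)
    pad = size // 2
    p = np.pad(u, pad, mode="edge")
    out = np.empty_like(u)
    for y in range(u.shape[0]):
        for x in range(u.shape[1]):
            patch = p[y:y + size, x:x + size]
            sigma = c / (patch.std() + c)          # assumed inverse relation
            out[y, x] = (gaussian_kernel(sigma, size) * patch).sum()
    return out
```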

Proceedings ArticleDOI
01 Apr 1992
TL;DR: The experimental results show that MEEF outperforms independent per-channel edge enhancing filters in improving the degraded edges caused by defocusing, interlaced scanning and channel dispersion.
Abstract: In this paper, a multichannel edge enhancing filter (MEEF) applied to color image processing is proposed to improve the degraded edges in color TV pictures, which are classified into the following cases: blurred edges caused by defocusing or fast camera motion, serrated edges caused by interlaced scanning, and chroma-cross edges caused by channel dispersion. Because of the high between-channel dependence in color images, individual edge enhancement in each channel does not exploit this dependence and is not a natural method; thus, this technique is not recommended in color image processing. The proposed MEEF successfully deals with these kinds of degraded edges in multichannel images. The root signals and edge enhancement properties of MEEF are investigated in detail. The experimental results show that MEEF outperforms an independent edge enhancing filter in improving the degraded edges caused by defocusing, interlaced scanning and channel dispersion.

Journal ArticleDOI
R.-D. Müller, M. Voß, V. John, P. Gocke, H. Kuhn1, E. Löhr 
TL;DR: According to the initial results, dual-energy subtraction imaging in one-shot technique seems to be useful in the diagnosis of skeletal lesions and especially pulmonary nodules.
Abstract: Digital luminescent radiography enables dual-energy subtraction imaging, because this computed system allows subtraction of imaging data and image post-processing, such as special windowing or edge enhancement. In a special cassette, a copper filter is placed between two imaging plates for energy separation by a single X-ray exposure. Image post-processing with subtraction of imaging data permits the elimination of either skeletal or soft-tissue structures. The influence of filter thickness, tube voltage and X-ray exposure dose on image quality is examined by the use of an anthropomorphic phantom of the chest. According to our initial results, dual-energy subtraction imaging in one-shot technique seems to be useful in the diagnosis of skeletal lesions and especially pulmonary nodules.
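
The energy separation here is done physically with a copper filter between two imaging plates; the subtraction step itself is commonly a weighted difference of log-transformed low- and high-energy images, sketched below (the weight would in practice be calibrated, e.g. on the chest phantom, and the value used here is only an assumption):

```python
import numpy as np

def dual_energy_subtract(low_kvp, high_kvp, w=0.6, eps=1e-6):
    """Weighted log subtraction for dual-energy imaging: choosing w so that
    the bone signal cancels leaves a soft-tissue image (and vice versa).
    w = 0.6 is an illustrative assumption, not a calibrated value."""
    log_lo = np.log(np.clip(low_kvp.astype(float), eps, None))
    log_hi = np.log(np.clip(high_kvp.astype(float), eps, None))
    return log_hi - w * log_lo
```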