
Showing papers on "Image quality published in 1994"


Proceedings ArticleDOI
24 Jul 1994
TL;DR: A new object-order rendering algorithm based on a shear-warp factorization of the viewing transformation is described that is significantly faster than previously published algorithms with minimal loss of image quality; the factorization is also extended to perspective viewing transformations.
Abstract: Several existing volume rendering algorithms operate by factoring the viewing transformation into a 3D shear parallel to the data slices, a projection to form an intermediate but distorted image, and a 2D warp to form an undistorted final image. We extend this class of algorithms in three ways. First, we describe a new object-order rendering algorithm based on the factorization that is significantly faster than published algorithms with minimal loss of image quality. Shear-warp factorizations have the property that rows of voxels in the volume are aligned with rows of pixels in the intermediate image. We use this fact to construct a scanline-based algorithm that traverses the volume and the intermediate image in synchrony, taking advantage of the spatial coherence present in both. We use spatial data structures based on run-length encoding for both the volume and the intermediate image. Our implementation running on an SGI Indigo workstation renders a 256³ voxel medical data set in one second. Our second extension is a shear-warp factorization for perspective viewing transformations, and we show how our rendering algorithm can support this extension. Third, we introduce a data structure for encoding spatial coherence in unclassified volumes (i.e. scalar fields with no precomputed opacity). When combined with our shear-warp rendering algorithm this data structure allows us to classify and render a 256³ voxel volume in three seconds. The method extends to support mixed volumes and geometry and is parallelizable.
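The coherence-encoding idea above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code (the function name `rle_classify` and the opacity threshold are invented here): because shear-warp aligns voxel rows with intermediate-image rows, run-length encoding lets a scanline compositing loop skip transparent spans without testing every voxel.

```python
def rle_classify(row, threshold=0.05):
    """Run-length encode one voxel row into (is_transparent, run_length)
    pairs, so a scanline compositing loop can skip transparent spans
    without per-voxel tests."""
    runs, i, n = [], 0, len(row)
    while i < n:
        transparent = row[i] < threshold
        j = i
        while j < n and (row[j] < threshold) == transparent:
            j += 1
        runs.append((transparent, j - i))
        i = j
    return runs
```

A renderer would compute these runs once per classified volume and reuse them for every frame.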

1,249 citations


Journal ArticleDOI
TL;DR: Based on quantitative comparison, the chirp scaling algorithm provides image quality equal to or better than the precision range/Doppler processor, as defined by the system bandwidth.
Abstract: A space-variant interpolation is required to compensate for the migration of signal energy through range resolution cells when processing synthetic aperture radar (SAR) data, using either the classical range/Doppler (R/D) algorithm or related frequency domain techniques. In general, interpolation requires significant computation time, and leads to loss of image quality, especially in the complex image. The new chirp scaling algorithm avoids interpolation, yet performs range cell migration correction accurately. The algorithm requires only complex multiplies and Fourier transforms to implement, is inherently phase preserving, and is suitable for wide-swath, large-beamwidth, and large-squint applications. This paper describes the chirp scaling algorithm, summarizes simulation results, presents imagery processed with the algorithm, and reviews quantitative measures of its performance. Based on quantitative comparison, the chirp scaling algorithm provides image quality equal to or better than the precision range/Doppler processor. Over the range of parameters tested, image quality results approach the theoretical limit, as defined by the system bandwidth.
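The central trick, replacing a space-variant interpolation with complex multiplies and Fourier transforms, can be illustrated with a fractional-sample delay. This is a generic sketch of the mechanism, not the chirp scaling algorithm itself; the function name `frac_delay` is invented here.

```python
import numpy as np

def frac_delay(x, delay):
    """Delay a periodic, band-limited signal by an arbitrary (possibly
    fractional) number of samples using only an FFT and a complex
    multiply by a linear phase ramp -- no interpolation kernel."""
    f = np.fft.fftfreq(len(x))          # frequency in cycles per sample
    return np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * f * delay))
```

Because the operation is a pure phase multiply, it is exactly invertible and phase preserving, which is the property the abstract highlights.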

897 citations


Journal ArticleDOI
TL;DR: It is shown both theoretically and experimentally that translations and rotations produce phase errors which are zero‐ and first‐order, respectively, in position.
Abstract: For diffusion-weighted magnetic resonance imaging and under circumstances where patient movement can be modeled as rigid body motion, it is shown both theoretically and experimentally that translations and rotations produce phase errors which are zero- and first-order, respectively, in position. While a navigator echo can be used to correct the imaging data for arbitrary translations, only when the diffusion gradient is applied in the phase encode direction is there sufficient information to correct for rotations around all axes, and therefore for general rigid body motion. Experiments in test objects and human brain imaging confirm theoretical predictions and demonstrate that appropriate corrections dramatically improve image quality in vivo.
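A toy simulation can make the zero-order (translation) case concrete. The 1D acquisition model and all names below are invented for illustration; the paper's actual pulse sequence and 2D geometry are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D object and its k-space; each of the 64 "shots" acquires one sample
# and suffers an unknown constant (zero-order) phase error from motion.
obj = np.zeros(64)
obj[24:40] = 1.0
kspace_true = np.fft.fft(obj)
shot_phase = rng.uniform(-np.pi, np.pi, size=64)
kspace_corrupt = kspace_true * np.exp(1j * shot_phase)

# A navigator echo re-measures the same reference (DC) sample every shot,
# so its phase relative to the first shot estimates the error directly.
nav = kspace_true[0] * np.exp(1j * shot_phase)
phi_hat = np.angle(nav / nav[0])
kspace_fixed = kspace_corrupt * np.exp(-1j * phi_hat)

bad = np.abs(np.fft.ifft(kspace_corrupt))   # motion-corrupted image
good = np.abs(np.fft.ifft(kspace_fixed))    # navigator-corrected image
```

The corrected magnitude image recovers the object exactly (up to a harmless global phase), while the uncorrected one is badly degraded.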

399 citations


Journal ArticleDOI
TL;DR: The theory of expectation-maximization can be used as a basis for calculation of objective figures of merit for image quality over a wide range of conditions in emission tomography.
Abstract: The expectation-maximization (EM) algorithm is an important tool for maximum-likelihood (ML) estimation and image reconstruction, especially in medical imaging. It is a non-linear iterative algorithm that attempts to find the ML estimate of the object that produced a data set. The convergence of the algorithm and other deterministic properties are well established, but relatively little is known about how noise in the data influences noise in the final reconstructed image. In this paper we present a detailed treatment of these statistical properties. The specific application we have in mind is image reconstruction in emission tomography, but the results are valid for any application of the EM algorithm in which the data set can be described by Poisson statistics. We show that the probability density function for the grey level at a pixel in the image is well approximated by a log-normal law. An expression is derived for the variance of the grey level and for pixel-to-pixel covariance. The variance increases rapidly with iteration number at first, but eventually saturates as the ML estimate is approached. Moreover, the variance at any iteration number has a factor proportional to the square of the mean image (though other factors may also depend on the mean image), so a map of the standard deviation resembles the object itself. Thus low-intensity regions of the image tend to have low noise. By contrast, linear reconstruction methods, such as filtered back-projection in tomography, show a much more global noise pattern, with high-intensity regions of the object contributing to noise at rather distant low-intensity regions. The theoretical results of this paper depend on two approximations, but in the second paper in this series we demonstrate through Monte Carlo simulation that the approximations are justified over a wide range of conditions in emission tomography. 
The theory can, therefore, be used as a basis for calculation of objective figures of merit for image quality.
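For reference, the EM iteration whose noise properties are analysed above has a standard ML-EM form for Poisson data. This is a textbook sketch, not the authors' reconstruction code; `A` is a generic system matrix.

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Generic ML-EM iteration for Poisson data y ~ Poisson(A @ x):
        x <- x * A^T (y / (A x)) / (A^T 1).
    Nonnegativity is preserved automatically because every factor in
    the multiplicative update is nonnegative."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                 # A^T 1, the sensitivity image
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x + 1e-12))) / sens
    return x
```

The multiplicative form also explains the paper's observation that the variance carries a factor proportional to the square of the mean image: low-mean pixels receive proportionally small updates.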

388 citations


Journal ArticleDOI
TL;DR: Small-field-of-view, ultra-high-resolution pinhole collimation was evaluated for a rotating-camera SPECT system that could be used to image small laboratory animals; in vivo image quality was evaluated using two rats.
Abstract: The objective of this investigation was to evaluate small-field-of-view, ultra-high-resolution pinhole collimation for a rotating-camera SPECT system that could be used to image small laboratory animals. Pinhole collimation offers distinct advantages over conventional parallel-hole collimation when used to image small objects. Since geometric sensitivity increases markedly for points close to the pinhole, small-diameter and high-magnification pinhole geometries may be useful for selected imaging tasks when used with large-field-of-view scintillation cameras. The use of large magnifications can minimize the loss of system resolution caused by the intrinsic resolution of the scintillation camera. A pinhole collimator has been designed and built that can be mounted on one of the scintillation cameras of a triple-head SPECT system. Three pinhole inserts with approximate aperture diameters of 0.6, 1.2 and 2.0 mm have been built and can be mounted individually on the collimator housing. When a ramp filter is used with a three-dimensional (3D) filtered backprojection (FBP) algorithm, the three apertures have in-plane SPECT spatial resolutions (FWHM) at 4 cm of 1.5, 1.9 and 2.8 mm, respectively. In-air point source sensitivities at 4 cm from the apertures are 0.9, 2.6 and 5.7 counts s(-1) microCi(-1) (24, 70 and 154 counts s(-1) MBq(-1)) for the 0.6, 1.2 and 2.0 mm apertures, respectively. In vitro image quality was evaluated with a micro-cold-rod phantom and a micro-Defrise phantom using both the 3D FBP algorithm and a 3D maximum likelihood-expectation maximization (ML-EM) algorithm. In vivo image quality was evaluated using two (315 and 325 g) rats. Ultra-high-resolution pinhole SPECT is an inexpensive and simple approach for imaging small animals that can be used with existing rotating-camera SPECT systems.
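The quoted sensitivity gain close to the pinhole follows from the standard geometric-efficiency approximation g = d²cos³θ/(16h²). A minimal sketch (the function name is invented here):

```python
import math

def pinhole_sensitivity(d_mm, h_mm, theta_rad=0.0):
    """Standard geometric-efficiency approximation for a pinhole aperture
    of diameter d at distance h from a point source:
        g = d**2 * cos(theta)**3 / (16 * h**2).
    Sensitivity grows as 1/h**2, which is why high-magnification
    geometries close to the pinhole remain usable despite small d."""
    return d_mm ** 2 * math.cos(theta_rad) ** 3 / (16.0 * h_mm ** 2)
```

Halving the source-to-pinhole distance quadruples sensitivity, and doubling the aperture diameter does the same, which is the resolution/sensitivity trade-off the three inserts span.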

319 citations


Journal ArticleDOI
TL;DR: Results indicate significant improvements in emission image quality using the Bayesian approach, in comparison to filtered backprojection, particularly when reprojections of the MAP transmission image are used in place of the standard attenuation correction factors.
Abstract: The authors describe conjugate gradient algorithms for reconstruction of transmission and emission PET images. The reconstructions are based on a Bayesian formulation, where the data are modeled as a collection of independent Poisson random variables and the image is modeled using a Markov random field. A conjugate gradient algorithm is used to compute a maximum a posteriori (MAP) estimate of the image by maximizing over the posterior density. To ensure nonnegativity of the solution, a penalty function is used to convert the problem to one of unconstrained optimization. Preconditioners are used to enhance convergence rates. These methods generally achieve effective convergence in 15-25 iterations. Reconstructions are presented of an ¹⁸FDG whole body scan from data collected using a Siemens/CTI ECAT931 whole body system. These results indicate significant improvements in emission image quality using the Bayesian approach, in comparison to filtered backprojection, particularly when reprojections of the MAP transmission image are used in place of the standard attenuation correction factors.

302 citations


Journal ArticleDOI
01 Jun 1994
TL;DR: In this paper, the authors describe three approaches to the measurement of medical image quality: signal-to-noise ratio (SNR), subjective rating, and diagnostic accuracy, and consider in some depth recently developed methods for determining diagnostic accuracy of lossy compressed medical images and examine how good the easily obtainable distortion measures like SNR are at predicting the more expensive subjective and diagnostic ratings.
Abstract: Compressing a digital image can facilitate its transmission, storage, and processing. As radiology departments become increasingly digital, the quantities of their imaging data are forcing consideration of compression in picture archiving and communication systems (PACS) and evolving teleradiology systems. Significant compression is achievable only by lossy algorithms, which do not permit the exact recovery of the original image. This loss of information renders compression and other image processing algorithms controversial because of the potential loss of quality and consequent problems regarding liability, but the technology must be considered because the alternative is delay, damage, and loss in the communication and recall of the images. How does one decide if an image is good enough for a specific application, such as diagnosis, recall, archival, or educational use? The authors describe three approaches to the measurement of medical image quality: signal-to-noise ratio (SNR), subjective rating, and diagnostic accuracy. They compare and contrast these measures in a particular application, consider in some depth recently developed methods for determining diagnostic accuracy of lossy compressed medical images, and examine how good the easily obtainable distortion measures like SNR are at predicting the more expensive subjective and diagnostic ratings. The examples are of medical images compressed using predictive pruned tree-structured vector quantization, but the methods can be used for any digital image processing that produces images different from the original for evaluation.
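Of the three quality measures, SNR is the one that reduces to a formula. A minimal sketch of the usual energy-ratio definition (function name invented here):

```python
import numpy as np

def snr_db(original, degraded):
    """Signal-to-noise ratio in decibels:
    10 * log10(signal energy / error energy)."""
    original = np.asarray(original, dtype=float)
    err = original - np.asarray(degraded, dtype=float)
    return 10.0 * np.log10(np.sum(original ** 2) / np.sum(err ** 2))
```

This cheapness is exactly why the paper asks how well such distortion measures predict the far more expensive subjective and diagnostic ratings.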

277 citations


Journal ArticleDOI
TL;DR: A novel adaptive algorithm is presented that tailors the required amount of contrast enhancement based on the local contrast of the image and the observer's Just-Noticeable-Difference (JND) and offers considerable benefits in digital radiography applications where the objective is to increase the diagnostic utility of images.
Abstract: Existing methods for image contrast enhancement focus mainly on the properties of the image to be processed while excluding any consideration of the observer characteristics. In several applications, particularly in the medical imaging area, effective contrast enhancement for diagnostic purposes can be achieved by including certain basic human visual properties. Here the authors present a novel adaptive algorithm that tailors the required amount of contrast enhancement based on the local contrast of the image and the observer's Just-Noticeable-Difference (JND). This algorithm always produces adequate contrast in the output image, and results in almost no ringing artifacts even around sharp transition regions, which are often seen in images processed by conventional contrast enhancement techniques. By separating smooth and detail areas of an image and considering the dependence of noise visibility on the spatial activity of the image, the algorithm treats them differently and thus avoids excessive enhancement of noise, which is another common problem for many existing contrast enhancement techniques. The present JND-Guided Adaptive Contrast Enhancement (JGACE) technique is very general and can be applied to a variety of images. In particular, it offers considerable benefits in digital radiography applications where the objective is to increase the diagnostic utility of images. A detailed performance evaluation together with a comparison with the existing techniques is given to demonstrate the strong features of JGACE.

256 citations


Journal Article
TL;DR: The resolution properties of pinhole SPECT are superior to those achieved thus far with conventional SPECT or PET imaging technologies, and the technique provides an important approach for investigating localization properties of radiopharmaceuticals in vivo.
Abstract: UNLABELLED: The performance of pinhole SPECT and the application of this technology to investigate the localization properties of radiopharmaceuticals in vivo in small laboratory animals are presented. METHODS: System sensitivity and spatial resolution measurements of a rotating scintillation camera system are made for a low-energy pinhole collimator equipped with 1.0-, 2.0- and 3.3-mm aperture pinhole inserts. The spatial detail offered by pinhole SPECT for in vivo imaging was investigated in studies of the brain and heart in Fisher 344 rats by administering 201TlCl, 99mTc-HMPAO, 99mTc-DTPA and 99mTc-MIBI. Image acquisition is performed using a rotating scintillation camera equipped with a pinhole collimator; projection data are acquired in conventional step-and-shoot mode as the camera is rotated 360 degrees around the subject. Pinhole SPECT images are reconstructed using a modified cone-beam algorithm developed from a two-dimensional fanbeam filtered backprojection algorithm. RESULTS: The reconstructed transaxial resolution of 2.8 mm FWHM and system sensitivity of 0.086 c/s/kBq with the 2.0-mm pinhole collimator aperture provide excellent spatial detail and adequate sensitivity for imaging the regional uptake of the radiopharmaceuticals in tumor, organs and other tissues in small laboratory animals. CONCLUSION: The resolution properties of pinhole SPECT are superior to those which have been achieved thus far with conventional SPECT or PET imaging technologies. Pinhole SPECT provides an important approach for investigating localization properties of radiopharmaceuticals in vivo.

215 citations


Proceedings ArticleDOI
27 Jun 1994
TL;DR: In this article, a subband coding scheme based on estimation and exploitation of just-noticeable-distortion (JND) profile is presented to maintain high image quality with low bit-rates.
Abstract: To maintain high image quality with low bit-rates, an effective coding algorithm should not only remove statistical correlation but also perceptual redundancy from image signals. A subband coding scheme based on estimation and exploitation of just-noticeable-distortion (JND) profile is presented.

211 citations


Journal ArticleDOI
TL;DR: This technique makes use of the fact that, in most time-sequential imaging problems, the high-resolution image morphology does not change from one image to another, and it improves imaging efficiency over the conventional Fourier imaging methods by eliminating the repeated encodings of this stationary information.
Abstract: Many magnetic resonance imaging applications require the acquisition of a time series of images. In conventional Fourier transform based imaging methods, each of these images is acquired independently so that the temporal resolution possible is limited by the number of spatial encodings (or data points in the Fourier space) collected, or one has to sacrifice spatial resolution for temporal resolution. In this paper, a generalized series based imaging technique is proposed to address this problem. This technique makes use of the fact that, in most time-sequential imaging problems, the high-resolution image morphology does not change from one image to another, and it improves imaging efficiency (and temporal resolution) over the conventional Fourier imaging methods by eliminating the repeated encodings of this stationary information. Additional advantages of the proposed imaging technique include a reduced number of radio-frequency (RF) pulses for data collection, and thus lower RF power deposition. This method should prove useful for a variety of dynamic imaging applications, including dynamic studies of contrast agents and functional brain imaging.

Patent
30 Sep 1994
TL;DR: In this article, the authors proposed a method to reduce the quantization artifacts in the addition and removal of a digital watermark to and from a selected resolution image of a hierarchical image storage system where the watermark removal record is placed in a higher resolution image component.
Abstract: The system and method reduces the quantization artifacts in the addition and removal of a digital watermark to and from a selected resolution image of a hierarchical image storage system where the watermark removal record is placed in a higher resolution image component. For those applications where preserving the image quality of a higher resolution image component is more critical than preserving the image quality of a lower resolution image component, the low-resolution image is modified according to the teachings of the present invention to minimize and in many cases eliminate the quantization artifacts at the higher resolution component.

Journal ArticleDOI
TL;DR: An ultrasound synthetic aperture imaging method based on a monostatic approach was studied experimentally in this paper, where complex object data were recorded coherently in a 2D hologram using a 3.5 MHz single transducer with a fairly wide-angle beam.
Abstract: An ultrasound synthetic aperture imaging method based on a monostatic approach was studied experimentally. The proposed synthetic aperture method offers good dynamical resolution along with fast numerical reconstruction. In this study complex object data were recorded coherently in a two-dimensional hologram using a 3.5 MHz single transducer with a fairly wide-angle beam. Image reconstruction which applies the wavefront backward propagation method and the near-field curvature compensation was performed numerically in a microcomputer using the spatial frequency domain. This approach allows an efficient use of the FFT-algorithms. Because of the simple and fast scanning scheme and the efficient reconstruction algorithms, the method can be made real-time. The image quality of the proposed method was studied by evaluating the spatial and dynamical resolution in a waterbath and in a typical tissue-mimicking phantom. The lateral as well as the range resolution (-6 dB) were approximately 1 mm in the depth range of 30-100 mm. The dynamical resolution could be improved considerably when the beam width was made narrower. Although it resulted in a slightly reduced spatial resolution, this compromise has to be made for better resolution of low-contrast targets such as cysts. The study showed that cysts as small as 2 mm in diameter could be resolved.

Journal ArticleDOI
TL;DR: HGI, a new motion estimation method for low bit-rate image sequence coding, is presented; it uses a hierarchical decomposition of the current image frame to describe scene motion in terms of displacement and deformation of variable-sized rectangular regions.
Abstract: Presents a new motion estimation method for low bit-rate image sequence coding that uses a hierarchical decomposition of the current image frame to describe scene motion in terms of displacement and deformation of variable-sized rectangular regions. Because the conventional block-matching motion compensation method (BMA) can only cope with the problem of translational movement of the scene, some researchers have proposed deformable-block-based motion compensation such as control grid interpolation (CGI) and the triangle-based method (TBM). CGI begins with a spatial displacement for a small number of points in an image, termed control points. TBM partitions the image frame into triangular patches of equal size that are deformable during the motion compensation. These methods do not consider using the different motion characteristics and the shape properties of the moving objects, but instead apply the same motion estimation algorithm to every part of the scene, which may be stationary or moving. The present authors propose a new motion compensation method called hierarchical grid interpolation (HGI) which segments a frame into various quadrangles with different sizes (described by quadtree segmentation) according to the motion activity. In the experiments, the authors compare HGI with the conventional BMA and TBM. HGI requires less computation, produces less prediction error, and requires a lower transmission bit-rate. It also achieves a subjective image quality that is better than that of the conventional motion compensation coder.
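For contrast, the conventional BMA baseline mentioned above can be sketched as an exhaustive SAD search. This is a generic illustration of block matching, not the paper's HGI method; names, block size, and search range are invented here.

```python
import numpy as np

def block_match(ref, cur, top, left, bs=8, search=4):
    """Find the displacement (dy, dx) of the bs x bs block of `cur` at
    (top, left) that minimizes the sum of absolute differences (SAD)
    against `ref`, by exhaustive search over a +/-search window."""
    block = cur[top:top + bs, left:left + bs]
    best_sad, best_dv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
                continue
            sad = np.abs(ref[y:y + bs, x:x + bs] - block).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_dv = sad, (dy, dx)
    return best_dv
```

Because each block is matched by pure translation, deformation and non-uniform motion are poorly captured, which is the limitation CGI, TBM, and HGI address.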

Journal ArticleDOI
TL;DR: Spectral characterization and spectral broadening measurements of commercially available AOTF's agree with theoretical predictions and reveal difficulties associated with imaging noncollimated light.
Abstract: We review the operating principles of noncollinear acousto-optic tunable filters (AOTF’s), emphasizing use of two orthogonally polarized beams for narrow-band imaging. Spectral characterization and the spectral broadening measurements of commercially available AOTF’s agree with theoretical predictions and reveal difficulties associated with imaging noncollimated light. An AOTF imaging spectropolarimeter for ground-based astronomy that uses CCD’s has been constructed at NASA Goddard Space Flight Center. It uses a TeO2 noncollinear AOTF and a simple optical relay assembly to produce side-by-side orthogonally polarized spectral images. We summarize the instrument design and initial performance tests. We include sample spectral images acquired at the Goddard Geophysical and Astronomical Observatory.

01 Jun 1994
TL;DR: There is a need for a more accessible magnet configuration to enable execution of various interventional procedures and MR compatibility of instruments and devices, therefore, needs to be addressed, as must the integration of therapy delivery modalities with the MR system.
Abstract: Among the currently available imaging techniques, magnetic resonance imaging (MRI) offers particular advantages for guiding, monitoring, and controlling diagnostic and therapeutic interventions, with particular appeal for most of the minimally invasive, minimal access approaches. The most obvious role of MRI is in monitoring and controlling a variety of interstitial ablative procedures, utilizing methods including thermal therapy (interstitial laser therapy, cryosurgery, focused ultrasound surgery). A fundamental requirement of MR monitoring is the implementation of pulse sequences with appropriate spatial and temporal resolution as well as overall image quality suitable for the dynamic imaging task. In addition, there is a need for a more accessible magnet configuration to enable execution of various interventional procedures. MR compatibility of instruments and devices, therefore, needs to be addressed, as must the integration of therapy delivery modalities with the MR system.

Journal ArticleDOI
TL;DR: The authors propose a new method called the "weighted line site" method, in which a prior structural image is employed in a modified updating scheme for the boundary variable used in the iterative Bayesian reconstruction of positron emission tomography images.
Abstract: Reports on a new method in which spatially correlated magnetic resonance (MR) or X-ray computed tomography (CT) images are employed as a source of prior information in the Bayesian reconstruction of positron emission tomography (PET) images. This new method incorporates the correlated structural images as anatomic templates which can be used for extracting information about boundaries that separate regions exhibiting different tissue characteristics. In order to avoid the possible introduction of artifacts caused by discrepancies between functional and anatomic boundaries, the authors propose a new method called the "weighted line site" method, in which a prior structural image is employed in a modified updating scheme for the boundary variable used in the iterative Bayesian reconstruction. This modified scheme is based on the joint probability of structural and functional boundaries. As to the structural information provided by CT or MR images, only those which have high joint probability with the corresponding PET data are used, whereas other boundary information that is not supported by the PET image is suppressed. The new method has been validated by computer simulation and phantom studies. The results of these validation studies indicate that this new method offers significant improvements in image quality when compared to other reconstruction algorithms, including the filtered backprojection method and the maximum likelihood approach, as well as the Bayesian method without the use of the prior boundary information.

Patent
25 Jan 1994
TL;DR: In this paper, the authors proposed a method for performing image compression that eliminates redundant and invisible image components using a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed.
Abstract: A method for performing image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
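The DCT-plus-quantization-matrix pipeline described above can be sketched generically. This illustrates the standard transform-coding step, not the patent's adaptive matrix design; function names are invented here.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def quantize(block, Q):
    """Forward 2D DCT, then divide by the quantization matrix Q and round.
    Larger Q entries discard more of the corresponding coefficient."""
    C = dct_matrix(block.shape[0])
    return np.round((C @ block @ C.T) / Q)

def dequantize(coef, Q):
    """Rescale quantized coefficients and apply the inverse 2D DCT."""
    C = dct_matrix(coef.shape[0])
    return C.T @ (coef * Q) @ C
```

The patent's contribution is choosing Q per image via visual masking and error pooling so that, for a given bit rate, the quantization error is perceptually minimal.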

Journal ArticleDOI
TL;DR: It is concluded that fiducial markers such as stereotaxic Z frames that are not rigidly fixed to a patient's skull are inaccurate compared with other registration techniques, Talairach coordinate transformations provide surprisingly good registration, and minimizing the variance of MRI-MRI, PET-PET, or MRI-PET ratio images provides significantly better registration than all other techniques tested.
Abstract: Objective A variety of methods for matching intrasubject MRI-MRI, PET-PET, or MRI-PET image pairs have been proposed. Based on the rigid body transformations needed to align pairs of high-resolution MRI scans and/or simulated PET scans (derived from these MRI scans), we obtained general comparisons of four intrasubject image registration techniques: Talairach coordinates, head and hat, equivalent internal points, and ratio image uniformity. In addition, we obtained a comparison of stereotaxic Z frames with a customized head mold for MRI-MRI image pairs. Materials and methods and results Each technique was quantitatively evaluated using the mean and maximum voxel registration errors for matched voxel pairs within the brain volumes being registered. Conclusion We conclude that fiducial markers such as stereotaxic Z frames that are not rigidly fixed to a patient's skull are inaccurate compared with other registration techniques, Talairach coordinate transformations provide surprisingly good registration, and minimizing the variance of MRI-MRI, PET-PET, or MRI-PET ratio images provides significantly better registration than all other techniques tested. Registration optimization based on measurement of the similarity of spatial distributions of voxel values is superior to techniques that do not use such information.
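The ratio-image-uniformity criterion that performed best can be sketched in 1D. This is a toy illustration with invented names; the paper registers 3D volumes under rigid-body transformations.

```python
import numpy as np

def riu_cost(a, b):
    """Ratio-image-uniformity cost: normalized standard deviation of the
    voxelwise ratio. Perfectly registered images of the same object give
    a spatially constant ratio (zero cost), even under global rescaling."""
    r = a / b
    return np.std(r) / np.mean(r)

def best_shift(a, b, shifts):
    """Register b to a over a set of candidate integer shifts by
    minimizing the ratio-image variance."""
    return min(shifts, key=lambda s: riu_cost(a, np.roll(b, s)))
```

Invariance to global intensity scaling is what makes this criterion usable across modalities and acquisitions with different gains.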

Journal ArticleDOI
TL;DR: The results show that scattered radiation can reduce contrast significantly in portal films while deteriorating image quality only moderately in on-line systems, and the reduction in DSNR depends on the light collection efficiency and the noise characteristics of the TV camera.
Abstract: The physical characteristics of x rays scattered by the patient and reaching the imaging detector, as well as their effect on verification (portal) image quality, were investigated for megavoltage (0.1–20 MeV) x‐ray beams. Monte Carlo calculations and experimental measurements were used to characterize how the scatter and primary fluences at the detector plane were influenced by scattering geometry and the energy spectrum of the incident beam. The calculated scatter fluences were differentiated according to photon energy and scattering process. Scatter fractions were measured on a medical linear accelerator (Clinac 2100c, 6 MV) for a typical imaging geometry using an ionization chamber and a silicon diode. After correction for the energy dependence of the chamber and diode, the scatter fractions generated by the Monte Carlo simulations were found to be in excellent agreement with the measured results. In order to estimate the effect of scatter on image quality, the scatter and primary signals (i.e., energy deposited) produced in five different types of portal imaging detectors (lead plate/film, storage phosphor alone, lead plate/storage phosphor, Compton recoil‐electron detector, and a copper plate/Gd2O2S phosphor) were calculated. The results show that, for a specified geometry, the scatter fraction can vary by an order of magnitude, depending on the sensitivity of the imaging detector to low‐energy (<1 MeV) scattered radiation. For a common portal imaging detector (copper plate/Gd2O2S phosphor), the scattered radiation (i) reduced contrast by as much as 50% for a fixed display‐contrast system, and (ii) decreased the differential‐signal‐to‐noise ratio (DSNR) by 10%–20% for a quantum‐noise‐limited portal imaging system. For currently available TV‐camera‐based portal imaging systems, which have variable display contrast, the reduction in DSNR depends on the light collection efficiency and the noise characteristics of the TV camera.
Overall, these results show that scattered radiation can reduce contrast significantly in portal films while deteriorating image quality only moderately in on‐line systems.

Journal ArticleDOI
TL;DR: The authors report here a scheme that yields the most efficient reconstruction without orthogonalization: projections are organized and accessed in a nominally multilevel fashion; the resulting ART is better in image quality than the Fourier back-projection algorithm, at least for a smaller number of projections.
Abstract: The practical performance of algebraic reconstruction techniques (ART) for computed tomography (CT) depends heavily on the order in which the projections are considered. Complete orthogonalization, notwithstanding its theoretical justification, is not feasible because the computational time is prohibitive. The authors report here a scheme that yields the most efficient reconstruction without orthogonalization: projections are organized and accessed in a nominally multilevel fashion. Each level makes the best use of the image information reconstructed in the preceding levels. If the number of projections is a power of two, the access order is exactly that of the 1D FFT. The authors' scheme can be easily implemented. Using it, one iteration of ART yields a high-quality image. Experimental results of this algorithm are demonstrated and compared with the results from the conventional sequential method and also random ordering. Comparisons show that this scheme is superior. ART is better in image quality than the Fourier back-projection algorithm, at least for a smaller number of projections. Since the authors have made it much more efficient in computational speed, ART could now find widespread use in medical imaging.
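For a power-of-two number of projections, the FFT access order the abstract refers to is the bit-reversal permutation: each new projection visited is maximally separated from those already used. A short sketch (helper name is illustrative):

```python
def multilevel_order(n):
    """Bit-reversal access order for n projections (n a power of two).

    This is the order in which ART would visit the projections so that
    each level interleaves projections as far as possible from those
    already incorporated in the reconstruction.
    """
    bits = n.bit_length() - 1
    order = []
    for i in range(n):
        rev = 0
        for b in range(bits):
            if i & (1 << b):          # bit b of i set ...
                rev |= 1 << (bits - 1 - b)  # ... goes to mirrored position
        order.append(rev)
    return order

# For 8 projections: [0, 4, 2, 6, 1, 5, 3, 7]
```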

Patent
Reitan Ronald C
13 May 1994
TL;DR: In this paper, a system, apparatus and method for testing the functional components of an electronic digital imaging system is described, which relies on a closed loop analysis to test system components by measuring a set of statistical image quality metrics.
Abstract: A system, apparatus and method for testing the functional components of an electronic digital imaging system is described. The system includes apparatus for image acquisition, storage, display, communication and printing. The system relies on a closed loop analysis to test system components by measuring a set of statistical image quality metrics. The expected set of statistics are in the form of special purpose features stored as a data set representative of an expected reference object. The closed loop analysis measures, for example, the quality of the printing component of the system by outputting a copy of the expected reference image, using the acquisition component to input the copy of the expected reference image, and then comparing the statistics against threshold values representative of an ideally operating component. The comparison of statistics against the threshold values provides a go/no-go measure of component performance and can indicate sources of system degradation.
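The closed-loop go/no-go comparison can be pictured as a dictionary of measured image-quality statistics checked against the stored reference data set within per-metric tolerances. A hedged sketch (metric names and values are hypothetical, not from the patent):

```python
def check_component(measured, reference, tolerances):
    """Go/no-go closed-loop check: compare measured image-quality
    statistics against the stored reference values for the expected
    reference object. Returns (passed, list of failing metric names)."""
    failures = [name for name, value in measured.items()
                if abs(value - reference[name]) > tolerances[name]]
    return (not failures, failures)

# Hypothetical statistics for a printer-then-scanner loop test.
measured  = {"mtf50": 0.42, "noise_std": 3.1, "mean_density": 1.18}
reference = {"mtf50": 0.45, "noise_std": 2.8, "mean_density": 1.20}
tolerance = {"mtf50": 0.05, "noise_std": 0.5, "mean_density": 0.10}
ok, failing = check_component(measured, reference, tolerance)
# ok -> True (all metrics within tolerance, component passes)
```

A failing metric both fails the go/no-go test and names the likely source of degradation, which matches the patent's diagnostic claim.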

Journal ArticleDOI
TL;DR: On the basis of subjective assessment of image quality and the computational efficiency of the algorithm, wavelet-base techniques appear promising for the compression of digitized radiographs.
Abstract: Image data compression is an enabling technology for teleradiology and picture archive and communication systems. Compression decreases the time and cost of image transmission and the requirements for image storage. Wavelets, discovered in 1987, constitute a new compression technique that has been described in radiologic publications but, to our knowledge, no previous studies of its use have been reported. The purpose of this study was to demonstrate the application of wavelet-based compression technology to digitized radiographs. Twelve radiographs with abnormal findings were digitized, compressed, and decompressed by using a new wavelet-based lossy compression algorithm. Images were compressed at ratios from 10:1 to 60:1. Seven board-certified radiologists reviewed images on a two-headed, high-resolution (2K x 2K) diagnostic workstation. Paired original and compressed/decompressed images were presented in random order. Reviewers adjusted contrast and magnification to judge whether image degradation was p...
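Wavelet-based lossy compression of the kind studied here works by transforming the image, discarding small detail coefficients, and inverting the transform. A toy 1D Haar sketch of that pipeline (a simplification for illustration; the study's actual algorithm is not specified):

```python
def haar_1d(signal):
    # One level of the Haar wavelet transform: pairwise averages + details.
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    dets = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs, dets

def compress(signal, keep_ratio):
    # Lossy step: zero the smallest detail coefficients, keeping roughly
    # keep_ratio of them (keep_ratio = 0.1 approximates a 10:1 ratio on
    # the detail band in this toy model).
    avgs, dets = haar_1d(signal)
    k = max(1, round(len(dets) * keep_ratio))
    thresh = sorted((abs(d) for d in dets), reverse=True)[k - 1]
    dets = [d if abs(d) >= thresh else 0.0 for d in dets]
    return avgs, dets

def reconstruct(avgs, dets):
    # Inverse Haar transform: averages +/- details restore the pairs.
    out = []
    for a, d in zip(avgs, dets):
        out += [a + d, a - d]
    return out
```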

Journal ArticleDOI
TL;DR: A multicolor fluorescence imaging system applied to medical diagnostics, which simultaneously records four fluorescence images in different wavelength bands, permitting low-resolution spectroscopy imaging.
Abstract: A multicolor fluorescence imaging system applied to medical diagnostics is described. The system presented simultaneously records four fluorescence images in different wavelength bands, permitting low-resolution spectroscopy imaging. An arithmetic function image of the four spectral images is constructed by a pixel-to-pixel calculation and is presented on a monitor in false-color coding. A sensitive detector is required for minimizing the excitation energy necessary to obtain an image and thus avoid side effects on the investigated tissue. Characteristics of the system of importance for the detector sensitivity as well as image quality are discussed. A high degree of suppression of ambient background light is reached with this system by the use of a pulsed laser as an excitation source together with gated detection. Examples of fluorescence images from tumors on the hind legs and in the brain of rats injected with Photofrin are given.
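The pixel-to-pixel arithmetic combining the four spectral images can be sketched generically; the particular contrast function below (tumor-marker band over total signal) is a hypothetical example, since the abstract does not state which function was used:

```python
def function_image(bands, func):
    # bands: four equally sized spectral images (lists of rows);
    # func: pixel-wise arithmetic combining the four band values
    # into one value for false-color display.
    h, w = len(bands[0]), len(bands[0][0])
    return [[func(*(b[y][x] for b in bands)) for x in range(w)]
            for y in range(h)]

# Hypothetical contrast function: one band normalized by the total,
# suppressing common-mode intensity variation across the tissue.
ratio = lambda a, b, c, d: a / (a + b + c + d)
```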

Journal ArticleDOI
TL;DR: A phase-diversity wave-front sensor has been developed and tested at the Lockheed Palo Alto Research Labs (LPARL), which consists of two CCD-array focal planes that record the best-focus image of an adaptive imaging system and an image that is defocused.
Abstract: A phase-diversity wave-front sensor has been developed and tested at the Lockheed Palo Alto Research Labs (LPARL). The sensor consists of two CCD-array focal planes that record the best-focus image of an adaptive imaging system and an image that is defocused. This information is used to generate an object-independent function that is the input to a LPARL-developed neural network algorithm known as the General Regression Neural Network (GRNN). The GRNN algorithm calculates the wave-front errors that are present in the adaptive optics system. A control algorithm uses the calculated values to correct the errors in the optical system. Simulation studies and closed-loop experimental results are presented.

Journal ArticleDOI
TL;DR: The results clearly demonstrate that, by modeling the imaging process and/or image degrading factors three-dimensionally, quantitative reconstruction and compensation methods provide the best image quality and quantitative accuracy.

Journal ArticleDOI
TL;DR: High-quality mammographic images enhance the radiologist's ability to interpret mammograms with high sensitivity for detecting abnormalities and high specificity for classifying lesions suspicious for malignancy.
Abstract: High-quality mammographic images enhance the radiologist's ability to interpret mammograms with high sensitivity for detecting abnormalities and high specificity for classifying lesions suspicious for malignancy. In addition to proper exposure, contrast, resolution, compression, and positioning, high-quality mammographic images must be accompanied by pertinent history and available comparison images. To avoid negating the benefits of technically ideal images, mammograms must be viewed under optimal viewing conditions. Constant attention to quality control, with every image evaluated for adherence to strict technical standards, is essential for maintaining image quality.

Patent
26 Jul 1994
TL;DR: In this paper, a video signal, replayed by a video player, is subjected to image processing by a calculation control part of a video image searching device to calculate the image quality value and another feature value.
Abstract: A video signal, replayed by a video player, is subjected to image processing by a calculation control part of a video image searching device to calculate the image quality value and another feature value. An image whose image quality satisfies a predetermined condition and whose feature value matches a predetermined condition is detected as an image in which an event has occurred. Such images are printed on an output paper together with associated additional information. A main control part of a video image access device reads out the additional information on the output paper with a scanner and effects control via a video control part to search for the image corresponding to that information on the video player.

Journal ArticleDOI
TL;DR: Results show that transmitting shape information and allowing small position errors (geometrical distortions) avoids the mosquito and blocking artefacts of a block-oriented coder, and the reconstructed image of an object-oriented analysis-synthesis coder appears sharper compared to block- oriented hybrid coding.
Abstract: An object-oriented analysis-synthesis coder is presented, concentrating on the optimal relationship of its components (image analysis, image synthesis and parameter coding) and on a comparison of its coding efficiency with block-oriented hybrid coding. As the block-oriented hybrid coder, the RM8 of the CCITT is used. The presented object-oriented analysis-synthesis coder is based on the source model of moving flexible 2D-objects and encodes arbitrarily shaped objects instead of rectangular blocks. The objects are described by three parameter sets defining their motion, shape and colour (colour parameters denoting luminance as well as chrominance values of the object surface). The parameter sets of each object are obtained by image analysis and coded by an object-dependent parameter coding. Using the coded parameter sets, an image can be reconstructed by model-based image synthesis. Experimental results show that transmitting shape information and allowing small position errors (geometrical distortions) avoids the mosquito and blocking artefacts of a block-oriented coder. Furthermore, important image areas such as facial areas can be reconstructed with an image quality improvement of up to 4 dB using the image analysis. As a whole, the reconstructed image of an object-oriented analysis-synthesis coder appears sharper compared to block-oriented hybrid coding.

Journal ArticleDOI
TL;DR: A method of calculating numerically the optical transfer function appropriate to any type of image motion and vibration, including random ones, has been developed and an analytical approximation to the probability density function for random blur has been obtained.
Abstract: A method of calculating numerically the optical transfer function appropriate to any type of image motion and vibration, including random ones, has been developed. We compare the numerical calculation method to the experimental measurement; the close agreement justifies implementation in image restoration for blurring from any type of image motion. In addition, statistics regarding the limitation of resolution as a function of relative exposure time for low-frequency vibrations involving random blur are described. An analytical approximation to the probability density function for random blur has been obtained. This can be used for the determination of target acquisition probability. A comparison of image quality is presented for three different types of motion: linear, acceleration, and high-frequency vibration for the same blur radius. The parameter considered is the power spectrum of the picture.
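The numerical OTF idea above amounts to averaging the phasor exp(-2*pi*i*f*x(t)) over the sampled motion path during the exposure; for uniform linear motion of extent d this reduces to the familiar |sinc(f*d)| MTF. A minimal sketch of both (function names are illustrative):

```python
import math

def mtf_linear_motion(f, d):
    # Closed-form MTF for uniform linear image motion of extent d
    # during the exposure: |sin(pi*f*d) / (pi*f*d)|.
    x = math.pi * f * d
    return 1.0 if x == 0 else abs(math.sin(x) / x)

def mtf_numeric(f, displacement_samples):
    # Numerical OTF magnitude for an arbitrary sampled motion path x(t)
    # (linear, accelerated, vibrational, or random): average the phasor
    # exp(-2*pi*i*f*x(t)) over the exposure and take its modulus.
    n = len(displacement_samples)
    re = sum(math.cos(2 * math.pi * f * x) for x in displacement_samples) / n
    im = sum(math.sin(2 * math.pi * f * x) for x in displacement_samples) / n
    return math.hypot(re, im)
```

Feeding `mtf_numeric` a random displacement record is exactly the "any type of image motion, including random ones" case; the closed-form `mtf_linear_motion` serves as a cross-check for the linear case.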