
Showing papers on "Image quality published in 1991"


Proceedings ArticleDOI
01 Jul 1991
TL;DR: This paper introduces a projection approach for directly rendering rectilinear, parallel-projected sample volumes that takes advantage of coherence across cells and the identical shape of their projection and considers the repercussions of various methods of integration in depth and interpolation across the scan plane.
Abstract: Direct volume rendering offers the opportunity to visualize all of a three-dimensional sample volume in one image. However, processing such images can be very expensive and good quality high-resolution images are far from interactive. Projection approaches to direct volume rendering process the volume region by region as opposed to ray-casting methods that process it ray by ray. Projection approaches have generated interest because they use coherence to provide greater speed than ray casting and generate the image in a layered, informative fashion. This paper discusses two topics: First, it introduces a projection approach for directly rendering rectilinear, parallel-projected sample volumes that takes advantage of coherence across cells and the identical shape of their projection. Second, it considers the repercussions of various methods of integration in depth and interpolation across the scan plane. Some of these methods take advantage of Gouraud-shading hardware, with advantages in speed but potential disadvantages in image quality.
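
As a point of reference for the depth-integration step discussed above, the following minimal sketch (a generic front-to-back compositing loop, not the paper's cell-projection algorithm) shows how samples ordered along the viewing direction are accumulated into a pixel:

# Minimal front-to-back compositing of samples along one viewing ray, illustrating
# the "integration in depth" step that projection renderers perform per pixel.
# This is a generic sketch, not the cell-projection method of the paper.
import numpy as np

def composite_front_to_back(colors, opacities):
    """colors: (n, 3) RGB samples ordered front to back; opacities: (n,) in [0, 1]."""
    out_color = np.zeros(3)
    transmittance = 1.0                     # fraction of light still passing through
    for c, a in zip(colors, opacities):
        out_color += transmittance * a * c  # this sample's contribution
        transmittance *= (1.0 - a)          # light attenuated for samples behind
        if transmittance < 1e-4:            # early termination once nearly opaque
            break
    return out_color

# Example: three samples along a ray
print(composite_front_to_back(np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], float),
                              np.array([0.3, 0.5, 0.9])))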

268 citations


Journal ArticleDOI
TL;DR: Calculations of both the full width at half-maximum and the shape of the profiles were in good agreement with experimental results, and the effect of the widened profiles, in particular of their extended tail ends, on image quality is demonstrated in phantom measurements.
Abstract: CT scanning in spiral geometry is achieved by continuously transporting the patient through the gantry in synchrony with continuous data acquisition over a multitude of 360-deg scans. Data for reconstruction of images in planar geometry are estimated from the spiral data by interpolation. The influence of spiral scanning on image quality is investigated. Most of the standard physical performance parameters, e.g., spatial resolution, image uniformity, and contrast, are not affected; results differ for pixel noise and slice sensitivity profiles. For linear interpolation, pixel noise is expected to be reduced by a factor of 0.82; reduction factors of 0.81 to 0.83 were measured. Slice sensitivity profiles are changed as a function of table feed d, measured in millimeters per 360-deg scan; they are smoothed as the original profile is convolved with the object motion function. The motion function derived for linear interpolation is a triangle with a baseline width of 2d and a maximum height of 1/d. Calculations of both the full width at half-maximum and the shape of the profiles were in good agreement with experimental results. The effect of the widened profiles, in particular of their extended tail ends, on image quality is demonstrated in phantom measurements.
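
A small numerical sketch of the profile broadening described above; the table feed d and collimation s used here are arbitrary assumed values, and the code only illustrates convolving a nominal rectangular slice profile with the triangular motion function of linear interpolation:

# Sketch (not the authors' code): for linear interpolation the motion function is a
# triangle of base 2d and peak 1/d (unit area); convolving the nominal slice profile
# with it broadens the slice sensitivity profile.
import numpy as np

d = 10.0                      # table feed in mm per 360-deg scan (assumed value)
s = 10.0                      # nominal slice collimation in mm (assumed value)
z = np.arange(-30, 30, 0.01)  # position along the patient axis, mm
dz = z[1] - z[0]

motion = np.clip(1.0 / d - np.abs(z) / d**2, 0.0, None)   # triangle, area = 1
nominal = (np.abs(z) <= s / 2).astype(float)               # ideal rectangular profile
spiral = np.convolve(nominal, motion, mode="same") * dz    # broadened spiral profile

def fwhm(profile, z):
    half = profile.max() / 2
    idx = np.where(profile >= half)[0]
    return z[idx[-1]] - z[idx[0]]

print("FWHM nominal:", round(fwhm(nominal, z), 2), "mm; FWHM spiral:", round(fwhm(spiral, z), 2), "mm")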

247 citations


Journal Article
TL;DR: ANALYZE offers the potential to accurately and reproducibly examine, from images, the structure and function of any cell, tissue, limb, organ or organ system of the body, much like a surgeon or pathologist might do in real life, but entirely non-invasively, without pain or destruction of tissue.
Abstract: A comprehensive software system called ANALYZE has been developed which permits detailed investigation and evaluation of 3-D biomedical images. The software can be used with any 2-D or 3-D imaging modality, including x-ray computed tomography, radionuclide emission tomography, ultrasound tomography, magnetic resonance imaging and both light and electron microscopy. The package is unique in its synergistic integration of fully interactive modules for direct display, manipulation and measurement of multidimensional image data. Several original algorithms are included which improve image display efficiency and quality. One of the most versatile and powerful algorithms is interactive volume rendering, which is optimized to be fast without compromising image quality. An important advantage of this technique is to display 3-D images directly from the original data and to provide on-the-fly combinations of selected image transformations and/or volume set operations (union, intersection, difference, etc.). The inclusion of a variety of interactive editing and quantitative mensuration tools significantly extends the usefulness of the software. Any curvilinear path or region-of-interest can be manually specified and/or automatically segmented for numerical determination and statistical analyses of distances, areas, volumes, shapes, densities and textures. ANALYZE is written entirely in "C" and runs on several standard UNIX workstations. It is being used in a variety of applications by over 40 institutions around the world, and has been licensed by Mayo to several imaging companies. The software architecture permits systematic enhancements and upgrades which has fostered development of a readily expandable package. ANALYZE comprises a powerful "visualization workshop" for rapid prototyping of specific application packages, including applications to interactive surgery simulation and radiation treatment planning. ANALYZE offers the potential to accurately and reproducibly examine, from images, the structure and function of any cell, tissue, limb, organ or organ system of the body, much like a surgeon or pathologist might do in real life, but entirely non-invasively, without pain or destruction of tissue. These capabilities promise exciting new insights into the basic processes of life, and major advances in health care delivery through improved diagnosis and treatment of disease.

194 citations


Journal ArticleDOI
TL;DR: The results indicate that excellent quantitative integrity can be achieved when straightforward artifact reduction techniques are employed to reduce veiling glare, pincushion distortion, dc bias, and residual shading effects.
Abstract: Image intensifier–television–video digitizer (IITVD) systems are commonly used for digital planar image acquisition in radiology. However, the well‐known distortions inherent in these systems limit their utility in research and in some clinical applications where quantitatively correct images are required. Software correction techniques have been implemented which restore both the spatial and grey scale quantitative integrity, allowing IITVD systems to be used as analytical research tools. Previously reported and novel correction techniques were used to reduce veiling glare, pincushion distortion, dc bias, and residual shading effects. The results indicate that excellent quantitative integrity can be achieved when these straightforward artifact reduction techniques are employed.

117 citations


Journal ArticleDOI
TL;DR: An iterative reconstruction method which minimizes the effects of ill-conditioning is discussed; a regularization method which integrates prior information into the image reconstruction was developed, improving the conditioning of the information matrix in the modified Newton-Raphson algorithm.
Abstract: An iterative reconstruction method which minimizes the effects of ill-conditioning is discussed. Based on the modified Newton-Raphson algorithm, a regularization method which integrates prior information into the image reconstruction was developed. This improves the conditioning of the information matrix in the modified Newton-Raphson algorithm. Optimal current patterns were used to obtain voltages with maximal signal-to-noise ratio (SNR). A complete finite element model (FEM) was used for both the internal and the boundary electric fields. Reconstructed images from phantom data show that the use of regularization, optimal current patterns, and a complete FEM model improves image accuracy. The authors also investigated factors affecting the image quality of the iterative algorithm such as the initial guess, image iteration, and optimal current updating.
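
A schematic sketch of one regularized (modified) Newton-Raphson update of the kind described above; forward(), jacobian(), v_meas, the prior matrix R, and the weight lam are placeholders standing in for the authors' FEM model and measured voltages, not their implementation:

# One regularized Newton-Raphson step for conductivity reconstruction (schematic).
import numpy as np

def regularized_newton_step(sigma, forward, jacobian, v_meas, R, lam):
    J = jacobian(sigma)                 # sensitivity of boundary voltages to conductivity
    residual = v_meas - forward(sigma)  # mismatch between measured and predicted voltages
    info = J.T @ J + lam * R            # information matrix, conditioned by the prior term
    delta = np.linalg.solve(info, J.T @ residual)
    return sigma + delta                # updated conductivity image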

114 citations


Journal ArticleDOI
TL;DR: In this paper, the detection quantum efficiency (DQE), as determined by the pulse height distribution, and the point spread function (PSF) are estimated as functions of energy, scintillator thickness, and numerical aperture.

98 citations


Proceedings ArticleDOI
14 Apr 1991
TL;DR: The goal of this research is to develop interpolation techniques which preserve or enhance the local structure critical to image quality, and preliminary results are presented which exploit either the properties of vision or the properties of the image in order to achieve these goals.
Abstract: The goal of this research is to develop interpolation techniques which preserve or enhance the local structure critical to image quality. Preliminary results are presented which exploit either the properties of vision or the properties of the image in order to achieve the goals. Directional image interpolation is considered which is based on a local analysis of the spatial image structure. Also considered is the extension of previously reported techniques for designing linear filters based on properties of human perception, applied here to enhance the perceived quality of interpolated images.
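
A toy illustration of direction-dependent interpolation in the spirit of the local structural analysis described above (not the authors' algorithm): a new pixel at the centre of a 2x2 neighbourhood is interpolated along whichever diagonal varies least, so edges are not smeared across:

# Interpolate the centre of a 2x2 neighbourhood along the smoother diagonal.
import numpy as np

def interpolate_center(block):
    """block: 2x2 array of known pixels; returns the interpolated centre value."""
    a, b = block[0, 0], block[0, 1]
    c, d = block[1, 0], block[1, 1]
    if abs(a - d) < abs(b - c):     # main diagonal is smoother
        return (a + d) / 2.0
    else:                           # anti-diagonal is smoother
        return (b + c) / 2.0

print(interpolate_center(np.array([[10, 200], [200, 12]], float)))  # interpolates along the 10-12 edge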

92 citations


Proceedings ArticleDOI
01 Jun 1991
TL;DR: Rather than studying perceptually lossless compression, researchers must determine what types of lossy transformations are least disturbing to the human observer.
Abstract: Several topics connecting basic vision research to image compression and image quality are discussed: (1) A battery of about 7 specially chosen simple stimuli should be used to tease apart the multiplicity of factors affecting image quality. (2) A 'perfect' static display must be capable of presenting about 135 bits/min². This value is based on the need for 3 pixels/min and 15 bits/pixel. (3) Image compression allows the reduction from 135 to about 20 bits/min² for perfect image quality. 20 bits/min² is the information capacity of human vision. (4) A presumed weakness of the JPEG standard is that it does not allow for Weber's Law nonuniform quantization. We argue that this is an advantage rather than a weakness. (5) It is suggested that all compression studies should report two numbers separately: the amount of compression achieved from quantization and the amount from redundancy coding. (6) The DCT, wavelet and viewprint representations are compared. (7) Problems with extending perceptual losslessness to moving stimuli are discussed. Our approach of working with a 'perfect' image on a 'perfect' display with 'perfect' compression is not directly relevant to the present situation with severely limited channel capacity. Rather than studying perceptually lossless compression we must carry out research to determine what types of lossy transformations are least disturbing to the human observer. Transmission of 'perfect', lossless images will not be practical for many years.
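
Reading 'min' as minutes of arc, the 135 bits/min² figure follows directly from the two stated requirements:

\[
\left(3\ \tfrac{\text{pixels}}{\text{min}}\right)^{2} \times 15\ \tfrac{\text{bits}}{\text{pixel}} \;=\; 9 \times 15 \;=\; 135\ \tfrac{\text{bits}}{\text{min}^{2}},
\]

so the reduction to about 20 bits/min² claimed in point (3) corresponds to a compression ratio of roughly 7:1.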

90 citations


Journal ArticleDOI
TL;DR: Modifications to Kaufman's original acceleration algorithm are introduced, which involve extensions in considering truncated data and an alternative way of implementing the search for an optimal step size.
Abstract: Maximum-likelihood image restoration for noncoherent imagery, which is based on the generic expectation maximization (EM) algorithm of Dempster et al. [ J. R. Stat. Soc. B39, 1 ( 1977)], is an iterative method whose convergence can be slow. We discuss an accelerative version of this algorithm. The EM algorithm is interpreted as a hill-climbing technique in which each iteration takes a step up the likelihood functional. The basic principle of the acceleration technique presented is to provide larger steps in the same vector direction and to find some optimal step size. This basic line-search principle is adapted from the research of Kaufman [ IEEE Trans. Med. Imag. MI-6, 37 ( 1987)]. Modifications to her original acceleration algorithm are introduced, which involve extensions in considering truncated data and an alternative way of implementing the search for an optimal step size. Log-likelihood calculations and reconstructed images from simulations show that execution time is shortened relative to the nonaccelerated algorithm by approximately a factor of 7.
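
A minimal sketch of the line-search idea for a Poisson/EM (Richardson-Lucy) deconvolution step; blur() is a placeholder for convolution with the system PSF (assumed symmetric and normalized), and the fixed list of trial step sizes is a simplification of the optimal step-size search discussed in the paper:

# Take the ordinary EM step, treat it as a search direction, and keep the candidate
# along that direction with the highest Poisson log-likelihood.
import numpy as np

def loglik(x, y, blur):
    hx = np.clip(blur(x), 1e-12, None)
    return np.sum(y * np.log(hx) - hx)      # Poisson log-likelihood (up to a constant)

def accelerated_em_step(x, y, blur, steps=(1.0, 2.0, 4.0, 8.0)):
    # Ordinary EM update; with a symmetric, normalized PSF the same blur() serves as H and its adjoint.
    em = x * blur(y / np.clip(blur(x), 1e-12, None))
    direction = em - x                      # the EM step defines the search direction
    candidates = [np.clip(x + a * direction, 0.0, None) for a in steps]
    return max(candidates, key=lambda c: loglik(c, y, blur))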

85 citations


Journal ArticleDOI
TL;DR: The results indicate that phase aberrations significantly degrade breast image quality for typical transducer frequencies and sizes.

83 citations


Journal ArticleDOI
TL;DR: A combination of Smith prediction and optimal signal estimation that allows a system of several simulated interacting gaze-holding controls to act together coherently, despite delays and shared output variables, to produce zero-latency tracking of a moving target is described.
Abstract: Vision systems that hold their gaze on a visual target using binocular, maneuverable computer vision hardware are discussed. The benefits of gaze holding are identified, and the role of binocular cues and vergence in implementing a gaze-holding system is addressed. A combination of Smith prediction and optimal signal estimation that allows a system of several simulated interacting gaze-holding controls to act together coherently, despite delays and shared output variables, to produce zero-latency tracking of a moving target is described. It is shown how tracking the object increases its signal by improving its image quality and decreases the signal of competing objects by decreasing their image quality. The implementation of a subset of the gaze-holding capabilities (gross vergence and smooth tracking) on real-time computer vision hardware is described.

Patent
05 Aug 1991
TL;DR: In this paper, a variable lowpass filter (blur) operation on block boundaries that is based on the coefficients of the transformed data is used to reduce the artifacts of block transform image compression.
Abstract: A method of improving image quality when using block transform image compression algorithms by applying a variable lowpass filter (blur) operation on block boundaries that is based on the coefficients of the transformed data. Block artifacts are reduced by adaptively blurring the block boundaries based on the frequency content of the blocks: low-frequency blocks are heavily blurred, while high-frequency blocks receive very little blur.
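
A toy sketch of the adaptive idea (not the patented method): estimate how much of a block's AC energy lies in high DCT frequencies, and blur the boundary pixels more when the block is smooth. The 8x8 block size, frequency split, and blending rule are illustrative assumptions:

# Blur weight from DCT coefficient energy, plus a simple boundary-pixel blend.
import numpy as np
from scipy.fft import dctn

def blur_weight(block):
    """block: 8x8 pixel array. Returns blur strength in [0, 1] (1 = blur heavily)."""
    c = dctn(block.astype(float), norm="ortho")
    high = np.add.outer(np.arange(8), np.arange(8)) >= 4    # high-frequency coefficient mask
    high_energy = np.sum(c[high] ** 2)
    total_ac = np.sum(c ** 2) - c[0, 0] ** 2
    if total_ac < 1e-9:
        return 1.0                                          # flat block: blur its boundary fully
    return float(1.0 - high_energy / total_ac)

def smooth_boundary_pair(p_left, p_right, w):
    """Blend the two pixels straddling a block boundary toward their mean with strength w."""
    avg = 0.5 * (p_left + p_right)
    return (1 - w) * p_left + w * avg, (1 - w) * p_right + w * avg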

Proceedings ArticleDOI
01 Jun 1991
TL;DR: In this article, the vector quantization algorithm proposed by Equitz was applied to the problem of efficiently selecting colors for a limited image palette, and the algorithm performed the quantization by merging pairwise nearest neighbor (PNN) clusters.
Abstract: We apply the vector quantization algorithm proposed by Equitz to the problem of efficiently selecting colors for a limited image palette. The algorithm performs the quantization by merging pairwise nearest neighbor (PNN) clusters. Computational efficiency is achieved by using k-dimensional trees to perform fast PNN searches. In order to reduce the number of initial image colors, we first pass the image through a variable-size cubical quantizer. The centroids of colors that fall in each cell are then used as sample vectors for the merging algorithm. Tremendous computational savings are achieved from this initial step with very little loss in visual quality. To account for the high sensitivity of the human visual system to quantization errors in smoothly varying regions of an image, we incorporate activity measures both at the initial quantization step and at the merging step so that quantization is fine in smooth regions and coarse in active regions. The resulting images are of high visual quality. The computation times are substantially smaller than those of the iterative Lloyd-Max algorithm and are comparable to a binary splitting algorithm recently proposed by Bouman and Orchard.
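
A simplified sketch of the pipeline described above: a coarse cubical pre-quantizer reduces the number of distinct colors, then pairwise-nearest-neighbor merging combines clusters until the palette size is reached. The paper accelerates the PNN search with k-dimensional trees and adds activity measures; this brute-force version only illustrates the merge rule and cost:

# Cubical pre-quantization followed by greedy PNN merging of color clusters.
import numpy as np

def cubical_prequantize(pixels, cell=16):
    """pixels: (n, 3) RGB array; returns (centroid, count) pairs on a uniform grid."""
    keys = (pixels // cell).astype(int)
    cells = {}
    for key, p in zip(map(tuple, keys), pixels):
        s, n = cells.get(key, (np.zeros(3), 0))
        cells[key] = (s + p, n + 1)
    return [(s / n, n) for s, n in cells.values()]

def pnn_merge(clusters, palette_size):
    clusters = list(clusters)
    while len(clusters) > palette_size:
        best = None
        for i in range(len(clusters)):          # find the pair whose merge costs least
            for j in range(i + 1, len(clusters)):
                (ci, ni), (cj, nj) = clusters[i], clusters[j]
                cost = (ni * nj) / (ni + nj) * np.sum((ci - cj) ** 2)
                if best is None or cost < best[0]:
                    best = (cost, i, j)
        _, i, j = best
        (ci, ni), (cj, nj) = clusters[i], clusters[j]
        merged = ((ni * ci + nj * cj) / (ni + nj), ni + nj)
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return [c for c, _ in clusters]             # palette centroids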

Journal ArticleDOI
W.J. Welsh
TL;DR: Two model-based coding techniques are described which are capable of producing either improved picture quality at bit-rates around 64 kbit/s or acceptable picture quality in videophone and videoconference systems operating at low bit-rate.
Abstract: The requirement for improved picture quality in videophone and videoconference systems operating at low bit-rates has stimulated interest in model-based image coding. In this paper, two model-based coding techniques are described which are capable of producing either improved picture quality at bit-rates around 64 kbit/s or acceptable picture quality at bit-rates far lower than 64 kbit/s. The first technique produces facial expressions by using feature code-books; the second technique produces facial expressions by distorting an underlying three-dimensional model. The problems of image analysis and synthesis, which are concomitant in model-based coding, are discussed.

Journal ArticleDOI
TL;DR: In this article, the image quality obtainable with shift-and-add (SAA) imaging for the recovery of diffraction-limited information is quantitatively investigated using data simulating an 8m aperture at a site with near-IR seeing conditions as found on Mauna Kea.
Abstract: The image quality obtainable with shift-and-add (SAA) imaging for the recovery of diffraction-limited information is quantitatively investigated using data simulating an 8-m aperture at a site with near-IR seeing conditions as found on Mauna Kea. This is compared to the image quality obtainable from image centroiding and wavefront tip-tilt correction. For good seeing conditions, image centroiding and SAA, which tracks the image peak, show similar performance containing about 30 percent of the image power in a diffraction-limited component. However, as the seeing degrades, SAA consistently yields improved resolution maintaining significantly greater diffraction-limited information. This enhances the detection threshold by about 2 magnitudes over the seeing-limited case for the seeing range studied. By comparison, the gain due to image centroiding decreases to less than 1 magnitude at the same poor seeing limit.
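
A compact sketch of the shift-and-add operation as described (shift each short exposure so its brightest pixel lands on a common reference position, then average); the centroiding variant the paper compares against is indicated in a comment:

# Shift-and-add over a stack of short-exposure frames.
import numpy as np

def shift_and_add(frames):
    """frames: (n_frames, H, W) array of short exposures; returns the SAA image."""
    n, H, W = frames.shape
    out = np.zeros((H, W))
    for f in frames:
        py, px = np.unravel_index(np.argmax(f), f.shape)   # position of the image peak
        # centroiding alternative: use e.g. scipy.ndimage.center_of_mass(f) instead of the peak
        out += np.roll(np.roll(f, H // 2 - py, axis=0), W // 2 - px, axis=1)
    return out / n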

Journal ArticleDOI
TL;DR: A compression technique based on a pyramidal expansion using Gabor-like elementary functions is presented; still images and image sequences are coded by selecting and quantizing the expansion coefficients, and computer simulations show good-quality reconstruction of both.
Abstract: A compression technique based on a pyramidal expansion is presented. The elementary functions of the expansion form a class of pyramidal Gabor functions partitioning the frequency domain into different regions. Still images and moving image sequences are coded by selecting and quantizing the coefficients in this expansion. Computer simulations of the proposed method show good quality reconstruction of both still images and image sequences.

Journal ArticleDOI
TL;DR: An optimisation system for aperture size and ultrasonic frequency is proposed, with signal averaging for resolution enhancement of a defined object area; this would have a compact ultrasonic beam and allow frame rate to be traded for resolution.
Abstract: According to elementary theory, the resolution of an ultrasonic imaging system increases with the ultrasonic frequency. However, frequency is limited by frequency-dependent attenuation. For imaging at any required depth, resolution improvement beyond the limit imposed by ultrasonic frequency can be obtained by increasing the ultrasonic intensity. This is itself, however, dependent on safety considerations and the effects of nonlinearity. In homogeneous media, image resolution increases with decreasing f-number. Particularly at low f-numbers, however, tissue inhomogeneity leads to a deterioration in image quality. Inhomogeneity may also be considered in terms of phase aberration. It has been found that for a given aperture, image degradation due to phase aberration is worse at higher frequencies. Schemes have been proposed for correction of this problem, but so far model systems do not lend themselves to clinical application. Deconvolution is unsatisfactory, speed correction is impracticable and synthetic aperture scanning and holography are virtually useless in biological tissues. Ultrasound-computed tomography has had only limited success. Speckle reduction can improve target detectability, but at the expense of resolution. Time-frequency control provides a useful partial solution to the problem of resolution reduction resulting from attenuation. It is clear that improved resolution would result in significant clinical benefits. An optimisation system for aperture size and ultrasonic frequency is proposed with signal averaging for resolution enhancement of a defined object area. This would have a compact ultrasonic beam and would allow frame rate to be traded for resolution, by means of signal averaging.

Journal ArticleDOI
TL;DR: The results indicate that moving to high resolution imaging matrices requires consideration be given to the sacrifice in low contrast detectability that occurs, and it is shown that filtering a high resolution image to a lower resolution image, through nearest neighbor averaging, does not regain the detectability lost in initially collecting the high resolution image.
Abstract: With the introduction of fast scan techniques and high field imagers, the ability to achieve very high resolution MR images in reasonable imaging times is now possible. Increased resolution allows for better detection of small, high contrast pathological features, but at some cost. Increasing resolution leads to a nonrecoverable decrease in signal-to-noise ratio per pixel and a loss of low contrast detectability for constant imaging time. This article examines the tradeoffs between image resolution, signal-to-noise ratio, and low contrast detectability in MR imaging. Contrast detail curves are presented for images collected in a constant imaging time, with constant field of view and bandwidth but at different resolutions, and these are compared with theoretical curves. The problem of measuring contrast levels in magnitude images, with different resolutions and receiver attenuation values, is discussed and a definition that accommodates these parameters developed. In addition, a clinical example is shown demonstrating a decrease in soft tissue differentiation with increasing resolution, again for fixed imaging time. The results indicate that moving to high resolution imaging matrices requires consideration be given to the sacrifice in low contrast detectability that occurs. Most importantly, it is shown that filtering a high resolution image to a lower resolution image, through nearest neighbor averaging, does not regain the detectability lost in initially collecting the high resolution image.
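
A back-of-the-envelope sketch of the tradeoff described above, using the common rule of thumb that pixel SNR scales as voxel volume times the square root of total data-sampling time; constant field of view, slice thickness, receiver bandwidth, and total imaging time are assumed, so doubling the matrix forces the number of averages to halve. This is an assumption-laden illustration, not the article's contrast-detail analysis:

# Relative pixel SNR under the rule of thumb SNR ~ voxel_volume * sqrt(sampling time).
def relative_snr(matrix_scale, averages_scale):
    voxel_volume = 1.0 / matrix_scale**2               # in-plane pixel area shrinks
    sampling_time = matrix_scale**2 * averages_scale   # more samples per average, fewer averages
    return voxel_volume * sampling_time**0.5

# 256^2 matrix vs 128^2 in the same imaging time (averages halved):
print(relative_snr(2.0, 0.5))   # ~0.35: pixel SNR drops to roughly a third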

Patent
20 Feb 1991
TL;DR: In this paper, a motion estimator makes a decision on a block-by-block basis whether to use the original image or the residual to avoid the unattractive ghost of the previous scene persisting for a short time in a new scene, and the quantization coarseness is adaptively varied based on a computation of the number of bits necessary to statistically code a particular frame.
Abstract: Image quality is improved in high definition television using multi-scale representation of motion compensated residuals. The bandwidths of the subband filters vary with the frequency band and the total number of coefficients in the multi-scale-represented frames is equal to the number of values in the residual. Image initialization in the receivers is achieved using original image leakage, but the leakage factor is varied for different frequency subbands. To free up channel capacity at scene changes, a global (i.e., substantially frame-wide) decision is made as to whether to motion compensate a particular frame. To avoid the unattractive ghost of the previous scene persisting for a short time in a new scene, the motion estimator makes a decision on a block-by-block basis whether to use the original image or the residual. Chrominance resolution is improved by encoding all of the subbands of the chroma residuals, instead of just the low subbands. The chroma residuals are encoded at relatively coarser quantization than the luma residual, but when the energy of the luma residual is low (as, e.g. may occur when there is little motion), chroma quantization is improved, by making an overall (both chroma and luma) reduction in quantization step size. Runlength-amplitude representation and statistical coding are used. Runlength-amplitude representation is applied to entire subbands, and, preferably, different codebooks are used in statistically coding different subbands, to take advantage of the different statistics in the different subbands. The quantization coarseness is adaptively varied based on a computation of the number of bits necessary to statistically code a particular frame, thus guaranteeing for each frame exactly (or approximately, if a small buffer is provided in the decoder) the number of bits available in the channel.

Proceedings ArticleDOI
01 Jul 1991
TL;DR: A method for producing holographic stereograms with reduced geometrical constraints is presented and can offer a combination of large viewing zone, arbitrary viewing distance, minimal image distortion, and high spatial resolution, depending on alterable parameters in the image processing software.
Abstract: A method for producing holographic stereograms with reduced geometrical constraints is presented. The type of holographic stereogram produced, called the ULTRAGRAM, can offer a combination of large viewing zone, arbitrary viewing distance, minimal image distortion, and high spatial resolution, depending on alterable parameters in the image processing software. Computer-based image processing techniques are used to mimic the effect of optical devices while permitting simple re-configurability. The ULTRAGRAM holographic exposure apparatus can be built with reduced attention to the final viewing geometry. An astigmatic computer graphics camera design greatly simplifies image generation. The techniques described are applicable to both one and multi-step stereograms, optical predistortion methods, and both horizontal and full-parallax systems.

Patent
05 Mar 1991
TL;DR: In this article, a system and method for processing images viewed on radiographic film is presented, which includes a scanning laser film digitizer for reading the optical density of pixels in an original film image; a controller which analyzes the image data based on input film type and scanned optical density in order to compute an exposure correction factor which enhances the original image quality; and a laser film writer which creates a new copy of the original data in the improved form.
Abstract: A system and method for processing images viewed on radiographic film. Such images include mammograms, CAT scans, or other standard x-ray type images. The system includes a scanning laser film digitizer for reading the optical density of pixels in an original film image; a controller which analyzes the image data based on input film type and scanned optical density in order to compute an exposure correction factor which enhances the original image quality; and a laser film writer which creates a new copy of the original data in the improved form. The system is capable of writing several proportional non-overlapping images on a single sheet of film. This feature facilitates diagnostic evaluation of related subject matter such as multiple angle views of an injury, simultaneous views of right and left side mammograms, or several CAT scans on a single exposure. The system uses a look-up table to store and access optical density vs. exposure characteristics for several types of radiographic film. This exposure data is used in image processing functions. The look-up table information is also used in the copy writing procedure to insure the correct exposure range is used for a given film type.

Proceedings ArticleDOI
01 Jun 1991
TL;DR: In this paper, the authors assess whether information about viewing behavior can be used for image coding and conclude that incorporating such information into video coding schemes may result in appreciable bandwidth savings.
Abstract: Large savings in bandwidth can be achieved when an image is displayed at full resolution at the center of gaze, and at lower resolution outside this central area. However, these savings require real-time monitoring of the observer's eye-position and real-time processing of the image. Hence, such techniques are limited to a single viewer. It would be useful if a reduction in bandwidth similar to that obtained with a single viewer could be achieved with multiple viewers, and without real-time monitoring of eye-movements. It is clear that this technique would be feasible only if different viewers looked at the same part of an image at the same time. In the present research, twenty-four observers viewed 15 forty-five-second clips of NTSC video while their direction of gaze was monitored. The goal of the research was to assess whether information about viewing behavior could be used for image coding. Our analysis of the viewing behavior showed that there was a substantial degree of agreement among viewers in terms of where they looked. We conclude that incorporating information about viewing behavior into video coding schemes may result in appreciable bandwidth savings.

Journal ArticleDOI
Guido Gerig, Ron Kikinis, W. Kuoni, G. K. von Schulthess, Olaf Kübler
TL;DR: An image analysis scheme is developed that allows the automatic detection of the organ contours, the extraction of the motion parameters per frame, and the registration of images that results in a readjusted image sequence, where organs of interest remain fixed.
Abstract: The most important problem in the analysis of time sequences is the compensation for artifactual motion. Owing to motion, medical images of the abdominal region do not represent organs with fixed configuration. Analysis of organ function with dynamic contrast medium studies using regions of interest (ROIs) is thus not readily accomplished. Images of the organ of interest need to be registered and corrected prior to a detailed local analysis. We have developed an image analysis scheme that allows the automatic detection of the organ contours, the extraction of the motion parameters per frame, and the registration of images. The complete procedure requires only minimal user interaction and results in a readjusted image sequence, where organs of interest remain fixed. Both a visual analysis of the dynamic behavior of functional properties and a quantitative statistical analysis of signal intensity versus time within local ROIs are considerably facilitated using the corrected series.

Proceedings ArticleDOI
14 Apr 1991
TL;DR: Using the EM algorithm, the authors restore a blurred image and quantify the improvement in image quality with both the new metric and the mean square error (MSE).
Abstract: A new image quality metric consistent with the properties of the human visual system is derived. Using the EM algorithm, the authors restore a blurred image and quantify the improvement in image quality with both the new metric and the mean square error (MSE). From these results, the advantages of the new metric are obvious. The EM algorithm is modified according to the underlying mathematical structure of the new metric, which results in improved performance.

Patent
Takemura Yasuo
29 Mar 1991
TL;DR: In this paper, the image signal from the signal processing circuit is inputted into the image quality changeover circuit, which has a plurality of signal processing paths, each with its own signal processing characteristic.
Abstract: In a multi-function digital CCD camera according to the present invention, the image signal outputted from the solid-state image pick-up device is inputted into a signal processing circuit where it is shaped into an image signal. The image signal from the signal processing circuit is inputted into the image quality changeover circuit, which has a plurality of signal processing paths, each with its own signal processing characteristic. In the image quality changeover circuit, the plurality of signal processing paths are combined based on a control signal from the image quality selective circuit. The image signals obtained from the respective signal processing paths are inputted into the image composition circuit. The signal obtained from the image composition circuit is inputted into the color encoder, where it is encoded into a camera output video signal.

Journal ArticleDOI
TL;DR: A formal study of the effect of two basic scanning parameters, slice thickness and slice spacing, on image quality is presented; statistical analysis demonstrated that slice interval was of primary importance and slice collimation was of secondary, although significant, importance in determining perceived 3D image quality.
Abstract: Of the many steps involved in producing high quality three-dimensional (3D) images of CT data, the data acquisition step is of greatest consequence. The principle of "garbage in, garbage out" applies to 3D imaging--bad scanning technique produces equally bad 3D images. We present a formal study of the effect of two basic scanning parameters, slice thickness and slice spacing, on image quality. Three standard test objects were studied using variable CT scanning parameters. The objects chosen were a bone phantom, a cadaver femur with a simulated 5 mm fracture gap, and a cadaver femur with a simulated 1 mm fracture gap. Each object was scanned at three collimations: 8, 4, and 2 mm. For each collimation, four sets of scans were performed using four slice intervals: 8, 4, 3, and 2 mm. The bone phantom was scanned in two positions: oriented perpendicular to the scanning plane and oriented 45 degrees from the scanning plane. Three-dimensional images of the resulting 48 sets of data were produced using volumetric rendering. Blind review of the resultant 48 data sets was performed by three reviewers rating five factors for each image. The images resulting from scans with thin collimation and small table increments proved to rate the highest in all areas. The data obtained using 2 mm slice intervals proved to rate the highest in perceived image quality. Three millimeter slice spacing with 4 mm collimation, which clinically provides a good compromise between image quality and acquisition time and dose, also produced good perceived image quality. The studies with 8 mm slice intervals provided the least detail and introduced the worst inaccuracies and artifacts and were not suitable for clinical use. Statistical analysis demonstrated that slice interval (i.e., table incrementation) was of primary importance and slice collimation was of secondary, although significant, importance in determining perceived 3D image quality.

Journal ArticleDOI
TL;DR: The use of several alternative image-brightness-based quality factors for phase aberration correction with diffuse and point targets in coherent ultrasonic imaging systems is explored, finding good agreement between theoretical analysis, computer simulations, and phantom experiments.
Abstract: The use of several alternative image-brightness-based quality factors for phase aberration correction with diffuse and point targets in coherent ultrasonic imaging systems is explored. The factors are similar to the sharpness functions proposed by R.A. Muller et al. (1974) for incoherent imaging systems. Different region of interest (ROI) sizes are used to compare the quality factors. Good agreement is found between theoretical analysis, computer simulations, and phantom experiments. For a point target, the mean of the magnitude of echo signals is inferior to the other studied quality factors in correcting phase aberration. All of the quality factors studied are similar in their ability to correct phase errors for diffuse target images.
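
A schematic example of an image-brightness-based quality factor of the kind studied here: a Muller-type sharpness S = sum of squared pixel intensities over an ROI, maximized by a simple per-channel phase search. form_image(), the ROI, the step size, and the coordinate-ascent loop are placeholders, not the paper's correction procedure:

# Sharpness quality factor and a greedy per-channel phase adjustment that maximizes it.
import numpy as np

def sharpness(image, roi):
    return float(np.sum(image[roi] ** 2))    # brightness-based quality factor

def correct_phases(phases, form_image, roi, step=0.1, sweeps=5):
    """phases: 1-D array of per-channel phase estimates; form_image(phases) -> image."""
    for _ in range(sweeps):
        for ch in range(len(phases)):
            best = sharpness(form_image(phases), roi)
            for delta in (+step, -step):
                trial = phases.copy()
                trial[ch] += delta
                s = sharpness(form_image(trial), roi)
                if s > best:
                    phases, best = trial, s
    return phases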

Patent
19 Nov 1991
TL;DR: In this article, the frequency distribution of the block dynamic ranges over one frame period is detected, and the coding condition is determined from this distribution and N predetermined characteristic tables.
Abstract: Digital video signals are divided into blocks, the maximum and minimum values of each block are extracted, and the frequency distribution of the block dynamic ranges over one frame period is detected. The coding condition is determined from this frequency distribution and N predetermined characteristic tables; once a single characteristic table is selected, the signal obtained by subtracting the minimum value of each block from the digital video signal of that block is variable-length coded according to the dynamic range of the block. Each characteristic table gives, as a function of the block dynamic range, the distortion or the number of quantization bits between the original signal and the decoded signal after coding and decoding, with the picture quality of the decoded image set as a parameter.

Journal ArticleDOI
TL;DR: An experimental system, which has been developed to investigate speed-of-sound imaging and other forms of in-vivo ultrasound CT, is described, along with the techniques used for data acquisition and image reconstruction.
Abstract: The reconstruction of the speed-of-sound distribution within a target can be achieved by CT techniques from measurements on transmitted ultrasonic pulses. The mathematical relationship between speed-of-sound imaging and the conventional CT situation is explained. An experimental system, which has been developed to investigate speed-of-sound imaging and other forms of in-vivo ultrasound CT, is described, along with the techniques used for data acquisition and image reconstruction. These include measurement of pulse time-of-flight by the threshold or cross-correlation methods. Techniques for reducing artifacts in speed-of-sound images are also described, such as median filtering and modified Shepp-Logan filtering. These techniques have been used to obtain high quality speed-of-sound images of various phantoms.
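
A minimal sketch of the cross-correlation time-of-flight measurement mentioned above: the delay between a reference (e.g. water-path) pulse and the received pulse is taken as the lag that maximizes their cross-correlation. The sampling rate fs and the two signals are assumed inputs:

# Time-of-flight estimate from the peak of the cross-correlation.
import numpy as np

def time_of_flight(reference, received, fs):
    xcorr = np.correlate(received, reference, mode="full")
    lag = np.argmax(xcorr) - (len(reference) - 1)   # delay of received relative to reference, in samples
    return lag / fs                                 # delay in seconds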

Journal ArticleDOI
02 Nov 1991
TL;DR: In this article, the authors studied the effect of adding an additional set of coincidence planes with a plane separation of ±2 on the detection efficiency of the PET system and showed a reduction in noise and improved uniformity without a significant loss in resolution.
Abstract: Whole body PET (positron emission tomography) imaging is performed by acquiring data at multiple axial positions. From this data set, coronal and sagittal cross sectional images are formed by reorienting the transaxial tomographic images. Due to the short acquisition time at each axial position, the noise levels in the final images are relatively high. The aim of the present work is to optimize some of the scanning parameters for whole body PET imaging to achieve the best possible image quality. It is noted that the detection efficiency of the PET system can be improved by using more coincidence plane combinations in addition to the conventional direct and cross planes. The effect of acquiring an additional set of coincidence planes with a plane separation of ±2 was studied and showed a reduction in noise and improved uniformity without a significant loss in resolution. The effect of different sampling schemes was also studied. Using a continuous sampling scheme by moving the bed in sets of 3.38 mm results in better image uniformity together with a reduction in noise in comparison with images acquired using the standard interleaved sampling mode.