
Showing papers on "Image quality published in 1992"


Proceedings ArticleDOI
27 Aug 1992
TL;DR: An algorithm for determining whether the goal of image fidelity is met as a function of display parameters and viewing conditions is described, intended for the design and analysis of image processing algorithms, imaging systems, and imaging media.
Abstract: Image fidelity is the subset of overall image quality that specifically addresses the visual equivalence of two images. This paper describes an algorithm for determining whether the goal of image fidelity is met as a function of display parameters and viewing conditions. Using a digital image processing approach, this algorithm is intended for the design and analysis of image processing algorithms, imaging systems, and imaging media. The visual model, which is the central component of the algorithm, comprises three parts: an amplitude nonlinearity, a contrast sensitivity function, and a hierarchy of detection mechanisms.
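
The three-stage structure described above maps naturally onto a frequency-domain computation. The sketch below is a minimal Python/NumPy illustration of that pipeline, not the paper's actual model: the cube-root nonlinearity, the Mannos-Sakrison-style CSF formula, and the single fixed detection threshold are all simplifying assumptions standing in for the calibrated amplitude nonlinearity, CSF, and hierarchy of detection mechanisms.

```python
import numpy as np

def csf(f_cpd):
    # Mannos-Sakrison-style contrast sensitivity curve (an assumed stand-in).
    return 2.6 * (0.0192 + 0.114 * f_cpd) * np.exp(-(0.114 * f_cpd) ** 1.1)

def fidelity_map(ref, test, pixels_per_degree=32.0):
    """Return a boolean map of visually detectable differences (toy model)."""
    # Stage 1: amplitude nonlinearity (cube root as a crude lightness proxy).
    r, t = np.cbrt(ref.astype(float)), np.cbrt(test.astype(float))
    # Stage 2: weight the difference image by the CSF in the frequency domain.
    fy = np.fft.fftfreq(ref.shape[0]) * pixels_per_degree
    fx = np.fft.fftfreq(ref.shape[1]) * pixels_per_degree
    f = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    d = np.fft.ifft2(np.fft.fft2(r - t) * csf(f)).real
    # Stage 3: one fixed threshold stands in for the bank of detection
    # mechanisms; the real model pools over frequency/orientation channels.
    return np.abs(d) > 0.01
```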

396 citations


Journal ArticleDOI
TL;DR: Using mammogram images digitized at high resolution (less than 0.1 mm pixel size), it is shown that the visibility of microcalcification clusters and anatomic details is considerably improved in the processed images.
Abstract: Diagnostic features in mammograms vary widely in size and shape. Classical image enhancement techniques cannot adapt to the varying characteristics of such features. An adaptive method for enhancing the contrast of mammographic features of varying size and shape is presented. The method uses each pixel in the image as a seed to grow a region. The extent and shape of the region adapt to local image gray-level variations, corresponding to an image feature. The contrast of each region is calculated with respect to its individual background. Contrast is then enhanced by applying an empirical transformation based on each region's seed pixel value, its contrast, and its background. A quantitative measure of image contrast improvement is also defined based on a histogram of region contrast and used for comparison of results. Using mammogram images digitized at high resolution (less than 0.1 mm pixel size), it is shown that the visibility of microcalcification clusters and anatomic details is considerably improved in the processed images.
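
A minimal sketch of the seed-grown, region-adaptive idea, assuming 4-connected growth with a fixed gray-level tolerance and a simple linear gain on region contrast; the paper's empirical transformation additionally depends on each region's seed pixel value, and its exact form is not reproduced here.

```python
import numpy as np
from collections import deque

def grow_region(img, seed, tol=8):
    # Grow a 4-connected region of pixels within tol gray levels of the seed.
    h, w = img.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    q = deque([seed])
    s = float(img[seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(float(img[ny, nx]) - s) <= tol:
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

def enhance_region(img, seed, gain=2.0, tol=8):
    mask = grow_region(img, seed, tol)
    # One-pixel ring around the region serves as its individual background
    # (np.roll wraps at the borders; ignored here for brevity).
    dilated = mask | np.roll(mask, 1, 0) | np.roll(mask, -1, 0) \
                   | np.roll(mask, 1, 1) | np.roll(mask, -1, 1)
    ring = dilated & ~mask
    f, b = img[mask].mean(), img[ring].mean()
    c = (f - b) / max(float(b), 1e-6)        # region contrast vs. background
    out = img.astype(float)
    out[mask] = b * (1.0 + gain * c)         # re-render at boosted contrast
    return np.clip(out, 0, 255)
```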

377 citations


Journal ArticleDOI
TL;DR: The value of two new section-interpolation algorithms is primarily seen in the improved quality of multiplanar reformations and cine and three-dimensional displays.
Abstract: Spiral computed tomography (CT) offers continuous volume scanning of complete organs or body sections within a single breath hold. Almost all image quality characteristics of spiral CT are identical to those of conventional section-by-section CT; however, there is a change in pixel noise values and degradation in the shape of the section sensitivity profiles (SSPs). Computer simulations, phantom measurements, and clinical studies were used in evaluating the SSP and noise characteristics of two new section-interpolation algorithms. The results were compared with standard CT and spiral CT data processed with the commonly employed linear section-interpolation algorithm. Degradation of SSP quality was insignificant for a table feed distance per 360 degrees revolution equal to the section thickness when the new algorithms were applied; noise values, however, increased. SSP width increased for table feed distances greater than the section width, the effect being less pronounced with the new algorithms. The value of these algorithms is primarily seen in the improved quality of multiplanar reformations and cine and three-dimensional displays.
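
For context, the commonly employed linear section interpolation referenced above reduces, per view angle, to blending the two spiral samples that straddle the desired section plane. A sketch, with variable names assumed:

```python
def z_interpolate(p_prev, p_next, z_prev, z_next, z0):
    # Blend, for one view angle, the spiral projections measured at table
    # positions z_prev and z_next onto the desired section plane z0.
    # Wider sample spacing broadens the section sensitivity profile.
    w = (z0 - z_prev) / (z_next - z_prev)
    return (1.0 - w) * p_prev + w * p_next
```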

316 citations


Patent
01 Jul 1992
TL;DR: In this article, an image processing apparatus comprises a receiving device for receiving encoded image data which is encoded by using orthogonal transform in a predetermined block unit, an error detecting device for detecting a transmission error of the encoded data, a correcting device for correcting the transmission error in the predetermined block units, and a decoding device for decoding the encoded image and outputting image data for reproducing an image.
Abstract: An image processing apparatus comprises a receiving device for receiving encoded image data which is encoded by using orthogonal transform in a predetermined block unit, an error detecting device for detecting a transmission error of the encoded image data, a correcting device for correcting the transmission error in the predetermined block unit, and a decoding device for decoding the encoded image data and outputting image data for reproducing an image. The apparatus satisfactorily controls the amount of compressed data and prevents deterioration of image quality even if errors occur on the transmission path, thereby reproducing a good image.

251 citations


Journal ArticleDOI
TL;DR: An objective image quality measure based on the digital image power spectrum of normally acquired arbitrary scenes is developed, which utilizes the previously known invariance property for the power spectra of arbitrary scenes.
Abstract: An objective image quality measure based on the digital image power spectrum of normally acquired arbitrary scenes is developed. This image quality measure, which does not require imaging either designed targets or a constant scene, utilizes the previously known invariance property for the power spectra of arbitrary scenes. The measure incorporates a representation of the human visual system, a novel approach to account for directional differences in perspective (scale) for obliquely acquired scenes, and a filter developed to account for imaging system noise as specifically evidenced in the image power spectra. The primary application is to assess the quality of digital images relevant to the image task of detection, recognition, and identification of man-made objects from softcopy displayed versions of visible spectral region digital aerial images. Experimental verification is presented demonstrating very good correlation (r=0.9) of this objective quality measure with visual quality assessments.
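
A toy version of a power-spectrum quality measure, assuming a generic band-pass visual weighting; the paper's actual measure also includes the perspective/scale correction for oblique scenes and the noise filter, which are omitted here.

```python
import numpy as np

def spectral_quality(img, pixels_per_degree=32.0):
    # Power spectrum of the scene; the measure leans on the (roughly 1/f^2)
    # invariance of natural-scene spectra, so departures signal blur or noise.
    F = np.fft.fft2(img - img.mean())
    P = np.abs(F) ** 2
    fy = np.fft.fftfreq(img.shape[0]) * pixels_per_degree
    fx = np.fft.fftfreq(img.shape[1]) * pixels_per_degree
    f = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    hvs = f * np.exp(-0.5 * f)   # generic band-pass visual weighting (assumed)
    return float((P * hvs).sum() / P.sum())
```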

218 citations


Journal ArticleDOI
TL;DR: In this article, an electro-optic apparatus capable of displaying a computer-generated hologram in real time is described, and it is shown how the display resolution can be increased by simultaneously writing three acoustic columns on a single crystal and optically multiplexing the resulting holograms.
Abstract: We describe an electro-optic apparatus capable of displaying a computer-generated hologram in real time. The computer-generated hologram is calculated by a supercomputer, read from a fast frame buffer, and transmitted to a wide-bandwidth acousto-optic modulator. Coherent light is modulated by the acousto-optic modulator and optically processed to produce a three-dimensional image with horizontal parallax. We evaluate different display geometries and their effect on the optical parameters of the system. We then show how the display resolution can be increased by simultaneously writing three acoustic columns on a single crystal and optically multiplexing the resulting holograms. We finally describe some improvements that follow from the analysis.

169 citations


Journal ArticleDOI
TL;DR: The results indicate that SRA with phase correction holds promise in improving ultrasonic image quality.
Abstract: For Pt.I see ibid., vol.39, p.489 (1992). The effects of tissue/transducer motion and artifacts from adaptive focusing on synthetic receive aperture (SRA) imaging are explored using experiment, simulation, and theory. The impact of these issues on the selection of SRA subaperture geometry is discussed, and a technique to address this problem is demonstrated. The results indicate that SRA with phase correction holds promise in improving ultrasonic image quality.

143 citations


Patent
23 Mar 1992
TL;DR: In this article, an adaptive transform coding algorithm for a still image utilizes a quadtree based variable block size discrete cosine transform to achieve a better tradeoff between bit rate and image quality.
Abstract: An adaptive transform coding algorithm for a still image utilizes a quadtree based variable block size discrete cosine transform to achieve a better tradeoff between bit rate and image quality. The choice of appropriate block size is made according to a mean based decision rule which can discriminate various image contents for better visual quality.
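
A sketch of the quadtree-plus-DCT structure, where the mean-based decision rule is assumed to be "split while quadrant means disagree by more than a threshold"; the patent's precise rule may differ.

```python
import numpy as np
from scipy.fft import dctn  # 2-D type-II DCT

def quadtree_dct(img, y=0, x=0, size=None, min_size=4, thresh=10.0):
    # Assumed mean-based rule: split a block while its quadrant means
    # disagree by more than `thresh`; leaves are DCT-coded at their own size.
    size = size or img.shape[0]        # assumes a square, power-of-two image
    half = size // 2
    if size > min_size:
        means = [img[y + dy:y + dy + half, x + dx:x + dx + half].mean()
                 for dy in (0, half) for dx in (0, half)]
        if max(means) - min(means) > thresh:
            for dy in (0, half):
                for dx in (0, half):
                    yield from quadtree_dct(img, y + dy, x + dx, half,
                                            min_size, thresh)
            return
    yield y, x, size, dctn(img[y:y + size, x:x + size], norm="ortho")
```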

141 citations


Proceedings ArticleDOI
J.M. Shapiro
23 Mar 1992
TL;DR: A simple, yet remarkably effective, image compression algorithm that has the property that the bits in the bit stream are generated in order of importance, yielding fully hierarchical image compression suitable for embedded coding or progressive transmission is described.
Abstract: A simple, yet remarkably effective, image compression algorithm that has the property that the bits in the bit stream are generated in order of importance, yielding fully hierarchical image compression suitable for embedded coding or progressive transmission, is described. Given an image bit stream, the decoder can cease decoding at any point and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. The compression algorithm is based on three key concepts: (1) wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, and (3) hierarchical entropy-coded quantization.
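
The embedded-ordering property comes from successive-approximation quantization: coefficients are tested against a halving sequence of thresholds, so the most significant bits of the most significant coefficients enter the stream first. A skeleton of just that part, assuming the wavelet coefficients of concept (1) are already given; the zerotree prediction and entropy coding of concepts (2) and (3) are omitted.

```python
import numpy as np

def embedded_passes(coeffs, n_passes=6):
    # Successive-approximation skeleton: emit significance maps at halving
    # thresholds, most significant information first. Truncating the returned
    # list is equivalent to decoding at a lower bit rate.
    T = 2.0 ** np.floor(np.log2(np.abs(coeffs).max()))
    stream = []
    for _ in range(n_passes):
        stream.append((T, np.abs(coeffs) >= T))
        T /= 2.0
    return stream
```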

137 citations


Journal ArticleDOI
TL;DR: It is shown that the best transforms for transform image coding, namely, the scrambled real discrete Fourier transform, the discrete cosine transform, and the discrete cosine-III transform, are also the best for image enhancement.
Abstract: Blockwise transform image enhancement techniques are discussed. Previously, transform image enhancement has usually been based on the discrete Fourier transform (DFT) applied to the whole image. Two major drawbacks with the DFT are high complexity of implementation involving complex multiplications and additions, with intermediate results being complex numbers, and the creation of severe block effects if image enhancement is done blockwise. In addition, the quality of enhancement is not very satisfactory. It is shown that the best transforms for transform image coding, namely, the scrambled real discrete Fourier transform, the discrete cosine transform, and the discrete cosine-III transform, are also the best for image enhancement. Three techniques of enhancement discussed in detail are alpha-rooting, modified unsharp masking, and filtering motivated by the human visual system response (HVS). With proper modifications, it is observed that unsharp masking and HVS-motivated filtering without nonlinearities are basically equivalent. Block effects are completely removed by using an overlap-save technique in addition to the best transform.
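
Of the three techniques, alpha-rooting is the simplest to state: each block's transform coefficients are shrunk as sign(X)|X|^alpha with alpha < 1, which raises small (mostly high-frequency) coefficients relative to large ones. A blockwise DCT sketch; the paper's overlap-save step, which removes the block effects, is omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

def alpha_root_enhance(img, alpha=0.9, block=16):
    # Blockwise alpha-rooting: X -> sign(X)|X|^alpha with alpha < 1 raises
    # small (largely high-frequency) coefficients relative to large ones.
    out = np.empty(img.shape, dtype=float)
    for y in range(0, img.shape[0], block):
        for x in range(0, img.shape[1], block):
            X = dctn(img[y:y + block, x:x + block].astype(float), norm="ortho")
            dc = X[0, 0]
            X = np.sign(X) * np.abs(X) ** alpha
            X[0, 0] = dc                       # keep the block's mean intact
            out[y:y + block, x:x + block] = idctn(X, norm="ortho")
    return np.clip(out, 0, 255)
```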

129 citations


Journal ArticleDOI
TL;DR: This work proposes a method to find visually optimized error diffusion filters for monochrome and color image display applications based on the low-pass characteristic of the contrast sensitivity of the human visual system.
Abstract: Displaying natural images on an 8-bit computer monitor requires a substantial reduction of physically distinct colors. Simple minimum mean squared error quantization with 8 levels of red and green and 4 levels of blue yields poor image quality. A powerful means to improve the subjective quality of a quantized image is error diffusion. Error diffusion works by shaping the spectrum of the display error. Considering an image in raster ordering, this is done by adding a weighted sum of previous quantization errors to the current pixel before quantization. These weights form an error diffusion filter. We propose a method to find visually optimized error diffusion filters for monochrome and color image display applications. The design is based on the low-pass characteristic of the contrast sensitivity of the human visual system. The filter is chosen so that a cascade of the quantization system and the observer's visual modulation transfer function yields a whitened error spectrum. The resulting images contain mostly high-frequency components of the display error, which are less noticeable to the viewer. This corresponds well with previously published results about the visibility of halftoning patterns. An informal comparison with other error diffusion algorithms shows less artificial contouring and increased image quality.
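
For reference, the raster-order error-diffusion loop itself looks like the sketch below; the classic Floyd-Steinberg weights are used as a stand-in, whereas the paper replaces them with filters optimized against the visual model so that the displayed error spectrum is pushed to high frequencies.

```python
import numpy as np

def error_diffuse(img, levels=4, weights=((0, 0, 7), (3, 5, 1)), norm=16.0):
    # Raster-order error diffusion; Floyd-Steinberg weights as a stand-in.
    x = img.astype(float) / 255.0
    step = 1.0 / (levels - 1)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            old = x[i, j]
            new = min(max(round(old / step) * step, 0.0), 1.0)
            x[i, j] = new
            err = old - new
            # Push the quantization error onto not-yet-visited neighbours.
            for di, row in enumerate(weights):
                for dj, wt in zip((-1, 0, 1), row):
                    ii, jj = i + di, j + dj
                    if wt and 0 <= ii < h and 0 <= jj < w:
                        x[ii, jj] += err * wt / norm
    return (x * 255).astype(np.uint8)
```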

Journal ArticleDOI
TL;DR: The data indicate that the pattern-integration mechanism has an inherent order bias and does not accumulate spatial information so efficiently when the ‘natural’ coarse-to-fine order is violated.
Abstract: Factors which govern the temporal integration of spatial information were examined in a group of five experiments. A series of high-pass and low-pass spatially filtered versions of a visual scene were generated. Observers' ratings of these filtered versions of the scene for perceived image quality indicated that quality was determined both by the bandwidth of spatial information and the presence of high-spatial-frequency edge information. When sequences of three different versions of the scene were presented over an interval of 120 ms the perceived quality of the resulting composite image was determined both from the ratings of the individual components of that sequence and from the order in which these components were presented. When the order of spatial information in a sequence moved from coarse to fine detail the perceived quality of the composite image was significantly better than when the order moved from fine to coarse. This evidence of a coarse-to-fine bias in pattern integration was further investigated.

Proceedings ArticleDOI
27 Aug 1992
TL;DR: In this article, the authors show that a Minkowski metric can be used as a combination rule for small impairments like those usually encountered in digitally coded images, suggesting that such impairments can be represented by a set of orthogonal vectors along the axes of a multidimensional Euclidean space.
Abstract: The urge to compress the amount of information needed to represent digitized images while preserving perceptual image quality has led to a plethora of image-coding algorithms. At high data compression ratios, these algorithms usually introduce several coding artifacts, each impairing image quality to a greater or lesser extent. These impairments often occur simultaneously. For the evaluation of image-coding algorithms, it is important to find out how these impairments combine and how this can be described. The objective of the present study is to show that Minkowski-metrics can be used as a combination rule for small impairments like those usually encountered in digitally coded images. To this end, an experiment has been conducted in which subjects assessed the perceptual quality of scale-space-coded color images comprising three kinds of impairment, viz., 'unsharpness', 'phantoms' (dark/bright patches within bright/dark homogeneous regions) and 'color desaturation'. The results show an accumulation of these impairments that is efficiently described by a Minkowski-metric with an exponent of about two. The latter suggests that digital-image-coding impairments may be represented by a set of orthogonal vectors along the axes of a multidimensional Euclidean space. An extension of Minkowski-metrics is presented to generalize the proposed combination rule to large impairments.
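
The proposed combination rule is compact enough to state directly: given individual impairment strengths d_i on a common perceptual scale, overall impairment is (Σ|d_i|^p)^(1/p), and the experiment found p of about two. A sketch, with invented placeholder strengths:

```python
import numpy as np

def combined_impairment(strengths, p=2.0):
    # Minkowski combination; the experiments support p of about two, i.e.
    # impairments add like orthogonal components in a Euclidean space.
    d = np.abs(np.asarray(strengths, dtype=float))
    return float((d ** p).sum() ** (1.0 / p))

# Invented placeholder strengths for unsharpness, phantoms, desaturation:
print(combined_impairment([1.2, 0.5, 0.8]))    # ~1.53 for p = 2
```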

Journal ArticleDOI
25 Oct 1992
TL;DR: The spatial resolution in a reconstructed single photon emission computed tomography (SPECT) image and the photon-counting efficiency of SPECT systems are both influenced by the intrinsic resolution of the detector.
Abstract: The spatial resolution in a reconstructed single photon emission computed tomography (SPECT) image is influenced by the intrinsic resolution of the detector, and the photon-counting efficiency of SPECT systems is also determined by the intrinsic resolution. The authors demonstrate that improvements in detector resolution can lead to both improved spatial resolution in the image and improved counting efficiency compared to conventional systems. This paradoxical conclusion results from optimizing the geometry of a multiple-pinhole coded-aperture system when detectors of very high resolution are available. Simulation studies that demonstrate the image quality that is attainable with such detectors are reported. Reconstructions are performed using an iterative search algorithm on a custom-designed parallel computer. The imaging system is described by a calculated system matrix relating all voxels in the object space to all pixels on the detector. A resolution close to 2 mm is found on the reconstructed images obtained from these computer simulations with clinically reasonable exposure times. This resolution may be even further improved by optimization of the multiple-pinhole aperture.
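
The abstract's "iterative search algorithm" over a calculated system matrix can be illustrated with MLEM, a standard emission-tomography update; this is an assumed stand-in, since the paper does not name its algorithm here.

```python
import numpy as np

def mlem(A, counts, n_iter=50):
    # A: (detector pixels x object voxels) system matrix relating all voxels
    # to all pixels; counts: measured detector data. Standard MLEM update.
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0) + 1e-12          # per-voxel sensitivity
    for _ in range(n_iter):
        proj = A @ x + 1e-12              # forward-project current estimate
        x *= (A.T @ (counts / proj)) / sens
    return x
```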

Patent
01 Dec 1992
TL;DR: In this article, an overlapped motion compensation unit and method are proposed to minimize the blocking effects prevalent in conventional motion compensation and discrete cosine transform (DCT) image coding.
Abstract: Our overlapped motion compensation unit and method, which together form a motion compensation mechanism employing an overlapped block structure, minimize blocking effects prevalent in conventional motion compensation. Our overlapped motion compensation unit and method are implemented on the basis of analysis/synthesis filter banks employed for coding, resulting in compatibility between the block structure used for motion compensation and for coding. Our encoder, decoder, and coding method employ our novel overlapped motion compensation technique in combination with analysis/synthesis filter banks such as LOT to achieve improvements in coding efficiency and image quality above that of conventional image coders and coding methods. Specifically, in our encoder, decoder, and coding method, blocking effects prevalent in coders employing conventional motion compensation techniques and discrete cosine transforms are minimized and coding efficiency and image quality are maximized.
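
A plain-Python sketch of the overlapped-block idea: each block predicts a window twice its size from the reference frame, and overlapping predictions are blended with a smooth window (a Hann window here, an assumed choice), which is what suppresses block-boundary discontinuities. The patent builds this from LOT-style analysis/synthesis filter banks instead; `mvs` is assumed to hold one integer motion vector per block.

```python
import numpy as np

def obmc_predict(ref, mvs, block=8):
    # mvs[by, bx] = (dy, dx): one integer motion vector per block.
    h, w = ref.shape
    win = np.outer(np.hanning(2 * block), np.hanning(2 * block))
    acc = np.zeros((h, w))
    wsum = np.full((h, w), 1e-9)
    for by in range(mvs.shape[0]):
        for bx in range(mvs.shape[1]):
            dy, dx = mvs[by, bx]
            y0 = by * block - block // 2   # window extends past the block
            x0 = bx * block - block // 2
            for yy in range(2 * block):
                for xx in range(2 * block):
                    y, x = y0 + yy, x0 + xx
                    if 0 <= y < h and 0 <= x < w:
                        sy = min(max(y + dy, 0), h - 1)  # clamp at borders
                        sx = min(max(x + dx, 0), w - 1)
                        acc[y, x] += win[yy, xx] * ref[sy, sx]
                        wsum[y, x] += win[yy, xx]
    return acc / wsum
```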

Journal ArticleDOI
TL;DR: The proposed algorithm simultaneously estimates the contour and the model parameters by implementing an adaptive version of the iterated conditional modes algorithm; it exhibits robustness against image artifacts, and the contours obtained are considered good by expert clinicians.
Abstract: A method for left ventricular contour determination in digital angiographic images is presented. The problem is formulated in a Bayesian framework, adopting as the estimation criterion the maximum a posteriori probability (MAP). The true contour is modeled as a one-dimensional noncausal Gauss-Markov random field and the observed image is described as the superposition of an ideal image (deterministic function of the real contour) with white Gaussian noise. The proposed algorithm estimates simultaneously the contour and the model parameters by implementing an adaptive version of the iterated conditional modes algorithm. The convergence of this scheme is proved and its performance evaluated on both synthetic and real angiographic images. The method exhibits robustness against image artifacts and the contours obtained are considered good by expert clinicians. Being completely data-driven and fast, the proposed algorithm is suitable for routine clinical use.

Journal ArticleDOI
TL;DR: The conclusion is that the eigenimage filter is the optimal linear filter that achieves SDF and CPV simultaneously.
Abstract: The performance of the eigenimage filter is compared with those of several other filters as applied to magnetic resonance image (MRI) scene sequences for image enhancement and segmentation. Comparisons are made with principal component analysis, matched, modified-matched, maximum contrast, target point, ratio, log-ratio, and angle image filters. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), segmentation of a desired feature (SDF), and correction for partial volume averaging effects (CPV) are used as performance measures. For comparison, analytical expressions for SNRs and CNRs of filtered images are derived, and CPV by a linear filter is studied. Properties of filters are illustrated through their applications to simulated and acquired MRI sequences of a phantom study and a clinical case; advantages and weaknesses are discussed. The conclusion is that the eigenimage filter is the optimal linear filter that achieves SDF and CPV simultaneously.

Proceedings ArticleDOI
01 Nov 1992
TL;DR: A morphological approach to unsupervised image segmentation is presented, relying on a multiscale strategy that allows hierarchical processing of the data, ranging from the most global scale to the most detailed one.
Abstract: This paper deals with a morphological approach to the problem of unsupervised image segmentation. The proposed technique relies on a multiscale approach which allows a hierarchical processing of the data ranging from the most global scale to the most detailed one. At each scale, the algorithm relies on four steps: preprocessing, feature extraction, decision and quality estimation. The goal of the preprocessing step is to simplify the original signal which is too complex to be processed at once. Morphological filters by reconstruction are very attractive for this purpose because they simplify without corrupting the contour information. The feature extraction intends to extract the pertinent parameters for assessing the degree of homogeneity of the regions. To this goal, morphological techniques extracting flat or contrasted regions are very efficient. The decision step defines precisely the contours of the regions. This decision is achieved by a watershed algorithm. Finally, the quality estimation is used to compute the information that has to be further processed by the next scale to improve the segmentation result. The estimation is based on a region modeling procedure. The resulting segmentation is very robust and can deal with very different types of images. Moreover, the first levels give segmentation results with few regions but precisely located contours.
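
The first three steps of the pipeline map onto standard morphological tooling. A single-scale sketch using scikit-image, in which the structuring-element size and the flat-zone marker rule are assumptions and the multiscale quality-estimation loop is omitted:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.morphology import reconstruction
from skimage.segmentation import watershed

def segment_one_scale(img):
    # 1) Preprocessing: opening by reconstruction simplifies the image
    #    without corrupting contour information.
    simplified = reconstruction(ndi.grey_erosion(img, size=(5, 5)), img,
                                method="dilation")
    # 2) Feature extraction: gradient magnitude marks contrasted regions.
    gradient = sobel(simplified)
    # 3) Decision: flat zones act as markers; watershed places the contours.
    markers, _ = ndi.label(gradient < 0.5 * gradient.mean())
    return watershed(gradient, markers)
```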

Journal ArticleDOI
TL;DR: The number of unknown parameters to be calibrated is drastically decreased, enabling simple and useful calibration of a TV camera with a high-distortion lens.
Abstract: A simple and useful calibration method for a TV camera with a high-distortion lens is presented. The parameters to be calibrated are effective focal length, one-pixel width on an image plane, image distortion center, and distortion coefficient. A simple-pattern calibration chart composed of parallel straight lines is introduced as a reference for calibration. An ordinary 2D model fitting is decomposed into two 1D model fittings on the column and row of a frame buffer across the image distortion center by ingeniously utilizing the point symmetry characteristic of image distortion. Some parameters with a calibration chart are eliminated by setting up a calibration chart precisely and by utilizing negligibly low distortion near the image distortion center. Thus, the number of unknown parameters to be calibrated is drastically decreased, enabling simple and useful calibration. The effectiveness of the proposed calibration method is confirmed by experimentation.

Journal ArticleDOI
01 Mar 1992
TL;DR: The method presented here enhances the luminance contrast of a color image by using its color attributes to reduce the impact of blur and poor contrast.
Abstract: This paper introduces a multiscale technique to improve the visual appearance of color images. The saturation and luminance components of a color image often contain complementary information. Details of an image that have low luminance contrast are sometimes distinguished from their background by their color saturation. Therefore, color image contrast can be enhanced by modulating the luminance component with the variations in the saturation component. Images generally contain details of many different sizes. In the present method the luminance and saturation components of a color image are separately decomposed into contrast primitives of different spatial scales. A new set of multiscale luminance contrast primitives is then constructed by modulating the original luminance primitives at every location in the image and at every spatial scale by the corresponding saturation contrast primitives. Reconstruction of the color image from the resulting set of multiscale primitives provides a representation of the original image in which local luminance contrast is clearly enhanced at all levels of resolution. The perceptual quality of the contrast enhanced color image is further improved by the application of a color saturation algorithm.
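
A sketch of the core modulation idea, using a Gaussian band-pass decomposition as the "contrast primitives"; the paper's primitive construction and modulation law are not reproduced exactly, and the gain k and the blending rule below are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def modulate_luminance(lum, sat, sigmas=(1, 2, 4, 8), k=0.5):
    # Band-pass "primitives" of luminance and saturation at several scales;
    # luminance detail is amplified where saturation carries complementary
    # detail. With k = 0 the function reconstructs `lum` exactly.
    prev_l, prev_s = lum.astype(float), sat.astype(float)
    out = np.zeros_like(prev_l)
    for s in sigmas:
        bl, bs = gaussian_filter(prev_l, s), gaussian_filter(prev_s, s)
        dl, ds = prev_l - bl, prev_s - bs          # this scale's primitives
        out += dl * (1.0 + k * np.abs(ds) / (np.abs(dl) + np.abs(ds) + 1e-6))
        prev_l, prev_s = bl, bs
    return np.clip(out + prev_l, 0, 255)           # add back the coarse base
```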

Proceedings ArticleDOI
27 Aug 1992
TL;DR: It is found that the principled approach can lead to a range of useful halftoning algorithms, as the authors trade off speed for quality by varying the complexity of the quality measure and the thoroughness of the search.
Abstract: When models of human vision adequately measure the relative quality of candidate halftonings of an image, the problem of halftoning the image becomes equivalent to the search problem of finding a halftone that optimizes the quality metric. Because of the vast number of possible halftones, and the complexity of image quality measures, this principled approach has usually been put aside in favor of fast algorithms that seem to perform well. We find that the principled approach can lead to a range of useful halftoning algorithms, as we trade off speed for quality by varying the complexity of the quality measure and the thoroughness of the search. High quality halftones can be obtained reasonably quickly, for example, by using as a measure the vector length of the error image filtered by a contrast sensitivity function, and, as the search procedure, the sequential adjustment of individual pixels to improve the quality measure. If computational resources permit, simulated annealing can find nearly optimal solutions.
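
The quality-measure-plus-search framing can be made concrete in a few lines. Below, a Gaussian low-pass stands in for the contrast sensitivity function, and the search is the paper's simplest variant, sequential adjustment of individual pixels; it is written for clarity, not speed, since each trial refilters the whole error image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def search_halftone(gray, sweeps=3, sigma=1.5):
    # Quality measure: squared norm of the error image filtered by a Gaussian
    # (standing in for a contrast sensitivity function). Search: sequential
    # adjustment of individual pixels, keeping any toggle that helps.
    g = gray.astype(float) / 255.0
    h = (g > 0.5).astype(float)
    cost = (gaussian_filter(h - g, sigma) ** 2).sum()
    for _ in range(sweeps):
        changed = False
        for i in range(h.shape[0]):
            for j in range(h.shape[1]):
                h[i, j] = 1.0 - h[i, j]            # trial toggle
                c = (gaussian_filter(h - g, sigma) ** 2).sum()
                if c < cost:
                    cost, changed = c, True        # keep the improvement
                else:
                    h[i, j] = 1.0 - h[i, j]        # revert
        if not changed:
            break
    return h
```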

Patent
02 Sep 1992
TL;DR: In this paper, image quality in a parallax-free teleconferencing display is improved by increasing the camera light budget, that is, the light available to the camera, while maintaining a common optic axis between the camera and the display screen.
Abstract: Improved image quality is realized in a parallax-free teleconferencing display by increasing the camera light budget, that is, by increasing the light available to the camera, while maintaining a common optic axis between the camera and the display screen. Light-attenuating devices, such as color filters, are repositioned out of the path of light entering the camera. In this manner, image quality is improved while color capability is maintained.

Patent
30 Nov 1992
TL;DR: A method for compressing endoscope image data is realized by an image compressing apparatus capable of compressing image data at different rates: the input is determined to be either ordinary image data or dyed image data and, based on this determination, is compressed at the corresponding rate.
Abstract: An endoscope image data compressing apparatus comprises a plurality of image compressing apparatus not equivalent to each other for compressing input endoscope image data and outputting the compressed data and a selecting apparatus for selecting the compressed data output from at least one of the image compressing apparatus. The selecting apparatus selects the compressing method in response to the kind of the endoscope, characteristic of the image, picture quality of the compressed image, recording time intervals and others. A method for compressing endoscope image data is realized by an image compressing apparatus capable of compressing image data at different compressing rates. The endoscope image data is determined to be either ordinary image data or dyed image data and, based on this determination, is compressed at different rates. The compressing rate for dyed image data is lowered below the rate for ordinary image data.

Journal ArticleDOI
TL;DR: A statistical analysis of the maximum intensity projection (MIP) algorithm, which is commonly used for MR angiography (MRA), explains why MIP projection images display as much as a twofold increase in signal‐ and contrast‐to‐noise ratios over those of the source image set.
Abstract: We present a statistical analysis of the maximum intensity projection (MIP) algorithm, which is commonly used for MR angiography (MRA). The analysis explains why MIP projection images display as much as a twofold increase in signal- and contrast-to-noise ratios over those of the source image set. This behavior is demonstrated with simulations and in phantom and MRA image sets. © 1992 Academic Press, Inc.
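
The statistical effect is easy to reproduce: taking the maximum over N noise samples raises the background mean and shrinks its spread, which is what inflates the apparent signal- and contrast-to-noise ratios. A quick simulation, where N and the Gaussian noise model are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Maximum over N pure-noise source images: the background of the projection
# has a higher mean and a smaller spread than a single image, which inflates
# the apparent signal- and contrast-to-noise ratios.
N, trials = 32, 100_000
mip_bg = rng.normal(0.0, 1.0, size=(trials, N)).max(axis=1)
print(mip_bg.mean(), mip_bg.std())   # roughly 2.0 and 0.4 for N = 32,
                                     # versus 0.0 and 1.0 for a single image
```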

Journal ArticleDOI
TL;DR: In the typical CT scanner, a narrow x-ray beam in the section thickness direction and an air gap in the section plane are used to reduce scatter and improve contrast.
Abstract: Understanding how contrast is produced and controlled in computed tomography (CT) is essential to proper application of this modality. In the typical CT scanner, a narrow x-ray beam in the section thickness direction and an air gap in the section plane are used to reduce scatter and improve contrast. High- and low-contrast detectability of a CT scanner are important performance parameters contributing to optimal image quality. The limits of detectability of high-contrast objects (ie, spatial resolution) are affected by detector aperture size, pixel size of the image, algorithm used to reconstruct the image, and section thickness. Visibility of low-contrast objects is limited by image noise and the algorithm. Contrast in CT images can be controlled by the window level and window width settings used to display the image. These settings dictate how the actual measurements of tissue attenuation are translated into a gray-scale image. Wide window widths can be used to provide an accurate representation of bone, and narrow widths are more useful for visualizing soft tissues.
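
The window level/width mapping described above is a simple affine clip from Hounsfield units to gray levels. A sketch, where the level/width pairs are typical textbook values rather than prescriptions:

```python
import numpy as np

def apply_window(hu, level, width):
    # Affine clip from Hounsfield units to 8-bit gray: values below
    # level - width/2 render black, above level + width/2 render white.
    gray = (hu - (level - width / 2.0)) / width
    return (np.clip(gray, 0.0, 1.0) * 255).astype(np.uint8)

ct = np.array([[-1000, 0, 40, 60, 400, 1000]])   # stand-in CT numbers
bone = apply_window(ct, level=300, width=1500)   # wide window: bone detail
soft = apply_window(ct, level=50, width=350)     # narrow window: soft tissue
```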

Journal ArticleDOI
TL;DR: A filtered backprojection reconstruction algorithm was developed for cardiac single photon emission computed tomography with cone-beam geometry, and it is found that reconstruction of attenuated projections has a greater effect upon quantitation and image quality than any potential cone-beam reconstruction artifacts resulting from insufficient sampling of cone-beam projections.
Abstract: A filtered backprojection reconstruction algorithm was developed for cardiac single photon emission computed tomography with cone-beam geometry. The algorithm reconstructs cone-beam projections collected from 'short scan' acquisitions of a detector traversing a noncircular planar orbit. Since the algorithm does not correct for photon attenuation, it is designed to reconstruct data collected over an angular range of slightly more than 180 degrees, with the assumption that the range of angles is oriented so as not to acquire the highly attenuated posterior projections of emissions from cardiac radiopharmaceuticals. This sampling scheme is performed to minimize the attenuation artifacts that result from reconstructing posterior projections. From computer simulations, it is found that reconstruction of attenuated projections has a greater effect upon quantitation and image quality than any potential cone-beam reconstruction artifacts resulting from insufficient sampling of cone-beam projections. With nonattenuated projection data, cone-beam reconstruction errors in the heart are shown to be small (errors of at most 2%).

Patent
03 Nov 1992
TL;DR: In this article, the authors proposed a method to improve the overall quality of photographed images through user selection of a desired display size and/or focal length photographing mode for each image to be captured followed by an optimization, for that size and mode, of various photographic exposure parameters.
Abstract: Exposure control apparatus, and various accompanying methods, for use in a photographic camera for improving the overall quality of photographed images, i.e. increasing the number of acceptable and higher quality images, that are produced by the camera for user-selected non-standard display sizes and/or different focal length photographing modes over that obtainable by adherence to ISO/ANSI exposure standards. The quality improvement is attained through user selection of a desired display size and/or focal length photographing mode for each image to be captured followed by an optimization, for that size and mode, of various photographic exposure parameters (exposure settings and, where appropriate, flash parameters). The invention violates the ISO/ANSI exposure standards where necessary to improve image quality, for the desired display size and focal length photographing mode, beyond that which would result from adherence to these standards.

Patent
28 Feb 1992
TL;DR: In this article, a point transform engine performs a linear transformation on the input control points to produce a set of control points corresponding to the description of the scaled image, and a transition detection engine determines the transition pixels by scanning the vectors both horizontally and vertically to determine whether a scanline is crossed by the vector.
Abstract: Method and apparatus for outline font character generation in dot matrix devices including a point transform engine, a parametric equation generator, an adaptive forward differencing engine, a transition detection engine, a transition processor, an intermediate storage, and a bitmap converter. The point transform engine performs a linear transformation on the input control points to produce a set of control points corresponding to the description of the scaled image. The parametric equation generator computes a set of coefficients used in generating polysegments. The adaptive forward differencing engine generates a set of controlled length vectors representing the image, while the transition detection engine determines the transition pixels by scanning the vectors both horizontally and vertically to determine whether a scanline is crossed by the vector. The transition processor enhances the image quality and alters the number and location of the previously determined transition pixel. The transition locations for the image are accumulated in the intermediate storage until all curves of the image have been processed. The bitmap converter converts, on a line-by-line basis, the transition information into bitmap format.

Patent
24 Nov 1992
TL;DR: In this article, a radiographic imaging system includes a quality control workstation for processing a digital radiographic image signal in either a pass-through mode or a manual mode, where the user can also change image parameters to change the displayed radiographic images.
Abstract: A radiographic imaging system includes a quality control workstation for processing a digital radiographic image signal in either a pass-through mode or a manual mode. In the pass-through mode, the digital radiographic image signal is automatically processed according to preselected image parameter values to produce one or more versions of image processed digital radiographic images, which are automatically routed to preselected destinations. In the manual mode, a preselected version of the image processed digital radiographic image is displayed to be verified by a user. The user can also change image parameters to change the displayed radiographic image. The user then routes the image processed radiographic image signals to preselected destinations or user-selected destinations. Preferably, the digital radiographic image signal is produced by a storage phosphor reader which converts a latent radiographic image stored on a storage phosphor into said digital radiographic image signal.

Journal ArticleDOI
TL;DR: It is concluded that adaptive TGC capable of applying a unique gain function to each part of the image can consistently produce better images than a single gain function set either automatically or manually.
Abstract: The quality of ultrasonic images is often adversely affected by incorrect time gain compensation (TGC) settings. TGC set up by the operator is inadequate for two reasons: firstly, one gain function is unlikely to be appropriate for all the scan lines in an image and secondly, the operator may not have sufficient time or experience to optimise it. Adaptive processing offers a solution to this problem; it has the potential both to improve image quality and to let the operators make more effective use of their time. The literature concerned with adaptive TGC is briefly reviewed. A microcomputer-controlled system has been used to develop various algorithms for adaptive TGC. The algorithms operate in real-time and have been tested using a grey-scale test object, and clinically in routine abdominal and obstetric scanning. The imaging characteristics of each algorithm are determined largely by the value of a parameter beta, described in the text. It is concluded that adaptive TGC capable of applying a unique gain function to each part of the image can consistently produce better images than a single gain function set either automatically or manually.
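
A toy version of per-line adaptive gain: estimate the local mean echo level down each scan line and partially correct it toward a target, with an exponent playing an adaptation-strength role loosely analogous to the paper's beta. This is only an assumed analogue; the paper's algorithms and its beta are defined differently in the text.

```python
import numpy as np

def adaptive_tgc(lines, target=0.25, strength=0.5, win=32):
    # lines: (n_lines x depth) envelope-detected echo amplitudes. Each scan
    # line gets its own gain curve pulling local mean amplitude toward a
    # target; `strength` in [0, 1] sets how aggressively the gain adapts.
    kernel = np.ones(win) / win
    out = np.empty_like(lines, dtype=float)
    for i, line in enumerate(np.abs(lines).astype(float)):
        local = np.convolve(line, kernel, mode="same") + 1e-6
        out[i] = line * (target / local) ** strength
    return out
```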