
Showing papers on "Quantization (image processing)" published in 1977


Journal ArticleDOI
TL;DR: For a well-known dot profile, it is shown analytically how aliasing is suppressed, and that quantization contouring may be eliminated without an appreciable increase in aliasing.
Abstract: Screening techniques are widely used in the binary display of continuous-tone images by digital output devices. The quality of the halftone image resulting from such a nonlinear transformation is dependent on the dot profile employed. In the Fourier domain, aliasing degrades the halftone image. The relationship between image quality and dot profile is studied from this point of view. For a well-known dot profile, it is shown analytically how aliasing is suppressed, and that quantization contouring may be eliminated without an appreciable increase in aliasing.

78 citations
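To make the screening operation concrete, here is a minimal sketch of binary display by thresholding a grayscale image against a tiled dot profile. The 4x4 Bayer matrix used here is an illustrative choice, not the specific profile analyzed in the paper.

```python
import numpy as np

# 4x4 Bayer ordered-dither matrix, normalized to [0, 1).
# An illustrative dot profile, not the one studied in the paper.
BAYER_4 = np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) / 16.0

def halftone(image):
    """Binary-screen a grayscale image in [0, 1] by tiling the
    dot profile over it and thresholding pointwise."""
    h, w = image.shape
    ty, tx = BAYER_4.shape
    # Tile the screen to cover the image, then crop to size.
    screen = np.tile(BAYER_4, (h // ty + 1, w // tx + 1))[:h, :w]
    return (image > screen).astype(np.uint8)  # 1 where the dot is "on"

# Example: a horizontal gray ramp rendered as a binary halftone.
ramp = np.tile(np.linspace(0.0, 1.0, 64), (16, 1))
binary = halftone(ramp)
```

Because the screen is periodic, its interaction with the image's frequency content is exactly what produces the aliasing discussed in the abstract.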


Proceedings ArticleDOI
08 Dec 1977
TL;DR: In this paper, the effects of quantization of the radar returns transmitted from aircraft or spacecraft employing a synthetic aperture radar system were evaluated in terms of crater scene, number of looks, and transmission error rate.
Abstract: A study is made of the effects of quantizing the radar returns transmitted from aircraft or spacecraft employing a synthetic aperture radar system. The study is based on output images obtained after one-bit, two-bit, and eight-bit quantization, with the results compared to ground truth; in this way the degradation resulting from data or bandwidth reduction is determined. Quantization is evaluated in terms of crater scene, number of looks, and transmission error rate. It is found that two-bit quantization of raw radar data from homogeneous scenes processed to 32 looks yields nearly all the details of the original. One-bit quantization of the same data yields a good visual representation of the scene, but some fine detail is lost and the absolute reflectivity level is not reliable. Image quality is observed to improve with more looks, and quantization at video and at intermediate frequency is indistinguishable even in the one-bit case. Image quality is not affected by bit error rates below about 2^-7.

8 citations
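As a rough illustration of the bandwidth-reduction trade-off, the sketch below applies uniform one-, two-, and eight-bit quantization to simulated zero-mean raw returns. The Gaussian signal model and the clipping level of three standard deviations are assumptions, not parameters from the study.

```python
import numpy as np

def quantize(samples, bits, clip=3.0):
    """Uniformly quantize zero-mean raw returns to 2**bits levels.
    `clip` (in standard deviations) is an illustrative loading factor.
    One-bit quantization reduces to keeping only the sign."""
    if bits == 1:
        return np.sign(samples)  # hard limiting: sign bit only
    sigma = samples.std()
    levels = 2 ** bits
    step = 2 * clip * sigma / levels
    q = np.round(samples / step)                     # uniform mid-tread quantizer
    q = np.clip(q, -(levels // 2), levels // 2 - 1)  # saturate the extremes
    return q * step

# Example: compare 1-, 2-, and 8-bit versions of simulated raw returns.
rng = np.random.default_rng(0)
raw = rng.normal(size=4096)
for b in (1, 2, 8):
    err = raw - quantize(raw, b)
    print(b, "bits -> SNR %.1f dB" % (10 * np.log10(raw.var() / err.var())))
```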


01 May 1977
TL;DR: This paper examines several image segmentation algorithms explored in the development of the VISIONS system and shows that the interaction between global feature information and spatial information provides a view that is lacking in either alone.
Abstract: This paper examines several image segmentation algorithms which have been explored in the development of the VISIONS system. Each of these algorithms can be viewed as a variation on a basic theme: the clustering of activity in feature space via histogram analysis, mapping these clusters back onto the image, and then isolating regions by analysis of the spatial relationships of the cluster labels. It is shown that the interaction between these two representations of data (global feature information and spatial information) provides a view that is lacking in either. The scene segmentation algorithms contain the following stages: (1) PLAN: reduce the amount of detail in the scene to a bare minimum by performing a fast, simple segmentation into primary areas using spatial and/or quantization compression. (2) REFINE: resegment the scene with careful attention directed to the textural complexities of each region. The primitive transformations which are used include histogram clustering, region growing, data reduction by narrowing the quantization range, and/or data reduction by spatially collapsing the data while extracting features. These algorithms have been implemented using a parallel, hierarchical computational structure. Comparisons of performance on several images are given. (Author)

7 citations
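A minimal sketch of the basic theme: cluster gray levels via histogram analysis, map the cluster labels back onto the image, then isolate spatially connected regions. The naive peak picking and the use of scipy's connected-component labeling here are simplified stand-ins for the paper's PLAN/REFINE machinery.

```python
import numpy as np
from scipy import ndimage

def segment(image, bins=64):
    """Cluster activity in feature (gray-level) space via histogram
    analysis, map cluster labels back onto the image, then isolate
    spatially connected regions. Peak picking is deliberately naive."""
    hist, edges = np.histogram(image, bins=bins)
    # Local maxima of the histogram serve as cluster centers.
    peaks = [i for i in range(1, bins - 1)
             if hist[i] >= hist[i - 1] and hist[i] > hist[i + 1]]
    if not peaks:                        # degenerate histogram: one cluster
        peaks = [int(np.argmax(hist))]
    centers = np.array([(edges[i] + edges[i + 1]) / 2 for i in peaks])
    # Global step: map each pixel to its nearest cluster center.
    labels = np.argmin(np.abs(image[..., None] - centers), axis=-1)
    # Local step: isolate regions by spatial connectivity of the labels.
    regions = np.zeros_like(labels)
    nregions = 0
    for k in range(len(centers)):
        comp, n = ndimage.label(labels == k)
        regions[comp > 0] = comp[comp > 0] + nregions
        nregions += n
    return regions
```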


Journal ArticleDOI
01 Oct 1977
TL;DR: The hybrid process is compared with the two-dimensional transform method in terms of bit rate, mean-square error, and computational complexity.
Abstract: Transform-DPCM (hybrid) techniques are applied to image data processing. To achieve bandwidth reduction, sample selection (threshold or variance) and variable bit allocation are adopted. The hybrid process is compared with the two-dimensional transform method in terms of bit rate, mean-square error, and computational complexity.

5 citations
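A bare-bones sketch of a transform-DPCM hybrid coder: a one-dimensional DCT along each row (the transform dimension), followed by first-order DPCM down each coefficient column (the predictive dimension). The prediction coefficient, the uniform quantizer step, and the omission of sample selection and variable bit allocation are all simplifying assumptions.

```python
import numpy as np
from scipy.fft import dct, idct

def hybrid_encode(image, step=8.0, a=0.95):
    """Transform-DPCM hybrid: DCT each row, then run first-order
    DPCM down each coefficient column. `a` is an assumed
    prediction coefficient; `step` an assumed quantizer step."""
    coeffs = dct(image.astype(float), axis=1, norm='ortho')
    resid = np.empty_like(coeffs)
    pred = np.zeros(coeffs.shape[1])
    for i in range(coeffs.shape[0]):
        e = coeffs[i] - a * pred      # prediction error
        q = np.round(e / step)        # uniform quantization
        pred = a * pred + q * step    # track the decoder's reconstruction
        resid[i] = q
    return resid

def hybrid_decode(resid, step=8.0, a=0.95):
    coeffs = np.empty_like(resid, dtype=float)
    pred = np.zeros(resid.shape[1])
    for i in range(resid.shape[0]):
        pred = a * pred + resid[i] * step
        coeffs[i] = pred
    return idct(coeffs, axis=1, norm='ortho')
```

Keeping the encoder's predictor synchronized with the decoder's reconstruction (rather than with the unquantized coefficients) is what prevents quantization error from accumulating down the columns.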


Proceedings ArticleDOI
08 Dec 1977
TL;DR: The results of this experiment suggest that image appearance may be improved by designing transform coefficient quantization rules to approximate the effects of additive noise rather than to omit low energy image components, as dictated by conventional rate-distortion theory.
Abstract: Rate-distortion theory using the mean squared error criterion is often used to design digital image coding rules. The resulting distortion is, in theory, statistically equivalent to omitting components of the image from transmission. We compare a rate-distortion simulation using the discrete cosine transform to a method which is statistically equivalent to adding uncorrelated random noise to the image. This latter method is based on a PN (pseudo-noise) transform, which is generated from a Hadamard matrix whose core consists of the cyclic shifts of a binary maximum length linear shift register sequence. Visual comparisons of the two approaches are made at the same mean squared error. In all cases, the images encoded using the PN transform method showed superior definition of detail and less geometrical distortion at transform block boundaries than the images encoded using the discrete cosine method. The results of this experiment suggest that image appearance may be improved by designing transform coefficient quantization rules to approximate the effects of additive noise rather than to omit low energy image components, as dictated by conventional rate-distortion theory.

2 citations
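The core construction described above can be sketched directly: take the cyclic shifts of a binary maximum-length sequence, map bits to +-1, and border the resulting circulant with a row and column of ones to obtain a Hadamard matrix. The length-7 sequence from the primitive polynomial x^3 + x + 1 is an illustrative choice of register.

```python
import numpy as np

def m_sequence(n=7):
    """Length-7 maximum-length sequence from x^3 + x + 1
    (recurrence a[k] = a[k-2] XOR a[k-3]; an illustrative register)."""
    a = [1, 0, 0]                      # any nonzero seed works
    while len(a) < n:
        a.append(a[-2] ^ a[-3])
    return np.array(a)

def pn_hadamard():
    """Hadamard matrix whose core is the circulant of cyclic shifts
    of the m-sequence, mapped 0 -> +1, 1 -> -1, bordered with ones."""
    s = m_sequence()
    core = np.array([1 - 2 * np.roll(s, k) for k in range(len(s))])
    H = np.ones((len(s) + 1, len(s) + 1), dtype=int)
    H[1:, 1:] = core
    return H

H = pn_hadamard()
assert np.array_equal(H @ H.T, 8 * np.eye(8, dtype=int))  # rows orthogonal
```

The rows come out mutually orthogonal because distinct cyclic shifts of a +-1 m-sequence correlate to exactly -1, which the all-ones border cancels.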