
Showing papers on "Quantization (image processing) published in 1976"


Proceedings ArticleDOI
Paul G. Roetling
09 Jul 1976
TL;DR: In this paper, the authors show that the visual MTF can be interpreted as a number of perceptible levels as a function of spatial frequency, and that the total information perceived by the eye is much less than 8 bits times the number of pixels.
Abstract: Sample spacing and quantization levels are usually chosen for digitizing images such that the eye should not see degradations due to either process. Sample spacing is chosen based on the resolution (or high-frequency) limit of the eye, and quantization is based on perception of low-contrast differences at lower frequencies. This process results in about 8 bit/pixel, 20 pixel/mm digitization, but, being based on two different visual limits, the total number of bits is an overestimate of the information perceived by the eye. The visual MTF can be interpreted in terms of perceptible levels as a function of spatial frequency. We show by this interpretation that the total information perceived by the eye is much less than 8 bits times the number of pixels. We consider the classic halftone as an image coding process, yielding 1 bit/pixel. This approach indicates that halftones approximate the proper distribution of levels as a function of spatial frequency; therefore we have a possible explanation of why halftone images retain most of the visual quality of the original.

28 citations
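
Roetling's argument treats the halftone itself as a 1 bit/pixel coding process that trades gray-level resolution for spatial resolution. As a rough illustration of that idea (not the specific screen used in the paper), the sketch below applies a standard 4x4 ordered-dither (Bayer) threshold matrix to an 8-bit image; the matrix, test image, and parameters are assumptions for demonstration only.

```python
import numpy as np

# 4x4 Bayer ordered-dither matrix (thresholds normalized to [0, 1)).
BAYER_4 = (1 / 16) * np.array([
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
])

def ordered_dither(gray):
    """Reduce an 8-bit grayscale image to 1 bit/pixel with an ordered dither.

    The threshold varies periodically over the image, so mid-tones are
    rendered as spatial patterns of black and white dots -- trading
    gray-level resolution for spatial resolution.
    """
    h, w = gray.shape
    # Tile the threshold matrix to cover the whole image.
    thresh = np.tile(BAYER_4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray / 255.0 > thresh).astype(np.uint8)  # 0 or 1 per pixel

if __name__ == "__main__":
    # Synthetic gray ramp stands in for a real 8-bit original.
    ramp = np.tile(np.linspace(0, 255, 256, dtype=np.uint8), (64, 1))
    binary = ordered_dither(ramp)
    print(binary.shape, binary.dtype, binary.min(), binary.max())
```

Because the threshold varies with position, mid-gray areas come out as dot patterns whose density carries the tone information, which is roughly the "distribution of levels as a function of spatial frequency" the abstract appeals to.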


Proceedings ArticleDOI
13 Dec 1976
TL;DR: In this paper, an approach is proposed that utilizes modern adaptive estimation and identification theory techniques to learn the picture statistics in real time so that an optimal set of coefficients can be identified as the signal statistics change.
Abstract: Historically, the data compression techniques utilized to process image data have been unitary transform encoding or time-domain encoding. Recently, these two approaches have been combined into a hybrid transform-domain/time-domain system. The hybrid system incorporates some of the advantages of both concepts and eliminates some of the disadvantages of each. However, the problems of picture-statistics dependence and error propagation still exist. This is because the transformed coefficients are non-stationary processes, which implies that a constant DPCM coefficient set cannot be optimal for all scenes. In this paper, an approach is suggested that has the potential of eliminating or greatly alleviating these problems. The approach utilizes modern adaptive estimation and identification theory techniques to "learn" the picture statistics in real time so that an optimal set of coefficients can be identified as the signal statistics change. In this way, the dependency of the system on the picture statistics is greatly reduced. Furthermore, by updating and transmitting a new set of predictor coefficients periodically, the channel error propagation problem is alleviated.

5 citations
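
The abstract above describes learning DPCM predictor coefficients in real time but does not reproduce the estimation equations. The following is a minimal sketch of one way such on-line adaptation can work, using an LMS-style update on a one-dimensional coefficient sequence; the predictor order, step size, and test signal are illustrative assumptions, not the paper's identification algorithm.

```python
import numpy as np

def adaptive_dpcm(x, order=3, mu=0.01):
    """DPCM with predictor coefficients adapted by an LMS update.

    x     : 1-D sequence of samples (e.g., a line of transform coefficients)
    order : number of past samples used by the linear predictor
    mu    : LMS step size controlling how fast the coefficients track the signal
    Returns the prediction-error sequence (what DPCM would quantize and send)
    and the final coefficient set.
    """
    w = np.zeros(order)            # predictor coefficients, updated as statistics change
    past = np.zeros(order)         # most recent samples, newest first
    errors = np.empty_like(x, dtype=float)
    for n, sample in enumerate(x):
        pred = w @ past            # linear prediction from the past samples
        e = sample - pred          # prediction error
        errors[n] = e
        w += mu * e * past         # LMS update: nudge w toward smaller squared error
        past = np.roll(past, 1)    # shift the sample history
        past[0] = sample           # (no quantizer in this sketch, so use the sample itself)
    return errors, w

if __name__ == "__main__":
    # A slowly varying synthetic signal stands in for non-stationary image statistics.
    t = np.arange(2000)
    x = np.sin(2 * np.pi * t / 150) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
    errors, w = adaptive_dpcm(x)
    print("final coefficients:", np.round(w, 3))
    print("error variance vs signal variance:", errors.var(), x.var())
```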


Proceedings ArticleDOI
13 Dec 1976
TL;DR: Three modifications of the Constant Area Quantization (CAQ) image bandwidth compression technique have been developed and tested in order to broaden the range of compression ratios obtainable with acceptable image quality.
Abstract: Three modifications of the Constant Area Quantization (CAQ) image bandwidth compression technique have been developed and tested in order to broaden the range of compression ratios obtainable with acceptable image quality. The first modification introduced an adaptive area threshold, the second was a two-threshold algorithm, and the third was a hybrid of the CAQ with a Hadamard transform technique. Using these three algorithms together with the basic CAQ, images spanning the range from 0.2 to 2 bits per picture element were obtained from an 8-bit original.

3 citations
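
The exact CAQ algorithm and its two-threshold variant are not given in the abstract, so the sketch below substitutes a generic threshold-driven run coder for a single scan line, only to illustrate how an amplitude threshold trades bit rate against reconstruction error; the threshold value, coding format, and test data are assumptions.

```python
import numpy as np

def threshold_run_code(line, threshold=8):
    """Encode a scan line as (run_length, level) pairs.

    A new level is emitted only when the input drifts more than `threshold`
    gray levels away from the last transmitted level; otherwise the current
    run is extended. Larger thresholds give longer runs and fewer bits at
    the cost of larger reconstruction error -- the basic rate/quality trade
    that the CAQ modifications above are designed to broaden.
    """
    runs = []
    level = int(line[0])
    run = 1
    for sample in line[1:]:
        if abs(int(sample) - level) <= threshold:
            run += 1
        else:
            runs.append((run, level))
            level = int(sample)
            run = 1
    runs.append((run, level))
    return runs

def decode(runs):
    """Rebuild the scan line by repeating each level over its run."""
    return np.concatenate([np.full(r, lvl, dtype=np.uint8) for r, lvl in runs])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Piecewise-constant line plus noise stands in for an 8-bit image row.
    line = np.clip(np.repeat(rng.integers(0, 256, 8), 32)
                   + rng.integers(-3, 4, 256), 0, 255).astype(np.uint8)
    runs = threshold_run_code(line, threshold=8)
    rec = decode(runs)
    print(len(runs), "runs for", line.size, "pixels; max error =",
          int(np.abs(rec.astype(int) - line.astype(int)).max()))
```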


01 Jan 1976
TL;DR: In this article, the radar return of reflectors whose average intensity matched that of the picture elements in a Landsat scene was simulated, and the returns were processed in three ways: normally (with no quantization), with a procedure simulating IF hard limiting, and with a procedure simulating video (baseband) hard limiting.
Abstract: Starting with a magnetic tape of a scene viewed by the Landsat satellite, the radar return of reflectors whose average intensity matched that of the picture elements in the scene has been simulated. The returns were processed in three ways: normally, with no quantization; with a procedure simulating IF hard limiting; and with a procedure simulating video (baseband) hard limiting. For each type of processing, images for one-, two-, and four-look systems were developed. It was found that IF limiting is slightly better than video limiting, while both can be reasonable trade-offs of image quality for reduced data rates when the number of looks is four or less. These conclusions are supported by photographs representing the different processing techniques.
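
As a rough one-dimensional sketch of the kind of simulation described above (under assumptions not taken from the paper: a linear-FM reference chirp, IF limiting modeled as keeping only the phase of the complex raw signal, and video limiting modeled as hard-limiting the I and Q channels separately), the code below generates speckled returns, applies the three processing options, compresses with a matched filter, and averages one, two, or four looks.

```python
import numpy as np

def simulate_line(reflectivity, n_looks, mode, rng):
    """Simulate one line of a multilook radar image.

    reflectivity : 1-D array of average target intensities (the 'scene')
    n_looks      : number of independent looks to average
    mode         : "none"  -> no quantization of the raw return
                   "if"    -> IF hard limiting (phase-only raw signal)
                   "video" -> video hard limiting (1-bit I and Q channels)
    """
    n = reflectivity.size
    chirp_len = 64
    t = np.arange(chirp_len)
    # Linear-FM reference chirp used both to build and to compress the return.
    chirp = np.exp(1j * np.pi * (t - chirp_len / 2) ** 2 / chirp_len)
    looks = []
    for _ in range(n_looks):
        # Fully developed speckle: circular complex Gaussian reflector amplitudes.
        amp = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt(reflectivity / 2)
        raw = np.convolve(amp, chirp)                         # superposed chirped returns
        if mode == "if":
            raw = raw / np.maximum(np.abs(raw), 1e-12)        # keep phase only
        elif mode == "video":
            raw = np.sign(raw.real) + 1j * np.sign(raw.imag)  # 1-bit I and Q
        # Matched-filter (pulse) compression, aligned back to the scene grid.
        compressed = np.convolve(raw, np.conj(chirp[::-1]))[chirp_len - 1:chirp_len - 1 + n]
        looks.append(np.abs(compressed) ** 2)                 # detected intensity
    return np.mean(looks, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Alternating bright/dark strips stand in for the Landsat-derived scene.
    scene = np.tile(np.repeat([1.0, 4.0], 8), 32)
    for n_looks in (1, 2, 4):
        for mode in ("none", "if", "video"):
            image = simulate_line(scene, n_looks, mode, rng)
            corr = np.corrcoef(image, scene)[0, 1]
            print(f"{n_looks}-look, {mode:5s}: correlation with scene = {corr:.2f}")
```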

S. Knauer
01 Jan 1976
TL;DR: In this paper, the authors investigate video compression techniques applicable to remotely piloted vehicles (RPVs): reducing the frame rate, and reducing the number of bits per sample needed to represent static picture detail by means of digital video compression.
Abstract: Techniques of video compression applicable to remotely piloted vehicles (RPVs) are investigated. One approach is to reduce the frame rate; the other is to reduce the number of bits per sample needed to represent static picture detail by means of digital video compression. Hadamard transforms of 8 x 8 subpictures, with adaptive and nonadaptive quantization of transform coefficients, were investigated for the latter technique. Tapes of typical RPV video, processed by Aeronautronic Ford to simulate four frame rates, were again processed by the Ames real-time video system to obtain a variety of compressions of each of the four frame rates.
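
As a minimal sketch of the 8 x 8 Hadamard transform coding mentioned above, the code below transforms one subpicture, applies a uniform scalar quantizer to the coefficients, and inverts the transform. The single step size stands in for the adaptive and nonadaptive bit allocations actually studied, and the block data are synthetic.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix of order n (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def hadamard_code_block(block, step=16):
    """Transform an 8x8 block, coarsely quantize the coefficients, and reconstruct."""
    H = hadamard(8) / np.sqrt(8)                 # orthonormal transform matrix
    coeffs = H @ block @ H.T                     # 2-D Hadamard transform of the subpicture
    quantized = np.round(coeffs / step) * step   # uniform scalar quantization of coefficients
    return H.T @ quantized @ H                   # inverse transform of the quantized coefficients

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, (8, 8)).astype(float)   # stand-in for one 8x8 subpicture
    rec = hadamard_code_block(block, step=16)
    print("max reconstruction error:", np.abs(rec - block).max())
```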