
Showing papers on "Quantization (image processing) published in 1987"


Journal ArticleDOI
TL;DR: A novel formulation of the state and state-transition rule that uses a perceptually based edge classifier is introduced, and significant gains are obtained by enhancing the basic VQ approach with interblock memory.
Abstract: Image compression using memoryless vector quantization (VQ), in which small blocks (vectors) of pixels are independently encoded, has been demonstrated to be an effective technique for achieving bit rates above 0.6 bits per pixel (bpp). To maintain the same quality at lower rates, it is necessary to exploit spatial redundancy over a larger region of pixels than is possible with memoryless VQ. This can be achieved by incorporating memory of previously encoded blocks into the encoding of each successive input block. Finite-state vector quantization (FSVQ) employs a finite number of states, which summarize key information about previously encoded vectors, to select one of a family of codebooks to encode each input vector. In this paper, we review the basic ideas of VQ and extend the finite-state concept to image compression. We introduce a novel formulation of the state and state-transition rule that uses a perceptually based edge classifier. We also examine the use of interpolation in conjunction with VQ with finite memory. Coding results are presented for monochrome images in the bit-rate range of 0.24 to 0.32 bpp. The results achieved with finite memory are comparable to those of memoryless VQ at 0.6 bpp and show that there are significant gains to be obtained by enhancing the basic VQ approach with interblock memory.
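
For readers unfamiliar with the memoryless baseline the paper extends, the sketch below shows a minimal VQ encoder: each block is coded independently by its nearest codeword. The codebook, block size, and image here are illustrative toys, and the paper's FSVQ extension (state-indexed codebook families driven by an edge classifier) is not reproduced.

```python
import numpy as np

def encode_memoryless_vq(image, codebook, block=4):
    """Memoryless VQ: every block is mapped independently to its nearest
    codeword under squared-error distance; no interblock memory."""
    h, w = image.shape
    indices = []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            v = image[y:y + block, x:x + block].reshape(-1).astype(float)
            d = np.sum((codebook - v) ** 2, axis=1)  # distance to all codewords
            indices.append(int(np.argmin(d)))
    return indices

# Toy usage: 256 codewords of dimension 16 -> log2(256)/16 = 0.5 bpp.
rng = np.random.default_rng(0)
codebook = rng.uniform(0, 255, size=(256, 16))
indices = encode_memoryless_vq(rng.uniform(0, 255, size=(32, 32)), codebook)
```

FSVQ would replace the single `codebook` with one codebook per state, the state being updated from previously encoded blocks.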

108 citations


Journal ArticleDOI
TL;DR: A simple, but efficient, nearest neighbor search algorithm is proposed and simulation results demonstrating its effectiveness in the case of vector quantization for a given source are presented.
Abstract: A simple, but efficient, nearest neighbor search algorithm is proposed and simulation results demonstrating its effectiveness in the case of vector quantization for a given source are presented. The simulation results indicate that use of this approach reduces the number of multiplications and additions to as low as 9 percent of those required for the conventional full search method. The reduction in the number of subtractions is also considerable. The increase in the number of comparisons is moderate, and therefore, the total number of operations can be as low as 28 percent of those required by the full search method. An additional advantage of the described algorithm is the fact that it requires no precomputations and/or extra memory.
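
The abstract does not name the algorithm, so the sketch below shows a partial-distance search, a classic fast nearest-neighbor technique that matches the stated properties (exact full-search result, no precomputation, no extra memory); treat it as an assumed stand-in rather than the authors' specific method.

```python
import numpy as np

def nearest_codeword_partial_distance(v, codebook):
    """Exact full-search result at reduced cost: accumulate the squared
    distance term by term and abandon a codeword as soon as the running
    sum exceeds the best distance found so far."""
    best_i, best_d = -1, float("inf")
    for i, c in enumerate(codebook):
        d = 0.0
        for vj, cj in zip(v, c):
            d += (vj - cj) ** 2
            if d >= best_d:          # early exit: this codeword cannot win
                break
        else:                        # ran to completion -> new best match
            best_i, best_d = i, d
    return best_i, best_d
```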

70 citations


Proceedings ArticleDOI
01 Apr 1987
TL;DR: A combination of a block-overlapping pyramidal transform with multistage VQ enables VQ of large blocks in a hierarchical manner with small computational costs, while the block-overlapping principle gives rise to a smooth image reconstruction.
Abstract: Vector quantization is a promising encoding technique, especially for low data rate image transmission. Because its computational complexity grows exponentially with the block dimension, however, only small block sizes have been used in practical applications. This restricts the coding efficiency and produces some blockiness in the reconstructed images. Our proposal solves these problems by combining a block-overlapping pyramidal transform with multistage VQ. This concept enables VQ of large blocks in a hierarchical manner at small computational cost, while the block-overlapping principle gives rise to a smooth image reconstruction. The simulation results showed that picture-phone sequences are reconstructed without any annoying artifacts.
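
As a rough illustration of the multistage VQ component, the sketch below quantizes each stage's residual with its own small codebook; the block-overlapping pyramidal transform, the other half of the proposal, is omitted.

```python
import numpy as np

def multistage_vq_encode(v, codebooks):
    """Multistage VQ: each stage quantizes the residual left by the
    previous stage, keeping the per-stage codebooks (and searches) small."""
    residual = np.asarray(v, dtype=float)
    indices = []
    for cb in codebooks:
        i = int(np.argmin(np.sum((cb - residual) ** 2, axis=1)))
        indices.append(i)
        residual = residual - cb[i]    # hand the residual to the next stage
    return indices

def multistage_vq_decode(indices, codebooks):
    return sum(cb[i] for cb, i in zip(codebooks, indices))
```

Two stages of 256 codewords each search only 512 entries, versus the 65,536 a single flat codebook of equivalent rate would require.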

15 citations


Journal ArticleDOI
TL;DR: The past five years of progress in image processing technology related to radiography applications are reviewed, and it is anticipated that the growth in this field will continue for many years to come.

14 citations


Journal ArticleDOI
TL;DR: An object dependent deterministic diffuser is presented to ensure high diffraction efficiency and small quantization errors in computer-generated holograms.

10 citations



Journal ArticleDOI
TL;DR: An analysis of dual-phase holograms is presented which concentrates on the effects of phase quantization on hologram design, and it is shown that the performance of the quantization schemes depends on the number of quantization levels available.
Abstract: An analysis of dual-phase holograms is presented which concentrates on the effects of phase quantization on hologram design. Two different quantization schemes are investigated and used in conjunction with an iterative design algorithm to reduce quantization error. It is shown that performance of the quantization schemes depends on the number of quantization levels available.
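
As a minimal illustration of the quantization step being analyzed, the sketch below rounds a phase array to a given number of uniformly spaced levels; the paper's two specific quantization schemes and its iterative design algorithm are not reproduced.

```python
import numpy as np

def quantize_phase(phase, levels):
    """Round a phase array (radians) to the nearest of `levels` values
    uniformly spaced on [0, 2*pi)."""
    step = 2 * np.pi / levels
    return np.round(np.mod(phase, 2 * np.pi) / step) * step % (2 * np.pi)
```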

7 citations


Proceedings ArticleDOI
01 Jan 1987
TL;DR: The full-frame bit allocation algorithm for radiological image compression developed in the laboratory can achieve compression ratios as high as 30:1, and the flexibility inherent in the microcode extends the capabilities of the system to incorporate images of variable sizes.
Abstract: The full-frame bit allocation algorithm for radiological image compression developed in our laboratory can achieve compression ratios as high as 30:1. The software development and clinical evaluation of this algorithm have been completed. It involves two stages of operations: a two-dimensional discrete cosine transform and pixel quantization in the transform space, with pixel depth kept accountable by a bit allocation table. The greatest engineering challenge in implementing a hardware version of the compression system lies in the fast cosine transform of 1K x 1K images. Our design took an expandable modular approach based on the VME bus system, which has a maximum data transfer rate of 48 Mbytes per second, with a Motorola 68020 microprocessor as the master controller. The transform modules are based on advanced digital signal processor (DSP) chips microprogrammed to perform fast cosine transforms. Four DSPs built into a single-board transform module can process a 1K x 1K image in 1.7 seconds. Additional transform modules working in parallel can be added if even greater speeds are desired. The flexibility inherent in the microcode extends the capabilities of the system to incorporate images of variable sizes. Our design allows for a maximum image size of 2K x 2K.
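
A minimal software sketch of the two stages named in the abstract, assuming scipy for the 2-D DCT: a full-frame transform followed by coefficient quantization driven by a per-coefficient bit-allocation table. The step-size rule and the toy allocation table here are crude placeholders, not the paper's bit allocation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_full_frame(image, bits):
    """Full-frame 2-D DCT, then quantize each coefficient with a step
    size derived from a per-coefficient bit-allocation table `bits`."""
    coeffs = dctn(image.astype(float), norm="ortho")
    # Crude placeholder rule: more allocated bits -> finer quantizer step.
    step = (np.abs(coeffs).max() + 1e-9) / (2.0 ** bits)
    return np.round(coeffs / step), step

def decompress_full_frame(q, step):
    return idctn(q * step, norm="ortho")

# Toy usage: allocate more depth to low-frequency coefficients.
img = np.random.default_rng(1).uniform(0, 255, (64, 64))
u, v = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
bits = np.clip(8 - (u + v) // 8, 0, 8)       # assumed toy allocation table
q, step = compress_full_frame(img, bits)
rec = decompress_full_frame(q, step)
```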

7 citations


Proceedings ArticleDOI
13 Oct 1987
TL;DR: A novel method is described in this paper which alleviates edge degradation, the so-called "block effect", and shows that the edge features are well preserved upon reconstruction at the decoder.
Abstract: Vector quantization is a new coding technique which has enjoyed much success in its brief history. Its most attractive features are a high compression ratio and a simple decoder. Thus, it shows great promise in applications using a single encoder with multiple decoders, such as videotext and archiving. However, some problems have arisen, most notably edge degradation, the so-called "block effect". A novel method is described in this paper which alleviates this problem without much increase in computational effort. An index, the activity index, has been devised based upon measurements of the input image data; it is used to classify image areas into two groups, active and nonactive. For nonactive areas a large block size is used, while for active areas the block size is small. Two codebooks are generated, corresponding to each of the two groups of blocks formed. Using this adaptive vector quantization scheme, the results obtained show that the edge features are well preserved upon reconstruction at the decoder.
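
The abstract leaves the activity index unspecified; the sketch below uses block variance as an assumed stand-in to make the active/nonactive decision that drives the adaptive choice of block size and codebook.

```python
import numpy as np

def classify_blocks(image, block=8, threshold=100.0):
    """Label each block 'active' (detail/edges) or 'nonactive' (smooth)
    using block variance as the activity measure; active areas would then
    be coded with small blocks, nonactive areas with large ones."""
    h, w = image.shape
    labels = {}
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            activity = image[y:y + block, x:x + block].var()
            labels[(y, x)] = "active" if activity > threshold else "nonactive"
    return labels
```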

5 citations


Proceedings ArticleDOI
13 Oct 1987
TL;DR: In this article, linear and elliptical features are extracted from a boundary image containing discontinuities and distortions caused either by occlusion or by noise.
Abstract: This paper presents a new method to obtain features from intensity images by making use of an improved Hough transform. Through image preprocessing, the intensity image can be converted to a boundary image. A new approach is proposed to find the linear and elliptical features embedded in the boundary image, which contains discontinuities and distortions caused either by occlusion or by noise. The Hough transform method is improved to recognize these features with less computational effort and greater accuracy.
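
For reference, the sketch below is the standard straight-line Hough transform accumulator that the paper improves upon; the elliptical-feature extension and the robustness improvements are not shown.

```python
import numpy as np

def hough_lines(edge_points, shape, n_theta=180):
    """Vote in (rho, theta) space; peaks in the accumulator correspond
    to lines passing through many edge points."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for y, x in edge_points:
        rho = np.rint(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rho + diag, np.arange(n_theta)] += 1   # one vote per angle
    return acc, thetas
```

Because gaps in the boundary only lower a peak's height rather than destroy it, voting in parameter space tolerates exactly the discontinuities the paper addresses.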

4 citations


Proceedings ArticleDOI
01 Apr 1987
TL;DR: It is shown by examples that the number of quantization levels when the smeared image is digitized determines the quality of the restored image.
Abstract: Reconstruction from phase information via convex projections is considered for deblurring smeared images when the blurring function does not introduce any phase. The effects of initialization and constraints in the algorithm are investigated. It is shown by examples that the number of quantization levels when the smeared image is digitized determines the quality of the restored image.
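
A minimal sketch of reconstruction from Fourier phase by alternating projections, assuming a nonnegativity constraint in the image domain; the paper's exact constraint sets and initializations differ, and the choice of both is precisely what it investigates.

```python
import numpy as np

def restore_from_phase(known_phase, n_iter=200):
    """Alternate two projections: impose the known Fourier phase in the
    frequency domain, then clip to a nonnegative image in the spatial
    domain."""
    x = np.ones(known_phase.shape)             # initialization choice matters
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X = np.abs(X) * np.exp(1j * known_phase)   # keep magnitude, fix phase
        x = np.clip(np.real(np.fft.ifft2(X)), 0, None)
    return x
```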

Proceedings ArticleDOI
13 Oct 1987
TL;DR: A technique based on tree-searched mean residual vector quantization (MRVQ) is developed for progressive compression and transmission of images, yielding high-quality images at 1.4 bits/pixel.
Abstract: In this paper, we develop a technique based on tree-searched mean residual vector quantization (MRVQ) for progressive compression and transmission of images. In the first stage, averages over image subblocks of a certain size are transmitted. If the receiver decides to retain the image, the residual image generated by subtracting the block averages from the original is progressively transmitted using the tree-searched vector quantization (VQ) hierarchy. In an attempt to reduce the bit-rate of the initial transmission, Knowlton's scheme is used to transmit the block averages progressively. Using a (4x4) block size, we obtain high quality images at 1.4 bits/pixel.
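
The sketch below shows the mean-residual step that begins the scheme: block means are extracted (and would be sent first as a coarse preview), leaving a residual image for the tree-searched VQ stages, which are omitted here along with Knowlton's progressive scheme.

```python
import numpy as np

def mean_residual_split(image, block=4):
    """Stage 1 of MRVQ: extract the block means (transmitted first as a
    coarse preview) and form the residual image that later VQ stages
    progressively refine."""
    h, w = image.shape
    means = np.zeros((h // block, w // block))
    residual = image.astype(float).copy()
    for by in range(h // block):
        for bx in range(w // block):
            sl = np.s_[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            means[by, bx] = residual[sl].mean()
            residual[sl] -= means[by, bx]
    return means, residual
```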

Patent
19 May 1987
TL;DR: In this patent, the average density over each small area of four-by-four picture elements is calculated and compared against a submatrix Si,j of a threshold matrix; because the averages are taken over four-by-four areas, a processed image at the original size is obtained when Si,j is also a four-by-four matrix.
Abstract: PURPOSE: To obtain a high-quality processed image, free of the dot-pattern distortion that image scaling can cause, by performing a binarization process that expresses halftones from multivalue-quantized digital picture information at each picture element, simultaneously with the scaling of the image. CONSTITUTION: One image is divided into small areas consisting of four-by-four picture elements, and the average density value Di,j of every small area is calculated. Each average Di,j is compared with a submatrix Si,j segmented from a threshold mother matrix M, and one bit of processed data is decided. Because the average density Di,j is taken over an area of four-by-four picture elements, a processed image at the original magnification is obtained if the submatrix Si,j is also a four-by-four matrix. If Si,j is instead constituted as a three-by-three matrix, the number of picture elements per side of the processed image becomes 3/4 of that of the original image, and a 75% reduced image is obtained.
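
A sketch of the comparison scheme as described, using a fixed submatrix for simplicity and an assumed Bayer-style threshold mother matrix (the patent does not give M's values): 4x4 block averages are compared against an s x s submatrix of thresholds, so s = 4 keeps the original size and s = 3 gives a 75% reduction.

```python
import numpy as np

def binarize_with_scaling(image, mother_matrix, sub=4):
    """Average each 4x4 cell of the input, compare the average against a
    sub x sub submatrix of thresholds, and emit that submatrix of bits:
    sub=4 keeps the original size, sub=3 yields a 75%-scale halftone."""
    h, w = image.shape
    out = np.zeros((h // 4 * sub, w // 4 * sub), dtype=np.uint8)
    for by in range(h // 4):
        for bx in range(w // 4):
            avg = image[by * 4:(by + 1) * 4, bx * 4:(bx + 1) * 4].mean()
            out[by * sub:(by + 1) * sub, bx * sub:(bx + 1) * sub] = (
                avg > mother_matrix[:sub, :sub]
            ).astype(np.uint8)
    return out

# Assumed Bayer-style 4x4 threshold mother matrix M (values not from the patent).
M = (np.array([[ 0,  8,  2, 10],
               [12,  4, 14,  6],
               [ 3, 11,  1,  9],
               [15,  7, 13,  5]]) + 0.5) * 16   # spread thresholds over 0..255
```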

Patent
26 Jun 1987
TL;DR: In this patent, hardware requirements are reduced and software is simplified by approximately converting a multi-gradation picture signal into a combination of several binary picture signals and applying a standardized encoding system for binary pictures.
Abstract: PURPOSE: To reduce hardware requirements and simplify software by approximately converting a multi-gradation picture signal into a picture signal that combines several binary picture signals, and by using a standardized encoding system for the binary pictures. CONSTITUTION: An adder 21 computes the difference (prediction error) D between the picture signal x, which represents the luminance information of one picture element X of the input multi-gradation picture signal, and a predicted value x' calculated by a prediction circuit 22. Using the reproduced luminance information a-c of three picture elements A-C located in a prescribed positional relation to X, the prediction circuit 22 finds the predicted value x' of the luminance x and a degree of luminance variation V in the neighborhood of X. A quantization circuit 23 takes the variation V and the prediction error D as inputs and outputs an approximate difference d according to a prescribed mapping. A binarization circuit 24 converts the approximate difference d into one of the states S0-S4 based on a prescribed table, thereby converting the original multi-gradation image into a special picture signal S consisting of four binary pictures. Encoding such as MR coding is then performed, and the reproduced luminance information x'' is calculated.
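
A rough sketch of the predictive front end described in the CONSTITUTION, using a common planar predictor and an assumed coarse level set in place of the patent's table-driven states S0-S4 and variation-dependent mapping.

```python
import numpy as np

def predict_and_quantize(image, levels=(-16, -4, 0, 4, 16)):
    """Predict each pixel from already-reconstructed neighbours A (left),
    B (above) and C (above-left), quantize the prediction error to a few
    coarse levels, and feed the reconstruction back into the predictor.
    The level set is illustrative; the patent's table also depends on the
    local luminance variation V."""
    img = image.astype(int)
    recon = img.copy()            # first row/column kept verbatim for simplicity
    codes = np.zeros_like(img)
    lv = np.array(levels)
    for y in range(1, img.shape[0]):
        for x in range(1, img.shape[1]):
            a, b, c = recon[y, x - 1], recon[y - 1, x], recon[y - 1, x - 1]
            pred = a + b - c                    # planar predictor
            err = img[y, x] - pred
            k = int(np.abs(lv - err).argmin())  # nearest coarse level
            codes[y, x] = k
            recon[y, x] = pred + lv[k]          # decoder-side reconstruction
    return codes, recon
```

With five states, each code fits in a handful of binary planes, which is what lets a standardized binary encoder (such as MR coding) carry the multi-gradation image.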