
Showing papers on "Quantization (image processing)" published in 1988



Proceedings ArticleDOI
25 Oct 1988
TL;DR: This paper applies back propagation to a well-understood problem in image analysis, bandwidth compression, analyzes the internal representation developed by the network, and finds that the learning algorithm produces a nearly linear transformation of a Principal Components Analysis of the image.
Abstract: The recent discovery of powerful learning algorithms for parallel distributed networks has made it possible to program computation in a new way, by example rather than by algorithm. The back propagation algorithm is a gradient descent technique for training such networks. The problem posed to the researcher using such an algorithm is discovering how it solved the problem, if a solution is found. In this paper we apply back propagation to a well-understood problem in image analysis, bandwidth compression, and analyze the internal representation developed by the network. The network used consists of nonlinear units that compute a sigmoidal function of their inputs. It is found that the learning algorithm produces a nearly linear transformation of a Principal Components Analysis of the image, and the units in the network tend to stay in the linear range of the sigmoid function. The particular transform found departs from the standard Principal Components solution in that near-equal variance of the coefficients results, depending on the encoding used. While the solution found is basically linear, such networks can also use the nonlinearity to solve encoding problems where the Principal Components solution is degenerate.
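As an illustration of the finding above, the sketch below trains a small linear autoencoder by gradient descent on synthetic patch data and measures how much of the learned encoder lies in the top principal subspace. The data, sizes, and learning rate are illustrative assumptions, not the paper's setup, and the sigmoid is replaced by a purely linear unit since the paper reports the units stay in the sigmoid's linear range.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image patches": 500 samples of 8x8 = 64-dim vectors with correlated entries.
X = rng.normal(size=(500, 64)) @ rng.normal(size=(64, 64)) * 0.1
X -= X.mean(axis=0)                      # zero-mean, as in PCA

n_hidden = 16                            # bottleneck width (assumed)
W_enc = rng.normal(scale=0.01, size=(64, n_hidden))
W_dec = rng.normal(scale=0.01, size=(n_hidden, 64))

lr = 0.05
for epoch in range(2000):
    H = X @ W_enc                        # linear encoder
    X_hat = H @ W_dec                    # linear decoder
    err = X_hat - X                      # reconstruction error
    # Gradient descent (back propagation) on the mean squared error.
    W_dec -= lr * H.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

# Compare the learned bottleneck subspace with the top PCA subspace.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pca_basis = Vt[:n_hidden]                # top principal directions
# Ratio approaches 1 when the encoder lies in the principal subspace.
overlap = np.linalg.norm(pca_basis @ W_enc) / np.linalg.norm(W_enc)
print(f"fraction of encoder weights inside the PCA subspace: {overlap:.3f}")
```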

91 citations


Patent
24 Mar 1988
TL;DR: In this paper, a method and apparatus for encoding interframe error data in an image transmission system, and in particular in a motion compensated image transmission system for transmitting a sequence of image frames from a transmitter to a receiver, employ hierarchical vector quantization and arithmetic coding to increase the data compression of the images being transmitted.
Abstract: A method and apparatus for encoding interframe error data in an image transmission system, and in particular in a motion compensated image transmission system for transmitting a sequence of image frames from a transmitter to a receiver, employ hierarchical vector quantization and arithmetic coding to increase the data compression of the images being transmitted. The method and apparatus decimate the interframe predicted image data and the uncoded current image data, and apply hierarchical vector quantization encoding to the resulting pyramid data structures. Lossy coding is applied on a level-by-level basis for generating the encoded data representation of the image difference between the predicted image data and the uncoded original image. The method and apparatus are applicable to systems transmitting a sequence of image frames both with and without motion compensation. The method and apparatus feature blurring those blocks of the predicted image data which fail to adequately represent the current image at a pyramid structural level and shifting block boundaries to increase the efficiency of the vector quantization coding mechanism. The method further features techniques, when gain/shape vector quantization is employed, for decreasing the data which must be sent to the receiver by varying the size of the shape code book as a function of the gain associated with the shape. Thresholding and the deletion of isolated blocks of data also decrease transmission requirements without objectionable loss of image quality.
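The pyramid data structure the patent relies on can be sketched as repeated decimation. The 2x2 mean filter and the level count below are assumptions, since the abstract does not fix them; hierarchical coding would then quantize level by level as noted in the comments.

```python
import numpy as np

def build_pyramid(image, levels=4):
    """Decimate an image repeatedly to form a pyramid, coarsest level last.

    A simple 2x2 mean filter stands in for the patent's (unspecified)
    decimation filter.
    """
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        img = img[:h, :w]
        # Average each 2x2 block, halving both dimensions.
        img = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(img)
    return pyramid

# Hierarchical coding would encode the coarsest level first and refine
# level by level, e.g. vector-quantizing the difference between each
# level and an upsampled version of the coarser level.
frame = np.random.default_rng(1).integers(0, 256, size=(64, 64))
for level, img in enumerate(build_pyramid(frame)):
    print(level, img.shape)
```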

76 citations


Proceedings ArticleDOI
25 Oct 1988
TL;DR: A novel technique for digital image halftoning is presented, performing nonstandard quantization subject to a fidelity criterion, using massively parallel artificial symmetric neural networks for this purpose.
Abstract: A novel technique for digital image halftoning is presented, performing nonstandard quantization subject to a fidelity criterion. Massively parallel artificial symmetric neural networks are used for this purpose, minimizing a frequency-weighted mean squared error between the continuous-tone input and the bilevel output image. The weights of these networks can be selected so that the generated halftoned images are of good quality. A symmetric formulation of the error diffusion halftoning technique is also presented in the form of a massively parallel network. This network contains a nonmonotonic nonlinearity in lieu of the sigmoid function and is shown to be appropriate for effective halftoning of images.
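For orientation, the classic error diffusion technique that the paper reformulates as a symmetric network is sketched below. This is the standard Floyd-Steinberg variant, not the paper's network: each pixel is quantized to one bit and the quantization error is pushed onto not-yet-visited neighbours.

```python
import numpy as np

def error_diffusion(gray):
    """Floyd-Steinberg error diffusion: quantize each pixel to 0/1 and
    distribute the quantization error to unvisited neighbours."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]
            if x + 1 < w:                img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:      img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:                img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w:  img[y + 1, x + 1] += err * 1 / 16
    return out

halftone = error_diffusion(np.linspace(0, 1, 64 * 64).reshape(64, 64))
print(halftone.mean())   # ~0.5: the bilevel output preserves mean intensity
```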

31 citations



Proceedings ArticleDOI
Nobukazu Doi1, H. Hanyu, Morishi Izumita, Seiichi Mita, Y. Eto, H. Imai 
28 Nov 1988
TL;DR: Computer simulation showed that FADCT coding improves the SNR (signal/noise ratio) by 1 to 3 dB compared with nonadaptive DCT coding, and that a high-quality image with an SNR over 40 dB can be obtained at 3 bits/pixel.
Abstract: The authors investigate fixed adaptive discrete cosine transform (FADCT) coding for use in digital VTRs (video tape recorders). The image is divided into 8*8-pixel subblocks, and a two-dimensional DCT is performed on each of them. The quantization scheme is adapted to the input image data, thereby increasing coding efficiency. The scheme is designed, however, to keep the output data rate fixed. Computer simulation showed that FADCT coding improves the SNR (signal/noise ratio) by 1 to 3 dB compared with nonadaptive DCT coding, and that a high-quality image with an SNR over 40 dB can be obtained at 3 bits/pixel. Reallocation of the extra bits from the higher to the lower AC-energy subblocks allows the application of adaptive coding to the digital VTR, with a high tolerance for channel errors.
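A minimal sketch of the transform front end described above: split the image into 8x8 subblocks and apply an orthonormal 2-D DCT to each. The adaptive quantization and the fixed-rate bit allocation are only noted in comments; this is not the paper's full scheme.

```python
import numpy as np

N = 8
# Orthonormal DCT-II basis matrix: C @ block @ C.T gives the 2-D DCT.
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] /= np.sqrt(2.0)

def block_dct(image):
    """Split the image into 8x8 subblocks and transform each one."""
    h, w = image.shape
    coeffs = np.empty_like(image, dtype=float)
    for y in range(0, h, N):
        for x in range(0, w, N):
            coeffs[y:y+N, x:x+N] = C @ image[y:y+N, x:x+N] @ C.T
    return coeffs

img = np.random.default_rng(2).integers(0, 256, size=(64, 64)).astype(float)
coeffs = block_dct(img)
# An adaptive scheme would now allocate the fixed bit budget per subblock
# according to its AC energy, as the paper describes.
print(coeffs[:8, :8].round(1))
```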

11 citations


Patent
Nishihara Toyotaro1
25 Aug 1988
TL;DR: In this patent, difference-quantized values are written to display memory at one-pixel intervals for the luminance component and at two-pixel intervals for the two color-difference components, so that even a low-resolution image can be displayed at the same size as a standard screen.
Abstract: PURPOSE: To display even a low-resolution screen at the same size as a standard screen by writing the difference-quantized values at one-pixel intervals for the luminance component and at two-pixel intervals for the two color-difference components. CONSTITUTION: When the number of the image to be displayed is input from a keyboard 7, the contents of a display RAM 8 are cleared. Next, an address converting circuit 11 is set in accordance with a control code associated with the image signal: when the image signal is of standard resolution, the circuit 11 is set so that the addresses generated by the DMAC (direct memory access controller) 6 are written to the display RAM 8 unchanged, and when the image signal is of low resolution, the circuit 11 is set so that the addresses generated by the DMAC 6 are converted before being written to the display RAM. Next, the image signal is read from an optical disk, transferred to the display RAM 8, and recorded; the data are written at one-pixel intervals for the luminance component and at two-pixel intervals for the color-difference components.
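A toy sketch of the write pattern in the abstract: luminance stored at one-pixel intervals and each color-difference component at two-pixel intervals. The interleaved RAM layout is my assumption, not the patent's; the address conversion and DMAC are omitted.

```python
import numpy as np

def write_to_display_ram(y, cb, cr):
    """Interleave luminance at one-pixel intervals with the two chroma
    components at two-pixel intervals (an assumed [Y, C, Y, C, ...]
    layout, 4:2:2-style)."""
    assert len(cb) == len(cr) == len(y) // 2
    ram = np.zeros(2 * len(y))
    ram[0::2] = y          # Y: every pixel
    ram[1::4] = cb         # Cb: every second pixel
    ram[3::4] = cr         # Cr: every second pixel
    return ram

line = write_to_display_ram(np.arange(8.0), np.full(4, 100.0), np.full(4, 200.0))
print(line)
```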

9 citations


Proceedings ArticleDOI
25 Oct 1988
TL;DR: A technique which combines the advantages of VQUC and VQIAC is presented, and it is demonstrated that the technique gives a coding performance close to that obtained with image-adaptive VQ at a substantially reduced computational complexity.
Abstract: In this paper, we present two algorithms for vector quantization of images and an architecture to implement these algorithms. In vector quantization (VQ), the image vectors are usually coded with a "universal" codebook; however, for a given image, only a subset of the codewords in the universal codebook may be needed. This means that effectively a smaller label size can be employed at the expense of a small amount of overhead information to indicate to the receiver the codewords used. Simulation results demonstrate the superior coding performance of this technique. VQ using a universal codebook (VQUC) is computationally less demanding, but its performance is poor for images outside the training sequence. Image-adaptive techniques, where new codebooks are generated for each input image (VQIAC), can improve the performance, but at the cost of increased computational complexity. A technique which combines the advantages of VQUC and VQIAC is presented in this paper. Simulation results demonstrate that the technique gives a coding performance close to that obtained with image-adaptive VQ at a substantially reduced computational complexity. A systolic array architecture to implement the algorithms in real time is also presented. The regular and iterable structure makes VLSI implementation of the architecture possible.
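The subset-labeling idea in the abstract can be sketched as follows: encode against the universal codebook, then relabel using only the codewords actually chosen, sending the subset once as overhead. The random codebook below stands in for a trained (e.g. LBG) one.

```python
import numpy as np

def vq_encode_with_subset(vectors, codebook):
    """Nearest-codeword VQ, then relabel with only the codewords used.

    Returns the subset indices (overhead sent once per image) and the
    compact per-vector labels into that subset.
    """
    # Squared Euclidean distance from every vector to every codeword.
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)               # indices into the universal codebook
    used = np.unique(labels)                 # subset actually needed for this image
    compact = np.searchsorted(used, labels)  # smaller labels into the subset
    return used, compact

rng = np.random.default_rng(3)
codebook = rng.normal(size=(256, 16))        # "universal" codebook, 4x4 blocks
vectors = rng.normal(size=(1000, 16)) * 0.3  # image vectors cluster near the origin
used, compact = vq_encode_with_subset(vectors, codebook)
bits = np.ceil(np.log2(len(used)))
print(f"{len(used)} of 256 codewords used -> {bits:.0f}-bit labels instead of 8")
```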

9 citations


Proceedings ArticleDOI
27 Jun 1988
TL;DR: This paper describes an efficient compression method for medical images that can compress images at small bit rates while generating reconstructed images of high quality.
Abstract: This paper describes an efficient compression method for medical images. This method can compress images at small bit rates and generate reconstructed images of high quality. The Discrete Cosine Transform (DCT) and Block Truncation Coding (BTC) are widely used in various fields. These methods, however, have the following problems. With the DCT method, the reconstructed image quality is quite good, except that the quality of sharp edges in the image is clearly inferior. The BTC method can reconstruct sharp edges, but the reconstructed image is not suitable for medical images. To solve these problems, we have proposed a new hybrid compression method. In this method an original image is divided into sub-images. The high-frequency component of each sub-image is evaluated. If the component is small, DCT is performed on the sub-image. When the high-frequency component is large, BTC is applied to the sub-image itself, and DCT is then performed on the difference between the original and the reconstructed sub-images. In our experiments, this method proved to be effective.
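A sketch of the BTC half of the hybrid scheme: each block is represented by a bit plane plus two levels chosen to preserve its sample mean and variance (the standard BTC construction). The hybrid method above would then DCT-code the residual of high-activity blocks.

```python
import numpy as np

def btc_encode_decode(block):
    """Block truncation coding: represent a block by its bit plane plus
    two levels chosen to preserve the sample mean and variance."""
    mu, sigma = block.mean(), block.std()
    bitplane = block >= mu
    q = bitplane.sum()                 # number of "high" pixels
    m = block.size
    if q in (0, m):                    # flat block: a single level suffices
        return np.full_like(block, mu, dtype=float)
    low  = mu - sigma * np.sqrt(q / (m - q))
    high = mu + sigma * np.sqrt((m - q) / q)
    return np.where(bitplane, high, low)

block = np.random.default_rng(4).integers(0, 256, size=(4, 4)).astype(float)
rec = btc_encode_decode(block)
print(block.mean(), rec.mean())        # the mean is preserved exactly
print(block.std(),  rec.std())         # so is the standard deviation
```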

8 citations


Proceedings ArticleDOI
11 Apr 1988
TL;DR: Quantization in the Hough transform computation is shown to determine the sensitivity of the calculation to noisy values, and parameters of the signatures were extracted using a data-dependent clustering algorithm designed to be insensitive to quantization and feature parameterization.
Abstract: Doppler-time images (DTIs) of a rotating object were formed from the returns of high-resolution radars by expressing the Doppler values as a function of time over a span of ranges. Returns from scatterers on the object yielded a characteristic signature of the object's rotation: a section of a sinusoid centered about the negatively sloped zero-crossing. As several scatterers are usually present, these signatures may overlap. The return from each scatterer is noisy and, more importantly, may be missing (falling below the detection threshold) or may be multi-valued (yielding several values exceeding the detection threshold). The Hough transform, the extraction of parametrically expressible features, was used to extract straight-line approximations to the return signatures. This algorithm is shown to be insensitive to missing or extraneous values. Quantization in the Hough transform computation is shown to determine the sensitivity of the calculation to noisy values. Parameters of the signatures were extracted using a data-dependent clustering algorithm designed to be insensitive to quantization and feature parameterization.
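A minimal line-detecting Hough transform, showing the (rho, theta) quantization whose bin widths govern the noise sensitivity discussed above. The bin counts are arbitrary choices, and the synthetic points are mine, not radar data.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=100, rho_max=100.0):
    """Accumulate votes for lines rho = x*cos(theta) + y*sin(theta).

    Coarser bins tolerate noisier points but blur distinct signatures;
    finer bins do the opposite -- the trade-off studied in the paper.
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        rows = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (rows >= 0) & (rows < n_rho)
        acc[rows[ok], np.arange(n_theta)[ok]] += 1
    return acc, thetas

# Noisy points near the line y = 0.5 * x + 10; missing returns are harmless
# because absent points simply cast no votes.
rng = np.random.default_rng(5)
x = np.arange(0, 60, 2.0)
pts = np.c_[x, 0.5 * x + 10 + rng.normal(scale=0.5, size=len(x))]
acc, thetas = hough_lines(pts)
r, t = np.unravel_index(acc.argmax(), acc.shape)
print(f"peak votes: {acc[r, t]} at theta = {np.degrees(thetas[t]):.0f} deg")
```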

7 citations


Proceedings ArticleDOI
25 Oct 1988
TL;DR: Via the review of a practical and flexible image compression configuration, the important technical aspects of video coding are presented, and the complexity of the currently developed system is discussed for operation over a 64 kbit/s communication channel.
Abstract: The transmission of video information through a narrowband channel requires a significant degree of image compression. In addition to the actual coding procedure, the frame rate is reduced, the spatial sample rate is lowered, and the color components are further filtered. Via the review of a practical and flexible image compression configuration, the important technical aspects of video coding are presented. The complexity of the currently developed system is discussed for operation over a 64 kbit/s communication channel. The primary considerations include the following: after a reduction of temporal and spatial resolution, the luminance and color-difference signals are divided into pixel blocks of 16 x 16 and 8 x 8, respectively. They are then passed to a motion compensated DPCM coding system with a subsequent discrete cosine transform of the prediction error. The appropriate quantization is performed, with entropy coding via several Huffman tables.
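The motion-compensated prediction step can be sketched as naive full-search block matching: the previous frame, displaced by a motion vector, predicts each 16x16 luminance block, and the resulting prediction error is what the DCT, quantizer, and Huffman stages then code. The search range and frame contents below are assumptions.

```python
import numpy as np

def best_motion_vector(cur_block, prev, y, x, search=4):
    """Full-search block matching: find the displacement of the previous
    frame that best predicts the current 16x16 block (minimum SAD)."""
    h, w = prev.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + 16 > h or xx + 16 > w:
                continue
            sad = np.abs(cur_block - prev[yy:yy+16, xx:xx+16]).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

rng = np.random.default_rng(6)
prev = rng.integers(0, 256, size=(64, 64)).astype(float)
cur = np.roll(prev, shift=(2, -1), axis=(0, 1))       # a pure camera pan
dy, dx = best_motion_vector(cur[16:32, 16:32], prev, 16, 16)
residual = cur[16:32, 16:32] - prev[16+dy:32+dy, 16+dx:32+dx]
# Expect (-2, 1): the displacement that undoes the pan; the residual
# (the DPCM prediction error handed to the DCT) is exactly zero here.
print((dy, dx), np.abs(residual).max())
```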

Patent
01 Jul 1988
TL;DR: In this patent, picture quality is improved by restoring contours faithfully, and the compression ratio is raised, by interpolating between the pixel of the predicted origin and the pixel of the time-difference value when the data are expanded and interpolated.
Abstract: PURPOSE: To improve picture quality by restoring contours faithfully, and to raise the compression ratio, by interpolating between the pixel of the predicted origin and the pixel of the time-difference value when the data are expanded and interpolated. CONSTITUTION: On the transmission side, the first-order difference between lines is compressed (3) by variable-sample-density encoding of the second-order difference in the line direction. On the reception side, the first-order difference between lines is restored in a variable-sample-density expansion circuit 8 using a quantization characteristic and an interpolation table, stored in a line buffer 9, and added (11) to the restored value of the preceding line held in a line buffer 12 to obtain the restored value of the current line. For a quantization point whose amplitude-difference value does not reach the minimum nonzero value for the same time-difference value, interpolation is performed at a ratio of 1/2 each time the sample interval is reduced by 1; the quantization points are set so that the interpolated value becomes smaller than the minimum amplitude-difference value for that time-difference value when the sample interval is reduced by 1, and the quantization point in the section with the smaller time-difference value is interpolated.

Proceedings ArticleDOI
12 Jun 1988
TL;DR: The results obtained show that the edge features are well preserved on image reconstruction, while the computational effort is significantly reduced.
Abstract: A method that addresses the problem of edge degradation in adaptive vector quantization is described. An index, the activity index A, based upon measurements of the input image data, has been devised. This index is used to classify image areas into two groups, active and nonactive, according to whether A > T or A <= T for a threshold T.
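The abstract does not define the activity index A, so block variance stands in for it in this sketch of the active/nonactive classification; the block size and threshold are also assumptions.

```python
import numpy as np

def classify_blocks(image, block=4, threshold=100.0):
    """Split the image into blocks and flag each as active (A > T) or
    nonactive (A <= T). Block variance stands in for the paper's
    (unspecified) activity index A; T is a tunable threshold."""
    h, w = image.shape
    active = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            A = image[y:y+block, x:x+block].var()
            active[(y, x)] = A > threshold
    return active

rng = np.random.default_rng(7)
img = np.zeros((16, 16))
img[:, 6:] = 200.0                       # a sharp vertical edge
img += rng.normal(scale=2.0, size=img.shape)
flags = classify_blocks(img)
print(sum(flags.values()), "active blocks of", len(flags))
# Active (edge-straddling) blocks would get the edge-preserving treatment;
# the rest take the cheaper path, cutting the computational effort.
```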

Proceedings ArticleDOI
25 Oct 1988
TL;DR: A model is presented which describes and codes the regions within a segmented image by means of quad-tree analysis of each region, preserving textures at high compression and further improving the compression by dynamic entropy coding.
Abstract: A model is presented which describes and codes the regions within a segmented image by means of quad-tree analysis of each region. A simple criterion is used to determine the importance of any node in the tree as a function of its perceptual relevance. This approach is an improvement on the parametric model which is frequently applied, by preserving textures at high compression. Chromaticity quantisation involves supplying only an average in each region. The additional use of dynamic entropy coding further improves the compression without additional distortion. In this way, compression to 0.5 bits/pel is achievable for colour images.
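A sketch of the quad-tree decomposition underlying the model, with a simple variance test substituted for the paper's perceptual-relevance criterion (an assumption on my part).

```python
import numpy as np

def quadtree(img, y=0, x=0, size=None, max_var=50.0, min_size=2):
    """Recursively split a square region into four quadrants until each
    leaf is homogeneous (low variance) or minimally small. Returns a
    list of (y, x, size, mean) leaves."""
    if size is None:
        size = img.shape[0]
    region = img[y:y+size, x:x+size]
    if size <= min_size or region.var() <= max_var:
        return [(y, x, size, region.mean())]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree(img, y + dy, x + dx, half, max_var, min_size)
    return leaves

img = np.zeros((32, 32))
img[20:, 20:] = 255.0                    # one bright corner region
leaves = quadtree(img)
# Coding then sends one average (plus tree node flags) per leaf, which is
# where the strong compression for homogeneous regions comes from.
print(len(leaves), "leaves")
```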

Proceedings ArticleDOI
27 Jun 1988
TL;DR: It is concluded that the compression technique shows promise for use on CT images; the study also serves as a first example of a successful procedure for evaluating image processing techniques.
Abstract: This paper describes the evaluation of a new image compression technique as a first example of a successful procedure for evaluating image processing techniques. Forty-eight CT images of the abdomen were compressed and then reconstructed by a method called sub-band coding using vector quantization. The compression ratios used were 16:1 and 19:1. Five radiologists participated in this study: two to select the images and help in determining the appropriate training set for development of the codebook for the algorithm, and three to participate in the actual experiment. Data were taken to determine diagnostic certainty, ratings of the visibility of diagnostically important structures in the image, and accuracy in identifying which images were the originals and which were not. Results from this study demonstrate that: (1) the areas under the ROC curve were virtually identical for all three sets of images (AUCs of 0.78, 0.77, and 0.78 for the original, 16:1, and 19:1 images, respectively), and these were not statistically different; (2) all diagnostically relevant structures in the compressed images were maintained by the compression technique; and (3) half of the 16:1 compressed images were identified as original images, as were 3 of the 48 19:1 compressed images. The authors conclude that the compression technique shows promise for use on CT images.
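The AUC figures above come from ROC analysis of observer ratings. A minimal AUC computation via the Mann-Whitney statistic is sketched below; the ratings are made up for illustration and are not the study's data.

```python
import numpy as np

def auc(pos_ratings, neg_ratings):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a positive case outrates a negative one, with
    ties counting half."""
    pos = np.asarray(pos_ratings, dtype=float)[:, None]
    neg = np.asarray(neg_ratings, dtype=float)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

# Illustrative 5-point confidence ratings (hypothetical, not the study's).
abnormal = [5, 4, 4, 3, 5, 2, 4]
normal   = [1, 2, 3, 1, 2, 4, 1]
print(f"AUC = {auc(abnormal, normal):.2f}")
```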

Journal ArticleDOI
TL;DR: The explicit introduction of phase information into implicit passive binarization procedures is demonstrated, yielding a doubling of the number of quantization levels without changing other parameters.

Proceedings ArticleDOI
08 Aug 1988
TL;DR: By creating pixel-point models of the rotating target image and linear and adaptive filtering algorithms, the author proposes a method for removing the image rotation component in a real-time video tracking system and gives its mathematical proof.
Abstract: By means of creating pixel-point models of the rotating target image and linear and adaptive filtering algorithms, the author proposes a method for removing the image rotation component in a real-time video tracking system and gives its mathematical proof.

Book ChapterDOI
01 Jun 1988
TL;DR: With the increased use of digital images in nondestructive evaluation (NDE) of materials, effective image compression and coding techniques are now much needed in NDE.
Abstract: With the increased use of digital images in nondestructive evaluation (NDE) of materials, effective image compression and coding techniques are now much needed in NDE.

Proceedings ArticleDOI
24 Oct 1988
TL;DR: A CCD-linear-array-based light monitoring system is proposed to provide information about the light distribution of the image, and several possible control schemes based on the output of the monitoring system are discussed.
Abstract: This paper discusses some accuracy-related problems of a vision inspection system which were studied while developing such a system for drill inspection. A structured illumination device was designed to obtain a high-quality image of a drill. The errors in image conversion were studied, and it was found that variation of the illumination intensity changed the measured dimensions of the parts due to the quantization effect of the digital computer. A CCD-linear-array-based light monitoring system was proposed to provide information about the light distribution of the image. Several possible control schemes based on the output of the monitoring system are discussed.
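The reported illumination/quantization interaction is easy to reproduce in one dimension: thresholding a soft-edged intensity profile at a fixed level makes the measured width depend on the illumination gain. The profile and numbers below are illustrative assumptions.

```python
import numpy as np

def measured_width(illumination_gain, threshold=128.0):
    """Width of a bright bar after fixed-threshold binarization.

    The bar has soft (blurred) edges, so scaling the illumination moves
    the threshold crossing and changes the measured dimension."""
    x = np.arange(200)
    # Bright bar from 50 to 150 with linear 10-pixel edge ramps.
    profile = np.clip(np.minimum(x - 40, 160 - x) / 10.0, 0, 1) * 255.0
    return int((profile * illumination_gain >= threshold).sum())

for gain in (0.8, 1.0, 1.2):
    print(f"gain {gain}: measured width = {measured_width(gain)} px")
```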

Patent
31 Aug 1988
TL;DR: In this article, a machine vision system to be processed workpiece is picked up by a camera and the image increments are specifically selected, in which sufficient contrast between the workpiece and the background, in order to determine the quantization threshold.
Abstract: In a machine vision system to be processed workpiece is picked up by a camera. In the evaluation must be able to distinguish between the work and its background. Under practical working conditions is not always ensured that there is sufficient contrast between the workpiece and the background. Furthermore, in practice the choice of quantization to distinguish between light and dark difficulties. In the new image processing system is sufficient contrast between the workpiece and background should always be present and the choice of quantization cause no problems. For this purpose, image increments are specifically selected, in which sufficient contrast between the workpiece and the background, only processes the Bildinkrementen these associated information and automatically determines the quantization threshold by evaluating the Bildinkrementen the associated information. Image processing in the industry, in particular for centering, positioning, control and measuring of workpieces in the context of a manufacturing process; Monitoring of transport processes.
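The patent derives the threshold automatically from the selected increments; the sketch below substitutes Otsu's method (a standard choice, not necessarily the patent's) applied only to pixels from high-contrast increments, and contrasts it with thresholding the whole image.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: pick the threshold that maximizes the between-class
    variance of the grey-level histogram."""
    hist, edges = np.histogram(values, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                        # class-0 probability
    mu = np.cumsum(p * np.arange(bins))         # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return edges[np.nanargmax(sigma_b)]

rng = np.random.default_rng(8)
# Simulated scene: dark background, bright workpiece, plus a flat
# low-contrast area that the patent's selection step would exclude.
background = rng.normal(60, 10, 2000)
workpiece  = rng.normal(180, 10, 2000)
flat_area  = rng.normal(120, 3, 4000)
selected   = np.r_[background, workpiece]       # high-contrast increments only
everything = np.r_[selected, flat_area]
print(f"threshold from selected increments: {otsu_threshold(selected):.0f}")
print(f"threshold from the whole image:     {otsu_threshold(everything):.0f}")
```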