
Showing papers on "Quantization (image processing)" published in 1986


Journal ArticleDOI
TL;DR: A novel progressive quantization scheme is developed for optimal progressive transmission of transformed diagnostic images; it delivers intermediate reconstructions of comparable quality twice as fast as the usual zig-zag sampling approach.
Abstract: In radiology, as a result of the increased utilization of digital imaging modalities, such as computed tomography (CT) and magnetic resonance imaging (MRI), over a third of the images produced in a typical radiology department are currently in digital form, and this percentage is steadily increasing. Image compression provides a means for the economical storage and efficient transmission of these diagnostic pictures. The level of coding distortion that can be accepted for clinical diagnosis purposes is not yet well-defined. In this paper we introduce some constraints on the design of existing transform codes in order to achieve progressive image transmission efficiently. The design constraints allow the image quality to be asymptotically improved such that the proper clinical diagnoses are always possible. The modified transform code outperforms simple spatial-domain codes by providing higher quality of the intermediately reconstructed images. The improvement is 10 dB for a compression factor of 256:1, and it is as high as 17.5 dB for a factor of 8:1. A novel progressive quantization scheme is developed for optimal progressive transmission of transformed diagnostic images. Combined with a discrete cosine transform, the new approach delivers intermediately reconstructed images of comparable quality twice as fast as the more usual zig-zag sampled approach. The quantization procedure is suitable for hardware implementation.
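To make the progressive-transmission idea concrete, here is a minimal sketch in which the DCT coefficients of a block are sent in several passes, each pass refining the previous reconstruction. The coefficient ordering and pass sizes below are illustrative assumptions, not the paper's optimized progressive quantization scheme.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= np.sqrt(1 / n)
    C[1:, :] *= np.sqrt(2 / n)
    return C

def progressive_passes(block, passes=(1, 6, 15, 64)):
    """Yield successively refined reconstructions of one image block.

    Coefficients are transmitted in order of increasing frequency
    (sorted by u+v), a simple stand-in for an optimized ordering.
    """
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    order = sorted(np.ndindex(coeffs.shape), key=lambda uv: (uv[0] + uv[1], uv))
    sent = np.zeros_like(coeffs)
    for n_coeffs in passes:
        for (u, v) in order[:n_coeffs]:
            sent[u, v] = coeffs[u, v]          # transmit this coefficient
        yield C.T @ sent @ C                   # intermediate reconstruction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.normal(size=(8, 8)).cumsum(axis=0).cumsum(axis=1)  # smooth test block
    for i, rec in enumerate(progressive_passes(block), 1):
        err = np.sqrt(np.mean((block - rec) ** 2))
        print(f"pass {i}: RMS error {err:.3f}")
```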

34 citations


Patent
Asao Saito1, Kunio Watanabe1
02 Jan 1986
TL;DR: A method is proposed for forming images whose gradation characteristic is based on the arrangement of image-forming elements: the arrangement pitch of the elements is modulated in at least one arrangement direction, without quantization, in accordance with the gradation level.
Abstract: A method for forming images with a gradation characteristic based on the arrangement of image forming elements comprises modulating an arrangement pitch of the image forming elements in at least one arrangement direction without quantization in accordance with the gradation level.
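A minimal sketch of the pitch-modulation idea, under the assumption that darker gradation levels place image-forming elements closer together along one arrangement direction, with the pitch varied continuously rather than quantized. The linear mapping and parameters below are hypothetical illustrations, not the patented method.

```python
def dot_positions(gray_level, line_length_mm=10.0, min_pitch_mm=0.1, max_pitch_mm=1.0):
    """Place dots along one arrangement direction with a pitch that varies
    continuously with the gradation level (0.0 = white, 1.0 = black).

    Darker levels -> smaller pitch -> more image-forming elements per unit
    length; the pitch is not quantized to a fixed set of spacings.
    """
    pitch = max_pitch_mm - gray_level * (max_pitch_mm - min_pitch_mm)
    positions, x = [], 0.0
    while x <= line_length_mm:
        positions.append(x)
        x += pitch
    return positions

if __name__ == "__main__":
    for level in (0.0, 0.5, 1.0):
        print(f"level {level}: {len(dot_positions(level))} dots over 10 mm")
```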

21 citations


Proceedings ArticleDOI
12 Jun 1986
TL;DR: Vector Quantization has only recently been applied to image data compression, but shows promise of outperforming more traditional transform coding methods, especially at high compression.
Abstract: This paper addresses the problem of data compression of medical imagery such as X-rays, Computed Tomography, Magnetic Resonance, Nuclear Medicine and Ultrasound. The Discrete Cosine Transform (DCT) has been extensively studied for image data compression, and good compression has been obtained without unduly sacrificing image quality. Vector Quantization has only recently been applied to image data compression, but shows promise of outperforming more traditional transform coding methods, especially at high compression ratios. Vector Quantization is well suited to applications where the images to be processed are very much alike, or can be grouped into a small number of classes. These and similar studies continue to suffer from the lack of a uniformly agreed-upon measure of image quality, a problem exacerbated by the large variety of electronic displays and viewing conditions.
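For readers new to vector quantization, the sketch below shows the basic encode/decode step: each image block is matched to its nearest codebook vector and only the index is stored or transmitted. The 4x4 block size and the random codebook are illustrative assumptions; a practical coder would train the codebook on representative images.

```python
import numpy as np

def vq_encode(image, codebook, block=4):
    """Map each block x block patch to the index of its nearest codeword."""
    h, w = image.shape
    indices = []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            patch = image[r:r + block, c:c + block].reshape(-1)
            d = np.sum((codebook - patch) ** 2, axis=1)   # squared distances
            indices.append(int(np.argmin(d)))
    return indices

def vq_decode(indices, codebook, shape, block=4):
    """Rebuild an image by pasting codewords back in raster order."""
    h, w = shape
    out = np.zeros(shape)
    it = iter(indices)
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            out[r:r + block, c:c + block] = codebook[next(it)].reshape(block, block)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, size=(64, 64)).astype(float)
    codebook = rng.integers(0, 256, size=(256, 16)).astype(float)  # 256 codewords, 4x4 blocks
    idx = vq_encode(img, codebook)
    rec = vq_decode(idx, codebook, img.shape)
    # One 8-bit index per 4x4 block of 8-bit pixels -> 0.5 bit/pixel before entropy coding.
    print("blocks:", len(idx), "RMS error:", np.sqrt(np.mean((img - rec) ** 2)))
```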

9 citations


Proceedings ArticleDOI
Narciso Garcia1, C. Munoz, Alberto Sanz
01 May 1986
TL;DR: Hierarchical encoding, initially developed for image decomposition, is a feasible alternative for image transmission and storage; several independent compression strategies can be implemented and therefore applied at the same time.
Abstract: Hierarchical encoding, initially developed for image decomposition, is a feasible alternative for image transmission and storage. Several independent compression strategies can be implemented and therefore applied at the same time.

Lossless encoding:
- Universal statistical compression on the hierarchical code. A unique Huffman code, valid for every hierarchical transform, is built.

Lossy encoding:
- Improvement of the intermediate approximations, as this can decrease the effective bit rate for transmission applications. Interpolating schemes and non-uniform spatial out-growth help solve this problem.
- Prediction strategies on the hierarchical code. A three-dimensional predictor (space and hierarchy) on the code pyramid reduces the information required to build new layers.
- Early branch ending. Analysis of image homogeneities detects areas of similar values that can be approximated by a unique value.
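A minimal sketch of the hierarchical (pyramid) idea: coarse layers are built by block averaging, transmitted coarsest-first, and each layer yields an intermediate approximation of the full image. The mean pyramid and nearest-neighbour enlargement below are illustrative stand-ins for the paper's hierarchical transform and interpolating schemes.

```python
import numpy as np

def build_pyramid(image, levels=4):
    """Mean pyramid: each layer is the 2x2 block average of the layer below.

    Layers are returned coarsest-first, the order a hierarchical
    (progressive) transmission would send them in.
    """
    layers = [image.astype(float)]
    for _ in range(levels - 1):
        prev = layers[-1]
        h, w = prev.shape
        coarse = prev[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        layers.append(coarse)
    return layers[::-1]

def intermediate_approximations(layers):
    """Enlarge each layer to full size, giving progressively better previews."""
    full = layers[-1].shape
    for layer in layers:
        fy = full[0] // layer.shape[0]
        fx = full[1] // layer.shape[1]
        yield np.kron(layer, np.ones((fy, fx)))   # nearest-neighbour enlargement

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.integers(0, 256, size=(64, 64)).astype(float)
    for i, approx in enumerate(intermediate_approximations(build_pyramid(img))):
        print(f"layer {i}: RMS error {np.sqrt(np.mean((img - approx) ** 2)):.2f}")
```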

6 citations


Proceedings ArticleDOI
20 Nov 1986
TL;DR: A two-stage transform coding scheme is proposed to reduce discontinuities between subimages in low-bit-rate applications, and preliminary simulation results from the proposed scheme and the traditional method are compared.
Abstract: With advances in computational speed, transform coding has become a promising technique for image data compression. Traditionally, an image is divided into exclusive rectangular blocks, or subimages. Each subimage is a partial scene of the original image, and the subimages are processed independently. In low-bit-rate applications, block boundary artifacts can develop due to discontinuities between the subimages. A two-stage transform coding scheme is proposed to reduce this effect. The first-stage transformation is applied to the subimages, each being a reduced, under-sampled image of the original. The second-stage transformation is applied to the transform coefficients obtained at the first stage. A simple coder with a discrete Walsh-Hadamard transform and uniform quantization is used to compare preliminary simulation results from the proposed scheme and the traditional method.
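The sketch below illustrates only the first-stage coder, under stated assumptions: the image is split into under-sampled (polyphase) subimages, each of which is coded with a Walsh-Hadamard transform and uniform quantization. The second-stage transform across the first-stage coefficients is omitted, and the quantizer step size is an arbitrary choice.

```python
import numpy as np

def hadamard(n):
    """Walsh-Hadamard matrix of size n (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)               # orthonormal scaling

def polyphase_subimages(image):
    """Split an image into four under-sampled subimages (2x2 polyphase),
    each a reduced copy of the whole scene rather than a contiguous block."""
    return [image[i::2, j::2] for i in (0, 1) for j in (0, 1)]

def wht_uniform_code(block, step=16.0):
    """First-stage coder: Walsh-Hadamard transform plus uniform quantization."""
    H = hadamard(block.shape[0])
    coeffs = H @ block @ H.T
    q = np.round(coeffs / step)          # uniform quantizer indices
    return H.T @ (q * step) @ H          # dequantize and inverse transform

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    img = rng.integers(0, 256, size=(16, 16)).astype(float)
    subs = polyphase_subimages(img)      # four 8x8 under-sampled subimages
    recs = [wht_uniform_code(s) for s in subs]
    err = np.mean([np.sqrt(np.mean((s - r) ** 2)) for s, r in zip(subs, recs)])
    print("mean per-subimage RMS error:", round(float(err), 2))
```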

4 citations


Proceedings ArticleDOI
20 Nov 1986
TL;DR: An architecture to overcome the computational complexity of this new type of codec is presented, and other implementation-related issues, such as quantization effects, are discussed.
Abstract: Projection onto convex sets (POCS) iteration based image coding, where efficiently encodable sets are used to describe an image, is a recent approach to image compression. The technique allows the use of a variety of sets to encode an image. The focus of this paper, however, will be on two particular sets: the set of images whose cosine transform is known for certain frequencies and the set of images which are nonzero only over a specified region. These sets can be used to encode interframe difference pictures of video teleconferencing images. A drawback of this new type of codec is its computational complexity. In this paper, an architecture to overcome this drawback, as well as other implementation-related issues such as quantization effects, is discussed.
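A minimal sketch of the alternating-projection (POCS) decoding loop for the two sets named above, assuming an orthonormal DCT and a rectangular support region. The set definitions follow the paper, but the sizes, masks, and iteration count below are illustrative assumptions.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= np.sqrt(1 / n)
    C[1:, :] *= np.sqrt(2 / n)
    return C

def pocs_decode(known_coeffs, known_mask, support_mask, iters=50):
    """Alternate projections onto two convex sets:
      1. images whose DCT equals the transmitted values at the known frequencies,
      2. images that are zero outside the given support region."""
    n = known_mask.shape[0]
    C = dct_matrix(n)
    x = np.zeros_like(known_coeffs)
    for _ in range(iters):
        X = C @ x @ C.T
        X[known_mask] = known_coeffs[known_mask]   # project onto set 1
        x = C.T @ X @ C
        x = x * support_mask                       # project onto set 2
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n = 16
    support = np.zeros((n, n), bool)
    support[4:12, 4:12] = True                     # difference image is zero off-support
    truth = rng.normal(size=(n, n)) * support
    C = dct_matrix(n)
    coeffs = C @ truth @ C.T
    known = np.add.outer(np.arange(n), np.arange(n)) < 8   # low frequencies transmitted
    rec = pocs_decode(coeffs * known, known, support)
    print("RMS error:", round(float(np.sqrt(np.mean((truth - rec) ** 2))), 4))
```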

1 citation


01 Jan 1986
TL;DR: A review of various VQ algorithms and their respective design considerations as applied to color images is given, and a modified mean-residual vector quantizer using the LBG design algorithm with color signal preprocessing is described.
Abstract: Vector quantization (VQ) has recently emerged as a powerful and efficient technique for digital speech and image coding. The goal of such a process is data compression: to minimize communication channel capacity or digital storage memory requirements while maintaining an acceptable fidelity level of the data. A review of various VQ algorithms and their respective design considerations as applied to color images is given. Fidelity measurements and signal-to-noise ratio calculations are discussed. A modified mean-residual vector quantizer using the LBG design algorithm with color signal preprocessing is described. The algorithm is developed to yield a bit rate of 0.709 bits per pixel per color with the goal of easy implementation even using a simple
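A minimal single-plane sketch of LBG codebook design and mean-residual vector quantization: the block mean is coded separately and the zero-mean residual is vector-quantized. The 4x4 block size, 64-entry codebook, and random training data are illustrative assumptions; the paper's color preprocessing and bit allocation are omitted.

```python
import numpy as np

def lbg_codebook(training_vectors, size=64, iters=10):
    """LBG (generalized Lloyd) codebook design: alternate nearest-neighbour
    partitioning and centroid update, starting from random training vectors."""
    rng = np.random.default_rng(0)
    codebook = training_vectors[rng.choice(len(training_vectors), size, replace=False)].copy()
    for _ in range(iters):
        d = ((training_vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        nearest = d.argmin(axis=1)
        for k in range(size):
            members = training_vectors[nearest == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

def mean_residual_encode(blocks, codebook):
    """Mean-residual VQ: code each block's mean separately and vector-quantize
    the zero-mean residual against the codebook."""
    means = blocks.mean(axis=1)
    residuals = blocks - means[:, None]
    d = ((residuals[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return means, d.argmin(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    blocks = rng.normal(loc=128, scale=30, size=(2000, 16))   # 4x4 blocks, one color plane
    cb = lbg_codebook(blocks - blocks.mean(axis=1, keepdims=True))
    means, idx = mean_residual_encode(blocks, cb)
    rec = means[:, None] + cb[idx]
    print("RMS error:", round(float(np.sqrt(np.mean((blocks - rec) ** 2))), 2))
```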

1 citation


Book ChapterDOI
01 Jan 1986
TL;DR: Improvements in experimental strategy have ameliorated the difficulties of direct acquisition of digital representations of stigmatic SIMS images to some extent, but the acquisition of copious amounts of data remains problematic.
Abstract: In recent years, several schemes have been proposed for direct acquisition of digital representations of stigmatic SIMS images [1–3]. These approaches suffer some common drawbacks. In order to construct a reasonable representation of an ion image, a considerable amount of raw data (ca. 100 Kbyte/image) must be manipulated. In the systems described in [1,2], data cannot be acquired from the ion microscope and written to random access memory at the same time. Consequently, some data are missed while the record of previous data is being written. This problem becomes worse as the spatial quantization of the image increases (i.e. as more image pixels are used). Additionally, in applications such as image depth profiling [4], investigations of differential sputtering by construction of "burn through" maps [5,6] and image integration [7], many images must be recorded. The sheer volume of digital data can rapidly outstrip the capacity of contemporary random access mass storage units. Furthermore, on-line data acquisition puts considerable pressure on the operator to make decisions about data recording during an experimental run. This can be difficult in some cases, such as those involving high sputter rates and rapid image evolution. As the sampling in SIMS is destructive, it is crucial to record data in a "one pass" mode, making these decisions even more critical. Recently, improvements in experimental strategy have ameliorated these difficulties to some extent [8], but the acquisition of copious amounts of data remains problematic.