Showing papers on "Quantization (image processing)" published in 1985


Journal ArticleDOI
TL;DR: A one-dimensional (1-D) approach to adaptive image restoration that uses a cascade of four 1-D adaptive filters oriented along the four major correlation directions of the image; its main advantage is its capability to preserve edges while removing noise in all regions of the image, including the edge regions.
Abstract: A one-dimensional (1-D) approach to the problem of adaptive image restoration is presented. In this approach, we use a cascade of four 1-D adaptive filters oriented in the four major correlation directions of the image, with each filter treating the image as a 1-D signal. The objective of developing restoration techniques using this approach is to improve the performance of two-dimensional (2-D) techniques for image restoration. This differs considerably from previous 1-D approaches, the objectives of which have typically been to approximate a 2-D approach for computational reasons and not to improve its performance. The approach is applied to existing 2-D image restoration algorithms to illustrate its main advantage, namely its capability to preserve edges in the image while removing noise in all regions of the image, including the edge regions. Experimental results with images degraded by additive white noise at various SNR's (signal-to-noise ratios) are presented.
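The directional cascade lends itself to a short sketch. The snippet below is only a rough illustration under stated assumptions, not the authors' algorithm: a simple local-statistics (Wiener-type) 1-D filter stands in for whichever 2-D restoration technique is being adapted, and the window size and noise-variance estimate are placeholders.

```python
import numpy as np

def adaptive_1d(signal, noise_var, win=7):
    """Simple local-statistics (Wiener-type) 1-D filter: smooths strongly in
    flat regions, backs off where local variance exceeds the noise level."""
    pad = win // 2
    x = np.pad(np.asarray(signal, dtype=float), pad, mode='edge')
    out = np.empty(len(signal))
    for i in range(len(signal)):
        window = x[i:i + win]
        m, v = window.mean(), window.var()
        gain = max(v - noise_var, 0.0) / v if v > 0 else 0.0
        out[i] = m + gain * (float(signal[i]) - m)
    return out

def cascade_restore(img, noise_var, win=7):
    """Run the 1-D filter along the four major correlation directions in
    sequence: rows, columns, and the two diagonals."""
    out = np.asarray(img, dtype=float)
    out = np.array([adaptive_1d(r, noise_var, win) for r in out])        # rows
    out = np.array([adaptive_1d(c, noise_var, win) for c in out.T]).T    # columns
    for flip in (False, True):                                           # both diagonals
        work = out[:, ::-1] if flip else out
        filtered = work.copy()
        h, w = work.shape
        for d in range(-h + 1, w):
            idx = np.where(np.eye(h, w, k=d, dtype=bool))
            if len(idx[0]) > win:                    # skip very short diagonals
                filtered[idx] = adaptive_1d(work[idx], noise_var, win)
        out = filtered[:, ::-1] if flip else filtered
    return out
```

Because the local gain collapses toward zero in flat regions and approaches one where the window variance is large, edges encountered along any of the four scan directions are left largely untouched, which is the behaviour the paper is after.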

59 citations


Journal ArticleDOI
TL;DR: Results are presented which show LMS may provide almost 2 bits per symbol reduction in transmitted bit rate compared to DPCM when distortion levels are approximately the same for both methods.
Abstract: The LMS algorithm may be used to adapt the coefficients of an adaptive prediction filter for image source encoding. Results are presented which show LMS may provide almost 2 bits per symbol reduction in transmitted bit rate compared to DPCM when distortion levels are approximately the same for both methods. Alternatively, LMS can be used in fixed bit-rate environments to decrease the reconstructed image distortion. When compared with fixed coefficient DPCM, reconstructed image distortion is reduced by as much as 6-7 dB using LMS. Last, pictorial results representative of LMS processing are presented.
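As a hedged sketch of the general idea, not the coder evaluated in the paper, the routine below runs adaptive prediction along one image row with a uniform residual quantizer; the predictor order, the step size, and the normalized-LMS update used here for stability are all assumptions.

```python
import numpy as np

def lms_dpcm_row(row, order=3, mu=0.05, step=8):
    """Toy LMS-adaptive predictive coder for one image row: predict each
    pixel from the previous `order` reconstructed pixels, quantize the
    prediction error with a uniform step, and adapt the predictor."""
    row = np.asarray(row, dtype=float)
    w = np.zeros(order)                         # adaptive predictor coefficients
    recon = np.zeros(len(row))
    recon[:order] = row[:order]                 # first samples sent as-is
    for n in range(order, len(row)):
        past = recon[n - order:n][::-1]         # most recent reconstructed sample first
        pred = w @ past
        err = row[n] - pred
        q_err = step * np.round(err / step)     # uniform quantizer on the residual
        recon[n] = pred + q_err                 # value the decoder can reproduce
        w += mu * q_err * past / (past @ past + 1e-8)   # normalized-LMS update
    return recon, w
```

Fixed-coefficient DPCM corresponds to freezing `w`; letting it adapt is what allows the rate or distortion gains the abstract reports.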

40 citations


Proceedings ArticleDOI
05 Apr 1985
TL;DR: In this article, a syntactical method for removing noise artifacts from image skeletons and inferring a unique structure is demonstrated, and conditions under which image processing functions are served best by the LISP environment are discussed.
Abstract: Thinning algorithms, such as the prairie fire or Medial Axis Transformation (MAT) algorithm, are used to extract structure-preserving networks or skeletons from segmented imagery. This is a useful function in image understanding applications where syntactical representation of object shapes is desired. The MAT has several shortcomings, however. The MAT skeletons thinned from two similar shapes may be structurally different due to the introduction of random noise into the segmentation process. This noise may exist as random "holes" within the segmented shape, as minor contour variations, or as spatial quantization effects. This problem is often solved by filtering the image prior to segmentation or thinning, but fine detail may be lost as a result. A syntactical method of removing these noise artifacts from image skeletons and of inferring a unique structure is demonstrated. The algorithms for performing this syntactical processing are coded in LISP. Conditions under which image processing functions are served best by the LISP environment are discussed. Image enhancement and noise are discussed in terms that embrace statistical and syntactical methods of image processing.
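For concreteness, here is a crude stand-in for the MAT itself, not the paper's prairie-fire implementation or its LISP-based syntactical post-processing: a two-pass city-block distance transform whose ridge (local maxima) approximates the skeleton.

```python
import numpy as np

def medial_axis_approx(mask):
    """Crude MAT approximation: two-pass city-block distance transform of a
    binary mask, then keep foreground pixels that are local maxima of the
    distance (the ridge). Unlike a true prairie-fire skeleton, the result
    is not guaranteed to stay connected."""
    h, w = mask.shape
    dist = np.where(mask, np.inf, 0.0)
    for y in range(h):                     # forward pass (top-left to bottom-right)
        for x in range(w):
            if mask[y, x]:
                up = dist[y - 1, x] if y > 0 else np.inf
                left = dist[y, x - 1] if x > 0 else np.inf
                dist[y, x] = min(dist[y, x], min(up, left) + 1)
    for y in range(h - 1, -1, -1):         # backward pass (bottom-right to top-left)
        for x in range(w - 1, -1, -1):
            if mask[y, x]:
                down = dist[y + 1, x] if y < h - 1 else np.inf
                right = dist[y, x + 1] if x < w - 1 else np.inf
                dist[y, x] = min(dist[y, x], min(down, right) + 1)
    skel = np.zeros_like(mask, dtype=bool)
    for y in range(1, h - 1):              # ridge = local maxima of the distance map
        for x in range(1, w - 1):
            if mask[y, x]:
                skel[y, x] = dist[y, x] >= dist[y - 1:y + 2, x - 1:x + 2].max()
    return dist, skel
```

Running this on a segmented shape, and again on the same shape with a single interior pixel flipped to background, makes the structural instability of the skeleton easy to see; that instability is what the syntactical post-processing is meant to repair.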

3 citations


Dissertation
01 Jan 1985
TL;DR: The performance of transform image coding schemes can be improved substantially by adapting the transform, bit allocation, and/or quantization parameters to time-varying image statistics.

Abstract: The performance of transform image coding schemes can be improved substantially by adapting to changes in image statistics. Essentially, this is accomplished through adaptation of the transform, bit allocation, and/or quantization parameters according to time-varying image statistics. Additionally, adaptation can be used to achieve transmission rate reduction whilst maintaining a given picture quality. [Continues.]
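A minimal sketch of the bit-allocation side of such adaptation, under the usual high-rate assumptions and not taken from the thesis itself: coefficients with larger (block-dependent) variance receive proportionally more bits via the standard log-variance rule.

```python
import numpy as np

def allocate_bits(coef_var, avg_bits):
    """Standard log-variance bit allocation for transform coefficients:
    bits_k ~ avg_bits + 0.5 * log2(var_k / geometric-mean variance),
    rounded and clipped to be non-negative. Re-estimating coef_var per
    block (or per frame) is what makes the coder adaptive."""
    var = np.maximum(np.asarray(coef_var, dtype=float).ravel(), 1e-12)
    log_gm = np.mean(np.log2(var))                  # log2 of the geometric mean
    bits = avg_bits + 0.5 * (np.log2(var) - log_gm)
    return np.clip(np.round(bits), 0, None).reshape(np.shape(coef_var))
```

Real coders re-normalize after rounding and clipping so that the block's bit budget is met exactly; that bookkeeping is omitted here.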

3 citations


Proceedings ArticleDOI
A. Kundu
06 Nov 1985
TL;DR: Earlier iterative threshold-selection methods, such as the fast search of Reddy et al. [6], use one of the two Lloyd-Max optimality conditions to compute the threshold iteratively, but they fail to exploit the similarity between the threshold selection problem and the optimal quantizer design problem and still require histogram computation.
Abstract: The techniques described in [2, 3, 4] can be extended to multilevel thresholding, but with considerable difficulty. In addition, all these techniques are global in nature. As a result, these methods fail to provide any information regarding the object detail. Local thresholding techniques (for a review see [5]) have been proposed in which the threshold at a point depends on the local properties of the pixel or its position. A new approach using a fast search method has been proposed by Reddy et al. [6]. This method uses one condition of optimality of the Lloyd-Max algorithm (there are two conditions in the Lloyd-Max algorithm) and computes the threshold iteratively. But their algorithm fails to see the similarity between the threshold selection problem and the optimal quantizer design problem, and still requires histogram computation. [Continues.]
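The iterative use of a single Lloyd-Max optimality condition has the following general flavour (a generic sketch, not the specific algorithm of [6] or the method proposed in this paper): place the threshold midway between the two class means, which is the condition that each decision level lie halfway between adjacent reconstruction levels, and iterate to a fixed point.

```python
import numpy as np

def iterative_threshold(image, tol=0.5, max_iter=100):
    """Bilevel threshold selection using one Lloyd-Max optimality condition:
    the decision level should sit midway between the two class means
    (the reconstruction levels). Start from the global mean and iterate."""
    x = np.asarray(image, dtype=float).ravel()
    t = x.mean()
    for _ in range(max_iter):
        lo, hi = x[x <= t], x[x > t]
        if lo.size == 0 or hi.size == 0:        # degenerate split, stop
            break
        t_new = 0.5 * (lo.mean() + hi.mean())   # midpoint of the class means
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t
```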

1 citation


Proceedings ArticleDOI
Firooz A. Sadjadi
11 Dec 1985
TL;DR: A novel hierarchical technique is presented for the extraction of planar surfaces which has the potential of being both efficient and robust in the presence of noise.
Abstract: The extraction of planar surfaces in range imagery is essential in many object recognition schemes where 3D objects are assumed to be composed mainly of such surfaces. In this paper a novel hierarchical technique is presented for the extraction of planar surfaces which has the potential of being both efficient and robust in the presence of noise. The planar surfaces are extracted by means of a hierarchical 3D Hough transform. The levels of quantization of the 3D Hough space vary from coarse to fine depending on their position in the hierarchy. The technique has many advantages in terms of efficiency, storage requirements, and immunity to noise.
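A hedged sketch of the coarse-to-fine idea, not the algorithm in the paper: the (theta, phi, rho) plane-parameter space is voted on with a coarse grid, and only the winning cell is re-quantized at finer resolution on the next level. The parameterization, bin counts, and single-peak refinement below are assumptions.

```python
import numpy as np

def hough_plane_hierarchical(points, levels=3, bins=8):
    """Coarse-to-fine Hough search for one dominant plane in a point set.
    A plane is parameterized as rho = n(theta, phi) . p with unit normal
    n = (cos t sin p, sin t sin p, cos p); at each level only the winning
    (theta, phi, rho) cell is re-quantized at a finer resolution."""
    pts = np.asarray(points, dtype=float)
    rho_max = np.linalg.norm(pts, axis=1).max()
    box = [(0.0, np.pi), (0.0, np.pi), (-rho_max, rho_max)]   # search ranges
    for _ in range(levels):
        t_edges = np.linspace(box[0][0], box[0][1], bins + 1)
        p_edges = np.linspace(box[1][0], box[1][1], bins + 1)
        r_edges = np.linspace(box[2][0], box[2][1], bins + 1)
        acc = np.zeros((bins, bins, bins), dtype=int)
        t_c = 0.5 * (t_edges[:-1] + t_edges[1:])
        p_c = 0.5 * (p_edges[:-1] + p_edges[1:])
        for i, t in enumerate(t_c):
            for j, p in enumerate(p_c):
                n = np.array([np.cos(t) * np.sin(p),
                              np.sin(t) * np.sin(p),
                              np.cos(p)])
                rho = pts @ n
                kr = np.digitize(rho, r_edges) - 1
                valid = (kr >= 0) & (kr < bins)
                acc[i, j] += np.bincount(kr[valid], minlength=bins)
        i, j, k = np.unravel_index(acc.argmax(), acc.shape)
        box = [(t_edges[i], t_edges[i + 1]),       # shrink to the winning cell
               (p_edges[j], p_edges[j + 1]),
               (r_edges[k], r_edges[k + 1])]
    return tuple(0.5 * (lo + hi) for lo, hi in box)   # (theta, phi, rho) estimate
```

Because each level votes on only bins**3 cells instead of one fine grid over the whole space, the accumulator stays small, which is the storage and efficiency advantage the abstract points to.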

1 citation


Proceedings ArticleDOI
01 Apr 1985
TL;DR: A hybrid algorithm is developed which combines a one-dimensional cosine transformation and a one-dimensional prediction and is applied to a real image to check its validity.
Abstract: This paper presents a technique for data compression of a grey-level still image. The image is considered to be a two-dimensional homogeneous random field that satisfies a noncausal stochastic difference equation driven by a white noise field. The difference equation model is isotropic and characterized by the two parameters of the variance and the correlation length of an image only. This model is simple and can easily fit various types of images. Based on the above model, we develop a hybrid algorithm which combines a one-dimensional cosine transformation and a one-dimensional prediction. The algorithm is applied to a real image to check its validity.
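A toy version of such a hybrid coder is sketched below, assuming a column-wise 1-D DCT followed by a fixed first-order DPCM prediction of each coefficient across columns; the paper derives its predictor from the noncausal isotropic model's variance and correlation length, so the fixed coefficient and uniform quantizer here are placeholders.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are the basis vectors)."""
    k = np.arange(n)
    d = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    d[0] /= np.sqrt(2)
    return d

def hybrid_encode(img, a=0.95, step=16):
    """Toy hybrid coder: 1-D DCT of every column, then 1-D DPCM across
    columns (coefficient k of column j predicted as a times the reconstructed
    coefficient k of column j-1), with uniform quantization of the residual."""
    img = np.asarray(img, dtype=float)
    D = dct_matrix(img.shape[0])
    coef = D @ img                                  # column-wise 1-D DCT
    recon = np.zeros_like(coef)
    resid_q = np.zeros_like(coef)
    prev = np.zeros(img.shape[0])
    for j in range(img.shape[1]):
        pred = a * prev                             # first-order prediction
        resid_q[:, j] = step * np.round((coef[:, j] - pred) / step)
        recon[:, j] = pred + resid_q[:, j]          # decoder-side reconstruction
        prev = recon[:, j]
    return resid_q, D.T @ recon                     # residuals to code, decoded image
```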

Proceedings ArticleDOI
11 Jul 1985
TL;DR: A new approach to the quantization of discrete cosine transformed subimage data is discussed and it is shown that physical modeling of the sensor in combination with a power spectrum model of the scene leads to a direct means of generating the bit and variance maps necessary for coefficient quantization.
Abstract: A new approach to the quantization of discrete cosine transformed subimage data is discussed. It is shown that physical modeling of the sensor in combination with a power spectrum model of the scene leads to a direct means of generating the bit and variance maps necessary for coefficient quantization. Preliminary results indicate that good image quality can be maintained down to 1/4 bit-per-pixel, depending upon the Optical Transfer Function (OTF) and scene information content involved. A unique feature of this approach is that algorithm training is unnecessary since the bit and variance maps are computed directly from subimage data.
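A rough illustration of how a model-derived variance map can drive coefficient quantization, with a separable first-order Markov scene model and a Gaussian roll-off standing in for the paper's actual power-spectrum and OTF models; every parameter below, including the 1/4 bit-per-pixel average rate, is an assumption for the sketch.

```python
import numpy as np

def variance_and_bit_maps(n=8, rho=0.95, otf_width=0.6, avg_bits=0.25):
    """Illustrative derivation of per-coefficient DCT variance and bit maps
    for an n x n subimage: separable first-order Markov scene model with
    correlation rho, attenuated by a Gaussian stand-in for the sensor OTF,
    followed by the usual log-variance bit-allocation rule."""
    idx = np.arange(n)
    R = rho ** np.abs(idx[:, None] - idx[None, :])           # 1-D scene covariance
    D = np.cos(np.pi * (2 * idx[None, :] + 1) * idx[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    D[0] /= np.sqrt(2)                                        # orthonormal DCT-II matrix
    var_1d = np.diag(D @ R @ D.T)                             # 1-D coefficient variances
    var_map = np.outer(var_1d, var_1d)                        # separable 2-D model
    f = idx / n                                               # normalized frequency
    otf = np.exp(-(f[:, None] ** 2 + f[None, :] ** 2) / (2 * otf_width ** 2))
    var_map = var_map * otf ** 2                              # sensor shapes the spectrum
    logv = np.log2(np.maximum(var_map, 1e-12))
    bits = np.clip(np.round(avg_bits + 0.5 * (logv - logv.mean())), 0, None)
    return var_map, bits
```

Since both maps follow directly from the scene and sensor models, no training imagery is needed, which is the "no algorithm training" point the abstract makes.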