
Showing papers on "Quantization (image processing) published in 1996"


Proceedings ArticleDOI
16 Sep 1996
TL;DR: The watermark can be constructed to make counterfeiting almost impossible, and the same digital watermarking algorithm can be applied to all three media under consideration with only minor modifications, making it especially appropriate for multimedia products.
Abstract: We describe a digital watermarking method for use in audio, image, video and multimedia data. We argue that a watermark must be placed in perceptually significant components of a signal if it is to be robust to common signal distortions and malicious attack. However, it is well known that modification of these components can lead to perceptual degradation of the signal. To avoid this, we propose to insert a watermark into the spectral components of the data using techniques analogous to spread spectrum communications, hiding a narrow band signal in a wideband channel that is the data. The watermark is difficult for an attacker to remove, even when several individuals conspire together with independently watermarked copies of the data. It is also robust to common signal and geometric distortions such as digital-to-analog and analog-to-digital conversion, resampling, quantization, dithering, compression, rotation, translation, cropping and scaling. The same digital watermarking algorithm can be applied to all three media under consideration with only minor modifications, making it especially appropriate for multimedia products. Retrieval of the watermark unambiguously identifies the owner, and the watermark can be constructed to make counterfeiting almost impossible. We present experimental results to support these claims.
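
As a rough illustration of the spread-spectrum idea (not the authors' exact algorithm), the sketch below perturbs the largest-magnitude DCT coefficients of an image with a pseudo-random sequence; the coefficient count `n`, the strength `alpha`, and the NumPy/SciPy usage are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn  # 2-D DCT/IDCT, assumes SciPy is available

def embed_watermark(image, alpha=0.1, n=1000, seed=0):
    """Perturb the n largest-magnitude AC coefficients of the image's DCT
    by v' = v * (1 + alpha * x_i), where x_i ~ N(0, 1) is the watermark."""
    coeffs = dctn(image.astype(float), norm='ortho')
    flat = coeffs.ravel()
    order = np.argsort(np.abs(flat))[::-1]   # coefficient indices by magnitude
    idx = order[order != 0][:n]              # skip flat index 0 (the DC term)
    rng = np.random.default_rng(seed)
    watermark = rng.standard_normal(n)
    flat[idx] *= 1.0 + alpha * watermark
    marked = idctn(flat.reshape(coeffs.shape), norm='ortho')
    return np.clip(marked, 0, 255), watermark, idx
```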

590 citations


Patent
20 Dec 1996
TL;DR: In this article, the rotational vectors calculated using a real-valued centroid are used to segment the hand region independently of pixel quantization, and color segmentation is used to identify hand-color regions, followed by region labeling to filter out noise regions based on region size.
Abstract: Noise problems in processing small images or large-granularity images are reduced by representing hand images as rotational vectors calculated using a real-valued centroid. The hand region is therefore sectored independently of pixel quantization. Color segmentation is used to identify hand-color regions, followed by region labelling to filter out noise regions based on region size. Principal component analysis is used to plot gesture models.
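
A minimal sketch of the rotational-vector idea, assuming the hand mask has already been obtained by color segmentation; the sector count and the normalization are illustrative choices, not taken from the patent.

```python
import numpy as np

def rotational_vector(mask, sectors=32):
    """Compute a real-valued (sub-pixel) centroid of the hand mask, bin the
    mask pixels by angle about that centroid into `sectors` sectors, and
    return the normalized counts as the rotational vector."""
    ys, xs = np.nonzero(mask)                  # coordinates of hand pixels
    cy, cx = ys.mean(), xs.mean()              # real-valued centroid
    angles = np.arctan2(ys - cy, xs - cx)      # in (-pi, pi]
    hist, _ = np.histogram(angles, bins=sectors, range=(-np.pi, np.pi))
    return hist / hist.sum()
```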

557 citations


Journal ArticleDOI
Wei Ding1, Bede Liu1
TL;DR: A feedback re-encoding method with a rate-quantization model, which can be adapted to changes in picture activities, is developed and used for quantization parameter selection at the frame and slice level.
Abstract: For MPEG video coding and recording applications, it is important to select the quantization parameters at the slice and macroblock levels to produce consistent image quality for a given bit budget. A well-designed rate control strategy can improve the overall image quality for video transmission over a constant-bit-rate channel and fulfil the editing requirement of video recording, where a certain number of new pictures are encoded to replace consecutive frames on the storage media using, at most, the same number of bits. We developed a feedback re-encoding method with a rate-quantization model, which can be adapted to changes in picture activities. The model is used for quantization parameter selection at the frame and slice level. The extra computations needed are modest. Experiments show the accuracy of the model and the effectiveness of the proposed rate control method. A new bit allocation algorithm is then proposed for MPEG video coding.
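
A hedged sketch of feedback re-encoding with a simple hyperbolic rate-quantization model R(Q) ≈ X/Q, a stand-in rather than the paper's exact model; `encode` is an assumed callable that returns the bit count of a trial encoding at quantizer Q.

```python
def pick_quantizer(encode, frame, target_bits, q_init=16, iters=2):
    """Feedback re-encoding: fit the single model parameter X from a trial
    encoding, then solve R(Q) = X / Q for the target bit count."""
    q = q_init
    for _ in range(iters):
        bits = encode(frame, q)                           # trial encoding at current Q
        x = bits * q                                      # fitted model parameter
        q = int(min(31, max(1, round(x / target_bits))))  # keep Q in the MPEG range
    return q
```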

377 citations


Journal ArticleDOI
TL;DR: This work addresses the problem of retrieving images from a large database using an image as a query, specifically aimed at databases that store images in JPEG format, and works in the compressed domain to create index keys.
Abstract: We address the problem of retrieving images from a large database using an image as a query. The method is specifically aimed at databases that store images in JPEG format, and works in the compressed domain to create index keys. A key is generated for each image in the database and is matched with the key generated for the query image. The keys are independent of the size of the image. Images that have similar keys are assumed to be similar, but there is no semantic meaning to the similarity.
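
The key idea can be sketched as follows; since the paper works on entropy-decoded JPEG coefficients, the block DCTs are recomputed from pixels here purely to keep the example self-contained, and the 64-bin key is an illustrative stand-in for the paper's index key.

```python
import numpy as np
from scipy.fft import dctn

def index_key(image, block=8):
    """Average the magnitude of each of the 64 block-DCT coefficients over all
    8x8 blocks of a grayscale image, giving a 64-element key that does not
    depend on the image size."""
    h = image.shape[0] - image.shape[0] % block
    w = image.shape[1] - image.shape[1] % block
    acc, n = np.zeros((block, block)), 0
    for i in range(0, h, block):
        for j in range(0, w, block):
            acc += np.abs(dctn(image[i:i + block, j:j + block].astype(float), norm='ortho'))
            n += 1
    return (acc / n).ravel()

def key_distance(k1, k2):
    return np.linalg.norm(k1 - k2)   # smaller distance -> assumed more similar
```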

152 citations


Proceedings ArticleDOI
16 Sep 1996
TL;DR: An image authentication technique by embedding each image with a signature so as to discourage unauthorized copying is proposed, which could actually survive several kinds of image processing and the JPEG lossy compression.
Abstract: An image authentication technique by embedding each image with a signature so as to discourage unauthorized copying is proposed. The proposed technique could actually survive several kinds of image processing and the JPEG lossy compression.

143 citations


Proceedings ArticleDOI
14 Nov 1996
TL;DR: The current status of the FBI standard for digitization and compression of gray-scale fingerprint images is reviewed, including the compliance testing process and the details of the first-generation encoder.
Abstract: The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
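
A minimal sketch of wavelet/scalar quantization under simplified assumptions: PyWavelets provides the subband decomposition, and a single uniform step size stands in for the per-subband bin widths of the actual FBI WSQ specification.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def wsq_like_codec(image, wavelet='bior4.4', levels=4, step=8.0):
    """DWT subband decomposition followed by uniform scalar quantization of
    every subband, then dequantization and reconstruction."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    quant = [np.round(coeffs[0] / step)]
    quant += [tuple(np.round(d / step) for d in detail) for detail in coeffs[1:]]
    dequant = [quant[0] * step] + [tuple(d * step for d in t) for t in quant[1:]]
    recon = pywt.waverec2(dequant, wavelet)
    return recon[:image.shape[0], :image.shape[1]]
```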

97 citations


Patent
29 Mar 1996
TL;DR: In this article, a hybrid convolutional neural network (HNN) and a self-organizing map neural network are used for object recognition. But they do not provide invariance to translation, rotation, scale, and deformation.
Abstract: A hybrid neural network system for object recognition exhibiting local image sampling, a self-organizing map neural network, and a hybrid convolutional neural network. The self-organizing map provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the hybrid convolutional neural network provides for partial invariance to translation, rotation, scale, and deformation. The hybrid convolutional network extracts successively larger features in a hierarchical set of layers. Alternative embodiments using the Karhunen-Loeve transform in place of the self-organizing map, and a multi-layer perceptron in place of the convolutional network are described.

90 citations


Proceedings ArticleDOI
16 Sep 1996
TL;DR: This work computes the perceptual error for each block based upon the DCT quantization error adjusted according to the contrast sensitivity, light adaptation, and contrast masking, and picks the set of multipliers that yields maximally flat perceptual error over the blocks of the image.
Abstract: An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8×8 block, which multiplies the quantization matrix, yielding the new matrix for that block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon the DCT quantization error adjusted according to the contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bit rate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
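
A hedged sketch of the multiplier-selection step, assuming the per-block perceptual errors at multiplier 1 have already been computed from the masking model and that error scales roughly linearly with the multiplier; this is a simplification of the paper's procedure.

```python
import numpy as np

def flat_error_multipliers(block_errors, target=None, m_min=0.5, m_max=4.0):
    """Given each block's perceptual error at multiplier 1 (already adjusted
    for contrast sensitivity, light adaptation, and masking), choose per-block
    multipliers so that the resulting errors are roughly equal across blocks."""
    e = np.asarray(block_errors, dtype=float)
    if target is None:
        target = np.median(e)            # aim every block at a common error level
    return np.clip(target / e, m_min, m_max)
```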

82 citations


Patent
21 Jun 1996
TL;DR: In this paper, a method of assigning quantization values for use during the compression of images is disclosed, which includes constructing a non-parametric model during a training phase based on relationships between image characteristics, quantization value, and required bit resources to encode images.
Abstract: A method of assigning quantization values for use during the compression of images is disclosed. The method includes constructing a non-parametric model during a training phase based on relationships between image characteristics, quantization values, and required bit resources to encode images. The model is built by considering a wide sample of images. The consideration includes determining the temporal and spatial characteristics of the images, compressing the images over a range of quantization values, and recording the resultant required bit resource on a per characterization/quantization level basis. Once built, the model may be used during real time compression by determining the characteristics of the input image and using the allocated resource to find a corresponding match in the non-parametric model. The associated quantization value is then assigned to the input image.

73 citations


Patent
Joseph Kairouz1
13 Aug 1996
TL;DR: In this article, an apparatus and method for reducing the dynamic range of input numbers representing a physical quantity by producing a sample set of sample numbers representing a representative sample of the physical quantity and, for each sample number, determining a corresponding nearest mapping weight from a plurality of mapping weights of a mapping function.
Abstract: An apparatus and method for reducing the dynamic range of input numbers representing a physical quantity by producing a sample set of sample numbers representing a representative sample of the physical quantity and for each sample number, determining a corresponding nearest mapping weight from a plurality of mapping weights of a mapping function. The corresponding nearest mapping weight is increased or decreased in proportion to the arithmetic difference between the corresponding nearest mapping weight and each sample number in the sample set. Then, each successive input number is applied to the mapping function to produce a corresponding output number. There is also disclosed a processor readable medium on which is stored a set of instructions operable to direct a processor to perform the method.
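
The training and mapping steps read like a competitive-learning quantizer; the sketch below is a hedged interpretation with an illustrative learning rate and weight count, operating on 1-D NumPy arrays.

```python
import numpy as np

def train_mapping(samples, n_weights=16, rate=0.05, epochs=5):
    """For each sample number, move the nearest mapping weight toward it in
    proportion to their difference (a competitive-learning / online k-means
    style update)."""
    w = np.linspace(samples.min(), samples.max(), n_weights)
    for _ in range(epochs):
        for s in samples:
            k = np.argmin(np.abs(w - s))       # nearest mapping weight
            w[k] += rate * (s - w[k])          # move it toward the sample
    return np.sort(w)

def apply_mapping(inputs, weights):
    """Replace each input number by the index of its nearest weight, reducing
    the dynamic range to log2(len(weights)) bits."""
    return np.argmin(np.abs(inputs[:, None] - weights[None, :]), axis=1)
```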

69 citations


Journal ArticleDOI
TL;DR: A fast lossy 3-D data compression scheme using vector quantization (VQ) is presented that exploits the spatial and the spectral redundancy in hyperspectral imagery.
Abstract: A fast lossy 3-D data compression scheme using vector quantization (VQ) is presented that exploits the spatial and the spectral redundancy in hyperspectral imagery. Hyperspectral imagery may be viewed as a 3-D array of samples in which two dimensions correspond to spatial position and the third to wavelength. Unlike traditional 2-D VQ, where spatial blocks of n×m pixels are taken as vectors, we define one spectrum, corresponding to a profile taken along the wavelength axis, as a vector. This constitution of vectors makes good use of the high correlation in the spectral domain and achieves a high compression ratio. It also leads to fast codebook generation and fast codevector matching. A coding scheme for fast vector matching called spectral-feature-based binary coding (SFBBC) is used to encode each spectral vector into a simple and efficient set of binary codes. The generation of the codebook and the matching of codevectors are performed by matching the binary codes produced by the SFBBC. The experiments were carried out using a test hyperspectral data cube from the Compact Airborne Spectrographic Imager. Generating a codebook is 39 times faster with the SFBBC than with conventional VQ, and the data compression is 30 to 40 times faster. Compression ratios greater than 192:1 have been achieved with peak signal-to-noise ratios of the reconstructed hyperspectral sequences exceeding 45.2 dB.
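
A minimal sketch of the spectrum-as-vector idea, with ordinary k-means and nearest-codevector search standing in for the paper's SFBBC binary-code matching.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq   # generic VQ stand-in for the SFBBC coder

def vq_hyperspectral(cube, codebook_size=256):
    """Treat each spectrum (profile along the wavelength axis) as one vector
    and vector-quantize the spectra. `cube` has shape (rows, cols, bands)."""
    rows, cols, bands = cube.shape
    spectra = cube.reshape(-1, bands).astype(float)
    codebook, _ = kmeans2(spectra, codebook_size, minit='++')
    indices, _ = vq(spectra, codebook)
    return codebook, indices.reshape(rows, cols)   # codebook + per-pixel index map
```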

Patent
17 Jun 1996
TL;DR: In this article, a memory efficient system of storing color correction information for liquid crystal tuning filters when used with electronic imaging cameras to produce color images is presented, which is stored with maximum possible gain to optimize accuracy prior to compression.
Abstract: A memory efficient system of storing color correction information for liquid crystal tuning filters when used with electronic imaging cameras to produce color images, which color correction information is stored with maximum possible gain to optimize accuracy preparatory to compression. The system bins the color correction image, for example, from a 4K×4K CCD sensor into a 500×500 or 1K×1K file, and then applies the JPEG and/or wavelet compression algorithm with a default configuration and/or a custom quantization table that emphasizes low frequency changes with more bits than high frequency changes with less bits. At the end of the compression, the compressed R, G, B files and an n-point correction executable algorithm are stored on floppy disk or CD ROM and are used to automatically take control of image enhancement when invoked by the photographer.

Patent
07 Mar 1996
TL;DR: In this paper, a quantization table is formed with a "supra-threshold" term that gives a larger weight to the table elements corresponding to important image elements and a smaller weight to those corresponding to less important image elements.
Abstract: A method of compressing color source image data includes forming a quantization table with a "supra-threshold" term. This method includes a step of selecting a set of target images, where each target image includes one or more image elements such as text. These image elements are then analyzed to identify those that are more important for visual quality. A "supra-threshold" term is then selected that gives a larger weight to the quantization table elements that correspond to important image elements and a smaller weight to the table elements that correspond to less important image elements. This process selectively weights the characteristics of each DCT basis vector. By giving larger weights to the table elements that correspond to the "up-downness" of the image, i.e., the vertical attributes of the image elements, and the "left-rightness" of the image, i.e., the horizontal attributes of the image elements, and smaller weights to the table elements corresponding to the "criss-crossedness" of the image, i.e., the diagonal attributes of the image elements, the visual quality of an image that includes text can be preserved while significantly increasing the compression ratio.
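
A hedged sketch of a direction-weighted table built on the standard JPEG luminance quantization matrix; the gains applied to axis-aligned versus diagonal basis vectors are illustrative, not values from the patent.

```python
import numpy as np

# Standard JPEG luminance quantization table (Annex K), used here as the baseline.
JPEG_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def supra_threshold_table(base=JPEG_LUMA, axis_gain=0.75, diag_gain=1.5):
    """Quantize the purely horizontal/vertical DCT basis vectors (first row and
    first column of the table) more finely and the diagonal ones more coarsely,
    since text quality depends mostly on the former."""
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
    weights = np.where((u == 0) | (v == 0), axis_gain, diag_gain)
    return np.clip(np.round(base * weights), 1, 255).astype(np.uint8)
```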

Patent
Byeungwoo Jeon1, Jechang Jeong1
19 Jan 1996
TL;DR: In this paper, a post-processing device for eliminating a blocking artifact generated upon reconstructing an image compressed by block transform operation and a method thereof minimize a blocking artifacts at block boundaries by selecting a predetermined discrete cosine transform (DCT), estimating transform coefficients with respect to the information lost upon quantization or inverse quantization, performing an inverse transform operation on the estimated transform coefficients and adding the thus-obtained adjustment value to an inverse-transform-operated reconstructed image signal.
Abstract: A post-processing device for eliminating a blocking artifact generated upon reconstructing an image compressed by block transform operation and a method thereof minimize a blocking artifact at block boundaries by selecting a predetermined discrete cosine transform (DCT), estimating transform coefficients with respect to the information lost upon quantization or inverse quantization to have the highest continuity with respect to adjacent blocks, performing an inverse transform operation on the estimated transform coefficients, and adding the thus-obtained adjustment value to an inverse-transform-operated reconstructed image signal.

Proceedings ArticleDOI
TL;DR: An analysis of a broad suite of images confirms previous findings that a Laplacian distribution can be used to model the luminance ac coefficients and the distribution model is applied to improve dynamic generation of quantization matrices.
Abstract: Many image and video compression schemes perform the discrete cosine transform (DCT) to represent image data in frequency space. An analysis of a broad suite of images confirms previous findings that a Laplacian distribution can be used to model the luminance ac coefficients. This model is expanded and applied to color space (Cr/Cb) coefficients. In MPEG, the DCT is used to code interframe prediction error terms. The distribution of these coefficients is explored. Finally, the distribution model is applied to improve dynamic generation of quantization matrices.
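
The Laplacian model is easy to fit: for a zero-mean Laplacian the maximum-likelihood scale parameter is simply the mean absolute coefficient, as in the sketch below (block DCTs are recomputed from pixels here to keep the example self-contained).

```python
import numpy as np
from scipy.fft import dctn

def laplacian_scale(image, block=8):
    """Fit a zero-mean Laplacian to the luminance AC coefficients of all 8x8
    blocks: the ML estimate of the scale is b = mean(|coefficient|)."""
    h = image.shape[0] - image.shape[0] % block
    w = image.shape[1] - image.shape[1] % block
    ac = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(image[i:i + block, j:j + block].astype(float), norm='ortho')
            ac.append(c.ravel()[1:])           # drop the DC coefficient
    return np.mean(np.abs(np.concatenate(ac)))  # Laplacian scale b
```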

Patent
19 Dec 1996
TL;DR: In this article, a process for decoding MPEG encoded image data stored in a system memory utilizing a configurable image decoding apparatus is described, which comprises the steps of extracting macroblock information from MPEG-encoded image data, the macroblocks containing image data and motion compensation data.
Abstract: A process for decoding MPEG encoded image data stored in a system memory utilizing a configurable image decoding apparatus. The process comprises the steps of: (a) extracting macroblock information from said MPEG encoded image data, the macroblocks containing image data and motion compensation data; (b) extracting a series of parameters from the MPEG encoded image data for decoding the MPEG encoded data; (c) determining quantization factors from the encoded image data; (d) configuring the configurable image decoding apparatus, including (i) configuring a means for parsing the macroblock data into motion vectors and image data with the series of parameters for decoding the encoded data; (ii) configuring a means for performing inverse quantization with the quantization coefficients; (e) determining a decoding order of the extracted macroblock information to be decoded; (f) providing said extracted macroblock information to the parsing means in the decoding order; (g) combining decoded image data with motion vectors extracted by the parsing means; and (h) storing the combined data in the system memory.

Patent
Ricardo L. de Queiroz1
26 Sep 1996
TL;DR: In this paper, a method and apparatus for the processing of images that have been compressed using a discrete cosine transform operation, and particularly JPEG compressed images, is presented; the two-stage approach minimizes the need for large memory buffers.
Abstract: The present invention is a method and apparatus for the processing of images that have been compressed using a discrete cosine transform operation, and particularly JPEG compressed images. In a preferred embodiment, the rotation of image blocks is accomplished by sign inversion and transposition operations to accomplish intrablock operations. Subsequently, one of a number of alternative methods is employed to accomplish the same image processing on an interblock level, thereby enabling the rotation or mirroring of compressed images. The two stage process allows the use of either a standardized JPEG system with enhancements or a hybrid processing method, thereby accomplishing the image processing in conjunction with compression or decompression operations and minimizing the need for large memory buffers to accomplish the image processing. Accordingly, the technique has application to any number of systems, including digital printers and copiers where there may be a necessity to orthogonally rotate or mirror the digital image.
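
The intrablock step rests on two DCT-II identities: transposing a spatial block transposes its coefficients, and mirroring a block negates the coefficients of odd frequency along the mirrored axis. The sketch below combines them for a 90-degree clockwise rotation of a single 8×8 coefficient block; the interblock reordering of the blocks themselves is a separate step, as the abstract notes.

```python
import numpy as np

def mirror_block_horizontal(dct_block):
    """Horizontal mirror of an 8x8 spatial block == negate the DCT
    coefficients with odd horizontal frequency index."""
    return dct_block * ((-1.0) ** np.arange(8))   # sign flips per column

def rotate_block_90cw(dct_block):
    """90-degree clockwise rotation == transpose, then horizontal mirror,
    applied entirely in the DCT domain."""
    return mirror_block_horizontal(dct_block.T)
```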

Patent
11 Oct 1996
TL;DR: In this article, an image signal is dithered by the addition of a small noise signal from a noise generator, which breaks up the edges of homogeneous blocks of pixels, causing the created image to appear to have a smooth transition from one region to the next.
Abstract: A method and system for reducing the effects of false contouring and reducing color shading artifacts. An image signal 102 is dithered by the addition of a small noise signal from a noise generator 500. The added noise signal breaks up the edges of homogeneous blocks of pixels, causing the created image to appear to have a smooth transition from one region to the next. The image dithering is especially useful in digital color image displays, where processing performed on the chrominance portion of the image signal often causes quantization errors that lead to sharp transitions between similar shades when the input image included a smooth transition.
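
A minimal sketch of the dither-before-quantization idea, with an illustrative uniform noise amplitude and quantization step; the patent's specific noise generator and signal path are not reproduced.

```python
import numpy as np

def dither_then_quantize(channel, step=16, amplitude=0.5, seed=0):
    """Add a small noise signal before quantization so that coarse
    quantization of smooth gradients does not produce visible false contours."""
    rng = np.random.default_rng(seed)
    noise = (rng.random(channel.shape) - 0.5) * amplitude * step
    q = np.round((channel.astype(float) + noise) / step) * step
    return np.clip(q, 0, 255).astype(np.uint8)
```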

Patent
Shi-hwa Lee1
23 Sep 1996
TL;DR: In this article, the authors proposed a method of video coding associated with processing accumulated errors and a encoder therefor, the method comprising the steps of: (a) generating motion vectors of an input image in a predetermined unit and the difference image between an image of filtering a motion-compensated image on a reconstructed previous frame and the input image on current frame, and then performing discrete cosine transform (DCT), quantization and variable length coding on the difference images; (b) generating the motion-computed image on the reconstructed previous frames from the reconstructed last frame
Abstract: The present invention relating to a method of video coding associated with processing accumulated errors and a encoder therefor, the method comprising the steps of: (a) generating motion vectors of an input image in a predetermined unit and the difference image between an image of filtering a motion-compensated image on a reconstructed previous frame and the input image on current frame, and then performing discrete cosine transform (DCT), quantization and variable length coding on the difference image; (b) generating the motion-compensated image on the reconstructed previous frame from the reconstructed previous frame and the motion vectors; and (c) filtering off accumulated errors while preserving the edges within the motion-compensated image on the reconstructed previous frame. Therefore, random distributed noises due to accumulated errors can be removed and bit generation amounts by filtering off random accumulated errors with a high frequency characteristics before coding can be reduced.

Journal ArticleDOI
TL;DR: A genetic quantization algorithm is developed, which is a hybrid technique combining optimal quantization with a genetic algorithm and it is shown that the latter technique is almost insensitive to initial conditions and performs better than the former.

Proceedings ArticleDOI
07 May 1996
TL;DR: A locally adaptive perceptual masking threshold model that computes, based on the contents of the original images, the maximum amount of noise energy that can be injected at each transform coefficient that results in perceptually distortion-free still images or sequences of images.
Abstract: This paper describes the design, implementation, and testing of a locally adaptive perceptual masking threshold model for image compression. The model computes, based on the contents of the original images, the maximum amount of noise energy that can be injected at each transform coefficient that results in perceptually distortion-free still images or sequences of images. The adaptive perceptual masking threshold model can be used as a pre-processor to a JPEG compression standard image coder. DCT coefficients less than their corresponding perceptual thresholds can be set to zero before the normal JPEG quantization and Huffman coding steps. The result is an image-dependent gain in the bit rate needed for transparent coding. In an informal subjective test involving 318 still images in the AT&T Bell Laboratory image database, this model provided a gain in bit-rate saving on the order of 10 to 30%.
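
The pre-processor step can be sketched as follows, with the per-coefficient thresholds passed in as a given 8×8 array standing in for the paper's image-adaptive masking model.

```python
import numpy as np
from scipy.fft import dctn

def threshold_block(block, thresholds):
    """Zero the DCT coefficients whose magnitude falls below the corresponding
    perceptual threshold, then hand the result to the usual JPEG quantization
    and Huffman coding stages."""
    c = dctn(block.astype(float), norm='ortho')
    c[np.abs(c) < thresholds] = 0.0
    return c
```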

Proceedings ArticleDOI
16 Sep 1996
TL;DR: The average Euclidean distance (AED) is defined as a new measure for comparing the performance of color image palette construction algorithms and the effectiveness of the proposed ordered palette construction algorithm in the case of the bit error is shown.
Abstract: In transmitting color images represented by the palette, the indices corrupted by the bit error cause serious image quality degradation in the reconstructed images at a receiver. This paper proposes an ordered palette construction algorithm based on the hue-saturation-intensity (HSI) color system for minimizing the reconstruction error. We define the average Euclidean distance (AED) as a new measure for comparing the performance of color image palette construction algorithms and show the effectiveness of the proposed method in the case of the bit error. Experimental results show that the proposed algorithm effectively reduces the number of mismatched colors caused by the corrupted indices and thus the reconstruction error.

Patent
Miyane Toshiki1, Sekimoto Uichi1
30 Jan 1996
TL;DR: In this article, the quantization level coefficient QCx is inserted between block data units of the compressed image data ZZ; the decoded DCT coefficients QF(u,v) and QCx are multiplied in an inverse quantization table generator to generate the quantization table used for inverse quantization.
Abstract: The compressed image data ZZ includes code data representing a quantization level coefficient QCx inserted between block data units. DCT coefficients QF(u,v) and a quantization level coefficient QCx, which are decoded from the compressed image data ZZ, are multiplied in the inverse quantization table generator 250 to generate a quantization table QT, and the inverse quantization unit 250 executes inverse quantization with the quantization table QT. Since the quantization level coefficient QCx is inserted between block data units in the compressed image data, the quantization table QT is renewed each time a new quantization level coefficient QCx is decoded. The compressed image data also includes a special type of data, or null run data, representing a series of pixel blocks having an identical image pattern.

Proceedings ArticleDOI
25 Aug 1996
TL;DR: An application of the independent component analysis for hidden (secured) image transmission by communication channels is proposed and constraints of the mixing process are discussed that make impossible the hidden image separation without the key images.
Abstract: It is known that independent component analysis (ICA) (also called blind source separation) can be applied only if the number of received signals (sensors) is at least equal to the number of mixed sources contained in the sensor signals. In this paper an application of ICA is proposed for hidden (secured) image transmission over communication channels. We assume that only a single image mixture is transmitted. A friendly receiver contains the remaining original sources and therefore can separate the hidden image of lowest energy. The influence of two non-lossless signal reduction stages, compression by principal component analysis and signal quantization, on the separation ability is tested. Constraints on the mixing process are discussed that make separation of the hidden image impossible without the key images.

Patent
24 Apr 1996
TL;DR: In this article, the least mean square error of the blocking error is used to generate the correction terms which are subsequently combined with the original image data to generate an image exhibiting reduced blocking effects when displayed.
Abstract: An apparatus and method for post processing image data which previously was encoded using a discrete cosine transform in order to remove resulting blocking effects. Correction coefficients are generated by determining the least mean square error of the blocking error. The correction coefficients are adjusted to be within the quantization range of the quantization step size used during the coding process. The adjusted correction coefficients are then used to generate the correction terms which are subsequently combined with the original image data to generate an image exhibiting reduced blocking effects when the image is displayed.

Proceedings ArticleDOI
16 Sep 1996
TL;DR: A layered DCT image compression scheme is proposed, which generates an embedded bit stream for DCT coefficients according to their importance, and provides a substantial rate-distortion improvement over the JPEG standard when the bit rates become low.
Abstract: Motivated by Shapiro's (see IEEE Trans. on Signal Processing, vol.41, no.12, p. 3445-62, 1993) embedded zerotree wavelet coding (EZW) and Taubman and Zakhor's (see IEEE Trans. on Image Processing, vol.3, no.3, p.572-88, 1994) layered zero coding (LZC), we propose a layered DCT image compression scheme, which generates an embedded bit stream for DCT coefficients according to their importance. The new method allows progressive image transmission and simplifies the rate-control problem. In addition to these functionalities, it provides a substantial rate-distortion improvement over the JPEG standard when the bit rates become low. For example, we observe a bit rate reduction with respect to the JPEG Huffman and arithmetic coders by about 60% and 20%, respectively, for a bit rate around 0.1 bpp.

Patent
03 Apr 1996
TL;DR: In this article, the authors proposed a method of encoding a picture in an MPEG2 compliant digital video encoder, which calculates a contrast function, Contrast = Σ|P(j) - P(j+1)|, and thereafter calculates a quantization adjustment function therefrom, M(i+1) = [C(i+1)/C(i)]·M(i), where C = Contrast, P(j) is the luminance or chrominance of the jth pixel, and M(i) is the average quantization of the ith picture.
Abstract: A method of encoding a picture in an MPEG2 compliant digital video encoder. The method calculates a contrast function, Contrast = Σ|P(j) - P(j+1)|, and thereafter calculates a quantization adjustment function therefrom, M(i+1) = [C(i+1)/C(i)]·M(i), where C = Contrast, P(j) is the luminance or chrominance of the jth pixel, and M(i) is the average quantization of the ith picture. The quantization or picture type is adjusted in response to the contrast function, C.
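
The two formulas translate directly into code; taking the pixel-difference sum along rows here is one reasonable reading of the patent's notation.

```python
import numpy as np

def contrast(picture):
    """Contrast = sum of |P(j) - P(j+1)| over horizontally adjacent pixels."""
    return np.abs(np.diff(picture.astype(float), axis=1)).sum()

def next_quantization(m_prev, c_prev, c_next):
    """M(i+1) = [C(i+1) / C(i)] * M(i): scale the previous picture's average
    quantization by the ratio of the contrast measures."""
    return (c_next / c_prev) * m_prev
```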

Proceedings ArticleDOI
27 Feb 1996
TL;DR: In this article, the quantization step sizes are adapted to the activity level of the block, and the activity selection is based on an edge-driven quadtree decomposition of the image.
Abstract: Digital image compression algorithms have become increasingly popular due to the need to achieve cost-effective solutions in transmitting and storing images. In order to meet various transmission and storage requirements, the compression algorithm should allow a range of compression ratios, thus providing images of different visual quality. This paper presents a modified JPEG algorithm that provides better visual quality than the Q-factor scaling method commonly used with JPEG implementations. The quantization step sizes are adapted to the activity level of the block, and the activity selection is based on an edge-driven quadtree decomposition of the image. This technique achieves higher visual quality than standard JPEG compression at the same bit rate.

Proceedings ArticleDOI
J.L.H. Webb1
16 Sep 1996
TL;DR: It is demonstrated that chrominance variance can be used to efficiently identify blocks containing luminance detail in DCT-based video compression and is effective in reducing severe blocking effects that can occur at low bit rates.
Abstract: We investigate a new postprocessing approach to reduce blocking artifacts in DCT-based video compression. Because blocking artifacts occur due to quantization of DCT coefficients, we effectively adjust the low-order DCT coefficients to remove discontinuities at block corners in smooth areas of the image. For blocking in detailed areas of the image, we camouflage block corners by adding a noisy dither to pixels around the block border. We demonstrate that chrominance variance can be used to efficiently identify blocks containing luminance detail. This postprocessing method is effective in reducing severe blocking effects that can occur at low bit rates. Examples from postprocessed H.261 video are given.

Patent
Seok-Yoon Jung1
24 Apr 1996
TL;DR: A blocking effect eliminating circuit in an image coder/decoder employs a weighted value according to the location of each pixel and the quantization step size in a block constituting an input image, to reduce the blocking effect generated in a received image and enhance image quality as discussed by the authors.
Abstract: A blocking-effect eliminating circuit in an image coder/decoder employs a weighted value according to the location of each pixel and the quantization step size in a block constituting an input image, to thereby reduce the blocking effect generated in a received image and enhance image quality.