Proceedings Article•DOI•

A hybrid fractal-DCT coding scheme for image compression

16 Sep 1996 - Vol. 1, pp 169-172
TL;DR: A new way to use fractal coding for image compression is introduced, based on the parallel use of a fractal encoder and a DCT encoder; the scheme relies on regular and uniform algorithms suitable for real-time VLSI implementation.
Abstract: We introduce a new way to use fractal coding for image compression, based on the parallel use of a fractal encoder and a DCT encoder. The two encoders are given the complementary roles to capture the information of edge and smooth variation, and the information of detail respectively. We show the advantage of using this hybrid coding scheme over the use of a fractal encoder alone, or a DCT encoder alone. This coding scheme is also the occasion to demonstrate a new concept of coding by nonlinear feature separation based on regular and uniform algorithms, suitable for real-time VLSI implementation.
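The abstract does not spell out the separation mechanism, so the following is only a rough sketch of the parallel-encoder idea under assumed choices (Gaussian smoothing for the feature separation, an 8x8 block DCT for the detail branch); all function names and parameters are illustrative, not the authors':

```python
# Sketch of a parallel "smooth/edge + detail" split followed by two encoders.
# The separation filter, block size, and quantiser step are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.fft import dctn

def separate_features(img, sigma=2.0):
    """Split the image into a smooth/edge component and a detail residual."""
    smooth = gaussian_filter(img, sigma)    # slow variation and soft edges
    return smooth, img - smooth             # residual carries the fine detail

def dct_encode_block(block, q=10.0):
    """Uniformly quantised DCT coefficients for one detail block."""
    return np.round(dctn(block, norm='ortho') / q)

def hybrid_encode(img, bsize=8):
    """Parallel encoding: fractal branch for the smooth/edge part,
    block DCT for the detail part (dims assumed multiples of bsize)."""
    smooth, detail = separate_features(img.astype(float))
    fractal_stream = smooth    # placeholder: would be handed to a fractal coder
    dct_stream = [dct_encode_block(detail[i:i+bsize, j:j+bsize])
                  for i in range(0, img.shape[0], bsize)
                  for j in range(0, img.shape[1], bsize)]
    return fractal_stream, dct_stream
```

The fractal branch is left as a pass-through here; a block-matching sketch of such an encoder appears under the 31 Jul 1997 introduction cited below.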
Citations
Journal Article•DOI•
TL;DR: This review represents a survey of the most significant advances, both practical and theoretical, since the publication of Jacquin's original fractal coding scheme.
Abstract: Fractal image compression is a technique based on the representation of an image by a contractive transform, on the space of images, for which the fixed point is close to the original image. This broad principle encompasses a very wide variety of coding schemes, many of which have been explored in the rapidly growing body of published research. While certain theoretical aspects of this representation are well established, relatively little attention has been given to the construction of a coherent underlying image model that would justify its use. Most purely fractal-based schemes are not competitive with the current state of the art, but hybrid schemes incorporating fractal compression and alternative techniques have achieved considerably greater success. This review represents a survey of the most significant advances, both practical and theoretical, since the publication of Jacquin's (1990) original fractal coding scheme.
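The fixed-point principle mentioned here is usually made precise by the collage theorem; in standard notation (not specific to this survey), if T is contractive with factor s < 1 and fixed point x_T, then

```latex
d(x, x_T) \;\le\; \frac{1}{1-s}\, d\bigl(x, T(x)\bigr),
\qquad x_T = T(x_T), \quad 0 \le s < 1,
```

so the encoder only needs to make the collage error d(x, T(x)) small for the original image x in order to bound the distance between x and the decoded fixed point x_T.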

308 citations


Cites background or methods from "A hybrid fractal-DCT coding scheme ..."

  • ...The simplest possible range partition consists of the fixed size square blocks [17] [18] [19] depicted in Figure 2a....


  • ...Alternative hybrids between fractal and transform coding have been constructed by DCT coding of the error image resulting from fractal coding [19] [60]....


Proceedings Article•
31 Jul 1997
TL;DR: This paper has chosen the similarity to a particular variant of vector quantization as the most direct approach to fractal image compression and surveys some of the advanced concepts such as fast decoding, hybrid methods, and adaptive partitionings.
Abstract: Fractal image compression is a new technique for encoding images compactly. It builds on local self-similarities within images. Image blocks are seen as rescaled and intensity transformed approximate copies of blocks found elsewhere in the image. This yields a self-referential description of image data, which --- when decoded --- shows a typical fractal structure. This paper provides an elementary introduction to this compression technique. We have chosen the similarity to a particular variant of vector quantization as the most direct approach to fractal image compression. We discuss the hierarchical quadtree scheme and vital complexity reduction methods. Furthermore, we survey some of the advanced concepts such as fast decoding, hybrid methods, and adaptive partitionings. We conclude with a list of relevant WEB resources including complete public domain C implementations of the method and a comprehensive list of up-to-date references.
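As a concrete illustration of "rescaled and intensity transformed approximate copies", here is a minimal brute-force range/domain matching loop; the block sizes, the exhaustive non-overlapping domain pool, and the omission of the usual eight block isometries are simplifications for clarity, not the paper's exact scheme:

```python
# Core of fractal block coding: approximate each range block by a spatially
# contracted, intensity-transformed domain block from the same image.
import numpy as np

def downsample2(block):
    """Average 2x2 pixels so a domain block matches the range-block size."""
    h, w = block.shape
    return block.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def fit_intensity(domain, rng):
    """Least-squares scale s and offset o minimising ||s*domain + o - rng||^2."""
    d, r = domain.ravel(), rng.ravel()
    var = d.var()
    s = ((d - d.mean()) * (r - r.mean())).mean() / var if var > 1e-12 else 0.0
    return s, r.mean() - s * d.mean()

def encode(img, rsize=8):
    """Return one (domain position, scale, offset) triple per range block."""
    h, w = img.shape
    dsize = 2 * rsize
    code = []
    for ri in range(0, h, rsize):
        for rj in range(0, w, rsize):
            rng = img[ri:ri+rsize, rj:rj+rsize]
            best = None
            for di in range(0, h - dsize + 1, dsize):
                for dj in range(0, w - dsize + 1, dsize):
                    dom = downsample2(img[di:di+dsize, dj:dj+dsize])
                    s, o = fit_intensity(dom, rng)
                    err = np.sum((s * dom + o - rng) ** 2)
                    if best is None or err < best[0]:
                        best = (err, di, dj, s, o)
            code.append(best[1:])
    return code
```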

66 citations

Book Chapter•DOI•
13 Sep 1995
TL;DR: This paper explains why effective non-linear transformations are not easy to find and proposes a model based on conformal mappings in the geometric domain that is a natural extension of the affine model.
Abstract: Most recent advances in fractal image coding have been concentrating on better adaptive coding algorithms, on extending the variety of the blocks and on search strategies to reduce the encoding time. Very little has been done to challenge the linear model of the fractal transformations used so far in practical applications. In this paper we explain why effective non-linear transformations are not easy to find and propose a model based on conformal mappings in the geometric domain that are a natural extension of the affine model. Our compression results show improvements over the linear model and support the hope that a deeper understanding of the notion of self-similarity would further advance fractal image coding.
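For reference, the spatial part of the standard (linear) fractal block transform is affine, and a conformal extension replaces it with an analytic map of the block coordinates; in complex coordinates ζ = x + iy (our notation, not necessarily the paper's exact parameterisation):

```latex
\text{affine: } w(\zeta) = a\zeta + b\bar{\zeta} + c,
\qquad
\text{conformal: } w(\zeta) = f(\zeta),\ f \text{ analytic},\ f'(\zeta) \neq 0,
\ \text{e.g. } w(\zeta) = \frac{\alpha\zeta + \beta}{\gamma\zeta + \delta}.
```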

23 citations

Journal Article•DOI•
TL;DR: This work introduces and analyse algorithms for fractal image compression on massively parallel SIMD arrays and compares the performance of the algorithms on the 2-D mesh array of the MasPar MP-2.
Abstract: In this work we introduce and analyse algorithms for fractal image compression on massively parallel SIMD arrays. The different algorithms discussed differ significantly in terms of their communication and computation structure. Therefore, the most suited algorithm for a given architecture may be selected according to our investigations. Experimental results compare the performance of the algorithms on the 2-D mesh array of the MasPar MP-2.

17 citations

Journal Article•DOI•
TL;DR: A segmentation based lossy image compression (SLIC) algorithm is presented that encodes a gray-level image through global approximations of sub-images by 2-D Bezier–Bernstein polynomials, along with corrections, if needed, over regions in sub-images by local approximation.
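For context on the Bezier–Bernstein polynomials mentioned in this summary, the basis functions and the tensor-product form used for 2-D approximation are the standard ones (the paper's specific fitting and correction procedure is not described here):

```latex
B_{i,n}(t) = \binom{n}{i} t^{i}(1-t)^{\,n-i},
\qquad
S(u,v) = \sum_{i=0}^{m}\sum_{j=0}^{n} p_{ij}\, B_{i,m}(u)\, B_{j,n}(v),
\quad u,v \in [0,1],
```

where, in such a scheme, the coefficients p_ij are what an encoder would fit and transmit for each sub-image.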

14 citations

References
Book•
12 Oct 2011
TL;DR: Working C code for a fractal encoding/decoding scheme capable of encoding images in a few seconds, decoding at arbitrary resolution, and achieving high compression ratios is proposed.
Abstract: From the contents: recent theoretical results on fast encoding and decoding methods, various schemes for encoding images using fractal methods, and theoretical models for the encoding/decoding process; working C code for a fractal encoding/decoding scheme capable of encoding images in a few seconds, decoding at arbitrary resolution, and achieving high compression ratios; experimental results from various schemes showing their capability and forming the basis for a sophisticated implementation; a list of previously unresearched projects containing both new ideas and enhancements to the schemes discussed in the book; and a comparison of the fractal schemes in the book with JPEG, commercial fractal software, and wavelet methods.

1,098 citations

Journal Article•DOI•
TL;DR: An iterative block reduction technique based on the theory of projection onto convex sets is proposed, imposing a number of constraints on the coded image in such a way as to restore it to its original artifact-free form.
Abstract: The authors propose an iterative block reduction technique based on the theory of projection onto convex sets. The idea is to impose a number of constraints on the coded image in such a way as to restore it to its original artifact-free form. One such constraint can be derived by exploiting the fact that the transform-coded image suffering from blocking effects contains high-frequency vertical and horizontal artifacts corresponding to vertical and horizontal discontinuities across boundaries of neighboring blocks. Another constraint has to do with the quantization intervals of the transform coefficients. Specifically, the decision levels associated with transform coefficient quantizers can be used as lower and upper bounds on transform coefficients, which in turn define boundaries of the convex set for projection. A few examples of the proposed approach are presented.

544 citations

Proceedings Article•DOI•
01 Jun 1991
TL;DR: A new iterative block reduction technique based on the theory of projection onto convex sets, which imposes constraints on the coded image in such a way as to restore it to its original artifact-free form.
Abstract: We propose a new iterative block reduction technique based on the theory of projection onto convex sets. The basic idea behind this technique is to impose a number of constraints on the coded image in such a way as to restore it to its original artifact-free form. One such constraint can be derived by exploiting the fact that the transform coded image suffering from blocking effects contains high frequency vertical and horizontal artifacts corresponding to vertical and horizontal discontinuities across boundaries of neighboring blocks. Since these components are missing in the original uncoded image, or at least can be guaranteed to be missing from the original image prior to coding, one step of our iterative procedure consists of projecting the coded image onto the set of signals which are bandlimited in the horizontal or vertical directions. Another constraint we have chosen in the restoration process has to do with the quantization intervals of the transform coefficients. Specifically, the decision levels associated with transform coefficient quantizers can be used as lower and upper bounds on transform coefficients, which in turn define boundaries of the convex set for projection. Thus, in projecting the 'out of bound' transform coefficient onto this convex set, we will choose the upper (lower) bound of the quantization interval if its value is greater (less) than the upper (lower) bound. We present a few examples of our proposed approach.
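A compact sketch of the two projections described above, under assumed details (8x8 blocks, a single uniform quantiser step q, a hard frequency cutoff for the band-limiting set); the function names and quantiser model are ours, not the authors':

```python
# Alternating projections for block-artifact reduction, as described above:
# (1) band-limit in the horizontal/vertical directions, (2) clamp each block's
# DCT coefficients back into the quantisation interval they were decoded from.
import numpy as np
from scipy.fft import dctn, idctn

def project_bandlimited(img, cutoff=0.4):
    """Projection 1: suppress frequencies above `cutoff` cycles/pixel
    (0.5 = Nyquist) along the horizontal and vertical axes, where the
    block-boundary discontinuities live."""
    F = np.fft.fft2(img)
    fy = np.abs(np.fft.fftfreq(img.shape[0]))[:, None]
    fx = np.abs(np.fft.fftfreq(img.shape[1]))[None, :]
    F *= (fy <= cutoff) & (fx <= cutoff)
    return np.fft.ifft2(F).real

def project_quantization(img, levels, q=16.0, bsize=8):
    """Projection 2: clamp each block's DCT coefficients into the interval
    [(k-0.5)q, (k+0.5)q] implied by the received integer levels k.
    `levels` has shape (H//bsize, W//bsize, bsize, bsize)."""
    out = np.empty_like(img)
    for i in range(0, img.shape[0], bsize):
        for j in range(0, img.shape[1], bsize):
            c = dctn(img[i:i+bsize, j:j+bsize], norm='ortho')
            k = levels[i // bsize, j // bsize]
            c = np.clip(c, (k - 0.5) * q, (k + 0.5) * q)
            out[i:i+bsize, j:j+bsize] = idctn(c, norm='ortho')
    return out

def pocs_deblock(decoded_img, levels, q=16.0, iters=10):
    """Alternate the two projections starting from the blocky decoded image."""
    x = decoded_img.astype(float)
    for _ in range(iters):
        x = project_quantization(project_bandlimited(x), levels, q)
    return x
```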

225 citations

Proceedings Article•DOI•
13 Nov 1994
TL;DR: The concept of set theoretic compression where the input signal is implicitly encoded by the specification of a set which contains it, rather than an estimate which approximates it, is discussed.
Abstract: We discuss the concept of set theoretic compression where the input signal is implicitly encoded by the specification of a set which contains it, rather than an estimate which approximates it. This approach assumes the reconstruction of an estimate from the encoded set information only at the decoder side. We explain the motivations of this approach for high signal compression and encoding simplification, and the implication of more complex decoding. We then present the tools to support the approach. We finally show a demonstration of this approach in a particular application of image coding.

32 citations

Journal Article•DOI•
TL;DR: A set of tools is developed to design a new class of encoders for image compression, based on a set decomposition and recombination of the image features, and used to modify the encoding process of block discrete cosine transform (DCT) coding.
Abstract: We show that the complete information that is available after an image has been encoded is not just an approximate quantized image version, but a whole set of consistent images that contains the original image by necessity. From this starting point, we develop a set of tools to design a new class of encoders for image compression, based on a set decomposition and recombination of the image features. As an initial validation, we show the results of an experiment where these tools are used to modify the encoding process of block discrete cosine transform (DCT) coding in order to yield less blocking artifacts.
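One compact way to write the "whole set of consistent images" referred to here (the notation is ours):

```latex
S \;=\; \bigl\{\, x \;:\; \mathcal{Q}\bigl(\mathcal{T}(x)\bigr) = c \,\bigr\},
```

where T is the block DCT, Q the quantiser, and c the transmitted codes; the original image belongs to S by construction, so any decoder that outputs an element of S (for example by projecting onto it) is consistent with the bitstream.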

8 citations