
Showing papers by "Hannes Hartenstein published in 1997"


Proceedings Article
31 Jul 1997
TL;DR: This paper gives an elementary introduction to fractal image compression, taking its similarity to a particular variant of vector quantization as the most direct approach, and surveys advanced concepts such as fast decoding, hybrid methods, and adaptive partitionings.
Abstract: Fractal image compression is a new technique for encoding images compactly. It builds on local self-similarities within images. Image blocks are seen as rescaled and intensity transformed approximate copies of blocks found elsewhere in the image. This yields a self-referential description of image data, which --- when decoded --- shows a typical fractal structure. This paper provides an elementary introduction to this compression technique. We have chosen the similarity to a particular variant of vector quantization as the most direct approach to fractal image compression. We discuss the hierarchical quadtree scheme and vital complexity reduction methods. Furthermore, we survey some of the advanced concepts such as fast decoding, hybrid methods, and adaptive partitionings. We conclude with a list of relevant Web resources including complete public domain C implementations of the method and a comprehensive list of up-to-date references.
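To make the encoding loop concrete, here is a minimal Python/numpy sketch of block-based fractal compression in the spirit of the abstract: every range block is matched against downscaled domain blocks under an affine intensity transform. The function name, block sizes, and the exhaustive search are illustrative assumptions, not the authors' implementation, which additionally uses quadtrees, complexity reduction methods, and a contractivity constraint on the scaling factor s.

```python
# A minimal sketch of block-based fractal encoding, assuming numpy and an
# 8-bit greyscale image whose sides are multiples of 2 * range_size.
import numpy as np

def fractal_encode(image, range_size=4):
    h, w = image.shape
    d = 2 * range_size
    # Codebook: non-overlapping domain blocks, downscaled by 2x2 pixel
    # averaging so they match the range-block size.
    domains = []
    for y in range(0, h - d + 1, d):
        for x in range(0, w - d + 1, d):
            blk = image[y:y+d, x:x+d].astype(float)
            small = blk.reshape(range_size, 2, range_size, 2).mean(axis=(1, 3))
            domains.append(((y, x), small.ravel()))
    n = range_size * range_size
    ones = np.ones(n)
    code = []
    for y in range(0, h, range_size):
        for x in range(0, w, range_size):
            r = image[y:y+range_size, x:x+range_size].astype(float).ravel()
            best = None
            for pos, dv in domains:
                # Least-squares fit of the intensity transform r ~ s*d + o.
                A = np.stack([dv, ones], axis=1)
                (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
                err = float(np.sum((s * dv + o - r) ** 2))  # collage error
                if best is None or err < best[0]:
                    best = (err, pos, s, o)
            # One code entry per range: range position, matched domain
            # position, and the luminance parameters (s, o).
            code.append(((y, x), best[1], best[2], best[3]))
    return code
```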

66 citations


Proceedings ArticleDOI
25 Mar 1997
TL;DR: It is demonstrated by a reduction from MAXCUT that the problem of determining the optimal fractal code is NP-hard, giving the first analysis of the intrinsic complexity of fractal coding, and it is shown that standard fractal coding is not an approximating algorithm for this problem.
Abstract: In fractal compression a signal is encoded by the parameters of a contractive transformation whose fixed point (attractor) is an approximation of the original data. Thus fractal coding can be viewed as the optimization problem of finding in a set of admissible contractive transformations the transformation whose attractor is closest to a given signal. The standard fractal coding scheme based on the collage theorem produces only a suboptimal solution. We demonstrate by a reduction from MAXCUT that the problem of determining the optimal fractal code is NP-hard. To our knowledge, this is the first analysis of the intrinsic complexity of fractal coding. Additionally, we show that standard fractal coding is not an approximating algorithm for this problem.
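The gap between collage optimization and attractor optimization is easy to observe numerically. The following Python sketch is a toy 1-D fractal coder, not the paper's construction: it fits a collage-optimal code and then decodes it by iterating the transformation, so the printed collage error ||f - T(f)|| and attractor error ||f - f*|| can be compared.

```python
# Toy 1-D illustration: the collage error that standard fractal coding
# minimizes differs from the error of the decoded attractor.
import numpy as np

def fit(f, rsize=2):
    """Collage-optimal code: each range block is matched against all
    averaged domain blocks with a least-squares (s, o) fit."""
    n, dsize = len(f), 2 * rsize
    code = []
    for r0 in range(0, n, rsize):
        r = f[r0:r0+rsize]
        best = None
        for d0 in range(0, n - dsize + 1):
            d = f[d0:d0+dsize].reshape(rsize, 2).mean(axis=1)
            A = np.stack([d, np.ones(rsize)], axis=1)
            (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
            s = float(np.clip(s, -0.9, 0.9))     # enforce contractivity
            err = np.sum((s * d + o - r) ** 2)
            if best is None or err < best[0]:
                best = (err, d0, s, o)
        code.append((r0, *best[1:]))
    return code

def apply_T(g, code, rsize=2):
    out = np.empty_like(g)
    for r0, d0, s, o in code:
        d = g[d0:d0+2*rsize].reshape(rsize, 2).mean(axis=1)
        out[r0:r0+rsize] = s * d + o
    return out

f = np.array([3., 1., 4., 1., 5., 9., 2., 6.])
code = fit(f)
g = np.zeros_like(f)
for _ in range(50):                              # decode: iterate to the attractor
    g = apply_T(g, code)
print("collage error  :", np.linalg.norm(f - apply_T(f, code)))
print("attractor error:", np.linalg.norm(f - g))
```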

46 citations


Proceedings ArticleDOI
26 Oct 1997
TL;DR: This paper shows how conventional acceleration techniques and a deterministic version of the evolution reduce the time-complexity of the method without degrading the encoding quality and reports on techniques to improve the rate-distortion performance.
Abstract: In fractal image compression a partitioning of the image into ranges is required. Saupe and Ruhl (1996) proposed to find good partitionings by means of a split-and-merge process guided by evolutionary computing. In this approach ranges are connected sets of small square image blocks. Far better rate-distortion curves can be obtained than with traditional quadtree partitionings, however at the expense of increased computing time. In this paper we show how conventional acceleration techniques and a deterministic version of the evolution reduce the time-complexity of the method without degrading the encoding quality. Furthermore, we report on techniques to improve the rate-distortion performance and evaluate the results visually.
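As an illustration of the idea of ranges as connected sets of square blocks, the following Python sketch performs a purely deterministic greedy merge pass over an initial partition of atomic blocks. It is only a schematic stand-in: the cost used here is the squared error of a flat (mean) approximation per range, whereas the actual method scores ranges by their fractal collage error and drives the search with evolutionary computing; the block size and merge threshold are made-up parameters.

```python
# Deterministic greedy merging of neighbouring ranges, each range being a
# connected set of atomic block-size squares (a stand-in for the real cost).
import numpy as np

def merge_partition(image, block=4, max_increase=200.0):
    h, w = image.shape
    bh, bw = h // block, w // block          # sides assumed multiples of block
    label = np.arange(bh * bw).reshape(bh, bw)  # one range per atomic block
    def cost(mask):
        # Squared error of a flat approximation over the range's pixels.
        big = np.repeat(np.repeat(mask, block, axis=0), block, axis=1)
        px = image[big].astype(float)
        return float(((px - px.mean()) ** 2).sum())
    changed = True
    while changed:
        changed = False
        for a in np.unique(label):
            mask_a = label == a
            # 4-neighbourhood dilation finds ranges adjacent to range a.
            grown = np.zeros_like(mask_a)
            grown[:-1] |= mask_a[1:];  grown[1:] |= mask_a[:-1]
            grown[:, :-1] |= mask_a[:, 1:];  grown[:, 1:] |= mask_a[:, :-1]
            for b in np.unique(label[grown & ~mask_a]):
                mask_b = label == b
                inc = cost(mask_a | mask_b) - cost(mask_a) - cost(mask_b)
                if inc < max_increase:   # merge while the error increase is small
                    label[mask_b] = a
                    changed = True
                    break
            if changed:
                break
    return label

# Usage: labels = merge_partition(img, block=4)  # img sides multiples of 4
```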

38 citations


Proceedings Article
01 Jan 1997
TL;DR: A theoretical framework for ℓ∞-distortion limited compression that covers several recently proposed methods is presented and a comparison of coding results for the Lenna test image, a coronary angiogram, and a Landsat image is given.
Abstract: In ℓ∞-distortion limited compression each single pixel value is changed by at most ε grey values. In this paper we present a theoretical framework for ℓ∞-distortion limited compression that covers several recently proposed methods. The basics of each of these methods are described. We give a comparison of coding results for the Lenna test image, a coronary angiogram, and a Landsat image. Results are reported for various tolerances ε. Standard DPCM is used as a reference. While this paper gives an overview of various algorithms, the main purpose is to indicate what level of compression can be expected when limiting the error in the ℓ∞-distortion sense.

1. AN ℓ∞-DISTORTION LIMITED COMPRESSION FRAMEWORK

In many applications, for example medical imagery, SAR imagery, or numerical weather simulations, the large amount of data to be stored or transmitted asks for data compression. Since lossless coding usually gives a compression ratio of at most 4:1, lossy coding methods have to be employed when higher compression ratios are needed. Most lossy compression schemes operate by minimizing some average error measure such as the root mean square error. However, in error critical applications such as medical imagery or target recognition, such average error measures are inappropriate. Instead, there is usually a need for a guarantee that a single pixel has not been changed by more than a certain tolerance (which may depend on the pixel location). Thus, the error in each pixel has to be controlled.

In this paper we consider an ℓ∞-distortion limited compression scheme with global tolerance ε. For such an encoding method the code for a one-dimensional signal f = (f_1, ..., f_n) represents a reconstruction signal g ∈ N_ε(f), where

N_ε(f) = { g : ‖f − g‖_∞ ≤ ε },  with  ‖f − g‖_∞ = max_{1 ≤ i ≤ n} |f_i − g_i|.

For ε = 0 this leads to lossless compression. If ε is small, the term 'near-lossless coding' appears to be justified.

[Figure 1: Each left-to-right path through the trellis is an element of N_1(f) with f = (3, 3, 2, 1, 3, 5, 7, 6, 5, 4).]

N_ε(f) can be seen as the set of all left-to-right paths in a trellis as depicted in Figure 1. Which of the (2ε+1)^n elements of N_ε(f) can be coded most efficiently? All coding methods described below use a lossless coding strategy c and try to determine, at least approximately or heuristically, the element of N_ε(f) that can be coded most efficiently using c. Mathematically, let C be the set of lossless coding methods. For example, c ∈ C could be a 0-order entropy coder. Then the coding problem for this particular c is: find g* ∈ N_ε(f) such that

l(c(g*)) = min_{g ∈ N_ε(f)} l(c(g)),   (*)

where l(·) gives the length of the code or some estimate thereof. In the following sections we give short descriptions of several ℓ∞-based compression methods. Most of these methods were implemented and tested with the images given in Figure 2.

2. QUANTIZATION VS PRECONDITIONING

In the problem formulation of the previous section the signal to be coded can be modified in each component independently of the other components. Thus, for a signal f it is possible that f_i = f_j but g_i ≠ g_j. In other words, g is in general not simply the result of applying a quantizer to the values of f. We refer to g as a preconditioned version of f, in contrast to a quantized version. The emphasis in this paper is on preconditioning. Nevertheless, the problem of finding the quantization function such that the quantized version of f has minimal 0-order entropy can be solved in polynomial time using dynamic programming [1]. It is also shown that for a tolerance ε the entropy savings are at most log₂(2ε+1) bits per pixel.
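For a toy signal the optimization problem (*) can be solved by brute force, which makes the size of the search space N_ε(f) tangible. The Python sketch below is an illustration, not one of the surveyed methods: it enumerates all (2ε+1)^n reconstructions and reports the one whose value distribution has minimal 0-order entropy; the example signal is made up.

```python
# Brute-force solution of (*) with a 0-order entropy coder: only feasible
# for toy sizes, since |N_eps(f)| = (2*eps+1)**n grows exponentially.
import itertools
import numpy as np

def entropy(values):
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def best_reconstruction(f, eps):
    best = None
    for delta in itertools.product(range(-eps, eps + 1), repeat=len(f)):
        g = [fi + di for fi, di in zip(f, delta)]   # an element of N_eps(f)
        h = entropy(np.array(g))
        if best is None or h < best[0]:
            best = (h, g)
    return best

f = [7, 8, 8, 9, 12, 13, 12, 11]                    # hypothetical signal
h, g = best_reconstruction(f, eps=1)
print("original entropy     :", entropy(np.array(f)))
print("min entropy in N_1(f):", h, "attained by", g)
```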
3. ENTROPY-CODED DPCM BASED METHODS

The entropy coding of the prediction residuals of a DPCM scheme is a standard method for lossless compression. It can easily be modified to serve as an ℓ∞-distortion based compression method.

DPCM1. The signal is uniformly quantized with quantization bin size 2ε+1. Thus, a quantized version of the original signal is computed. Then the residuals of a linear predictor are entropy coded. The disadvantage of this method is that for larger ε there are only a few different grey levels, leading to 'plateau effects'.

DPCM2. No a priori grey value reduction is performed, but the prediction error of the DPCM scheme is uniformly quantized to match the desired tolerance ε. When the predictor coefficients are not integer values, this method does not coincide with the method DPCM1 and does not show the plateau effects. Results for several medical images are reported in [2].

In the above mentioned methods there is actually no mechanism to minimize the entropy of the error sequence. When we use a lossless predictive coder followed by an entropy coder, the optimization problem (*) asks for the path in the trellis whose corresponding residual sequence has minimum entropy. We conjecture that this optimization problem is NP-hard. Note that the complexity depends on the signal length n and the tolerance ε. We applied genetic algorithms (GA) [3, 4] to solve this optimization problem for a signal f and a tolerance ε.

GA. In our setting a chromosome is a word c of length n over the alphabet {−ε, ..., 0, ..., +ε} and represents the signal f + c. The genetic operations are 2-point crossover and mutation. We use roulette wheel parent selection. The evaluation of a chromosome is given by the entropy of the distribution of prediction residuals of f + c. For the fitness function we use exponential ranking. Large tests for the determination of suitable parameters were performed. The results obtained with the GA approach are rather disappointing. For example, as a signal a line of the image Lenna was taken. The entropy of that signal is 7.0; after quantization with the given tolerance and subsequent prediction, the sequence can be coded with an entropy of 3.1. The solution found with the GA only gave an entropy of 3.9. Thus, the GA is not even able to beat the method DPCM1.

The minimum-entropy constrained-error DPCM (MECE) of [5] is another method that tries to minimize the entropy of the prediction residual sequence. It uses an iterative optimization method that arrives at a local optimum.

MECE. Assume that an ideal entropy coder is given for a fixed residual distribution. To find the optimal element of N_ε(f) for this coder one has to solve a shortest path problem. This can easily be done via dynamic programming. Now, using an entropy coder that is optimal for the actual residual distribution will give a decrease in entropy. These two steps are performed iteratively until a stopping criterion is met. For images a two-dimensional 3-tap predictor is used and the images are coded row by row. The results can be further improved by using a 1-order entropy coder with a certain number of contexts.

In the above mentioned methods the predictor and the contexts are fixed. Of course, it would be advantageous to include the choice of predictor coefficients and contexts in the optimization problem; clearly, this makes the problem even more complicated.
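The shortest-path step inside MECE can be written down compactly. The following Python sketch is a hedged reconstruction under simplifying assumptions: a first-order predictor g[i-1] instead of the paper's two-dimensional 3-tap predictor, and a toy residual cost |r| standing in for the code lengths of the ideal entropy coder; all names are illustrative.

```python
# DP shortest path over the trellis of admissible pixel values: find the
# cheapest reconstruction g with |g[i] - f[i]| <= eps under a fixed
# per-residual cost function.
def trellis_dp(f, eps, cost):
    states = [list(range(v - eps, v + eps + 1)) for v in f]
    n = len(f)
    # Toy assumption: the first sample is charged cost(g[0]) directly.
    paths = [{s: (cost(s), None) for s in states[0]}]
    for i in range(1, n):
        nxt = {}
        for s in states[i]:
            # Cheapest predecessor, paying for the residual s - prev.
            c, prev = min((paths[-1][p][0] + cost(s - p), p)
                          for p in states[i - 1])
            nxt[s] = (c, prev)
        paths.append(nxt)
    # Backtrack the optimal path.
    s = min(paths[-1], key=lambda k: paths[-1][k][0])
    g = [s]
    for i in range(n - 1, 0, -1):
        s = paths[i][s][1]
        g.append(s)
    return g[::-1]

f = [10, 11, 13, 13, 12, 15, 16, 16]            # hypothetical signal
print(trellis_dp(f, eps=1, cost=lambda r: abs(r)))  # |r| ~ code length
```

Each of the n DP stages examines (2ε+1)² state transitions, so one shortest-path pass is cheap; MECE alternates it with re-estimating the residual distribution.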
A sophisticated method that uses adaptive context modeling to correct prediction biases is the ℓ∞-constrained CALIC [6]. The converse problem of determining a predictor such that the prediction residuals have minimum entropy was investigated for lossless coding in [7].

4. PIECEWISE LINEAR CODING

Piecewise linear coding (PLC) is a generalization of run length encoding. It is also called fan-based coding; for an extensive overview see [8]. In piecewise linear coding a signal is split into segments, each of which can be described by a linear function. Each segment is then coded by the length of the segment and the slope parameter. The constant additive part of the function is implicitly given by the previous segment; only for the first segment does the initial signal value have to be coded. For example, the signal in Figure 1 is represented as 3(1,0)(2,-1)(3,2)(3,-1). In the case that l(·) counts the number of segments, the optimization (*) can be solved optimally in polynomial time via dynamic programming [9]. In [10, 11] a suboptimal greedy method that works in linear time is proposed for the same optimization problem. Essentially, it works as follows. The image is transformed into a one-dimensional signal, e.g., by a Hilbert-Peano scan. Then the linear segments are successively determined: starting at the endpoint of the last determined segment, the new segment is chosen to be the one of greatest possible length. Finally, a 0-order entropy coder is applied to the list of segment lengths and segment slopes. Better results can be obtained when the length of the 0-order entropy code is minimized in place of the number of segments.

[Figure 2: The 8-bit test images Lenna, Angio, Landsat.]
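A toy Python version of the greedy fan-based segmentation of [10, 11] described above: from the endpoint of the previous segment, a line of integer slope is extended as far as it stays within ±ε of the signal. The slope search range and tie-breaking are illustrative assumptions; run on the Figure 1 signal with ε = 0 it reproduces the code 3(1,0)(2,-1)(3,2)(3,-1).

```python
# Greedy piecewise linear coding: emit (length, slope) pairs; only the
# first signal value is coded explicitly.
def greedy_plc(f, eps, slopes=range(-8, 9)):
    segments = []
    start_value = f[0]
    i, base, n = 0, f[0], len(f)
    while i < n - 1:
        best_len, best_slope = 0, 0
        for s in slopes:
            length = 0
            # Extend the line base + s*k while it stays within +/- eps.
            while (i + length + 1 < n and
                   abs(base + s * (length + 1) - f[i + length + 1]) <= eps):
                length += 1
            if length > best_len:
                best_len, best_slope = length, s
        if best_len == 0:
            # No slope fits: emit a length-1 segment with the nearest slope
            # (may violate the tolerance for jumps outside the slope range).
            best_len = 1
            best_slope = min(slopes, key=lambda s: abs(base + s - f[i + 1]))
        i += best_len
        base += best_slope * best_len
        segments.append((best_len, best_slope))
    return start_value, segments

print(greedy_plc([3, 3, 2, 1, 3, 5, 7, 6, 5, 4], eps=0))
# -> (3, [(1, 0), (2, -1), (3, 2), (3, -1)])
```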

10 citations


Proceedings ArticleDOI
21 Apr 1997
TL;DR: The analysis of the quadratic collage error functions guides an optimal codebook design algorithm leading to a non-standard VQ scheme and also provides guidance for optimal scalar quantization.
Abstract: This paper is concerned with the efficient storage of the luminance parameters in a fractal code by means of vector quantization (VQ). For a given image block (range) the collage error as a function of the luminance parameters is a quadratic function with ellipsoid contour lines. We demonstrate how these functions should be used in an optimal codebook design algorithm leading to a non-standard VQ-scheme. In addition we present results and an evaluation of this approach. The analysis of the quadratic error functions also provides guidance for optimal scalar quantization.
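The geometry described in the abstract is easy to verify directly. In the following Python sketch, with made-up block contents, the collage error E(s, o) = ‖s·d + o·1 − r‖² has a constant positive semidefinite Hessian, so its contour lines are ellipses; the least-squares minimizer gives the optimal luminance parameters for one range/domain pair.

```python
# The collage error as a quadratic function of the luminance parameters
# (s, o), for one flattened range block r and downscaled domain block d.
import numpy as np

d = np.array([10., 14., 7., 9.])    # hypothetical domain block (flattened)
r = np.array([12., 16., 9., 10.])   # hypothetical range block

n = len(d)
A = np.stack([d, np.ones(n)], axis=1)
H = 2 * A.T @ A                     # constant Hessian of E(s, o), PSD
s_opt, o_opt = np.linalg.solve(A.T @ A, A.T @ r)  # least-squares minimizer

def E(s, o):
    return float(np.sum((s * d + o - r) ** 2))

print("Hessian:\n", H)
print("optimal (s, o):", s_opt, o_opt, " collage error:", E(s_opt, o_opt))
# The eigenvalues/eigenvectors of H give the axes of the ellipsoidal
# contour lines; this is the geometry a codebook design for (s, o) exploits.
print("ellipse axes (eigvals of H):", np.linalg.eigvalsh(H))
```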

7 citations


01 Jan 1997
TL;DR: It is demonstrated by a reduction from MAXCUT that the problem of determining the optimal fractal code is NP-hard, giving the first analysis of the intrinsic complexity of fractal coding, and it is shown that standard fractal coding is not an approximating algorithm for this problem.
Abstract: In fractal compression a signal is encoded by the parameters of a contractive transformation whose fixed point (attractor) is an approximation of the original data. Thus fractal coding can be viewed as the optimization problem of finding in a set of admissible contractive transformations the transformation whose attractor is closest to a given signal. The standard fractal coding scheme based on the collage theorem produces only a suboptimal solution. We demonstrate by a reduction from MAXCUT that the problem of determining the optimal fractal code is NP-hard. To our knowledge, this is the first analysis of the intrinsic complexity of fractal coding. Additionally, we show that standard fractal coding is not an approximating algorithm for this problem.