Showing papers by "Hannes Hartenstein published in 1998"


Journal ArticleDOI
TL;DR: The optimal lower bound for the entropy of the differences of the reconstruction in near-lossless signal coding is given and, in addition, tighter bounds for some special cases are presented.
Abstract: In this letter we investigate near-lossless signal coding for which the reconstruction signal is required to have an absolute error in each component bounded by a tolerance τ. For differential coding no practical algorithm is known that computes an optimal reconstruction, i.e., one for which the sequence of consecutive differences has minimal entropy. In this letter we give the optimal lower bound for the entropy of the differences of the reconstruction and, in addition, present tighter bounds for some special cases.
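To make the setting concrete, here is a minimal sketch (not from the letter itself) of the standard near-lossless differential scheme for integer-valued signals: each residual is quantized with a uniform step of 2τ+1, which keeps every reconstructed sample within ±τ of the original, and the first-order entropy of the resulting difference sequence is the quantity the bounds above refer to. The function names and the toy signal are illustrative assumptions.

```python
import numpy as np

def near_lossless_diff_code(signal, tau):
    """Greedy near-lossless differential coder for integer-valued signals:
    each prediction residual is quantized with a uniform step of 2*tau + 1,
    which guarantees a reconstruction error of at most tau per sample."""
    step = 2 * tau + 1
    recon = np.empty_like(signal)
    symbols = np.empty_like(signal)
    prev = 0
    for i, x in enumerate(signal):
        diff = x - prev                 # residual w.r.t. previous reconstruction
        q = int(np.round(diff / step))  # quantized difference symbol
        prev = prev + q * step          # reconstruction stays within +/- tau of x
        symbols[i] = q
        recon[i] = prev
    return symbols, recon

def empirical_entropy(symbols):
    """First-order entropy (bits/symbol) of the difference sequence."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.cumsum(rng.integers(-3, 4, size=1000))  # toy 1-D integer signal
    tau = 2
    syms, rec = near_lossless_diff_code(x, tau)
    assert np.max(np.abs(rec - x)) <= tau
    print("entropy of differences:", empirical_entropy(syms), "bits/symbol")
```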

10 citations


Proceedings ArticleDOI
01 Sep 1998
TL;DR: This work investigates image partitionings that are derived by a merge process starting with a uniform partition and discusses merging criteria that depend on variance or collage error and on the Euclidean length of the partition boundaries.
Abstract: For application in fractal coding we investigate image partitionings that are derived by a merge process starting with a uniform partition. At each merging step one would like to opt for the rate-distortion optimal choice. Unfortunately, this is computationally infeasible when efficient coders for the partition information are employed. Therefore, one has to use a model for estimating the coding costs. We discuss merging criteria that depend on variance or collage error and on the Euclidean length of the partition boundaries. Preliminary tests indicate that improved coding cost estimators may be of crucial importance for the success of our approach.
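As an illustration of the merge process described above, the following toy sketch merges an initially uniform block partition greedily, trading the growth in squared error (standing in for variance or collage error) against the boundary length removed by the merge. The block size, the weight lam, and the stopping rule are placeholders, not the authors' coder.

```python
import numpy as np
from itertools import combinations

def sse(pixels):
    """Sum of squared deviations from the mean: a simple distortion proxy."""
    return float(np.sum((pixels - pixels.mean()) ** 2)) if pixels.size else 0.0

def greedy_merge(image, block=8, lam=50.0, target_regions=16):
    """Greedy merging of an initially uniform block partition.
    Merge cost = increase in SSE - lam * shared boundary length,
    i.e. distortion growth traded against the rate saved by removing
    a stretch of partition boundary."""
    h, w = image.shape
    bh, bw = h // block, w // block
    # each region starts as a single block, stored as a set of block coordinates
    regions = {i: {(i // bw, i % bw)} for i in range(bh * bw)}

    def pixels(region):
        return np.concatenate([image[r*block:(r+1)*block, c*block:(c+1)*block].ravel()
                               for (r, c) in region])

    def shared_boundary(a, b):
        # boundary length in pixels between two regions built from whole blocks
        return sum(block for (r, c) in regions[a] for (r2, c2) in regions[b]
                   if abs(r - r2) + abs(c - c2) == 1)

    while len(regions) > target_regions:
        best = None
        for a, b in combinations(regions, 2):
            edge = shared_boundary(a, b)
            if edge == 0:
                continue  # only neighbouring regions may merge
            cost = (sse(pixels(regions[a] | regions[b]))
                    - sse(pixels(regions[a])) - sse(pixels(regions[b]))
                    - lam * edge)
            if best is None or cost < best[0]:
                best = (cost, a, b)
        if best is None:
            break
        _, a, b = best
        regions[a] |= regions.pop(b)
    return regions

if __name__ == "__main__":
    img = np.random.default_rng(0).integers(0, 256, size=(64, 64)).astype(float)
    parts = greedy_merge(img, block=8, lam=50.0, target_regions=16)
    print(len(parts), "regions")
```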

2 citations


Proceedings ArticleDOI
08 Jan 1998
TL;DR: This work has extended the trellis quantization (TQ) scheme by performing two-row joint optimizations instead of optimizing row by row, and indicates that preference should be given to sophisticated prediction/modelling.
Abstract: Summary form only given. We discuss several variations on the original algorithm proposed by Ke and Marcellin (see Proc. IEEE ICIP, Washington, DC, 1995). We have extended the trellis quantization (TQ) scheme by performing two-row joint optimizations instead of optimizing row by row. Unfortunately, while considerably increasing the computation time, this has led only to marginal coding gains. A progressive probability update scheme has led to much better convergence and to a 0.3 bpp gain over the original fixed scheme. When using lossy plus near-lossless coding, the lossy version can be used for better context modelling without increasing the computational complexity of the near-lossless residual coding. Improvements of 0.1-0.2 bpp were observed. Since it is computationally infeasible to include more pixels in the TQ process, one has the choice of either using better prediction/context modelling or doing TQ. Our tests indicate that preference should be given to sophisticated prediction/modelling.
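For orientation, the sketch below shows a one-row, Viterbi-style trellis over the admissible ±τ reconstruction offsets with a crude |difference| rate proxy. It is a toy baseline for the row-by-row optimization mentioned above, not the authors' two-row extension or their probability update scheme; the rate proxy and function name are assumptions.

```python
import numpy as np

def trellis_quantize_row(row, tau, rate=lambda d: abs(d)):
    """Viterbi-style trellis over the admissible reconstructions of one row.
    State at pixel i = offset o in [-tau, tau], i.e. reconstruction row[i] + o.
    Path cost accumulates rate(difference between consecutive reconstructions),
    a crude stand-in for the entropy coder's code length."""
    offsets = np.arange(-tau, tau + 1)
    n, k = len(row), len(offsets)
    cost = np.full((n, k), np.inf)
    back = np.zeros((n, k), dtype=int)
    cost[0] = [rate(row[0] + o) for o in offsets]  # first sample coded directly
    for i in range(1, n):
        for j, o in enumerate(offsets):
            cur = row[i] + o
            for j2, o2 in enumerate(offsets):
                c = cost[i-1, j2] + rate(cur - (row[i-1] + o2))
                if c < cost[i, j]:
                    cost[i, j], back[i, j] = c, j2
    # backtrack the cheapest path to recover the reconstruction
    j = int(np.argmin(cost[-1]))
    recon = np.empty(n, dtype=int)
    for i in range(n - 1, -1, -1):
        recon[i] = row[i] + offsets[j]
        j = back[i, j]
    return recon

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    row = rng.integers(0, 256, size=64)
    rec = trellis_quantize_row(row, tau=2)
    assert np.max(np.abs(rec - row)) <= 2
```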

1 citation