Author

A. Frajka

Bio: A. Frajka is an academic researcher from University of California, Los Angeles. The author has contributed to research in topics: Image compression & Residual. The author has an h-index of 1 and has co-authored 1 publication receiving 54 citations.

Papers
Proceedings ArticleDOI
10 Dec 2002
TL;DR: This paper proposes a new method for the coding of residual images that takes into account the properties of residual images, and demonstrates that good results can be achieved with a computationally simple method.
Abstract: The main focus of research in stereo image coding has been disparity estimation (DE), a technique used to reduce coding rate by taking advantage of the redundancy in a stereo image pair. Significantly less effort has been put into the coding of the residual image. In this paper we propose a new method for the coding of residual images that takes into account the properties of residual images. Particular attention is paid to the effects of occlusion and the correlation properties of residual images that result from block-based disparity estimation. The embedded, progressive nature of our coder allows one to stop decoding at any time. We demonstrate that it is possible to achieve good results with a computationally simple method.
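
The residual image referred to here is the difference between one view and its disparity-compensated prediction from the other view. As a rough illustration of the block-based disparity estimation that produces such residuals, a minimal sketch follows; the block size, search range, and function names are illustrative assumptions, not the authors' code.

    # Minimal sketch: per-block horizontal disparity search (SAD) and the
    # resulting residual image. Grayscale views, illustrative parameters.
    import numpy as np

    def estimate_disparity_and_residual(left, right, block=8, search=32):
        h, w = left.shape
        pred = np.zeros((h, w), dtype=np.float64)
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                target = right[y:y+block, x:x+block].astype(np.float64)
                best_sad, best_dx = np.inf, 0
                for dx in range(-search, search + 1):
                    if 0 <= x + dx and x + dx + block <= w:
                        cand = left[y:y+block, x+dx:x+dx+block].astype(np.float64)
                        sad = np.abs(target - cand).sum()
                        if sad < best_sad:
                            best_sad, best_dx = sad, dx
                # predict the right block from the best-matching left block
                pred[y:y+block, x:x+block] = left[y:y+block, x+best_dx:x+best_dx+block]
        residual = right.astype(np.float64) - pred  # what the residual coder compresses
        return pred, residual

The discontinuities at block boundaries in such a prediction are one source of the occlusion and correlation structure the abstract says the coder is designed around.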

55 citations


Cited by
Proceedings ArticleDOI
TL;DR: The design and implementation of a new stereoscopic image quality metric is described; the results suggest it is a better predictor of human image quality preference than PSNR and could be used to predict a threshold compression level for stereoscopic image pairs.
Abstract: We are interested in metrics for automatically predicting the compression settings for stereoscopic images so that we can minimize file size, but still maintain an acceptable level of image quality. Initially we investigate how Peak Signal to Noise Ratio (PSNR) measures the quality of varyingly coded stereoscopic image pairs. Our results suggest that symmetric, as opposed to asymmetric stereo image compression, will produce significantly better results. However, PSNR measures of image quality are widely criticized for correlating poorly with perceived visual quality. We therefore consider computational models of the Human Visual System (HVS) and describe the design and implementation of a new stereoscopic image quality metric. This metric point-matches regions of high spatial frequency between the left and right views of the stereo pair and accounts for HVS sensitivity to contrast and luminance changes at regions of high spatial frequency, using Michelson's Formula and Peli's Band Limited Contrast Algorithm. To establish a baseline for comparing our new metric with PSNR we ran a trial measuring stereoscopic image encoding quality with human subjects, using the Double Stimulus Continuous Quality Scale (DSCQS) from the ITU-R BT.500-11 recommendation. The results suggest that our new metric is a better predictor of human image quality preference than PSNR and could be used to predict a threshold compression level for stereoscopic image pairs.
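
Two building blocks named in the abstract can be stated concretely. Below is a minimal sketch of the PSNR baseline and of Michelson's contrast formula applied to a local luminance patch; the function names and windowing are assumptions, not the published metric itself.

    import numpy as np

    def psnr(ref, test, peak=255.0):
        # baseline quality measure the study compares against
        mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
        return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    def michelson_contrast(patch):
        # C = (Lmax - Lmin) / (Lmax + Lmin) for a local luminance patch
        lmax, lmin = float(patch.max()), float(patch.min())
        return 0.0 if lmax + lmin == 0 else (lmax - lmin) / (lmax + lmin)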

167 citations

01 Jan 1994
TL;DR: This work shows how arithmetic coding works and describes an efficient implementation that uses table lookup as a fast alternative to arithmetic operations; the reduced-precision arithmetic has a provably negligible effect on the amount of compression achieved.
Abstract: Arithmetic coding provides an effective mechanism for removing redundancy in the encoding of data. We show how arithmetic coding works and describe an efficient implementation that uses table lookup as a fast alternative to arithmetic operations. The reduced-precision arithmetic has a provably negligible effect on the amount of compression achieved. We can speed up the implementation further by use of parallel processing.
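
To make the interval-narrowing idea behind arithmetic coding concrete, here is a toy floating-point encoder. A practical coder such as the one described works with fixed-precision integers and replaces the multiplications below with table lookups; the model representation here is an illustrative assumption.

    def arith_encode(symbols, model):
        # model maps symbol -> (cum_low, cum_high) cumulative-probability bounds
        low, high = 0.0, 1.0
        for s in symbols:
            c_low, c_high = model[s]
            span = high - low
            high = low + span * c_high  # narrow the interval to the slice
            low = low + span * c_low    # assigned to this symbol
        return (low + high) / 2         # any number in [low, high) identifies the message

    # Example with P(a) = 0.8, P(b) = 0.2: encoding "aab" narrows [0, 1)
    # down to [0.512, 0.64); more probable messages keep wider intervals
    # and therefore need fewer bits to pin down.
    model = {"a": (0.0, 0.8), "b": (0.8, 1.0)}
    code = arith_encode("aab", model)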

71 citations

Proceedings ArticleDOI
01 Oct 2019
TL;DR: This approach leverages state-of-the-art single-image compression autoencoders and enhances the compression with novel parametric skip functions to feed fully differentiable, disparity-warped features at all levels to the encoder/decoder of the second image.
Abstract: In this paper we tackle the problem of stereo image compression, and leverage the fact that the two images have overlapping fields of view to further compress the representations. Our approach leverages state-of-the-art single-image compression autoencoders and enhances the compression with novel parametric skip functions to feed fully differentiable, disparity-warped features at all levels to the encoder/decoder of the second image. Moreover, we model the probabilistic dependence between the image codes using a conditional entropy model. Our experiments show an impressive 30 - 50% reduction in the second image bitrate at low bitrates compared to deep single-image compression, and a 10 - 20% reduction at higher bitrates.
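
The central operation is warping feature maps from the first image's autoencoder along a disparity map so they can be fed through skip connections to the second image's encoder/decoder. A minimal sketch of such a differentiable warp follows, written in PyTorch as an assumption; the paper's exact formulation and framework are not given here.

    import torch
    import torch.nn.functional as F

    def disparity_warp(feat, disparity):
        # feat: (N, C, H, W) feature map; disparity: (N, 1, H, W) horizontal shifts
        n, _, h, w = feat.shape
        ys, xs = torch.meshgrid(
            torch.arange(h, dtype=feat.dtype),
            torch.arange(w, dtype=feat.dtype),
            indexing="ij",
        )
        xs = xs.unsqueeze(0) + disparity[:, 0]  # shift sampling positions by disparity
        grid = torch.stack(
            (2 * xs / (w - 1) - 1, (2 * ys / (h - 1) - 1).expand(n, -1, -1)),
            dim=-1,
        )  # normalize coordinates to [-1, 1] as grid_sample expects
        # bilinear sampling keeps the warp fully differentiable
        return F.grid_sample(feat, grid, align_corners=True)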

29 citations

Proceedings ArticleDOI
09 Jul 2009
TL;DR: This paper continues research on storage and bandwidth reduction for stereo images using reversible watermarking: by embedding into one frame of the stereo pair the information needed to recover the other frame, the transmission/storage requirements are halved.
Abstract: This paper continues our research on storage and bandwidth reduction for stereo images by using reversible watermarking. By embedding into one frame of the stereo pair the information needed to recover the other frame, the transmission/storage requirements are halved. Furthermore, the content of the image remains available and one of the two images is exactly recovered. The quality of the other frame depends on two features: the embedding bit-rate of the watermarking and the size of the information needed to be embedded. This paper focuses on the second feature. Instead of a simple residual between the two frames, a disparity compensation scheme is used. The advantage is twofold. First, the quality of the recovered frame is improved. Second, at detection, the disparity frame is immediately available for 3D computation. Experimental results on standard test images are provided.
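
The gain from disparity compensation over a simple residual can be illustrated directly: the compensated difference is smaller in magnitude, so less information has to be embedded. A minimal sketch using a crude global horizontal shift (real schemes estimate disparity per block; the names and the simplification are assumptions):

    import numpy as np

    def payload_size_demo(left, right, disparity):
        # plain inter-frame difference vs. disparity-compensated residual
        plain = right.astype(np.int16) - left.astype(np.int16)
        pred = np.roll(left, disparity, axis=1)  # crude global compensation (wraps at edges)
        compensated = right.astype(np.int16) - pred.astype(np.int16)
        # a smaller mean absolute residual means a smaller payload to embed
        return np.abs(plain).mean(), np.abs(compensated).mean()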

24 citations

Journal ArticleDOI
TL;DR: This article investigates techniques for optimizing sparsity criteria by focusing on the use of an ℓ1 criterion instead of an ℓ2 one, and proposes to jointly optimize the prediction filters by using an algorithm that alternates between the optimization of the filters and the computation of the weights.
Abstract: Lifting schemes (LS) were found to be efficient tools for image coding purposes. Since LS-based decompositions depend on the choice of the prediction/update operators, many research efforts have been devoted to the design of adaptive structures. The most commonly used approaches optimize the prediction filters by minimizing the variance of the detail coefficients. In this article, we investigate techniques for optimizing sparsity criteria by focusing on the use of an ℓ1 criterion instead of an ℓ2 one. Since the output of a prediction filter may be used as an input for the other prediction filters, we then propose to optimize such a filter by minimizing a weighted ℓ1 criterion related to the global rate-distortion performance. More specifically, it will be shown that the optimization of the diagonal prediction filter depends on the optimization of the other prediction filters and vice versa. Related to this fact, we propose to jointly optimize the prediction filters by using an algorithm that alternates between the optimization of the filters and the computation of the weights. Experimental results show the benefits which can be drawn from the proposed optimization of the lifting operators.
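
One standard way to handle an ℓ1 criterion like the one above is iteratively reweighted least squares (IRLS), which repeatedly solves a weighted ℓ2 problem. The sketch below shows that ℓ1 fitting step for a single lifting prediction filter; the choice of IRLS and all names are assumptions, and the article's alternating weighted-ℓ1 algorithm is more elaborate.

    import numpy as np

    def l1_prediction_filter(X, y, iters=20, eps=1e-6):
        # X: rows of neighboring even samples; y: odd samples to predict.
        # The detail signal y - X @ p is what the lifting step outputs.
        p = np.linalg.lstsq(X, y, rcond=None)[0]  # start from the l2 solution
        for _ in range(iters):
            r = y - X @ p
            sw = 1.0 / np.sqrt(np.maximum(np.abs(r), eps))  # IRLS reweighting
            p = np.linalg.lstsq(X * sw[:, None], sw * y, rcond=None)[0]
        return p  # yields sparser detail coefficients than the plain l2 fit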

22 citations