
Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as: Lossless JPEG & .jls.


Papers
Journal Article (DOI)
TL;DR: The contribution of this work is to incorporate a simple mathematical series into Baseline JPEG before the optimal encoding stage and to perform a Selective Quantization that loses no information after decompression while reducing the redundant data in the DCT domain.
Abstract: In today's communicative, multimedia computing world, JPEG images play a consequential role. JPEG images satisfy users' demand of preserving numerous digital images within considerably less storage space. Although the JPEG standard offers four different compression mechanisms, the Baseline JPEG, or Lossy Sequential DCT mode, is the most popular, since it stores a digital image by discarding its psychovisual redundancy and thereby requires very little storage space even for a large image. Moreover, the computational complexity of Baseline JPEG is modest, as compression takes place in the Discrete Cosine Transform (DCT) domain. Baseline JPEG is therefore widely used for storing, sharing, and transmitting digital images. Despite removing a large amount of psychovisual redundancy, Baseline JPEG still leaves redundant data in the DCT domain. This paper explores that fact and introduces an improved technique that modifies the Baseline JPEG algorithm. It describes a way to further compress a JPEG image without any additional loss while achieving a better compression ratio than is achievable by Baseline JPEG. The contribution of this work is to incorporate a simple mathematical series into Baseline JPEG before the optimal encoding stage and to perform a Selective Quantization that loses no information after decompression while reducing the redundant data in the DCT domain. The proposed technique is tested on over 200 textbook images that are extensively used for testing standard image processing and computer vision algorithms. The experimental results show that our proposed approach achieves, on average, 2.15% and 14.10% better compression ratios than Baseline JPEG for gray-scale and true-color images, respectively.

8 citations
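The recompression idea above operates entirely on the quantized DCT coefficients produced by Baseline JPEG. For context, the following is a minimal sketch (Python/NumPy) of the transform-and-quantize stage those coefficients come from, using the standard JPEG luminance quantization table; the paper's Selective Quantization and series-based recoding are not reproduced here.

# Minimal sketch of the Baseline JPEG transform-and-quantize stage.
# Assumption: the standard JPEG (Annex K) luminance table; the paper's own
# Selective Quantization step is NOT implemented here.
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (row = frequency, column = sample).
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

Q_LUMA = np.array([            # standard JPEG luminance quantization table
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]])

def quantize_block(block):
    # block: 8x8 array of pixel values in [0, 255] -> 8x8 integer coefficients.
    C = dct_matrix()
    coeffs = C @ (block.astype(float) - 128.0) @ C.T      # forward 2-D DCT
    return np.round(coeffs / Q_LUMA).astype(int)          # the lossy step

def dequantize_block(q):
    # Inverse of the stage above (up to the quantization loss already incurred).
    C = dct_matrix()
    return np.clip(np.round(C.T @ (q * Q_LUMA) @ C + 128.0), 0, 255).astype(np.uint8)

# Any lossless recoding of the integer matrix returned by quantize_block()
# (a different entropy code, a reordering, ...) leaves the decoded block
# unchanged, which is the property such DCT-domain recompression relies on.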

Proceedings Article
Christoph Stamm
01 Jan 2002
TL;DR: A new image file format, called Progressive Graphics File (PGF), is presented; it is based on a discrete wavelet transform with progressive coding features and is the best of the tested algorithms for compressing natural images and aerial photos.
Abstract: We present a new image file format, called Progressive Graphics File (PGF), which is based on a discrete wavelet transform with progressive coding features. We show all steps of a transform-based coder in detail and discuss some important aspects of our careful implementation. PGF can be used for lossless and lossy compression. It performs best for natural images and aerial ortho-photos. For these types of images, its lossy compression mode shows better compression efficiency than JPEG. This efficiency gain comes almost for free, because the encoding and decoding times are only marginally longer. We also compare PGF with JPEG 2000 and show that JPEG 2000 is about ten times slower than PGF. In its lossless compression mode, PGF has slightly worse compression efficiency than JPEG 2000, but clearly better compression efficiency than JPEG-LS and PNG. If both compression efficiency and runtime are important, then PGF is the best of the tested algorithms for compressing natural images and aerial photos.

8 citations
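For context on the kind of transform such a wavelet-based coder is built on, here is a minimal sketch of a reversible integer lifting step. It uses the Le Gall 5/3 lifting scheme (the filter of lossless JPEG 2000) with periodic boundary handling; both choices are assumptions for illustration and not necessarily what PGF itself implements.

# One level of a 1-D reversible integer wavelet transform via lifting.
# Assumptions: Le Gall 5/3 filter, even-length signal, periodic (wrap-around)
# boundary handling for brevity.
import numpy as np

def lift_53_forward(x):
    # x: 1-D integer signal of even length -> (approximation, detail) bands.
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    d = odd - ((even + np.roll(even, -1)) >> 1)        # predict step
    a = even + ((np.roll(d, 1) + d + 2) >> 2)          # update step
    return a, d

def lift_53_inverse(a, d):
    # Undo the steps in reverse order with the exact same integer arithmetic.
    even = a - ((np.roll(d, 1) + d + 2) >> 2)
    odd = d + ((even + np.roll(even, -1)) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

sig = np.array([12, 15, 14, 11, 9, 8, 10, 13])
a, d = lift_53_forward(sig)
assert np.array_equal(lift_53_inverse(a, d), sig)      # integer lifting is exactly reversible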

Proceedings Article (DOI)
08 Nov 2008
TL;DR: A modified AIC (M-AIC) is proposed that replaces the CABAC in AIC with a Huffman coder and an adaptive arithmetic coder; it performs much better than JPEG, is close to JPEG-2000 and AIC, and is slightly better than AIC in some low bit-rate ranges.
Abstract: JPEG is a popular DCT-based still image compression standard, which has played an important role in image storage and transmission since its development. Several papers have been published to improve the performance of JPEG. Advanced image coding (AIC) combines intra-frame block prediction from H.264 with a JPEG-style DCT, followed by the context adaptive binary arithmetic coding (CABAC) used in H.264. It performs much better than JPEG and close to JPEG-2000. In this paper, we propose a modified AIC (M-AIC) that replaces the CABAC in AIC with a Huffman coder and an adaptive arithmetic coder. The simulation results demonstrate that M-AIC performs much better than JPEG, close to JPEG-2000 and AIC, and slightly better than AIC in some low bit-rate ranges.

8 citations
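To illustrate the static entropy coding that M-AIC pairs with an adaptive arithmetic stage in place of CABAC, here is a minimal sketch of Huffman code construction from symbol counts; the paper's actual coder, symbol alphabet, and context modelling are not reproduced, so treat this purely as a generic example.

# Generic Huffman code construction from symbol frequencies.
# This illustrates the kind of coder named in the abstract, not the paper's
# implementation.
import heapq
from collections import Counter

def huffman_code(symbols):
    # Returns {symbol: bitstring}. Ties are broken arbitrarily, so code lengths
    # (not the exact bit patterns) are the meaningful output.
    counts = Counter(symbols)
    if len(counts) == 1:                                   # degenerate one-symbol input
        return {next(iter(counts)): "0"}
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(counts.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)                    # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, next_id, merged))
        next_id += 1
    return heap[0][2]

text = "this is an example of a huffman tree"
codes = huffman_code(text)
encoded = "".join(codes[ch] for ch in text)                # frequent symbols get short codes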

Proceedings Article (DOI)
01 Sep 2013
TL;DR: A novel technique is proposed that, under some assumptions, recovers the coefficients of the first compression in a double-compressed JPEG image; experimental results and comparisons with state-of-the-art methods confirm the effectiveness of the proposed approach.
Abstract: Assessing whether or not a digital image has been doubly compressed is a challenging issue, especially in the forensics domain, where it can be fundamental to clarify whether, in addition to the compression applied at the time of shooting, the picture was decompressed (in some way) and then resaved. This is not a clear indication of forgery, but it does indicate that the image is probably not the original one. In this paper we propose a novel technique able to recover, under some assumptions, the coefficients of the first compression in a double-compressed JPEG image. The proposed approach exploits the fact that successive quantizations followed by dequantizations introduce regularities (e.g., sequences of zero and non-zero values) in the histograms of the coefficient distributions, which can be analyzed to recover the original compression parameters. Experimental results and comparisons with state-of-the-art methods confirm the effectiveness of the proposed approach.

8 citations
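The regularities the method analyzes come from quantizing the same coefficients twice with different steps. The following minimal sketch simulates that effect on synthetic data: quantization with a hypothetical first step q1, dequantization, and requantization with q2 leave periodic empty bins in the final histogram. The Laplacian coefficient model and the step values are assumptions for illustration; this is not the paper's recovery algorithm.

# Simulating the double-quantization artifact that double-JPEG detectors exploit.
# Assumptions: Laplacian-distributed AC coefficients, hypothetical steps q1, q2.
import numpy as np

rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=20.0, size=200_000)             # synthetic AC coefficients

q1, q2 = 7, 3                                              # hypothetical quantization steps
single = np.round(coeffs / q2).astype(int)                              # compressed once
double = np.round(np.round(coeffs / q1) * q1 / q2).astype(int)          # compressed twice

bins = np.arange(-30, 31)
h_single = np.bincount(single[np.abs(single) <= 30] + 30, minlength=61)
h_double = np.bincount(double[np.abs(double) <= 30] + 30, minlength=61)

# Bins that single compression populates but double compression leaves empty:
# their spacing is governed by q1, which is what a detector tries to estimate.
empty_bins = bins[(h_double == 0) & (h_single > 0)]
print("bins emptied by double quantization:", empty_bins)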

Journal Article (DOI)
TL;DR: To improve the effects of RDLS, the authors propose a heuristic for image-adaptive denoising filter selection, a fast estimator of the compressed image bitrate, and a special filter that may result in the steps being skipped.
Abstract: Reversible denoising and lifting steps (RDLS) are lifting steps integrated with denoising filters in such a way that, despite the inherently irreversible nature of denoising, they are perfectly reversible. We investigated the application of RDLS to reversible color space transforms: RCT, YCoCg-R, RDgDb, and LDgEb. In order to improve RDLS effects, we propose a heuristic for image-adaptive denoising filter selection, a fast estimator of the compressed image bitrate, and a special filter that may result in skipping of the steps. We analyzed the properties of the presented methods, paying special attention to their usefulness from a practical standpoint. For a diverse image test-set and lossless JPEG-LS, JPEG 2000, and JPEG XR algorithms, RDLS improves the bitrates of all the examined transforms. The most interesting results were obtained for an estimation-based heuristic filter selection out of a set of seven filters; the cost of this variant was similar to or lower than the transform cost, and it improved the average lossless JPEG 2000 bitrates by 2.65% for RDgDb and by over 1% for other transforms; bitrates of certain images were improved to a significantly greater extent.

8 citations
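A minimal sketch of the general idea behind RDLS as described above: the denoising filter is applied only to the samples that serve as arguments of a lifting step, never to the samples the step modifies, so the decoder can recompute the identical filtered prediction and invert the step exactly. The moving-average filter and the single prediction step below are illustrative assumptions; the paper's RDLS color-space transforms (RDgDb, LDgEb, etc.) are not reproduced.

# A single reversible denoised lifting (prediction) step.
# Assumption: a 3-tap integer moving average stands in for the denoising filter.
import numpy as np

def smooth(x):
    # Stand-in denoising filter; any deterministic filter of the unmodified
    # samples keeps the step reversible.
    return (np.roll(x, 1) + x + np.roll(x, -1) + 1) // 3

def rdls_predict_forward(even, odd):
    # Detail band predicted from the *denoised* even samples; 'even' itself is untouched.
    return odd - smooth(even)

def rdls_predict_inverse(even, detail):
    # The decoder still has the original 'even' samples, so it rebuilds the
    # identical denoised prediction and undoes the step exactly.
    return detail + smooth(even)

even = np.array([10, 12, 13, 11, 9, 10], dtype=np.int64)
odd = np.array([11, 12, 12, 10, 9, 11], dtype=np.int64)
detail = rdls_predict_forward(even, odd)
assert np.array_equal(rdls_predict_inverse(even, detail), odd)   # reversible despite denoising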


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15