Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over the lifetime, 2415 publications have been published within this topic, receiving 51110 citations. The topic is also known as .jls.


Papers
Proceedings Article
05 Nov 1998
TL;DR: The Rice algorithm, which uses no tables, is shown to produce compression results comparable to JPEG with custom Huffman tables, and its state-of-the-art performance is compared to the better-known lossless JPEG compression algorithm.
Abstract: This paper describes a remarkable but relatively unknown algorithm invented by Robert Rice of NASA's Jet Propulsion Laboratory for lossless compression of imagery and other scientific data collected by spaceborne sensors. Its state-of-the-art performance is compared to the more well known lossless JPEG compression algorithm. Since lossless algorithms by definition produce perfectly reconstructed imagery, performance comparisons are based on the amount of compression each algorithm achieves. The JPEG algorithm uses Huffman tables. For optimal performance the Huffman table used by JPEG must be custom-designed based on the statistics of the image being coded. The Rice algorithm which uses no tables is shown to produce compression results comparable to JPEG with custom Huffman tables. Implementation of the Rice algorithm, which requires only one pass, is shown to be simpler than the implementation of custom lossless JPEG, which requires two passes through the image data. The effects of channel errors on Rice-encoded imagery are analyzed, revealing a probably unintentional tendency toward self-correction of some errors.
Keywords: lossless compression, Rice compression, lossless image compression
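For context, here is a minimal sketch of the table-free Golomb-Rice coder at the heart of the algorithm described above. The parameter k is fixed for illustration (the actual algorithm adapts it to local data statistics), and the mapping of signed prediction residuals to non-negative integers is omitted.

```python
# Minimal sketch of Rice (Golomb-Rice) coding; k is a fixed illustrative choice,
# whereas the actual algorithm adapts it to the data.
def rice_encode(value: int, k: int) -> str:
    """Encode a non-negative integer as a unary quotient, a '0' stop bit, then k remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k > 0 else "")


def rice_decode(bits: str, k: int) -> int:
    """Invert rice_encode: count leading '1's, then read k remainder bits."""
    q = bits.index("0")
    r = int(bits[q + 1 : q + 1 + k], 2) if k > 0 else 0
    return (q << k) | r


# Round-trip check on a few small values (e.g. prediction residuals already
# mapped to non-negative integers).
for v in [0, 3, 7, 18]:
    assert rice_decode(rice_encode(v, k=2), k=2) == v
```

Because the code structure is fixed by k alone, no Huffman table has to be designed or transmitted, which is the practical advantage the paper highlights over custom lossless JPEG.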

2 citations

Journal Article
TL;DR: This paper is a survey of current lossless techniques with results quoted for both sequential data files and still images.
Abstract: Lossless data compression systems allow an exact replica of the original data to be reproduced at the receiver. Lossless compression has found a wide range of applications in such diverse fields as: compression of computer data, still images (e.g., medical or graphical images) and video (usually, in the form of entropy coding of the output of intra/inter-frame lossy schemes). It has been studied for over forty years and new compression algorithms are still continuously developed. This paper is a survey of current lossless techniques with results quoted for both sequential data files and still images.
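As a concrete illustration of the property the survey starts from, the snippet below round-trips data through a general-purpose lossless coder and checks bit-exact reconstruction; zlib is used here purely as a stand-in and is not one of the paper's specific techniques.

```python
# Illustration only: zlib stands in for any lossless coder; the assertion is the
# defining lossless property, exact reconstruction of the input.
import zlib

original = bytes(range(256)) * 64              # stand-in for a raster scan of pixel values
compressed = zlib.compress(original, 9)
restored = zlib.decompress(compressed)

assert restored == original                    # bit-exact reconstruction
print(f"{len(original)} -> {len(compressed)} bytes "
      f"({len(compressed) / len(original):.1%} of the original size)")
```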

2 citations

Proceedings Article
04 May 2014
TL;DR: An image compression algorithm called the Weighted, Ratio-Based, Adaptive, Lossless Image Codec (WRALIC) is presented, which attempts to guess the sign of the prediction error from the sign context of the pixel.
Abstract: In this paper, we present an image compression algorithm called the Weighted, Ratio-Based, Adaptive, Lossless Image Codec (WRALIC). The algorithm utilizes five ratio predictions. The weight of each prediction is learned during an offline training stage, whereas the prediction parameters are adjusted using error context. The absolute value of the error is encoded. The algorithm does not encode the sign; instead, it attempts to guess the sign of the error from the sign context of the pixel. Using the energy and the average errors around a pixel, the error is added to an encoding bin. Experimental results demonstrate good compression performance compared to other state-of-the-art algorithms.
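The paper's exact ratio predictors, trained weights, and binning rule are not spelled out here, so the following is only a hypothetical sketch of the general scheme: a weighted blend of causal-neighbour predictors, with the residual's sign guessed from the signs of neighbouring residuals rather than encoded.

```python
# Hypothetical sketch only: the predictors, weights, and sign rule below are
# placeholders, not the values or formulas from the WRALIC paper.
import numpy as np


def predict_pixel(img, y, x, weights):
    """Blend simple causal-neighbour predictors (stand-in for the five ratio predictions)."""
    w, n, nw = img[y, x - 1], img[y - 1, x], img[y - 1, x - 1]
    ne = img[y - 1, x + 1] if x + 1 < img.shape[1] else n
    candidates = np.array([w, n, nw, ne, w + n - nw], dtype=float)
    return float(np.dot(weights, candidates))


def guess_sign(residual_signs, y, x):
    """Guess the current residual's sign from the signs of its causal neighbours."""
    return 1 if residual_signs[y, x - 1] + residual_signs[y - 1, x] >= 0 else -1


# Encoder-side residual for one pixel of a toy image; only |residual| would be
# entropy coded, and the decoder re-derives the sign from its own context.
img = np.array([[10, 12, 13, 15],
                [11, 14, 16, 18]], dtype=float)
weights = np.full(5, 0.2)                      # placeholder for the offline-trained weights
residual = img[1, 2] - predict_pixel(img, 1, 2, weights)
signs = np.zeros_like(img)                     # signs of residuals coded so far
signs[1, 1], signs[0, 2] = 1.0, -1.0
print(abs(residual), guess_sign(signs, 1, 2))
```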

2 citations

Proceedings Article
Gamal Fahmy
01 Dec 2014
TL;DR: This project studies the effect of adopting the Nonorthogonal Discrete Cosine Transform (NDCT), which is widely used in efficient media implementations, on detecting whether different parts of an image have been modified, by measuring the block convergence of different image parts and its stability after recompressions.
Abstract: Many forensic techniques have recently tried to detect the tampering and manipulation of JPEG-compressed images, which has become a critical problem in different imaging applications. Some techniques indicated that a knowledgeable attacker can make it very hard to detect image tampering, while others indicated that portions of a compressed image that have been compressed with different compression parameters can be detected if they are recompressed after changing some of these parameters. In this project, we pursue the idea of forensically analyzing suspect images to detect forgery. We study the effect of adopting the Nonorthogonal Discrete Cosine Transform (NDCT), which is widely used in efficient media implementations, in detecting whether different parts of the image have been modified. This is performed by measuring the block convergence of different image parts and detecting its stability after recompressions.
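A rough sketch of the block-convergence measurement is given below, under two simplifications that are not the paper's: the standard orthogonal DCT (via SciPy) stands in for the non-orthogonal transform (NDCT), and quantization uses a single flat step size instead of a JPEG quantization table.

```python
# Sketch only: the orthogonal DCT and a flat quantization step stand in for the
# paper's non-orthogonal transform (NDCT) and JPEG quantization tables.
import numpy as np
from scipy.fft import dctn, idctn


def recompress_once(block, q):
    """One JPEG-like round: forward DCT, uniform quantization, inverse DCT."""
    coeffs = dctn(block, norm="ortho")
    quantized = np.round(coeffs / q)
    return idctn(quantized * q, norm="ortho"), quantized


def rounds_to_converge(block, q, max_rounds=20):
    """Count recompression rounds until the quantized coefficients stop changing."""
    prev = None
    for i in range(max_rounds):
        block, quantized = recompress_once(block, q)
        block = np.clip(np.round(block), 0, 255)   # pixel-domain rounding, as after decoding
        if prev is not None and np.array_equal(prev, quantized):
            return i
        prev = quantized
    return max_rounds


# A block whose history matches the current parameters stabilises quickly;
# modified or differently compressed regions tend to keep drifting for longer.
rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
print(rounds_to_converge(block, q=16.0))
```

The forensic use is comparative: the number of rounds a block needs to stabilise after recompression is the kind of stability signal the paper measures to flag modified regions.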

2 citations


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15