Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as: Lossless JPEG and .jls.


Papers
Proceedings ArticleDOI
25 Oct 2010
TL;DR: This paper presents a software system for evaluating the quality of compressed JPEG images; the system can act on different stages of the JPEG pipeline and allows checking the resulting changes in quality and compression.
Abstract: Data compression is widely used in many scientific areas and, transparently, in various daily-life activities. Lossy data compression, e.g., in JPEG images, discards part of the original information, though usually not an essential part. In this paper we present a software system for evaluating the quality of compressed JPEG images. The system can act on different stages of the JPEG pipeline and allows checking the resulting changes in quality and compression.

2 citations
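The quality-versus-compression trade-off such a system measures comes from JPEG's lossy stage: an 8×8 DCT followed by coefficient quantization. A minimal sketch of that stage, measured with PSNR, is below; the function names, the uniform quantization step, and PSNR as the metric are my assumptions for illustration, not details of the paper's system.

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis matrix; rows are basis vectors."""
    j = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    m[0, :] /= np.sqrt(2.0)  # DC row scaled so the matrix is orthonormal
    return m

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    """Peak signal-to-noise ratio for 8-bit images, in dB."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def lossy_roundtrip(block: np.ndarray, step: float) -> np.ndarray:
    """Simulate JPEG's lossy stage on one 8x8 block: DCT, uniform
    quantization with the given step, dequantization, inverse DCT."""
    d = dct_matrix(block.shape[0])
    coeffs = d @ (block - 128.0) @ d.T   # forward DCT, level-shifted
    quantized = np.round(coeffs / step)  # the only lossy operation
    rec = d.T @ (quantized * step) @ d + 128.0
    return np.clip(np.round(rec), 0.0, 255.0)
```

A coarser step lowers PSNR but zeroes out more coefficients, which is what makes the entropy-coded output smaller; evaluating this trade-off across stages is what the described system automates.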

Proceedings ArticleDOI
TL;DR: This work proposes a lossless image coding technique with compression performance very close to that of CALIC and LOCO while being very efficient to implement in both hardware and software.
Abstract: Lossless coding of image data has been a very active area of research in medical imaging, remote sensing, and document processing/delivery. While several lossless image coders such as JPEG and JBIG have existed for a while, their compression performance on continuous-tone images was rather poor. Recently, state-of-the-art techniques such as CALIC and LOCO were introduced with significant improvements in compression performance over traditional coders. However, these coders are difficult to implement in dedicated hardware, or in software on media processors, because of the inherently serial nature of their encoding process. In this work, we propose a lossless image coding technique with compression performance very close to that of CALIC and LOCO while being very efficient to implement in both hardware and software. Comparisons on the JPEG-2000 image set show that the compression performance of the proposed coder is within 2-5% of the more complex coders while being computationally very efficient. In addition, the encoder is shown to be parallelizable at a hierarchy of levels. The execution time of the proposed encoder is smaller than that of LOCO, while the decoder is 2-3 times faster than the LOCO decoder. © 1999 SPIE--The International Society for Optical Engineering.

2 citations
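The paper's own coder is not spelled out in the abstract, but the LOCO (JPEG-LS) baseline it is compared against is built on a very simple predictor, the median edge detector (MED), which illustrates the low-complexity predictive coding at stake here. A minimal sketch of MED-based lossless prediction (helper names and the zero boundary convention are mine):

```python
import numpy as np

def med_predict(a: int, b: int, c: int) -> int:
    """LOCO-I / JPEG-LS median edge detector: predict a pixel from its
    left (a), above (b), and above-left (c) neighbours."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def encode_residuals(img: np.ndarray) -> np.ndarray:
    """Row-major prediction residuals; out-of-image neighbours read as 0."""
    h, w = img.shape
    res = np.zeros_like(img, dtype=np.int32)
    for y in range(h):
        for x in range(w):
            a = int(img[y, x - 1]) if x > 0 else 0
            b = int(img[y - 1, x]) if y > 0 else 0
            c = int(img[y - 1, x - 1]) if x > 0 and y > 0 else 0
            res[y, x] = int(img[y, x]) - med_predict(a, b, c)
    return res

def decode_residuals(res: np.ndarray) -> np.ndarray:
    """Invert encode_residuals exactly: lossless by construction."""
    h, w = res.shape
    img = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            a = int(img[y, x - 1]) if x > 0 else 0
            b = int(img[y - 1, x]) if y > 0 else 0
            c = int(img[y - 1, x - 1]) if x > 0 and y > 0 else 0
            img[y, x] = int(res[y, x]) + med_predict(a, b, c)
    return img
```

The residuals, concentrated around zero on natural images, are what an entropy coder then compresses; the pixel-by-pixel feedback loop in the decoder is exactly the serial dependency the abstract says makes such coders hard to parallelize.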

Proceedings ArticleDOI
26 May 2013
TL;DR: Two Look-Up Coders (LUCs) are introduced that also offer bit-exact G.711 speech coding at reduced rates but the LUCs do not use arithmetic operations and hence eliminate the need for a processor.
Abstract: The lossless compression algorithm specified in ITU-T Recommendation G.711.0 provides bit-exact G.711 speech coding at reduced bit-rates. We introduce two Look-Up Coders (LUCs) that also offer bit-exact G.711 speech coding at reduced rates, but the LUCs do not use arithmetic operations and hence eliminate the need for a processor. Instead, they read in eight G.711 symbols, reinterpret those 64 bits to form eight new symbols that carry temporal information, and then look up Huffman codes for those new symbols. Compared to G.711.0, LUC rates are 9% to 40% higher and they require 2 to 8 kB of additional ROM, but LUCs eliminate about one million weighted arithmetic operations per second. LUCs reduce the 8 b/smpl G.711 rate to 3.8 to 6.7 b/smpl, depending on speech and noise levels.

2 citations
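The look-up principle can be sketched as follows: all arithmetic (here, building a Huffman table) happens offline, and the run-time coder only performs table lookups and concatenation. This is an illustrative toy, not the actual LUC tables or the G.711.0 codes; the symbol alphabet and function names are my assumptions.

```python
import heapq
from collections import Counter
from itertools import count

def huffman_table(freqs: dict) -> dict:
    """Build a prefix-code table offline from symbol frequencies.
    At run time the coder only consults this table (no arithmetic)."""
    tie = count()  # tiebreaker so the heap never compares dicts
    heap = [(f, next(tie), {s: ''}) for s, f in freqs.items()]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate single-symbol alphabet
        return {s: '0' for s in heap[0][2]}
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in t1.items()}
        merged.update({s: '1' + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

def lookup_encode(symbols, table: dict) -> str:
    """Pure table lookup per symbol: no arithmetic on the samples."""
    return ''.join(table[s] for s in symbols)
```

In the LUCs the looked-up symbols are formed by reinterpreting eight G.711 samples as eight new temporal symbols, and the tables live in ROM; the sketch above only shows why the run-time path needs no arithmetic unit.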

Book ChapterDOI
01 Jan 2003
TL;DR: The statistical analysis of video signals indicates that there is a strong correlation both between successive picture frames and within the picture elements themselves, so subjectively lossy compression techniques can be used to reduce video bit rates while maintaining an acceptable image quality.
Abstract: The statistical analysis of video signals indicates that there is a strong correlation both between successive picture frames and within the picture elements themselves. Theoretically, decorrelation of these signals can lead to bandwidth compression without significantly affecting image resolution. Moreover, the insensitivity of the human visual system to the loss of certain spatio-temporal visual information can be exploited for further reduction. Hence, subjectively lossy compression techniques can be used to reduce video bit rates while maintaining an acceptable image quality. For coding still images, only the spatial correlation is exploited. Such a coding technique is called intraframe coding and is the basis for Joint Photographic Experts Group (JPEG) coding. If temporal correlation is exploited as well, then it is called interframe coding. Interframe predictive coding is the main coding principle used in all standard video codecs, such as H.261, H.263, H.264 and Moving Picture Experts Group (MPEG)-1, -2 and -4.

2 citations
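The temporal-correlation argument can be made concrete with a toy experiment: when consecutive frames are strongly correlated, the interframe prediction residual has far lower first-order entropy than the frame itself, so it costs fewer bits to entropy-code. A hedged sketch with synthetic frames (the frame model and names are mine):

```python
import numpy as np

def entropy_bits(x: np.ndarray) -> float:
    """Empirical first-order entropy in bits per sample."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Synthetic "video": frame2 is frame1 plus small noise, mimicking the
# strong temporal correlation between successive pictures.
rng = np.random.default_rng(0)
frame1 = rng.integers(0, 256, (64, 64))
frame2 = np.clip(frame1 + rng.integers(-2, 3, (64, 64)), 0, 255)

# Interframe (temporal) prediction: transmit only the residual.
residual = frame2 - frame1
```

Here the frames are spatially random, so only temporal decorrelation helps; on real video, intraframe (spatial) prediction would reduce the bit rate further, and motion compensation would keep the residual small even when the scene moves.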

Journal ArticleDOI
TL;DR: In this paper, the authors propose a method for recovering the JPEG quantization table relying only on the image content, without any metadata from the file header; the method can therefore be applied to an uncompressed image format to detect a previous JPEG compression.
Abstract: JPEG compression is a commonly used method of lossy compression for digital images. The degree of compression is adjusted through the choice of a quality factor QF. Each software package associates this value with a quantization table, an 8 x 8 matrix used to quantize the DCT coefficients of an image. We propose a method for recovering the JPEG quantization table relying only on the image content, without any metadata from the file header; the proposed method can therefore be applied to an uncompressed image format to detect a previous JPEG compression. A statistical validation decides whether significant quantization traces are found and provides a quantitative measure of confidence in the detection.

2 citations
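The core observation behind such header-free detection can be sketched simply: the DCT coefficients of a previously JPEG-compressed image sit at (or, after re-saving, near) integer multiples of the quantization step, so in the noise-free case the step is recoverable as a greatest common divisor. This toy ignores the rounding noise and the statistical validation the paper relies on; the function name is mine.

```python
import numpy as np
from math import gcd
from functools import reduce

def estimate_step(coeffs: np.ndarray):
    """Recover the quantization step for one DCT frequency: coefficients
    that were quantized then dequantized are integer multiples of the
    step, so the gcd of the nonzero magnitudes yields it (noise-free)."""
    vals = np.round(np.abs(coeffs)).astype(int)
    vals = vals[vals > 0]
    if vals.size == 0:
        return None  # all-zero frequency: step is unidentifiable
    return int(reduce(gcd, vals.tolist()))
```

A real detector, as the abstract indicates, must additionally cope with rounding noise (e.g. by locating the periodic peaks of the coefficient histogram) and validate statistically whether quantization traces are present at all.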


Network Information
Related Topics (5)
- Image segmentation: 79.6K papers, 1.8M citations (82% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (82% related)
- Feature extraction: 111.8K papers, 2.1M citations (82% related)
- Image processing: 229.9K papers, 3.5M citations (80% related)
- Convolutional neural network: 74.7K papers, 2M citations (79% related)
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  21
2022  40
2021  5
2020  2
2019  8
2018  15