scispace - formally typeset
Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over the lifetime, 2415 publications have been published within this topic, receiving 51110 citations. The topic is also known as .jls.


Papers
Journal ArticleDOI
TL;DR: A novel error-resilience tool for JPEG 2000 is proposed, based on ternary arithmetic coders employing a forbidden symbol; maximum likelihood and maximum a posteriori context-based decoders, specifically tailored to the JPEG 2000 arithmetic coder, carry out both hard and soft decoding of a corrupted codestream.
Abstract: JPEG 2000 is the novel ISO standard for image and video coding. Besides its improved coding efficiency, it also provides a few error resilience tools in order to limit the effect of errors in the codestream, which can occur when the compressed image or video data are transmitted over an error-prone channel, as typically occurs in wireless communication scenarios. However, for very harsh channels, these tools often do not provide an adequate degree of error protection. In this paper, we propose a novel error-resilience tool for JPEG 2000, based on the concept of ternary arithmetic coders employing a forbidden symbol. Such coders introduce a controlled degree of redundancy during the encoding process, which can be exploited at the decoder side in order to detect and correct errors. We propose a maximum likelihood and a maximum a posteriori context-based decoder, specifically tailored to the JPEG 2000 arithmetic coder, which are able to carry out both hard and soft decoding of a corrupted codestream. The proposed decoder extends the JPEG 2000 capabilities in error-prone scenarios, without violating the standard syntax. Extensive simulations on video sequences show that the proposed decoders largely outperform the standard in terms of PSNR and visual quality.
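The forbidden-symbol idea can be illustrated with a toy arithmetic coder (a minimal sketch, not the paper's JPEG 2000 decoder; the binary alphabet and the constants EPS and P0 are illustrative assumptions). Reserving a fraction EPS of every coding interval for a symbol the encoder never emits adds controlled redundancy: if the decoder ever lands in that reserved region, the codestream must be corrupted.

```python
EPS = 0.1  # fraction of each interval reserved for the forbidden symbol (redundancy)
P0 = 0.5   # assumed probability of bit 0 within the remaining interval

def encode(bits):
    """Arithmetic-encode a bit sequence, never using the forbidden region."""
    low, high = 0.0, 1.0
    for b in bits:
        rng = (high - low) * (1.0 - EPS)  # usable part; top EPS is forbidden
        if b == 0:
            high = low + rng * P0
        else:
            low, high = low + rng * P0, low + rng
    return (low + high) / 2.0  # any value in the final interval identifies the sequence

def decode(value, n):
    """Decode n bits; returns (bits, error_detected)."""
    low, high = 0.0, 1.0
    out = []
    for _ in range(n):
        rng = (high - low) * (1.0 - EPS)
        if value >= low + rng:       # value fell in the forbidden region:
            return out, True         # a channel error has been detected
        if value < low + rng * P0:
            out.append(0)
            high = low + rng * P0
        else:
            out.append(1)
            low, high = low + rng * P0, low + rng
    return out, False
```

A clean codeword decodes exactly; a corrupted value (e.g. one pushed into the reserved top of the unit interval) trips the forbidden-symbol check, which is the hook a soft decoder can exploit to prune wrong paths.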

20 citations

Journal ArticleDOI
TL;DR: The experimental results demonstrate that the proposed scheme satisfies the basic requirements of watermarking, such as robustness and imperceptibility, and can be used to resist JPEG attacks while avoiding some weaknesses of JPEG quantization.

Abstract: A watermarking technique based on the frequency domain is presented in this paper. One of the basic demands for robustness in a watermarking mechanism is the ability to withstand the JPEG attack, since JPEG is a common file format for transmitting digital content over the network. Thus, the proposed algorithm can be used to resist the JPEG attack and avoid some weaknesses of JPEG quantization. Moreover, neither the original host image nor the watermark is needed in the extraction process. In addition, two important but conflicting parameters are adopted to trade off the quality of the watermarked image against that of the retrieved watermark. The experimental results demonstrate that the proposed scheme satisfies the basic requirements of watermarking, such as robustness and imperceptibility.
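The robustness-versus-imperceptibility trade-off the abstract describes can be sketched with quantization index modulation on a single frequency coefficient (an illustrative stand-in, not the paper's actual algorithm; STEP is an assumed parameter playing the role of the conflicting trade-off knob):

```python
STEP = 8.0  # quantization step: larger survives stronger JPEG requantization,
            # but perturbs the coefficient more (worse watermarked-image quality)

def embed_bit(coeff, bit):
    """Force the coefficient onto a quantizer level whose parity encodes the bit."""
    q = round(coeff / STEP)
    if q % 2 != bit:
        # move to the nearest level with the right parity
        q += 1 if coeff / STEP > q else -1
    return q * STEP

def extract_bit(coeff):
    """Blind extraction: neither the host image nor the watermark is needed."""
    return round(coeff / STEP) % 2
```

The embedded bit survives any perturbation smaller than STEP/2, which models mild JPEG requantization noise, and extraction needs only the received coefficient, mirroring the blind-extraction property claimed in the abstract.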

20 citations

Proceedings ArticleDOI
08 Oct 2000
TL;DR: An algorithm for evaluating the quality of JPEG compressed images, the psychovisually-based image quality evaluator (PIQE), measures the severity of artifacts produced by JPEG compression; the results show that the PIQE model is most accurate in the compression range for which JPEG is most effective.
Abstract: We propose an algorithm for evaluating the quality of JPEG compressed images, called the psychovisually-based image quality evaluator (PIQE), which measures the severity of artifacts produced by JPEG compression. The PIQE evaluates the image quality using two psychovisually-based fidelity criteria: blockiness and similarity. The blockiness is an index that measures the patterned square artifact created as a by-product of the lossy DCT-based compression technique used by JPEG and MPEG. The similarity measures the perceivable detail remaining after compression. The blockiness and similarity are combined into a single PIQE index used to assess quality. The PIQE model is tuned by using subjective assessment results of five subjects on six sets of images. The results show that the PIQE model is most accurate in the compression range for which JPEG is most effective.
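A blockiness index in this spirit can be sketched as the ratio of mean pixel differences across 8-pixel block boundaries to those inside blocks (a simplified illustration, rows only for brevity; the paper's PIQE additionally combines blockiness with a similarity term and psychovisual tuning):

```python
BLOCK = 8  # JPEG/MPEG DCT block size

def blockiness(img):
    """img: 2D list of grayscale values; returns boundary/interior difference ratio."""
    boundary, interior = [], []
    for row in img:
        for x in range(len(row) - 1):
            d = abs(row[x + 1] - row[x])
            # a column boundary sits between pixel x and x+1 when x+1 is a multiple of 8
            (boundary if (x + 1) % BLOCK == 0 else interior).append(d)
    # ratio well above 1 suggests visible block-edge artifacts
    return (sum(boundary) / len(boundary)) / (sum(interior) / len(interior) + 1e-9)
```

On an image of flat 8x8 tiles the ratio explodes (all variation sits on block edges), while on a smooth gradient it stays near 1, matching the intuition that blockiness isolates the patterned square artifact.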

20 citations

Proceedings ArticleDOI
12 Nov 2007
TL;DR: This note explores correlation between adjacent rows (or columns) at the block boundaries for predicting DCT coefficients of the first row/column of DCT blocks to reduce the average JPEG DC residual for images compressed at the default quality level.
Abstract: The JPEG baseline algorithm follows a block-based coding approach and therefore, it does not explore source redundancy at the sub-block level. This note explores correlation between adjacent rows (or columns) at the block boundaries for predicting DCT coefficients of the first row/column of DCT blocks. Experimental results show that our prediction method reduces the average JPEG DC residual by about 75% for images compressed at the default quality level. The same for AC01/10 coefficients is about 30%. It reduces the final code bits by about 4.55% of the total image code for grey images. Our method can be implemented as a part of the JPEG codec without requiring any changes to its control structure or to its code stream syntax.
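The boundary-prediction idea can be sketched for the DC coefficient (hypothetical helper names; the paper also predicts the AC01/AC10 coefficients, not shown here): assume pixel smoothness across the block edge and predict the next block's DC from the left neighbour's last column.

```python
BLOCK = 8

def dc_coeff(block):
    # DC term of the orthonormal 8x8 DCT equals 8 times the block mean
    return 8.0 * sum(sum(row) for row in block) / (BLOCK * BLOCK)

def predict_dc_from_left(left_block):
    # smoothness assumption: the column touching the boundary
    # approximates the average level of the adjacent block
    edge = [row[-1] for row in left_block]
    return 8.0 * sum(edge) / BLOCK
```

The residual (actual DC minus prediction) is what would be entropy-coded; on smooth content it is smaller than JPEG's plain DC difference between neighbouring blocks, which is the source of the reported bit savings.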

20 citations

Journal ArticleDOI
G. Lakhani1
TL;DR: Modifications to the JPEG baseline encoder and decoder are studied separately, showing that changing the decoder alone does not reduce compression losses, while modifying the encoder as well does reduce them, though only marginally.
Abstract: Although it is established that the distribution of the discrete cosine transform coefficients can be modeled, it is not known how this knowledge can best be applied to improve the JPEG compression algorithms. We studied this problem by making modifications to both the JPEG baseline encoder and decoder separately. Experimental results show that modifications to the decoder alone do not reduce any compression losses. However, if the encoder is also modified, the losses can be reduced, but only marginally.
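One way a coefficient-distribution model can help the decoder (a sketch of the general idea, not necessarily the paper's exact modification): DCT coefficients are commonly modeled as Laplacian, so a decoder-only change is to reconstruct each quantized coefficient at the centroid of its quantization bin under that model, rather than at the bin centre.

```python
import math

def laplace_pdf(x, lam):
    # Laplacian density, the standard model for AC DCT coefficients
    return 0.5 * lam * math.exp(-lam * abs(x))

def bin_centroid(a, b, lam, n=1000):
    """Numerical conditional mean of a Laplacian(lam) over the bin [a, b].

    Reconstructing at this point minimizes expected squared error within the bin.
    """
    xs = [a + (b - a) * (i + 0.5) / n for i in range(n)]
    ws = [laplace_pdf(x, lam) for x in xs]
    return sum(x * w for x, w in zip(xs, ws)) / sum(ws)
```

For a positive bin the centroid lies below the midpoint, pulling reconstructions toward zero where most of the probability mass sits; since a standard encoder already rounds to bin centres, this decoder-side shift alone yields little gain unless the encoder's quantization rule is adapted too, consistent with the abstract's finding.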

20 citations


Network Information

Related Topics (5)
- Image segmentation: 79.6K papers, 1.8M citations (82% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (82% related)
- Feature extraction: 111.8K papers, 2.1M citations (82% related)
- Image processing: 229.9K papers, 3.5M citations (80% related)
- Convolutional neural network: 74.7K papers, 2M citations (79% related)
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15