Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as .jls.


Papers
Proceedings ArticleDOI
25 Sep 1998
TL;DR: An adaptive-quantization approach compatible with baseline JPEG is put forward; it statistically calculates the image's average probability distribution, which is used as the threshold for quantization.
Abstract: Using the JPEG standard, color still images can be compressed at high ratios while good quality is maintained. The framework of JPEG is fixed, but the recommended quantization table and Huffman table can be tuned to the image's characteristics. In this paper, we put forward an adaptive-quantization approach that is compatible with baseline JPEG. In our approach, we statistically calculate the image's average probability distribution, which is used as the threshold for quantization. The quantization table recommended by JPEG can then be adjusted to the characteristics of the image, and the average code rate drops noticeably.
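A minimal sketch of the idea in this abstract, assuming the statistic in question is the per-frequency average DCT coefficient magnitude (the paper's exact "average probability distribution" measure is not reproduced here); the helper names and the scaling bounds are illustrative assumptions:

```python
import numpy as np
from scipy.fftpack import dct

def block_dct(img):
    """Split a grayscale image into 8x8 blocks and return their 2-D DCTs."""
    h, w = img.shape
    blocks = img[: h // 8 * 8, : w // 8 * 8].reshape(h // 8, 8, w // 8, 8)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, 8, 8) - 128.0
    return dct(dct(blocks, axis=1, norm='ortho'), axis=2, norm='ortho')

def adaptive_qtable(img, base_qtable, lo=0.5, hi=2.0):
    """Scale the baseline table per frequency: frequencies whose average
    coefficient magnitude is low get coarser quantization steps."""
    coeffs = block_dct(img.astype(np.float64))
    mean_mag = np.abs(coeffs).mean(axis=0)        # 8x8 mean |DCT| per frequency
    weight = mean_mag / (mean_mag.mean() + 1e-9)  # > 1 where image energy is high
    scale = np.clip(1.0 / weight, lo, hi)         # coarser where energy is low
    return np.clip(np.round(base_qtable * scale), 1, 255).astype(np.uint8)
```

Because baseline JPEG signals its quantization tables in the DQT marker segment, a table derived this way remains fully decodable by any standard decoder, which is what makes the approach "compatible with baseline JPEG".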
Book ChapterDOI
01 Jan 2014
TL;DR: This chapter describes methods for coding images without loss; this means that, after compression, the input signal can be exactly reconstructed.
Abstract: This chapter describes methods for coding images without loss; this means that, after compression, the input signal can be exactly reconstructed. It first reviews the motivations and requirements for lossless compression and justifies the use of variable-length symbol encoding. It then focuses attention on two primary methods, Huffman coding and arithmetic coding. The algorithms, properties, and limitations of both methods are explored, together with some performance comparisons. A full example is provided based on the ubiquitous JPEG encoding standard, and further detail is given on how lossless coding methods are used in the HEVC and H.264/AVC standards via CABAC and CAVLC. Examples are provided throughout.
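As a concrete illustration of the variable-length coding this chapter covers, here is a minimal Huffman code construction in Python. It is a generic textbook version, not the canonical code-building procedure that JPEG itself specifies:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix code from symbol frequencies; returns {symbol: bitstring}."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {next(iter(freq)): '0'}
    # Heap entries: (frequency, tiebreak id, {symbol: code-so-far});
    # the tiebreak id keeps the dicts from ever being compared.
    heap = [(f, i, {s: ''}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in c1.items()}
        merged.update({s: '1' + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_code("abracadabra")
encoded = ''.join(codes[s] for s in "abracadabra")
```

Frequent symbols receive short codewords ('a' gets a single bit here), which is the entire rate advantage of variable-length coding over a fixed-length code; arithmetic coding goes further by allowing fractional bits per symbol.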
Proceedings ArticleDOI
01 Oct 2017
TL;DR: A robust, blind watermarking algorithm based on matrix singular value decomposition (SVD) perturbation theory for JPEG image copyright protection; it shows a high degree of robustness against attacks such as Gaussian noise and JPEG compression.
Abstract: This paper proposes a robust, blind watermarking algorithm based on matrix singular value decomposition (SVD) perturbation theory for JPEG image copyright protection. First, the quantized DCT coefficient blocks are extracted from the JPEG image; then SVD is performed on each block; finally, the watermark is embedded into a set of selected maximum singular values of the quantized discrete cosine transform (DCT) coefficients using dither modulation. From the embedding principle of the algorithm, the watermark can still be extracted correctly when the watermarked image is subjected to certain attacks. Extensive experiments show that the algorithm has a high degree of robustness against attacks such as Gaussian noise and JPEG compression.
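A hedged sketch of the embedding step described above: dither modulation (quantization index modulation) applied to the largest singular value of one coefficient block. Extracting blocks from a real JPEG bitstream is omitted, and the step size `delta` is an illustrative assumption, not a value from the paper:

```python
import numpy as np

def embed_bit(block, bit, delta=8.0):
    """Embed one watermark bit into the largest singular value of `block`."""
    u, s, vt = np.linalg.svd(block)
    # QIM: snap s[0] onto one of two interleaved lattices, chosen by the bit.
    # s[0] is the largest singular value, so it stays non-negative for small delta.
    dither = 0.0 if bit == 0 else delta / 2.0
    s[0] = np.round((s[0] - dither) / delta) * delta + dither
    return u @ np.diag(s) @ vt

def extract_bit(block, delta=8.0):
    """Blind detection: decide which dither lattice s[0] lies closer to."""
    s = np.linalg.svd(block, compute_uv=False)
    d0 = abs(s[0] - np.round(s[0] / delta) * delta)
    d1 = abs(s[0] - (np.round((s[0] - delta / 2) / delta) * delta + delta / 2))
    return 0 if d0 <= d1 else 1
```

Detection needs only `delta`, not the original image, which is what makes the scheme blind; robustness follows because a mild attack that moves s[0] by less than delta/4 still leaves it closest to the correct lattice.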
Proceedings ArticleDOI
29 Mar 2011
TL;DR: It is found that the image quality of JPEG XR can be improved by about 0.2 to 0.4 dB through a rate-distortion-optimal selection of the dead zone; this gain is comparable to the PSNR loss of a JPEG 2000 codec in which, for experimental reasons, EBCOT rate control has been turned off.
Abstract: Like the JPEG image compression standard, JPEG XR controls the image quality loss, and hence the output rate, solely by means of the quantizer bucket sizes. A precise rate-control mechanism like the EBCOT rate-allocation algorithm in JPEG 2000 is not specified, and hence rate-distortion optimality of the quantizer is, in general, not given. In this work, a simple rate-control mechanism for JPEG XR is introduced that allows efficient steering of the quantizer towards rate-distortion optimality. One way to implement this quantizer control would be to use the spatial variable-quantization feature of JPEG XR, but earlier work showed that the additional side information required to transmit the quantization settings almost cancels the PSNR gain of variable quantization, and that it complicates rate allocation by requiring an additional quantizer-allocation step. However, while JPEG XR defines the image reconstruction process completely, an encoder is still free to select the dead-zone size of the quantizer. This mechanism has the advantage that no additional side information needs to be transmitted, and that the dead-zone size is not, unlike the quantizer bucket size, constrained to a set of pre-defined values specified in the standard. It is found that the image quality of JPEG XR can be improved by about 0.2 to 0.4 dB by performing a rate-distortion-optimal selection of the dead zone; this gain is comparable to the PSNR loss of a JPEG 2000 codec in which, for experimental reasons, EBCOT rate control has been turned off.
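To make the encoder-side freedom concrete, here is a generic dead-zone quantizer with an illustrative rate-distortion sweep over the dead-zone offset. It stands in for, and is not, JPEG XR's actual coefficient coding, and the entropy proxy for rate is an assumption:

```python
import numpy as np

def quantize(x, step, f=0.5):
    """q = sign(x) * floor(|x|/step + f); f = 0.5 is plain rounding, and a
    smaller f widens the zero bin (dead zone), biasing small values to zero."""
    return (np.sign(x) * np.floor(np.abs(x) / step + f)).astype(int)

def dequantize(q, step):
    """Decoder reconstruction is fixed by the standard; here, bin centers.
    Only indices are sent, so the encoder may vary f with no side information."""
    return q * step

def best_offset(x, step, lam, offsets=np.linspace(0.2, 0.5, 7)):
    """Encoder-side sweep: pick the dead-zone offset minimizing D + lam * R,
    with index entropy used as a stand-in for the true bitstream rate."""
    def cost(f):
        q = quantize(x, step, f)
        dist = np.mean((x - dequantize(q, step)) ** 2)
        _, counts = np.unique(q, return_counts=True)
        p = counts / counts.sum()
        rate = -(p * np.log2(p)).sum()          # bits per coefficient (proxy)
        return dist + lam * rate
    return min(offsets, key=cost)
```

For the peaked, Laplacian-like coefficient distributions typical of transform coding, the sweep usually prefers f below 0.5: zeroing near-zero coefficients costs little distortion but saves noticeable rate, which is the effect the paper exploits.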
Journal ArticleDOI
TL;DR: Tests and analysis show that the lossless BCWT algorithm requires less memory and fewer computational resources than SPIHT and JPEG2000, while retaining image quality comparable to the standard image codecs; lossless BCWT is therefore well suited to implementation in modern digital technologies.
Abstract: Despite the approval in 2009 of a new standard, JPEG XR (extended range), which includes lossless coding, current digital products still employ earlier standards for lossless image compression, such as JPEG, JPEG2000, and JPEG-LS. Wavelet-based codecs can provide abundant functionality and excellent compression efficiency. Among them, the backward coding of wavelet trees (BCWT) algorithm offers lower complexity and consumes less internal buffer memory without sacrificing quality at similar compression ratios (CR) compared to other wavelet-based codecs, such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). A line-based BCWT was previously developed to further reduce internal buffer memory. Here, a very efficient line-based lossless BCWT compression algorithm is presented. A lossless color transform and a lossless wavelet transform are employed, and the original BCWT algorithm is modified for lossless operation, including the incorporation of adaptive arithmetic coding. To eliminate coding redundancies, a set-to-zeros method and a zero-tree detection algorithm are proposed, which significantly improve CR performance at boundary conditions while preserving the algorithm's advantages. Tests and analysis show that the lossless BCWT algorithm requires less memory and fewer computational resources than SPIHT and JPEG2000, while retaining image quality comparable to the standard image codecs; lossless BCWT is therefore well suited to implementation in modern digital technologies.
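The lossless color transform mentioned here must map integers to integers so the inverse is exact. As an illustrative stand-in (the paper's specific transform is not given in the abstract), the reversible color transform (RCT) from JPEG 2000 shows the pattern:

```python
import numpy as np

# Integer-to-integer color transform that round-trips exactly, unlike the
# floating-point YCbCr transform; use a signed dtype (e.g. int32) so the
# differences cannot wrap around in uint8 arithmetic.

def rct_forward(r, g, b):
    y = (r + 2 * g + b) >> 2        # floor((R + 2G + B) / 4)
    cb = b - g
    cr = r - g
    return y, cb, cr

def rct_inverse(y, cb, cr):
    g = y - ((cb + cr) >> 2)        # recovers G exactly (no rounding loss)
    r = cr + g
    b = cb + g
    return r, g, b

rgb = np.random.default_rng(0).integers(0, 256, (3, 4, 4), dtype=np.int32)
assert all(np.array_equal(a, b)
           for a, b in zip(rgb, rct_inverse(*rct_forward(*rgb))))
```

The lossless wavelet stage plays the same game: the lifting form of the LeGall 5/3 filter keeps every intermediate value an integer, so the wavelet transform also inverts exactly, which is what lets a wavelet codec like BCWT operate losslessly.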

Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15