Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2415 publications have been published within this topic, receiving 51110 citations. The topic is also known as: .jls.


Papers
Proceedings ArticleDOI
27 Jun 2023
TL;DR: Exploiting the features of images generated by the same fixed surveillance camera, this paper proposes a lossless JPEG recompression method based on CABAC pre-coding, residual coefficients within a JPEG image group, and simplified context prediction.
Abstract: As the number of images used on the Internet increases, storing and transmitting them becomes a major challenge. JPEG, the most widely used image compression format on the Internet, is routinely applied to compress pictures, but JPEG alone is no longer sufficient. Hence, some methods use improved entropy coding to further recompress JPEG images losslessly, while others process the images in the DCT domain for lossy recompression. These methods are effective for a wide range of images, but none are designed specifically for fixed surveillance applications. Exploiting the features of images generated by the same fixed surveillance camera, we propose a lossless JPEG recompression method based on CABAC pre-coding, residual coefficients within a JPEG image group, and simplified context prediction. With a slight reduction in decoding time and a slight increase in encoding time, the method achieves an average of 27% bit savings in our experiments.
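The core idea, coding one image as a residual against another image from the same fixed camera, can be illustrated in a few lines. The numpy sketch below is a toy reconstruction, not the paper's method: it simulates quantized DCT coefficients for two frames that share a static background, codes one frame as a residual against the other, and compares empirical entropies. The paper's CABAC pre-coding and simplified context prediction are not reproduced here.

```python
import numpy as np

def empirical_entropy(arr: np.ndarray) -> float:
    """Shannon entropy (bits/symbol) of an integer array's empirical distribution."""
    _, counts = np.unique(arr, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)

# Simulated quantized DCT coefficients for two JPEG images from the same
# fixed camera: a shared static background plus small per-image variation.
background = rng.integers(-8, 9, size=(64, 64))
frame_a = background + rng.integers(-1, 2, size=(64, 64))  # reference image
frame_b = background + rng.integers(-1, 2, size=(64, 64))  # image to recompress

# Residual coefficients within the image group: with a static scene the
# residual concentrates near zero and is much cheaper to entropy-code.
residual = frame_b - frame_a

print(f"entropy of raw coefficients:      {empirical_entropy(frame_b):.2f} bits/coef")
print(f"entropy of residual coefficients: {empirical_entropy(residual):.2f} bits/coef")

# Lossless reconstruction: the decoder adds the residual back to the reference.
assert np.array_equal(frame_a + residual, frame_b)
```

With a static scene the residual distribution is far more peaked than the raw coefficient distribution, which is what makes the group-based recompression pay off.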
Posted ContentDOI
24 Aug 2022
TL;DR: Zhang et al. propose a learned lossless JPEG transcoding framework via joint lossy and residual compression, which adaptively learns the distribution of residual DCT coefficients before compressing them with context-based entropy coding.
Abstract: As a commonly used image compression format, JPEG has been broadly applied in the transmission and storage of images. To further reduce the compression cost while maintaining the quality of JPEG images, lossless transcoding has been proposed to recompress the compressed JPEG image in the DCT domain. Previous works, however, typically reduce the redundancy of DCT coefficients and optimize the probability prediction of entropy coding in a hand-crafted manner that lacks generalization ability and flexibility. To tackle this challenge, we propose a learned lossless JPEG transcoding framework via joint lossy and residual compression. Instead of directly optimizing the entropy estimation, we focus on the redundancy that exists in the DCT coefficients. To the best of our knowledge, we are the first to utilize learned end-to-end lossy transform coding to reduce the redundancy of DCT coefficients in a compact representational domain. We also introduce residual compression for lossless transcoding, which adaptively learns the distribution of residual DCT coefficients before compressing them using context-based entropy coding. Our transcoding architecture shows significant superiority in the compression of JPEG images thanks to the collaboration of learned lossy transform coding and residual entropy coding. Extensive experiments on multiple datasets demonstrate that our framework achieves about 21.49% bit savings on average over JPEG compression, outperforming the typical lossless transcoding framework JPEG-XL by 3.51%.
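The lossy-plus-residual split can be sketched without the learned components. In the toy numpy example below, coarse requantization is an invented placeholder for the paper's learned end-to-end lossy transform coding; the exact residual is kept so the decoder can reconstruct the coefficients losslessly. In the actual framework both branches are learned networks with context-based entropy coding.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the quantized DCT coefficients of a JPEG image (the paper
# operates directly in the compressed DCT domain).
coeffs = rng.integers(-128, 128, size=(8, 8, 256))

# Lossy branch: coarse requantization is a placeholder for the paper's
# learned lossy transform coding.
step = 4
lossy = (coeffs // step) * step   # compact, lossy representation

# Residual branch: whatever the lossy branch misses is kept exactly; the
# paper learns this residual's distribution for context-based entropy coding.
residual = coeffs - lossy         # small values in [0, step)

# Lossless guarantee: the decoder recovers the original coefficients exactly.
assert np.array_equal(lossy + residual, coeffs)
print("residual range:", int(residual.min()), "to", int(residual.max()))
```

The design point is that the lossy branch absorbs most of the signal energy into a compact representation, leaving a low-entropy residual for the lossless entropy coder.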
Book ChapterDOI
01 Jan 2012
TL;DR: Lossless Data Compression Based on Network Dictionary, proposed in this article, stores various types of dictionaries on a dedicated server and compresses data by means of full direct compression or block compression.
Abstract: In existing dictionary-based compression methods, the dictionaries, whether static or dynamically generated, are all stored locally. Lossless Data Compression Based on Network Dictionary, proposed in this article, stores various types of dictionaries on a dedicated server and compresses data by means of full direct compression or block compression. Its ideal compression efficiency approaches 100%. Compared with existing compression algorithms, the time cost of compression in this method depends mainly on network speed and the matching algorithm. With the continuous growth of network speed and the continuous improvement of matching algorithms, this method could bring a revolutionary change to the history of compression.
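A toy version of block compression against a shared network dictionary might look like the following sketch. The block size, SHA-256 keys, and token format are invented for illustration, and a plain Python dict stands in for the article's dedicated dictionary server.

```python
import hashlib

# A plain dict stands in for the dedicated dictionary server, which would
# be shared over the network by all clients. (Hypothetical protocol: block
# size, hashing, and token format are invented for this sketch.)
server_dictionary = {}

BLOCK_SIZE = 16  # toy value; real blocks would be much larger than the key

def block_key(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def compress(data: bytes):
    """Block compression: send a reference for blocks the server already
    knows; upload and send unknown blocks verbatim."""
    tokens = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        key = block_key(block)
        if key in server_dictionary:
            tokens.append(("ref", key))        # dictionary hit: reference only
        else:
            server_dictionary[key] = block     # simulate uploading to the server
            tokens.append(("raw", block))
    return tokens

def decompress(tokens) -> bytes:
    parts = []
    for tag, payload in tokens:
        if tag == "ref":
            parts.append(server_dictionary[payload])  # fetch block from server
        else:
            parts.append(payload)                     # raw block sent inline
    return b"".join(parts)

data = b"the quick brown fox jumps over! " * 8
first = compress(data)    # first pass populates the dictionary
second = compress(data)   # second pass finds every block in the dictionary
assert decompress(first) == decompress(second) == data
print(sum(tag == "ref" for tag, _ in second), "of", len(second), "blocks matched")
```

Once the server dictionary is warm, repeated data compresses to references alone, which is the scenario behind the article's claim of ideal efficiency approaching 100%.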

Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations (82% related)
Feature (computer vision): 128.2K papers, 1.7M citations (82% related)
Feature extraction: 111.8K papers, 2.1M citations (82% related)
Image processing: 229.9K papers, 3.5M citations (80% related)
Convolutional neural network: 74.7K papers, 2M citations (79% related)
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15