
Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published on this topic, receiving 51,110 citations. The topic is also known as: Lossless JPEG and .jls.


Papers
Journal ArticleDOI
TL;DR: The (de)coding method is proposed as part of JBIG2, an emerging international standard for lossless/lossy compression of bilevel images, and is aimed at material containing half-toned images, supplementing the specialized soft pattern matching techniques that work better for text.
Abstract: We present general and unified algorithms for lossy/lossless coding of bilevel images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard, the conditioning may be specified by a template. For better compression, the more general free tree may be used. Loss may be introduced in a preprocess on the encoding side to increase compression. The primary algorithm is a rate-distortion controlled greedy flipping of pixels. Though general, the algorithms are primarily aimed at material containing half-toned images, as a supplement to the specialized soft pattern matching techniques that work better for text. Template-based refinement coding is applied for lossy-to-lossless refinement. Introducing only a small amount of loss in half-toned test images increases compression by up to a factor of four compared with JBIG. Lossy, lossless, and refinement decoding, as well as lossless encoding, are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG2, an emerging international standard for lossless/lossy compression of bilevel images.

20 citations
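The core mechanism in the abstract above, conditioning an arithmetic coder on a causal template of neighboring pixels, can be illustrated with a short sketch. The 4-pixel template, the Laplace-smoothed count estimator, and the ideal-code-length accounting below are simplifying assumptions for illustration; JBIG/JBIG2 use specific larger templates and a standardized adaptive probability estimator.

```python
import numpy as np

# Causal template: offsets (dy, dx) of previously coded neighbors.
TEMPLATE = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]

def context_index(img, y, x):
    """Pack the template pixels into a single integer context."""
    ctx = 0
    for dy, dx in TEMPLATE:
        ny, nx = y + dy, x + dx
        bit = int(img[ny, nx]) if ny >= 0 and 0 <= nx < img.shape[1] else 0
        ctx = (ctx << 1) | bit
    return ctx

def ideal_code_length(img):
    """Adaptively estimate P(pixel = 1 | context) with Laplace smoothing
    and sum the ideal arithmetic-coding cost, -log2(p), per pixel."""
    n_ctx = 1 << len(TEMPLATE)
    ones = np.ones(n_ctx)           # smoothed count of 1s per context
    total = 2.0 * np.ones(n_ctx)    # total observations per context
    bits = 0.0
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            ctx = context_index(img, y, x)
            p1 = ones[ctx] / total[ctx]
            p = p1 if img[y, x] else 1.0 - p1
            bits += -np.log2(p)     # cost under an ideal arithmetic coder
            ones[ctx] += int(img[y, x])
            total[ctx] += 1.0
    return bits

# A structured pattern is highly predictable from the template context.
img = np.zeros((64, 64), dtype=np.uint8)
img[::2, :] = 1                     # horizontal stripes
print(f"{ideal_code_length(img):.0f} bits for {img.size} binary pixels")
```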

Book ChapterDOI
01 Jan 2012
TL;DR: This chapter examines a number of schemes used for lossless compression of images, highlighting schemes for the compression of grayscale, color, and binary images, several of which are part of international standards.
Abstract: This chapter examines a number of schemes used for lossless compression of images. It highlights schemes for compression of grayscale and color images as well as schemes for compression of binary images. Among these schemes are several that are part of international standards. The Joint Photographic Experts Group (JPEG) is a joint ISO/ITU committee responsible for developing standards for continuous-tone still-picture coding. The best-known standard produced by this group is the lossy image compression standard. However, at the time of the creation of that famous standard, the committee also created a lossless standard. The old JPEG lossless compression standard provides eight different predictive schemes from which the user can select. In addition, the context adaptive lossless image compression (CALIC) scheme, which came into being in response to a 1994 call for proposals for a new lossless image compression scheme, uses both context and prediction of the pixel values. The CALIC scheme functions in two modes, one for grayscale images and another for bi-level images. One of the approaches CALIC uses to reduce the size of its alphabet is a modification of a technique called recursive indexing, which represents a large range of numbers using only a small set of symbols.

20 citations
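The eight predictive schemes of the old lossless JPEG standard mentioned above are simple enough to state directly. Below is a minimal sketch of the predictors (a = left neighbor, b = above, c = above-left, as in ITU-T T.81), together with a toy rendering of recursive indexing as the chapter describes it; the sample values are made up for illustration.

```python
def jpeg_lossless_predict(a, b, c, mode):
    """The eight predictors of the original lossless JPEG standard:
    a = left neighbor, b = above, c = above-left."""
    if mode == 0: return 0                # no prediction
    if mode == 1: return a
    if mode == 2: return b
    if mode == 3: return c
    if mode == 4: return a + b - c        # plane predictor
    if mode == 5: return a + (b - c) // 2
    if mode == 6: return b + (a - c) // 2
    if mode == 7: return (a + b) // 2
    raise ValueError("mode must be 0..7")

def recursive_index(n, m):
    """Toy recursive indexing: represent n >= 0 over the small alphabet
    {0, ..., m} as q copies of m followed by the remainder r,
    where n = q*m + r."""
    q, r = divmod(n, m)
    return [m] * q + [r]

# DPCM residual for one interior sample using the plane predictor;
# only the (typically small) residual is passed to the entropy coder.
a, b, c, x = 100, 104, 98, 107
print(x - jpeg_lossless_predict(a, b, c, 4))   # -> 1
print(recursive_index(23, 10))                 # -> [10, 10, 3]
```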

Patent
18 Nov 1999
TL;DR: This patent proposes compressing picture data at a high compression ratio by calculating the prediction value of a pixel of interest from the value of a nearby pixel that lies under a color filter of the same color component.
Abstract: PROBLEM TO BE SOLVED: To compress picture data at a high compression ratio by calculating a prediction value from the value of a nearby pixel under a color filter of the same color component as the pixel of interest. SOLUTION: An encoding processor 10 digitizes the picture signal input from an image input device 1, compresses the picture data by JPEG lossless encoding (DPCM encoding followed by entropy encoding), and stores the compressed picture data in a storage medium 2. When DPCM encoding/decoding picture data acquired at high accuracy by the input device 1, a temporary prediction value for the pixel of interest is calculated both from a prediction expression using the value of an adjacent pixel under a color filter of a different color component and from a prediction expression using the value of a pixel under a color filter of the same color component as the pixel of interest. The expression minimizing the prediction error is selected as the optimum prediction expression, so the picture data can be compressed at a higher compression ratio.

20 citations
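The patent's key observation, that raw color-filter-array data predicts better from the nearest same-color sample than from the immediately adjacent (different-color) sample, can be sketched as follows. The Bayer row layout, the candidate offsets, and the per-pixel minimum-error selection are illustrative assumptions; a real codec would have to make this choice causally or signal it so the decoder can reproduce it.

```python
import numpy as np

def predict_cfa(row, x):
    """Two candidate predictors for sample row[x] in a Bayer row (RGRG...):
    the adjacent pixel (different color) and the nearest same-color pixel."""
    adjacent   = row[x - 1]   # neighbor under a different color filter
    same_color = row[x - 2]   # nearest neighbor under the same color filter
    return adjacent, same_color

def dpcm_residuals(row):
    """Per-pixel residual using whichever candidate predicts better
    (illustrative only: the choice would need to be causal or signaled)."""
    res = []
    for x in range(2, len(row)):
        best = min(predict_cfa(row, x), key=lambda p: abs(int(row[x]) - int(p)))
        res.append(int(row[x]) - int(best))
    return res

# On CFA data, same-color prediction usually wins: alternating R/G samples
# look like high-frequency noise to an adjacent-pixel predictor.
rng = np.random.default_rng(0)
red   = 200 + rng.integers(-3, 4, 8)   # smooth red channel
green =  80 + rng.integers(-3, 4, 8)   # smooth green channel
row = np.empty(16, dtype=np.int64)
row[0::2], row[1::2] = red, green      # interleave as RGRG...
print(dpcm_residuals(row))             # small residuals via row[x - 2]
```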

01 Jan 2013
TL;DR: In this paper, a detailed analysis and performance comparison of HEVC intra coding with JPEG and JPEG 2000 (both 4:2:0 and 4:4:4 configurations) via a series of subjective and objective evaluations is presented.
Abstract: High Efficiency Video Coding (HEVC) demonstrates a significant improvement in compression efficiency compared to H.264/MPEG-4 AVC, especially for video with resolution beyond HD, such as 4K UHDTV. One advantage of HEVC is its improved intra coding of video frames. Hence, it is natural to ask how such intra coding compares to state-of-the-art compression codecs for still images. This paper attempts to answer this question by providing a detailed analysis and performance comparison of HEVC intra coding with JPEG and JPEG 2000 (in both 4:2:0 and 4:4:4 configurations) via a series of subjective and objective evaluations. The evaluation results demonstrate that HEVC intra coding outperforms standard codecs for still images, with an average bit rate reduction ranging from 16% (compared to JPEG 2000 4:4:4) up to 43% (compared to JPEG). These findings imply that both still images and moving pictures can be efficiently compressed by the same coding algorithm, with higher compression efficiency.

20 citations

Journal ArticleDOI
G. Lakhani
TL;DR: Four modifications to the JPEG arithmetic coding (JAC) algorithm are presented, which obtain a substantial reduction in code size without introducing any loss, and the compression performance of the modified JPEG is compared with JPEG XR, the latest block-based image coding standard.
Abstract: This article presents four modifications to the JPEG arithmetic coding (JAC) algorithm, a topic not studied well before. It then compares the compression performance of the modified JPEG with JPEG XR, the latest block-based image coding standard. We first show that the bulk of inter/intra-block redundancy, caused by the use of the block-based approach in JPEG, can be captured by applying efficient prediction coding. We propose the following modifications to JAC to take advantage of our prediction approach. 1) We code a totally different DC difference. 2) JAC tests a DCT coefficient by considering its bits in increasing order of significance when coding the most significant bit position; this causes considerable redundancy because JAC always begins with the zeroth bit. We modify this coding order and propose alterations to the JPEG coding procedures. 3) We predict the sign of significant DCT coefficients, a problem not previously addressed from the perspective of the JPEG decoder. 4) We reduce the number of binary tests that JAC codes to mark end-of-block. We provide experimental results for two sets of eight-bit gray images. The first set consists of nine classical test images, mostly of size 512 × 512 pixels. The second set consists of 13 images of size 2000 × 3000 pixels or more. Our modifications to JAC yield a substantial reduction in code size without introducing any loss. More specifically, when we quantize the images using the default quantizers, our modifications reduce the total JAC code size of the images of these two sets by about 8.9% and 10.6%, and the JPEG Huffman code size by about 16.3% and 23.4%, respectively, on average. Gains are even higher for coarsely quantized images. Finally, we compare the modified JAC with two settings of JPEG XR, one with no block overlapping and the other with the default transform (denoted JXR0 and JXR1, respectively). Our results show that for the finest-quality rate image coding, the modified JAC compresses the large-set images by about 5.8% more than JXR1 and by 6.7% more than JXR0, on average. We provide rate-distortion plots for lossy coding, which show that the modified JAC distinctly outperforms JXR0, but JXR1 beats it by about a similar margin.

20 citations
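The first of the article's four modifications, replacing baseline JPEG's scan-order DC difference with a neighbor-based prediction, is the easiest to illustrate. The plane predictor (left + above - above-left) below is an illustrative stand-in for the article's actual DC prediction rule, which the abstract does not specify.

```python
import numpy as np

def dc_residuals_baseline(dc):
    """Baseline JPEG: each block's quantized DC is coded as the
    difference from the PREVIOUS block's DC, in raster scan order.
    dc: 2-D array of quantized DC coefficients, one per 8x8 block."""
    return np.diff(dc.ravel(), prepend=0)

def dc_residuals_planar(dc):
    """Illustrative alternative: predict DC from the left and above
    blocks via the plane predictor a + b - c."""
    res = np.empty_like(dc)
    for i in range(dc.shape[0]):
        for j in range(dc.shape[1]):
            a = dc[i, j - 1] if j else 0          # left block
            b = dc[i - 1, j] if i else 0          # above block
            c = dc[i - 1, j - 1] if i and j else 0
            res[i, j] = dc[i, j] - (a + b - c)
    return res

# A smooth gradient image: planar prediction leaves near-zero residuals,
# while scan-order DPCM pays a large jump at every row boundary.
dc = np.add.outer(np.arange(8), np.arange(8)) * 4
print(np.abs(dc_residuals_baseline(dc)).sum())   # larger total residual
print(np.abs(dc_residuals_planar(dc)).sum())     # smaller total residual
```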


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations (82% related)
Feature (computer vision): 128.2K papers, 1.7M citations (82% related)
Feature extraction: 111.8K papers, 2.1M citations (82% related)
Image processing: 229.9K papers, 3.5M citations (80% related)
Convolutional neural network: 74.7K papers, 2M citations (79% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15