Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over the lifetime, 2415 publications have been published within this topic, receiving 51110 citations. The topic is also known as: Lossless JPEG and .jls.


Papers
Proceedings ArticleDOI
01 Nov 2016
TL;DR: It is shown in this paper that, statistically, on existing image quality databases the performance of many metrics is not better than the quality factor (Q) for JPEG images, as used in the popular implementation by the IJG (Independent JPEG Group).

Abstract: JPEG is still the most widely used image compression format. Perceptual quality assessment for JPEG images has been extensively studied for the last two decades. While a large number of no-reference perceptual quality metrics have been proposed over the years, it is shown in this paper that on existing image quality databases, statistically, the performance of many of those metrics is not better than the quality factor (Q) for JPEG images, as used in the popular implementation by the IJG (Independent JPEG Group). It should be noted that Q, or the quantization table computed from Q, is almost always available at the decoder end, so the analysis is focused on no-reference or blind quality assessment metrics. This research highlights the fact that, despite the progress achieved in this area, JPEG quality assessment is still a topic worth revisiting and investigating further.
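For context on the quality factor the authors benchmark against: in the IJG implementation, Q is mapped onto the Annex K base quantization tables by a simple scaling rule. The sketch below is a minimal Python rendering of that mapping for the luminance table (mirroring libjpeg's jpeg_quality_scaling and table-clamping logic, assuming the baseline 8-bit limit of 1..255); it is illustrative context, not part of the paper.

```python
# Minimal sketch: IJG quality factor Q -> luminance quantization table,
# assuming the baseline 8-bit clamp of table entries to 1..255.

# Base luminance table from Annex K of the JPEG standard.
BASE_LUMA = [
    16, 11, 10, 16, 24, 40, 51, 61,
    12, 12, 14, 19, 26, 58, 60, 55,
    14, 13, 16, 24, 40, 57, 69, 56,
    14, 17, 22, 29, 51, 87, 80, 62,
    18, 22, 37, 56, 68, 109, 103, 77,
    24, 35, 55, 64, 81, 104, 113, 92,
    49, 64, 78, 87, 103, 121, 120, 101,
    72, 92, 95, 98, 112, 100, 103, 99,
]

def ijg_quant_table(quality: int) -> list:
    """Return the 64-entry luminance quantization table for an IJG quality factor."""
    quality = max(1, min(100, quality))
    # Q < 50 stretches the base table; Q >= 50 shrinks it.
    scale = 5000 // quality if quality < 50 else 200 - 2 * quality
    table = []
    for base in BASE_LUMA:
        q = (base * scale + 50) // 100
        table.append(max(1, min(255, q)))  # baseline JPEG limits entries to 1..255
    return table

if __name__ == "__main__":
    print(ijg_quant_table(50)[:8])  # Q = 50 reproduces the base table's first row
    print(ijg_quant_table(90)[:8])  # higher Q -> smaller steps -> finer quantization
```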

3 citations

01 Jan 2001
TL;DR: It is shown that one of the conditions for reaching minimum entropy using the MMSE is local stationarity, which makes adaptive coding feasible; this explains the effectiveness of adaptive coding, where the MMSE is pursued rather than the MEE.

Abstract: Lossless coding is widely applied in medical image compression because of its feasibility. This thesis offers two major contributions to lossless image compression: (i) the relationship between minimum-mean-squared-error (MMSE) and minimum-entropy-of-error (MEE) prediction in lossless image compression is revealed, and (ii) novel methods of improving compression rates and operation using Shape-Vector Quantization (VQ) are presented. These new schemes have a simpler implementation, greater computational efficiency, and lower memory requirements than other lossless schemes. The proposed schemes are capable of providing significant coding improvement over traditional predictive coders and adaptive predictive coders. One major goal in any lossless image compression pursuit is to minimize the MEE. Realizing this goal is more valuable in terms of performance than minimizing the MMSE. Most predictive lossless coding techniques, however, are centered on the MMSE. The relationship between MMSE and MEE prediction and the limitation of linear prediction are the backbone of the Shape-VQ-based compression schemes introduced in this thesis. The concepts of the MMSE and the MEE are presented in detail and analyzed mathematically in the thesis. It is shown that one of the conditions for reaching minimum entropy using the MMSE is local stationarity, which makes adaptive coding feasible. This explains the effectiveness of adaptive coding, where the MMSE is pursued rather than the MEE. Predictive techniques are well accepted in lossless image coding. The main advantages of predictive techniques over other coding techniques are the simplicity of the encoder
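To make the MMSE-versus-MEE comparison concrete, the sketch below computes prediction residuals with a fixed planar predictor and reports both their mean-squared error and their zeroth-order entropy. The predictor and the synthetic test image are illustrative assumptions; this is not the Shape-VQ scheme proposed in the thesis.

```python
# Illustrative sketch: a fixed 2-D linear predictor on a grayscale image,
# comparing the mean-squared error of the residual with its empirical
# (zeroth-order) entropy -- the two quantities whose relationship the
# thesis analyzes. Not the thesis's Shape-VQ scheme.
import numpy as np

def predict_residuals(img: np.ndarray) -> np.ndarray:
    """Residuals of the classic planar predictor x_hat = W + N - NW."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    pred[1:, 1:] = img[1:, :-1] + img[:-1, 1:] - img[:-1, :-1]
    pred[0, 1:] = img[0, :-1]   # first row: predict from the left neighbor
    pred[1:, 0] = img[:-1, 0]   # first column: predict from the neighbor above
    return img - pred

def empirical_entropy(samples: np.ndarray) -> float:
    """Zeroth-order entropy (bits/sample) of the sample histogram."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Smooth synthetic image: a ramp plus mild noise stands in for real data.
    img = (np.add.outer(np.arange(64), np.arange(64))
           + rng.integers(0, 4, (64, 64))).astype(np.uint8)
    e = predict_residuals(img)
    print("MSE of residual      :", float((e ** 2).mean()))
    print("entropy of residual  :", empirical_entropy(e), "bits/sample")
    print("entropy of raw image :", empirical_entropy(img), "bits/sample")
```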

3 citations

Proceedings ArticleDOI
12 Dec 2008
TL;DR: Experimental results show that the proposed approach to steganalysis, based on frequency features from DCT coefficients, works well at detecting popular DCT-based steganography algorithms.

Abstract: In this paper, we propose a new approach to steganalysis based on frequency features from DCT coefficients. This method does not require any training and can be widely used on JPEG images from various sources. We have applied our algorithm to images captured with digital cameras, images from the Internet, and images compressed from lossless formats. Experimental results show that it works well at detecting popular DCT-based steganography algorithms.
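As a rough illustration of the kind of frequency feature such a detector can use, the sketch below collects per-block DCT coefficients from an image and forms a histogram of the rounded AC values. The 8x8 blocking, the rounding, and the histogram range are illustrative choices, not the authors' exact feature set or decision rule.

```python
# Hedged sketch of a DCT-coefficient frequency feature for steganalysis:
# histogram of rounded AC block-DCT coefficients. Illustrative only.
import numpy as np
from scipy.fft import dctn

def block_dct_histogram(img: np.ndarray, bins: int = 21) -> np.ndarray:
    """Histogram of rounded AC DCT coefficients over all 8x8 blocks."""
    h, w = img.shape
    h, w = h - h % 8, w - w % 8                 # crop to a multiple of the block size
    img = img[:h, :w].astype(np.float64) - 128  # level shift as in JPEG
    coeffs = []
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            c = dctn(img[i:i + 8, j:j + 8], norm="ortho")
            c[0, 0] = 0                         # drop the DC term, keep AC modes
            coeffs.append(np.rint(c).ravel())
    coeffs = np.concatenate(coeffs)
    hist, _ = np.histogram(coeffs, bins=bins, range=(-10, 10), density=True)
    return hist  # a detector would compare this to a model of clean-image statistics

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cover = rng.integers(0, 256, (64, 64)).astype(np.uint8)
    print(block_dct_histogram(cover))
```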

3 citations

Journal Article
TL;DR: The paper describes a real-time lossless data compression algorithm for both individual-band and multi-spectral remote sensing images that uses prediction to reduce the redundancy and classical Huffman coding to reduce the average code length.

Abstract: The paper describes a real-time lossless data compression algorithm for both individual-band and multi-spectral remote sensing images. This approach uses prediction to reduce the redundancy and classical Huffman coding to reduce the average code length. Several kinds of prediction were studied and 2-D prediction was selected. Some simplifications were made in the Huffman coding for real-time use. A comparison between the coding methods is discussed. The scheme was applied, by software emulation, to the 8 bits/pixel original images of LANDSAT TM, SPOT HRV, AIR SAR and an imaging spectrometer. An average compression ratio of 2 was achieved, with the highest value of 3.1 and the lowest value of 1.2 for the radar images.
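The prediction-plus-Huffman pipeline the abstract describes can be sketched as follows, using a simple 2-D predictor (the mean of the left and upper neighbors) and a textbook Huffman construction to estimate the average code length and the resulting ratio against 8 bits/pixel. The paper's specific predictor choice and real-time simplifications are not reproduced here.

```python
# Hedged sketch of a 2-D prediction + Huffman coding pipeline; the predictor
# and test image are illustrative, not the paper's exact design.
import heapq
import itertools
import numpy as np

def residuals_2d(img: np.ndarray) -> np.ndarray:
    """Residuals of a 2-D predictor: the mean of the left and upper neighbors."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    pred[1:, 1:] = (img[1:, :-1] + img[:-1, 1:]) // 2
    pred[0, 1:] = img[0, :-1]
    pred[1:, 0] = img[:-1, 0]
    return img - pred

def huffman_code_lengths(symbols: np.ndarray) -> dict:
    """Map each symbol to its Huffman code length in bits."""
    values, counts = np.unique(symbols, return_counts=True)
    if len(values) == 1:                   # degenerate single-symbol case
        return {int(values[0]): 1}
    tiebreak = itertools.count()           # unique tie-breaker for the heap
    heap = [(int(c), next(tiebreak), [int(v)]) for v, c in zip(values, counts)]
    heapq.heapify(heap)
    lengths = {int(v): 0 for v in values}
    while len(heap) > 1:
        c1, _, s1 = heapq.heappop(heap)
        c2, _, s2 = heapq.heappop(heap)
        for v in s1 + s2:                  # every merge adds one bit to members' codes
            lengths[v] += 1
        heapq.heappush(heap, (c1 + c2, next(tiebreak), s1 + s2))
    return lengths

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = (np.add.outer(np.arange(128), np.arange(128)) // 2
           + rng.integers(0, 3, (128, 128))).astype(np.uint8)
    res = residuals_2d(img)
    lengths = huffman_code_lengths(res.ravel())
    values, counts = np.unique(res, return_counts=True)
    avg_bits = sum(lengths[int(v)] * c for v, c in zip(values, counts)) / res.size
    print(f"average code length: {avg_bits:.2f} bits/pixel, "
          f"ratio vs 8 bpp: {8 / avg_bits:.2f}")
```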

3 citations

Proceedings ArticleDOI
27 May 2008
TL;DR: An approach toward joint source-channel coding (JSCC) for JPEG compressed image transmission is developed; it utilizes universal rate-distortion characteristics, obtained experimentally, that show the sensitivity of the source encoder and decoder to channel errors.

Abstract: In this paper, we develop an approach toward joint source-channel coding (JSCC) for JPEG compressed image transmission. First, a data partition (DP) technique is employed to divide the JPEG bitstream into three groups, namely Huffman codes, DC coefficients and AC coefficients. Second, we apply rate-compatible punctured convolutional (RCPC) codes to protect the coded data according to channel conditions. The proposed scheme utilizes universal rate-distortion characteristics, which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. Simulation results demonstrate a significant improvement in the subjective and objective quality of the received images in an error-prone environment.
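The unequal-error-protection idea behind such a scheme can be sketched as below: each partition of the JPEG bitstream is assigned a different RCPC code rate, with stronger (lower-rate) codes protecting the more error-sensitive data. The partition sizes and the candidate rates in the example are illustrative assumptions, not the paper's measured rate-distortion characteristics, and the RCPC encoder itself is not implemented.

```python
# Hedged sketch of unequal error protection after data partitioning: more
# sensitive partitions get lower-rate (stronger) RCPC codes. Sizes and rates
# below are made-up illustrative numbers.
from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    size_bits: int     # payload size of this partition
    code_rate: float   # RCPC rate k/n chosen for this partition

def channel_bits(parts: list) -> float:
    """Total bits sent on the channel after channel coding each partition."""
    return sum(p.size_bits / p.code_rate for p in parts)

if __name__ == "__main__":
    # Illustrative allocation: the most sensitive data gets the lowest-rate code
    # from a rate-compatible family such as {2/3, 1/2, 1/3}.
    parts = [
        Partition("Huffman tables + headers", 2_000,   1 / 3),
        Partition("DC coefficients",          40_000,  1 / 2),
        Partition("AC coefficients",          200_000, 2 / 3),
    ]
    total_src = sum(p.size_bits for p in parts)
    total_ch = channel_bits(parts)
    print(f"source bits: {total_src}, channel bits: {total_ch:.0f}, "
          f"overall rate: {total_src / total_ch:.2f}")
```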

3 citations


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15