Topic
Lossless JPEG
About: Lossless JPEG is a research topic. Over its lifetime, 2415 publications have been published within this topic, receiving 51110 citations. The topic is also known as: Lossless JPEG & .jls.
Papers published on a yearly basis
Papers
05 Nov 2008
TL;DR: This paper proposes a new approach to analysing blocking periodicity by developing a linear dependency model of pixel differences, constructing a probability map of each pixel's membership in this model, and finally extracting a peak window from the Fourier spectrum of the probability map.
Abstract: Since the JPEG image format is a widely used image compression standard, tampering detection in JPEG images now plays an important role. The artifacts introduced by lossy JPEG compression can be seen as an inherent signature of compressed images. In this paper, we propose a new approach to analysing blocking periodicity by 1) developing a linear dependency model of pixel differences, 2) constructing a probability map of each pixel's membership in this model, and 3) finally extracting a peak window from the Fourier spectrum of the probability map. We will show that, for singly and doubly compressed images, the energy distributions of their peaks behave very differently. We exploit this property and derive statistical features from the peak windows to classify whether an image has been tampered with by cropping and recompression. Experimental results demonstrate the validity of the proposed approach.
38 citations
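The periodicity analysis above can be sketched numerically. This is a simplified illustration, not the paper's full probability-map pipeline: here the second-order pixel differences are averaged per column and the Fourier spectrum of that profile is inspected for peaks at multiples of 1/8 cycles per pixel, which is where 8 × 8 JPEG blocking shows up. The function name and the column-averaging shortcut are mine.

```python
import numpy as np

def blocking_periodicity_spectrum(img):
    """Average second-order horizontal pixel differences per column and
    return the magnitude spectrum of that profile. For a JPEG image,
    8x8 blocking produces peaks at multiples of 1/8 cycles per pixel."""
    img = img.astype(np.float64)
    # second difference: d[i] = x[i] - 2*x[i+1] + x[i+2]
    d = img[:, :-2] - 2 * img[:, 1:-1] + img[:, 2:]
    profile = np.abs(d).mean(axis=0)              # one value per column
    return np.abs(np.fft.rfft(profile - profile.mean()))
```

On a synthetic image made of constant 8 × 8 blocks, the strongest spectral peak lands exactly at the blocking frequency, e.g. bin 32 for a 256-sample profile.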
29 Oct 2008
TL;DR: Experiments show that the proposed method outperforms other methods in terms of capacity and security, and theoretical analysis of the histogram characteristics after steganography proves that PM1 used in JPEG images preserves first-order statistical properties.
Abstract: Plus-minus 1 (PM1) is an improvement on least-significant-bit (LSB) steganography techniques that not only foils typical attacks against LSB-based techniques but also provides high capacity. How to apply it to JPEG images, however, has not appeared in the literature. In this paper, PM1 steganography in JPEG images using a genetic algorithm (GA) is proposed, in which the GA is used to optimize performance, such as minimizing blockiness. A theoretical analysis of the histogram characteristics after steganography is discussed in detail, which proves that PM1 used in JPEG images preserves first-order statistical properties. Experiments show that the proposed method outperforms other methods in terms of capacity and security.
38 citations
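The core PM1 embedding rule is simple to sketch (the paper's GA-based blockiness optimization is omitted here, and the zero-skipping convention is an assumption common in JPEG steganography rather than something stated in this abstract; the function names are mine):

```python
import random

def pm1_embed(coeffs, bits, rng=None):
    """PM1 embedding in quantized DCT coefficients: if a coefficient's
    LSB already equals the message bit, leave it; otherwise add or
    subtract 1 at random. Zero coefficients are skipped."""
    rng = rng or random.Random(0)
    out = list(coeffs)
    it = iter(bits)
    for i, c in enumerate(out):
        if c == 0:
            continue
        try:
            b = next(it)
        except StopIteration:
            break
        if (c & 1) != b:
            delta = rng.choice((-1, 1))
            if c + delta == 0:        # avoid creating a zero, which
                delta = -delta        # would be skipped at extraction
            out[i] = c + delta
    return out

def pm1_extract(coeffs, n_bits):
    """Read the LSBs of the nonzero coefficients, in order."""
    return [c & 1 for c in coeffs if c != 0][:n_bits]
```

Unlike plain LSB replacement, the random ±1 choice means even and odd values are both incremented and decremented, which is why first-order (histogram) statistics are far better preserved.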
TL;DR: The proposed algorithm addresses all three types of artifacts prevalent in JPEG images: blocking, blurring around edges, and aliasing, and it enhances the quality of the image in two stages.
Abstract: Transform coding using the discrete cosine transform is one of the most popular techniques for image and video compression. However, at low bit rates, the coded images suffer from severe visual distortions. An innovative approach is proposed that deals with artifacts in JPEG-compressed images. Our algorithm addresses all three types of artifacts prevalent in JPEG images: blocking, blurring around edges, and aliasing. We enhance the quality of the image in two stages. First, we remove blocking artifacts via boundary smoothing and guided filtering. Then, we reduce blurring and aliasing around the edges via a local edge-regeneration stage. We compared the proposed algorithm with other modern JPEG artifact-removal algorithms. The results demonstrate that the proposed approach is competitive with, and in many cases outperforms, competing algorithms.
38 citations
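The boundary-smoothing idea in the first stage can be illustrated with a minimal sketch (guided filtering and the edge-regeneration stage are omitted; the pull-toward-average rule and the `strength` parameter are my simplification, not the paper's exact filter):

```python
import numpy as np

def smooth_block_boundaries(img, strength=0.5):
    """Pull the pixels on each side of every 8x8 block boundary toward
    their mutual average, attenuating the step that causes visible
    blocking. Operates on a float copy of the image."""
    out = img.astype(np.float64).copy()
    for j in range(8, out.shape[1], 8):           # vertical boundaries
        avg = (out[:, j - 1] + out[:, j]) / 2
        out[:, j - 1] += strength * (avg - out[:, j - 1])
        out[:, j] += strength * (avg - out[:, j])
    for i in range(8, out.shape[0], 8):           # horizontal boundaries
        avg = (out[i - 1, :] + out[i, :]) / 2
        out[i - 1, :] += strength * (avg - out[i - 1, :])
        out[i, :] += strength * (avg - out[i, :])
    return out
```

With `strength=0.5`, a 0-to-100 step across a block boundary becomes a gentler 25-to-75 transition; real deblocking filters additionally avoid smoothing across genuine image edges, which is what the paper's guided filtering contributes.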
24 Mar 1992
TL;DR: In this study, the Bostelmann (1974) technique is studied for use at all resolutions, whereas in arithmetic-coded JPEG lossless it is applied only at the 16-bit-per-pixel resolution.
Abstract: The JPEG lossless arithmetic coding algorithm and a predecessor algorithm called Sunset both employ adaptive arithmetic coding with the context model and parameter-reduction approach of Todd et al. The authors compare the Sunset and JPEG context models for the lossless compression of gray-scale images and derive new algorithms based on the strengths of each. The context-model and binarization-tree variations are compared in terms of their speed (the number of binary encodings required per test image) and their compression gain. In this study, the Bostelmann (1974) technique is studied for use at all resolutions, whereas in arithmetic-coded JPEG lossless it is applied only at the 16-bit-per-pixel resolution.
38 citations
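The adaptive, context-conditioned probability estimation that both Sunset and JPEG lossless arithmetic coding rely on can be sketched with simple per-context counts (a minimal illustration of the general principle only; the actual schemes use carefully reduced context sets and scaled counters, and this class name is mine):

```python
class BinaryContextModel:
    """Per-context adaptive probability estimate for binary decisions:
    each context keeps 0/1 counts, and the arithmetic coder would code
    the next bit using p(1 | context)."""

    def __init__(self):
        self.counts = {}                    # context -> [count0, count1]

    def p_one(self, ctx):
        c0, c1 = self.counts.get(ctx, [1, 1])   # Laplace-smoothed start
        return c1 / (c0 + c1)

    def update(self, ctx, bit):
        c = self.counts.setdefault(ctx, [1, 1])
        c[bit] += 1
```

The context here would be derived from neighboring, already-decoded pixels (e.g. a prediction error bucket), so that bits with similar statistics share one adaptive estimate; the "number of binary encodings per test image" compared in the paper corresponds to how many such coded decisions the binarization tree produces.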
01 Sep 2012
TL;DR: The dithering operation inevitably destroys the statistical correlations among the intrablock and interblock 8 × 8 DCT coefficients of an image, so the transition probability matrix of the DCT coefficients is employed to distinguish forged images from original JPEG-decompressed images and uncompressed ones.
Abstract: Quantization artifacts and blocking artifacts are two significant properties of JPEG-compressed images. Most related forensic techniques use such inherent properties to provide evidence of how image data was acquired and/or processed. A wise attacker, however, may perform post-operations that suppress the two artifacts and fool current forensic techniques. Recently, Stamm et al. [1] proposed a novel anti-JPEG-compression method that adds anti-forensic dither to the DCT coefficients and further reduces the blocking artifacts. In this paper, we find that the dithering operation inevitably destroys the statistical correlations among the intrablock and interblock 8 × 8 DCT coefficients of an image. From the viewpoint of JPEG steganalysis, we employ the transition probability matrix of the DCT coefficients to measure such modifications and identify forged images among original JPEG-decompressed images and uncompressed ones. On average, we obtain a detection accuracy as high as 99% on the UCID image database [2].
38 citations
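A transition probability matrix of the kind borrowed from JPEG steganalysis can be sketched as follows (a simplified, horizontal-intrablock-only illustration under my own naming; steganalysis feature sets of this family typically difference absolute coefficient values, clip to a small threshold T, and tabulate transition probabilities):

```python
import numpy as np

def transition_matrix(coeffs, T=4):
    """Markov transition probabilities of horizontally differenced,
    clipped absolute DCT coefficients. Anti-forensic dithering disturbs
    these correlations, which is what such features can detect."""
    a = np.abs(coeffs)
    d = np.clip(a[:, :-1] - a[:, 1:], -T, T)      # clipped differences
    m = np.zeros((2 * T + 1, 2 * T + 1))
    for row in d:
        for u, v in zip(row[:-1], row[1:]):       # successive pairs
            m[u + T, v + T] += 1
    s = m.sum(axis=1, keepdims=True)
    return np.divide(m, s, out=np.zeros_like(m), where=s > 0)
```

Each row of the result is a conditional distribution p(next difference | current difference); a classifier trained on these (2T+1)² values separates dithered images from genuine JPEG-decompressed and uncompressed ones.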