Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2415 publications have been published within this topic, receiving 51110 citations. The topic is also known as Lossless JPEG and .jls.


Papers
Journal ArticleDOI
TL;DR: In this article, a novel compression method based on partial differential equations, complemented by block sorting and symbol prediction, is presented and compared with the current standards, JPEG and JPEG 2000.
Abstract: In this paper, we present a novel compression method based on partial differential equations complemented by block sorting and symbol prediction. Block sorting is performed using the Burrows–Wheeler transform, while symbol prediction is performed using the context mixing method. With these transformations, the range coder is used as a lossless compression method. Objective and subjective quality evaluation of the reconstructed images illustrates the efficiency of this new compression method, which is compared with the current standards, JPEG and JPEG 2000.
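The block sorting described above uses the Burrows–Wheeler transform. As an illustration only (a minimal sketch of the transform on a single block, not the authors' codec, which also involves context mixing and range coding):

```python
# Minimal sketch of the Burrows–Wheeler transform used for block sorting.
# Illustration only: the paper's codec pairs this with context-mixing
# prediction and a range coder, which are not shown here.
def bwt(block: bytes) -> tuple[bytes, int]:
    """Return the last column of the sorted rotations and the primary index."""
    n = len(block)
    # Sort all cyclic rotations of the block lexicographically.
    rotations = sorted(range(n), key=lambda i: block[i:] + block[:i])
    last_column = bytes(block[(i - 1) % n] for i in rotations)
    return last_column, rotations.index(0)

transformed, idx = bwt(b"banana")
print(transformed, idx)  # b'nnbaaa' 3 -- similar symbols end up grouped together
```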

1 citation

Journal ArticleDOI
23 Mar 2023, Sensors
TL;DR: In this article, the authors evaluated the effects of JPEG compression on image classification using the Vision Transformer (ViT) and showed that classification accuracy can be maintained at over 98% with a more than 90% reduction in the amount of image data.
Abstract: This paper evaluates the effects of JPEG compression on image classification using the Vision Transformer (ViT). In recent years, many studies have been carried out to classify images in the encrypted domain for privacy preservation. Previously, the authors proposed an image classification method that encrypts both a trained ViT model and test images. Here, an encryption-then-compression system was employed to encrypt the test images, and the ViT model was trained beforehand on plain images. The classification accuracy of the previous method was exactly equal to that obtained without any encryption of the trained ViT model and test images. However, even though the encrypted test images are compressible, the practical effects of JPEG, a typical lossy compression method, had not been investigated. In this paper, we extend our previous method by compressing the encrypted test images with JPEG and verify the classification accuracy for the compressed encrypted images. Through our experiments, we confirm that the amount of data in the encrypted images can be significantly reduced by JPEG compression, while the classification accuracy of the compressed encrypted images is largely preserved. For example, when the quality factor is set to 85, the classification accuracy can be maintained at over 98% with a more than 90% reduction in the amount of image data. Additionally, the effectiveness of JPEG compression is demonstrated through comparison with linear quantization. To the best of our knowledge, this is the first study to classify JPEG-compressed encrypted images without sacrificing high accuracy. We conclude that compressed encrypted images can be classified without degrading accuracy.
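As a rough illustration of the data reduction quoted above, the sketch below (assuming the Pillow library; the file name is a placeholder and the paper's encryption step is omitted) re-encodes an image as JPEG at quality factor 85 and reports the size reduction relative to raw RGB data:

```python
# Sketch (assumes Pillow): measure the data reduction from JPEG encoding at a
# given quality factor. The encryption-then-compression step from the paper is
# not reproduced; 'test_image.png' is a placeholder file name.
import io
from PIL import Image

def jpeg_size_reduction(path: str, quality: int = 85) -> float:
    """Fractional size reduction of JPEG at `quality` versus raw RGB pixels."""
    img = Image.open(path).convert("RGB")
    raw_bytes = img.width * img.height * 3           # uncompressed RGB size
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)    # lossy JPEG encoding
    return 1.0 - buf.getbuffer().nbytes / raw_bytes

print(f"size reduction at QF 85: {jpeg_size_reduction('test_image.png'):.1%}")
```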

1 citation

Journal ArticleDOI
TL;DR: The proposed MCFTIR method significantly outperforms existing techniques in terms of Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE), especially for small images.
Abstract: Background: Existing JPEG error analysis schemes do not give satisfactory results, particularly when the duplicated area is small. Region duplication is a simple and effective way to produce digital image forgeries: a contiguous segment of pixels in an image is copied, possibly subjected to geometric and illumination transformations, and pasted to a different location in the same image. Methods: In this work, a JPEG error analysis scheme is introduced for reliable recognition of duplicated and distorted areas in JPEG digital image forensics. A new Multi-directional Curvelet Transform with Fourier Transform matching Invariant Rotation (MCFTIR) region duplication detection scheme is presented to identify duplicated regions in JPEG images. The scheme first extracts overlapping blocks of a JPEG image and organizes them according to the statistics of multiple curvelet sub-bands. In the second phase, the number of candidate block pairs is significantly reduced using the spatial distance between each pair of blocks. For images with duplicated regions removed, the effects of these errors on single and double JPEG compression are analyzed theoretically across five major phases: a Shape-Preserving Image Resizing (SPIR) scheme performs the image resizing; added noise is removed with a Hybrid Non-Local Means Filtering (HNLMF) denoising framework; image compression via Discrete Cosine Transform-Singular Value Decomposition (DCT-SVD) is computed for single and double compression; images are quantized with numerous quantization matrices; and the quantization results are estimated with a Mamdani-model-based Adaptive Neural Fuzzy Inference System (MANFIS) to detect the quantization table of a JPEG image. Findings: The proposed MCFTIR method significantly outperforms existing techniques in terms of Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE), especially for small images. It can consistently identify JPEG image blocks as small as 8x8 pixels compressed with quality factors as high as 98. This performance is significant for analyzing and locating small tampered regions inside a composite image.
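For reference, the MSE and PSNR metrics used in the evaluation can be computed as in the sketch below (assumes NumPy; this is not the MCFTIR detector itself):

```python
# Sketch (assumes NumPy) of the MSE and PSNR quality metrics reported in the
# paper; it does not implement the MCFTIR duplication detector.
import numpy as np

def mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Mean squared error between two images of identical shape."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB (higher means closer to the original)."""
    error = mse(original, reconstructed)
    return float("inf") if error == 0 else 10.0 * np.log10(peak ** 2 / error)
```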

1 citation

Proceedings ArticleDOI
22 Oct 2007
TL;DR: The proposed method implements motion compensation through a two-stage context adaptive linear predictor that is robust to the local intensity changes and noise that often degrade these image sequences, and it provides lossless and near-lossless quality.
Abstract: This paper presents a context adaptive coding method for image sequences in hemodynamics. The proposed method implements motion compensation through a two-stage context adaptive linear predictor. It is robust to the local intensity changes and the noise that often degrade these image sequences, and it provides lossless and near-lossless quality. Our preliminary experiments with lossless compression of 12 bits/pixel studies indicate that, potentially, our approach can perform 3.8%, 2% and 1.6% better than JPEG-2000, JPEG-LS and the method proposed in [1], respectively. The performance tends to improve for near-lossless compression.
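For context on context-adaptive prediction, the sketch below shows the median edge detector (MED) predictor used by JPEG-LS, one of the baselines cited above; it is not the paper's two-stage motion-compensated predictor:

```python
# Sketch of the MED predictor from JPEG-LS (a baseline in the comparison);
# the paper's own two-stage, motion-compensated predictor is not shown.
def med_predict(a: int, b: int, c: int) -> int:
    """Predict a pixel from its left (a), upper (b) and upper-left (c) neighbours."""
    if c >= max(a, b):
        return min(a, b)   # likely edge: take the smaller neighbour
    if c <= min(a, b):
        return max(a, b)   # likely edge: take the larger neighbour
    return a + b - c       # smooth region: planar prediction

# A lossless coder would entropy-code the residual (actual - prediction).
actual = 130
prediction = med_predict(128, 131, 127)
print(prediction, actual - prediction)  # 131 -1
```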

1 citation


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years:
Year  Papers
2023  21
2022  40
2021  5
2020  2
2019  8
2018  15