Showing papers on "Lossless JPEG" published in 2023


Journal ArticleDOI
TL;DR: Li et al. propose an end-to-end robust data hiding scheme for JPEG images, where embedding and extraction of secret messages on the quantized discrete cosine transform (DCT) coefficients are implemented by the bi-directional processes of an invertible neural network (INN).
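
A toy illustration of this bi-directional principle: the NumPy sketch below (an illustrative assumption, not the authors' architecture) uses an additive coupling layer, the basic invertible building block of INNs. Run forward it embeds a message branch; run in reverse it recovers the message exactly.

```python
# Additive coupling layer: forward = embed, inverse = extract.
# f() is an arbitrary sub-network; only the coupling structure must be
# invertible, not f itself. Tensors are stand-ins, not real DCT data.
import numpy as np

def f(x):
    return np.tanh(x) * 2.0

def forward(cover, message):
    # The cover branch passes through; the message branch is shifted
    # by a function of the cover branch.
    return cover, message + f(cover)

def inverse(stego_cover, stego_msg):
    # The inverse pass subtracts the same shift, recovering the message.
    return stego_msg - f(stego_cover)

cover = np.random.randn(8, 8)      # stand-in for a quantized DCT block
message = np.random.randn(8, 8)
c, m = forward(cover, message)
assert np.allclose(inverse(c, m), message)   # lossless round trip
```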

1 citation



Journal ArticleDOI
TL;DR: In this article, a novel compression method based on partial differential equations, complemented by block sorting and symbol prediction, is presented and compared with the current standards, JPEG and JPEG 2000.
Abstract: In this paper, we present a novel compression method based on partial differential equations, complemented by block sorting and symbol prediction. Block sorting is performed using the Burrows–Wheeler transform, while symbol prediction is performed using the context mixing method. After these transformations, a range coder is used as the lossless compression method. The objective and subjective quality evaluation of the reconstructed image illustrates the efficiency of this new compression method, which is compared with the current standards, JPEG and JPEG 2000.
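
The block-sorting stage the paper relies on is the Burrows–Wheeler transform; the sketch below is a naive illustration (an O(n² log n) rotation sort, not the paper's implementation) showing the forward and inverse transform on a short byte string.

```python
# Naive Burrows-Wheeler transform and its inverse. The rotation sort is
# for illustration; real coders use suffix-array constructions.
def bwt(block: bytes):
    n = len(block)
    rotations = sorted(range(n), key=lambda i: block[i:] + block[:i])
    last_column = bytes(block[(i - 1) % n] for i in rotations)
    return last_column, rotations.index(0)   # index of the original row

def ibwt(last_column: bytes, primary: int):
    # Repeatedly prepend the last column and re-sort (naive but exact).
    n = len(last_column)
    table = [b""] * n
    for _ in range(n):
        table = sorted(last_column[i:i + 1] + table[i] for i in range(n))
    return table[primary]

data = b"banana_band"
last, primary = bwt(data)
assert ibwt(last, primary) == data    # the transform is fully reversible
print(last)   # similar symbols cluster, helping the prediction stage
```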

1 citation


Journal ArticleDOI
23 Mar 2023 - Sensors
TL;DR: In this article, the authors evaluated the effects of JPEG compression on image classification using the Vision Transformer (ViT) and showed that the classification accuracy can be maintained at over 98% with a more than 90% reduction in the amount of image data.
Abstract: This paper evaluates the effects of JPEG compression on image classification using the Vision Transformer (ViT). In recent years, many studies have been carried out to classify images in the encrypted domain for privacy preservation. Previously, the authors proposed an image classification method that encrypts both a trained ViT model and test images. Here, an encryption-then-compression system was employed to encrypt the test images, and the ViT model was preliminarily trained on plain images. The classification accuracy of the previous method was exactly equal to that without any encryption of the trained ViT model and test images. However, even though the encrypted test images are compressible, the practical effects of JPEG, a typical lossy compression method, had not been investigated so far. In this paper, we extend our previous method by compressing the encrypted test images with JPEG and verify the classification accuracy for the compressed encrypted images. Through our experiments, we confirm that the amount of data in the encrypted images can be significantly reduced by JPEG compression, while the classification accuracy of the compressed encrypted images is highly preserved. For example, when the quality factor is set to 85, we show that the classification accuracy can be maintained at over 98% with a more than 90% reduction in the amount of image data. Additionally, the effectiveness of JPEG compression is demonstrated through comparison with linear quantization. To the best of our knowledge, this is the first study to classify JPEG-compressed encrypted images without sacrificing high accuracy. We conclude that compressed encrypted images can be classified without degrading accuracy.
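
A rough sense of the reported rate saving can be had with Pillow alone; the sketch below is a hedged stand-in (a smooth synthetic gradient rather than the paper's encrypted ViT test set) that encodes at quality factor 85 and compares the JPEG size against raw RGB.

```python
# Encode at quality factor 85 and compare against raw RGB size. A smooth
# synthetic gradient stands in for the paper's encrypted test images.
import io
import numpy as np
from PIL import Image

row = np.linspace(0, 255, 224).astype(np.uint8)
img = Image.fromarray(np.stack([np.tile(row, (224, 1))] * 3, axis=-1))
raw_bytes = 224 * 224 * 3

buf = io.BytesIO()
img.save(buf, format="JPEG", quality=85)   # QF = 85 as in the experiments
jpeg_bytes = buf.getbuffer().nbytes

print(f"reduction: {100 * (1 - jpeg_bytes / raw_bytes):.1f}%")
```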

1 citation


Journal ArticleDOI
01 Jan 2023
TL;DR: In this paper, a hybrid coding framework for the lossless recompression of JPEG images (LLJPEG) using transform-domain intra prediction is proposed, including block partition and intra prediction, transform and quantization, and entropy coding.
Abstract: JPEG, which was developed 30 years ago, is the most widely used image coding format, especially favored by resource-deficient devices due to its simplicity and efficiency. With the evolution of the Internet and the popularity of mobile devices, a huge number of user-generated JPEG images are uploaded to social media sites like Facebook and Flickr or stored on personal computers and notebooks, which leads to an increase in storage cost. However, the performance of JPEG is far from that of state-of-the-art coding methods. Therefore, the lossless recompression of JPEG images urgently needs to be studied, as it can further reduce storage cost while maintaining image fidelity. In this paper, a hybrid coding framework for the lossless recompression of JPEG images (LLJPEG) using transform-domain intra prediction is proposed, including block partition and intra prediction, transform and quantization, and entropy coding. Specifically, in LLJPEG, intra prediction is first used to obtain a predicted block. The predicted block is then transformed by the DCT and quantized to obtain the predicted coefficients. After that, the predicted coefficients are subtracted from the original coefficients to get the DCT coefficient residuals. Finally, the DCT residuals are entropy coded. In LLJPEG, some new coding tools are proposed for intra prediction, and the entropy coding is redesigned. The experiments show that LLJPEG can reduce the storage space by 29.43% and 26.40% on the Kodak and DIV2K datasets respectively without any loss for JPEG images, while maintaining low decoding complexity.
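
The transform-domain residual step can be sketched in a few lines; the snippet below is a simplified stand-in (SciPy's DCT with a flat quantization step rather than JPEG's quantization tables, and a synthetic predicted block in place of LLJPEG's intra prediction) showing why coding only the coefficient residual remains lossless.

```python
# Transform-domain residual coding in miniature. scipy's DCT with a flat
# quantization step q stands in for JPEG's quantization tables, and a
# synthetic "predicted" block stands in for LLJPEG's intra prediction.
import numpy as np
from scipy.fft import dctn

def quantized_dct(block, q=16):
    coeff = dctn(block.astype(np.float64), norm="ortho")
    return np.round(coeff / q).astype(np.int32)

original = np.random.randint(0, 256, (8, 8))
predicted = np.clip(original + np.random.randint(-3, 4, (8, 8)), 0, 255)

# Only this (typically small) residual would be entropy coded.
residual = quantized_dct(original) - quantized_dct(predicted)

# The decoder adds the residual back: the original coefficients are
# recovered exactly, so the recompression is lossless.
assert np.array_equal(quantized_dct(predicted) + residual,
                      quantized_dct(original))
```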

1 citation


Journal ArticleDOI
TL;DR: In this article, the authors present a methodology to develop a JPEG Snack Player, which renders media objects on the background JPEG file according to the instructions in the JPEG Snack file.
Abstract: The advancement in mobile communication and technologies has led to a daily increase in the usage of short-form digital content. This short-form content is mainly based on images, which urged the Joint Photographic Experts Group (JPEG) to introduce a novel international standard, JPEG Snack (ISO/IEC IS 19566-8). In JPEG Snack, the multimedia content is embedded into a main background JPEG file, and the resulting JPEG Snack file is saved and transmitted as a .jpg file. If someone does not have a JPEG Snack Player, their device decoder will treat it as a regular JPEG file and display the background image only. As the standard has been proposed recently, a JPEG Snack Player is needed. In this article, we present a methodology to develop a JPEG Snack Player. The JPEG Snack Player uses a JPEG Snack decoder and renders media objects on the background JPEG file according to the instructions in the JPEG Snack file. We also present some results and computational complexity metrics for the JPEG Snack Player.
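
The backward compatibility described above rests on plain JPEG decoders ignoring anything beyond the image data they understand. As a rough illustration only (not the actual JPEG Snack container, which embeds content in standardized box structures inside the file rather than by raw appending), the sketch below tacks a hypothetical payload onto an encoded JPEG and shows a regular decoder still rendering only the background.

```python
# Append a payload after an encoded JPEG: a plain decoder stops at the
# end-of-image marker and still shows only the background. The payload
# and its placement are hypothetical, for illustration only.
import io
import numpy as np
from PIL import Image

background = Image.fromarray(np.full((64, 64, 3), 200, dtype=np.uint8))
buf = io.BytesIO()
background.save(buf, format="JPEG")
snack_like = buf.getvalue() + b"\x00hypothetical media payload"

decoded = Image.open(io.BytesIO(snack_like))
decoded.load()                 # decodes fine; trailing payload is ignored
print(decoded.size)            # (64, 64): background image only
```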

Proceedings ArticleDOI
27 Jun 2023
TL;DR: Based on the features of images generated by the same fixed surveillance camera, a lossless JPEG recompression method built on CABAC pre-coding, residual coefficients within a JPEG image group, and simplified context prediction is proposed.
Abstract: As the number of images used on the Internet increases, how to store and transmit these images becomes a big challenge. JPEG, the most widely used image compression format on the Internet, is often applied to picture compression. However, using JPEG alone to compress images is no longer enough. Hence, some methods use improved entropy coding to further recompress JPEG images losslessly, or process the images in the DCT domain for lossy recompression. These methods are useful and work for various images, but there is no special design for fixed surveillance applications. Exploiting the features of images generated by the same fixed surveillance camera, we propose a lossless JPEG recompression method based on CABAC pre-coding, residual coefficients within a JPEG image group, and simplified context prediction. With a slight reduction in decoding time and a slight increase in encoding time, an average bit saving of 27% is achieved in our experiments.
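
The core inter-image idea can be sketched simply: for a static camera, co-located quantized DCT coefficients barely change between images of a group, so residuals against a reference image are small-magnitude symbols that entropy-code cheaply. The snippet below is a hedged stand-in with synthetic coefficient blocks; CABAC pre-coding and the simplified context model are omitted.

```python
# Co-located quantized DCT blocks from a static camera barely change
# between images, so residuals against a reference are small symbols.
# Blocks here are synthetic stand-ins for real coefficient data.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.integers(-64, 64, size=(8, 8))            # image k
current = reference + rng.integers(-2, 3, size=(8, 8))    # image k+1, same scene

residual = current - reference
assert np.array_equal(reference + residual, current)      # lossless
print("mean |coeff|:", np.abs(current).mean(),
      "mean |residual|:", np.abs(residual).mean())
```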



Posted ContentDOI
11 Jan 2023
TL;DR: In this article, a simple invertible extension for JPEG 2000 is proposed that reduces the file size for lossless coding of the highpass band by 0.8% on average, with a peak rate saving of 1.1%.
Abstract: Lossless image coding is a crucial task, especially in the medical area, e.g., for volumes from Computed Tomography or Magnetic Resonance Tomography. Besides lossless coding, compensated wavelet lifting offers a scalable representation of such huge volumes. While compensation methods increase the details in the lowpass band, they also vary the characteristics of the wavelet coefficients, so an adaptation of the coefficient coder should be considered. We propose a simple invertible extension for JPEG 2000 that can reduce the file size for lossless coding of the highpass band by 0.8% on average, with a peak rate saving of 1.1%.
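
For context, lossless JPEG 2000 builds on integer lifting, which is invertible by construction; the sketch below shows one LeGall 5/3 lifting step in NumPy (with periodic boundary handling via np.roll as a simplifying assumption, where the standard uses symmetric extension), not the paper's compensation-aware extension.

```python
# One integer LeGall 5/3 lifting step (the lossless JPEG 2000 wavelet).
# np.roll gives periodic boundary handling, a simplification; the
# standard uses symmetric extension.
import numpy as np

def lift_53(x):
    even, odd = x[0::2].copy(), x[1::2].copy()
    odd -= (even + np.roll(even, -1)) // 2        # predict -> highpass
    even += (odd + np.roll(odd, 1) + 2) // 4      # update  -> lowpass
    return even, odd

def unlift_53(low, high):
    even = low - (high + np.roll(high, 1) + 2) // 4   # undo update
    odd = high + (even + np.roll(even, -1)) // 2      # undo predict
    x = np.empty(low.size + high.size, dtype=low.dtype)
    x[0::2], x[1::2] = even, odd
    return x

signal = np.random.default_rng(1).integers(0, 4096, 64)
low, high = lift_53(signal)
assert np.array_equal(unlift_53(low, high), signal)   # exactly invertible
```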

Posted ContentDOI
24 Feb 2023
TL;DR: In this article, the effect of lossy image compression on a state-of-the-art face recognition model, and on multiple face image quality assessment models, was investigated over a range of specific image target sizes.
Abstract: Lossy face image compression can degrade the image quality and the utility for the purpose of face recognition. This work investigates the effect of lossy image compression on a state-of-the-art face recognition model, and on multiple face image quality assessment models. The analysis is conducted over a range of specific image target sizes. Four compression types are considered, namely JPEG, JPEG 2000, downscaled PNG, and notably the new JPEG XL format. Frontal color images from the ColorFERET database were used in a Region Of Interest (ROI) variant and a portrait variant. We primarily conclude that JPEG XL allows for superior mean and worst-case face recognition performance, especially at lower target sizes below approximately 5 kB for the ROI variant, while there appears to be no critical advantage among the compression types at higher target sizes. Quality assessments from modern models correlate well overall with the compression effect on face recognition performance.
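
Evaluating at a fixed target size implies searching the codec's quality parameter for the largest setting that still fits the byte budget; the sketch below shows one hedged way to do this for JPEG with Pillow, using a synthetic image in place of ColorFERET frontal images (the recognition and quality models are out of scope).

```python
# Binary-search the JPEG quality factor for the largest setting that
# fits a byte budget (e.g. the ~5 kB operating point in the paper).
import io
import numpy as np
from PIL import Image

def encode_at_target(img, target_bytes):
    lo, hi, best = 1, 95, None
    while lo <= hi:
        q = (lo + hi) // 2
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        if buf.getbuffer().nbytes <= target_bytes:
            best, lo = buf.getvalue(), q + 1   # fits: try higher quality
        else:
            hi = q - 1                         # too big: lower quality
    return best

row = np.linspace(0, 255, 256).astype(np.uint8)
face = Image.fromarray(np.stack([np.tile(row, (256, 1))] * 3, axis=-1))
payload = encode_at_target(face, target_bytes=5000)
print("encoded size:", None if payload is None else len(payload))
```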

Journal ArticleDOI
TL;DR: In this article, a secret JPEG image sharing approach is proposed, where Shamir's secret sharing scheme over Galois fields is used during the JPEG Huffman coding step, ensuring the visual security of the secret image in the compressed domain.
Abstract: With the rise of exchanges over the cloud and social networks, JPEG images have taken an important place in worldwide image transmission and storage. In order to avoid security breaches and combat threats on the Internet, numerous JPEG image security methods have been proposed in both the academic and industrial communities. Encryption methods have been specifically designed to make JPEG images visually secure using so-called crypto-compression methods. The drawback of crypto-compression is that it depends on only one secret key: if this key is lost, the entire content of the secret original image is lost with it. In this paper, we propose a secret JPEG image sharing approach. Shamir's secret sharing scheme over Galois fields is used during the JPEG Huffman coding step, ensuring the visual security of the secret image in the compressed domain while solving the issue of secret key loss. In addition, we also describe an eco-friendly scenario dealing with a public shared JPEG image. In this scenario, we can obtain compressed shares because there is no need to duplicate the redundant information. According to our results, our approach is fully format compliant and size preserving when compared to a standard JPEG compression of the secret original image, while ensuring the visual security of its content.
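
For intuition, Shamir's scheme hides a secret as the constant term of a random polynomial over a finite field and hands out point evaluations as shares; the sketch below is a toy (k, n) = (2, 3) byte-sharing example over the prime field GF(257), an illustrative choice rather than the exact field or the Huffman-coding integration the paper uses.

```python
# Toy (k, n) = (2, 3) Shamir sharing of a single byte over GF(257).
import random

P = 257  # prime field, chosen here for simplicity of arithmetic

def share_byte(secret, k=2, n=3):
    # Random polynomial with the secret as constant term; shares are
    # point evaluations at x = 1..n.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = share_byte(0x6A)                  # e.g. one Huffman-coded byte
assert reconstruct(shares[:2]) == 0x6A     # any 2 of the 3 shares suffice
assert reconstruct(shares[1:]) == 0x6A
```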

Journal ArticleDOI
01 Mar 2023 - Entropy
TL;DR: In this article, a new approach for lossless raster image compression employing interpolative coding was proposed, which can be implemented in less than 60 lines of programming code for the coder and 60 lines for the decoder.
Abstract: A new approach is proposed for lossless raster image compression employing interpolative coding. A new multifunction prediction scheme is presented first. Then, interpolative coding, which has not been applied frequently for image compression, is explained briefly, and a simplification of the original approach is introduced. It is determined that the JPEG LS predictor reduces the information entropy slightly better than the multifunction approach. Furthermore, interpolative coding was moderately more efficient than the most frequently used arithmetic coding. Finally, our compression pipeline is compared against JPEG LS, JPEG 2000 in lossless mode, and PNG using 24 standard grayscale benchmark images. JPEG LS turned out to be the most efficient, followed by JPEG 2000, while our approach using simplified interpolative coding was moderately better than PNG. The implementation of the proposed encoder is extremely simple: both the coder and the decoder can each be written in less than 60 lines of programming code, as demonstrated in the given pseudocodes.
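
The JPEG LS predictor mentioned above is the median edge detector (MED); for reference, the sketch below implements it directly from its standard definition, with a, b, and c denoting the left, above, and above-left neighbors of the current pixel.

```python
# JPEG-LS median edge detector (MED) predictor.
# a = left, b = above, c = above-left neighbor of the current pixel.
def med_predict(a, b, c):
    if c >= max(a, b):
        return min(a, b)     # edge: predict from the smaller neighbor
    if c <= min(a, b):
        return max(a, b)     # edge: predict from the larger neighbor
    return a + b - c         # smooth region: planar prediction

# Example: a strong edge between the left and above neighbors.
print(med_predict(a=200, b=50, c=210))   # -> 50
```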

Posted ContentDOI
05 Mar 2023
TL;DR: In this article, an autoencoder-like architecture is designed based on weight-shared blocks to realize entropy modeling of grouped DCT coefficients and independently compress the priors.
Abstract: JPEG images can be further compressed to enhance the storage and transmission of large-scale image datasets. Existing learned lossless compressors for RGB images cannot be transferred well to JPEG images due to the distinct distributions of DCT coefficients and raw pixels. In this paper, we propose a novel framework for learned lossless compression of JPEG images that achieves end-to-end optimized prediction of the distribution of decoded DCT coefficients. To enable learning in the frequency domain, DCT coefficients are partitioned into groups to utilize implicit local redundancy. An autoencoder-like architecture is designed based on weight-shared blocks to realize entropy modeling of grouped DCT coefficients and independently compress the priors. We thus attempt to realize learned lossless compression of JPEG images in the frequency domain. Experimental results demonstrate that the proposed framework achieves superior or comparable performance in comparison to the most recent lossless compressors with handcrafted context modeling for JPEG images.
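
The grouping step can be pictured as rearranging the decoded plane of 8x8 coefficient blocks into 64 per-frequency channels, each with its own statistics for the entropy model to learn; the sketch below does this reshaping on a synthetic stand-in plane (the learned autoencoder-style model itself is not reproduced).

```python
# Rearrange a plane of 8x8 quantized DCT blocks into 64 per-frequency
# channels. The random plane is a stand-in; on a real JPEG plane the DC
# channel (index 0) has very different statistics from high-frequency
# channels, which is what a grouped entropy model exploits.
import numpy as np

coeffs = np.random.randint(-128, 128, size=(512, 512))   # stand-in DCT plane
h, w = coeffs.shape
blocks = coeffs.reshape(h // 8, 8, w // 8, 8).transpose(0, 2, 1, 3)
grouped = blocks.reshape(h // 8, w // 8, 64).transpose(2, 0, 1)

print(grouped.shape)   # (64, 64, 64): one 64x64 spatial map per frequency
```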

Journal ArticleDOI
TL;DR: Wang et al. propose a nonlinear inverse transform network (iTNet) to learn the nonlinear mapping from DCT coefficients to their corresponding original pixels, in contrast to the linear mapping in previous DCT networks.
Abstract: JPEG is the most widely used image compression format, especially with the popularity of mobile and portable devices. However, the quality of a decoded JPEG image is usually degraded by compression artifacts such as the blocking effect and ringing effect, especially at low bit rates. Recently, some convolutional neural network (CNN) based methods have been designed to solve this problem. These methods treat the problem as post-processing and only add a CNN-based post-processing network after a JPEG decoder to improve image quality. In this paper, JPEG decoding with a nonlinear inverse transform network and progressive recurrent residual network (dubbed JDNet) is proposed. JDNet can reconstruct JPEG images of different quality factors (QF) with only one model. In JDNet, first, the CNN-based inverse transform network (iTNet) is proposed to learn the nonlinear mapping from DCT coefficients to their corresponding original pixels, in contrast to the linear mapping in previous DCT networks. iTNet can reduce error propagation during the inverse DCT and obtain more accurate reconstructions. Furthermore, iTNet can be combined with any JPEG post-processing method to improve its performance. Second, the progressive recurrent residual network (PRRN) is proposed for local feature extraction in the designed post-processing network, which utilizes local and nonlocal similarities in multi-scale space (LNLMS) to further enhance decoded image quality. The experimental results show that compared with JPEG, ARCNN, DnCNN and STRRN, the average gains of JDNet are 1.88 dB, 0.63 dB, 0.35 dB and 0.24 dB on the Live1 dataset, 2.61 dB, 1.11 dB, 0.69 dB and 0.29 dB on the Urban100 dataset, and 1.91 dB, 0.69 dB, 0.37 dB and 0.13 dB on the BSD500 dataset, respectively.
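
As a minimal sketch of the iTNet idea, rather than its actual architecture, the snippet below replaces the fixed linear inverse DCT on 8x8 blocks with a small learnable nonlinear mapping in PyTorch; the layer sizes are illustrative assumptions.

```python
# A tiny learnable stand-in for the fixed linear inverse DCT: map each
# block's 64 coefficients to 64 pixels through a small nonlinear net.
import torch
import torch.nn as nn

class TinyInverseTransform(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, 64))

    def forward(self, coeff_blocks):      # (N, 64) coefficient vectors
        return self.net(coeff_blocks)

model = TinyInverseTransform()
pixels = model(torch.randn(16, 64))       # 16 blocks of 8x8 coefficients
print(pixels.shape)                       # torch.Size([16, 64])
```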


Journal ArticleDOI
TL;DR: Wang et al. propose a Multivariant Mixture distribution Channel-conditioning model (MMCC) in their network architecture to improve the performance of lossless image compression.
Abstract: Lossless image compression is an important research field in image compression. Recently, learning-based lossless image compression methods have achieved impressive performance compared with traditional lossless methods such as WebP, JPEG 2000, and FLIF. The aim of lossless image compression algorithms is to use shorter codelengths to represent images. To encode an image with fewer bytes, eliminating the redundancies among the pixels in the image is highly important. Hence, in this paper, we explore combining an autoregressive model for the raw images with the proposed end-to-end lossless architecture to enhance performance. Furthermore, inspired by the successful achievements of channel-conditioning models, we propose a Multivariant Mixture distribution Channel-conditioning model (MMCC) in our network architecture to boost performance. The experimental results show that our approach outperforms most classical lossless compression methods and existing learning-based lossless methods.
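
A channel-conditioning entropy model ultimately emits parameters of a discretized mixture distribution whose probabilities determine the codelength; the sketch below evaluates a discretized logistic mixture for one integer pixel value, with fixed toy parameters standing in for network outputs (the paper's exact multivariant formulation is not reproduced).

```python
# Discretized logistic mixture likelihood for one integer pixel value.
# The weights/means/scales are fixed toy values standing in for the
# parameters a conditioning network would output.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discretized_logistic_mixture_pmf(x, weights, means, scales):
    # P(X = x) for integer x, integrating each logistic over [x-0.5, x+0.5].
    upper = sigmoid((x + 0.5 - means) / scales)
    lower = sigmoid((x - 0.5 - means) / scales)
    return float(np.sum(weights * (upper - lower)))

weights = np.array([0.6, 0.4])     # mixture weights (sum to 1)
means = np.array([90.0, 200.0])
scales = np.array([6.0, 12.0])

p = discretized_logistic_mixture_pmf(92, weights, means, scales)
print(f"pmf = {p:.4f}, ideal codelength = {-np.log2(p):.2f} bits")
```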


Journal ArticleDOI
TL;DR: In this paper, the authors focused on hierarchical lossless video compression methods that consist of two layers, where the enhancement layer (EL) codes the error information produced by the transformation and quantization.
Abstract: Several lossless video compression methods have been developed and published in the literature. The authors focus on hierarchical lossless video compression methods that consist of two layers: the enhancement layer (EL) codes the error information produced by the transformation and quantization, while the base layer (BL) contains the common lossy coding chain of the H.264/AVC standard. They integrated some features into two-layer lossless video compression in order to enhance its performance. The simulation results demonstrate that, in comparison to earlier work, the approach reduces the total bits of the coded sequence.
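
The two-layer principle reduces to a simple identity: a lossy base-layer reconstruction plus an enhancement-layer residual reproduces the input exactly. The sketch below illustrates this with coarse quantization standing in for the H.264/AVC lossy chain.

```python
# Two-layer lossless coding in miniature: a lossy base layer (coarse
# quantization standing in for the H.264/AVC chain) plus an enhancement
# layer carrying the exact residual reconstructs the frame losslessly.
import numpy as np

frame = np.random.randint(0, 256, (4, 4))

q = 8
bl = (frame // q) * q          # base-layer (BL) lossy reconstruction
el = frame - bl                # enhancement layer (EL): coded residual

assert np.array_equal(bl + el, frame)   # the two layers together are lossless
```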


Journal ArticleDOI
TL;DR: Wang et al. propose a high-capacity and robust JPEG steganography method based on adversarial training, called HRJS, which implements an end-to-end framework in the JPEG domain for the first time.
Abstract: JPEG steganography has become a research hotspot in the field of information hiding. However, the capacity of conventional JPEG steganography methods can hardly meet the requirements of high-capacity application scenarios, and such methods cannot extract secret messages accurately after JPEG compression. To mitigate these problems, we propose a high-capacity and robust JPEG steganography method based on adversarial training, called HRJS, which implements an end-to-end framework in the JPEG domain for the first time. The encoder is responsible for embedding the secret message, while the decoder reconstructs the original secret message. To enhance robustness, an attack module forces the neural network to automatically learn how to correctly recover the secret message after an attack. Experimental results show that our method achieves near-100% decoding accuracy against JPEG_50 compression at a 1/3 bits per channel (bpc) payload while preserving the imperceptibility of the stego image. Compared with conventional JPEG steganography methods, the proposed method remains feasible at high capacity (e.g., 1 bpc) and has an obvious advantage in terms of robustness against JPEG compression at the same time.
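
The encoder-attack-decoder training structure can be sketched end to end; the snippet below is a toy PyTorch stand-in (linear modules and a straight-through quantizer in place of real networks and JPEG compression, not the HRJS architecture) showing how the in-the-loop attack forces the decoder to recover the message after distortion.

```python
# Toy encoder -> attack -> decoder loop with an in-the-loop "attack".
import torch
import torch.nn as nn

encoder = nn.Linear(64 + 16, 64)    # cover features + message -> stego
decoder = nn.Linear(64, 16)         # attacked stego -> message logits

def attack(stego):
    # Quantize in the forward pass, pass gradients straight through,
    # crudely mimicking a lossy (JPEG-like) distortion.
    return stego + (torch.round(stego * 4) / 4 - stego).detach()

opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
cover = torch.randn(32, 64)
message = torch.randint(0, 2, (32, 16)).float()

for _ in range(200):
    stego = encoder(torch.cat([cover, message], dim=1))
    rec = decoder(attack(stego))          # decode after the attack
    loss = nn.functional.binary_cross_entropy_with_logits(rec, message) \
        + 0.1 * nn.functional.mse_loss(stego, cover)   # imperceptibility term
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    rec = decoder(attack(encoder(torch.cat([cover, message], dim=1))))
    acc = ((rec > 0).float() == message).float().mean().item()
print(f"message bit accuracy: {acc:.2f}")
```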