Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2415 publications have been published within this topic, receiving 51110 citations. The topic is also known by the alternative names Lossless JPEG and .jls.


Papers
Proceedings ArticleDOI
31 Mar 1996
TL;DR: LOCO-I as discussed by the authors combines the simplicity of Huffman coding with the compression potential of context models, thus "enjoying the best of both worlds." The algorithm is based on a simple fixed context model, which approaches the capability of the more complex universal context modeling techniques for capturing high-order dependencies.
Abstract: LOCO-I (low complexity lossless compression for images) is a novel lossless compression algorithm for continuous-tone images which combines the simplicity of Huffman coding with the compression potential of context models, thus "enjoying the best of both worlds." The algorithm is based on a simple fixed context model, which approaches the capability of the more complex universal context modeling techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with a collection of (context-conditioned) Huffman codes, which is realized with an adaptive, symbol-wise, Golomb-Rice code. LOCO-I attains, in one pass, and without recourse to the higher complexity arithmetic coders, compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. In fact, LOCO-I is being considered by the ISO committee as a replacement for the current lossless standard in low-complexity applications.

625 citations
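The entry above describes the two building blocks of LOCO-I that later became JPEG-LS: a fixed low-complexity predictor and Golomb-Rice coding of the mapped prediction residuals. The sketch below is a minimal illustration of those two ingredients only; context modeling, bias cancellation, and run mode are omitted, and the function names are illustrative rather than part of any standard API.

```python
# Minimal sketch of two LOCO-I ingredients: the fixed MED predictor and a
# Golomb-Rice coder for mapped residuals. Context modeling, bias cancellation,
# and run mode are omitted; names are illustrative, not from a standard API.

def med_predict(a, b, c):
    """Median edge detector. a = left, b = above, c = above-left neighbor."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def map_residual(e):
    """Fold signed residuals onto non-negative integers: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return 2 * e if e >= 0 else -2 * e - 1

def golomb_rice_encode(n, k):
    """Encode non-negative n: quotient n >> k in unary ('1'*q + '0'), then k remainder bits."""
    q = n >> k
    bits = "1" * q + "0"
    if k > 0:
        bits += format(n & ((1 << k) - 1), f"0{k}b")
    return bits

if __name__ == "__main__":
    a, b, c, x = 100, 104, 101, 103      # left, above, above-left, current sample
    pred = med_predict(a, b, c)          # 100 + 104 - 101 = 103 (no edge detected)
    e = x - pred                         # residual 0
    print(golomb_rice_encode(map_residual(e), k=2))   # -> '000'
```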

Proceedings ArticleDOI
20 Sep 2007
TL;DR: The goal of this paper is to determine the steganographic capacity of JPEG images (the largest payload that can be undetectably embedded) with respect to current best steganalytic methods and to evaluate the influence of specific design elements and principles.
Abstract: The goal of this paper is to determine the steganographic capacity of JPEG images (the largest payload that can be undetectably embedded) with respect to current best steganalytic methods. Additionally, by testing selected steganographic algorithms we evaluate the influence of specific design elements and principles, such as the choice of the JPEG compressor, matrix embedding, adaptive content-dependent selection channels, and minimal distortion steganography using side information at the sender. From our experiments, we conclude that the average steganographic capacity of grayscale JPEG images with quality factor 70 is approximately 0.05 bits per non-zero AC DCT coefficient.

390 citations
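The figure quoted above (roughly 0.05 bits per non-zero AC DCT coefficient at quality factor 70) translates directly into a payload estimate once the non-zero AC coefficients of an image are counted. The sketch below is a back-of-the-envelope illustration of that arithmetic; the coefficient array is synthetic, and reading real quantized DCT coefficients out of a JPEG file (which requires a decoder) is outside the sketch.

```python
# Back-of-the-envelope payload estimate from the reported average of ~0.05 bits
# per non-zero AC DCT coefficient (quality-70 grayscale JPEGs). The input blocks
# here are synthetic stand-ins for real quantized DCT coefficients.
import numpy as np

BITS_PER_NZ_AC = 0.05  # empirical average reported in the paper

def estimated_capacity_bits(quantized_dct_blocks):
    """quantized_dct_blocks: array of shape (num_blocks, 8, 8) of quantized coefficients."""
    blocks = np.asarray(quantized_dct_blocks)
    ac = blocks.reshape(-1, 64)[:, 1:]          # drop the DC term of each 8x8 block
    return BITS_PER_NZ_AC * np.count_nonzero(ac)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy_blocks = rng.integers(-2, 3, size=(1024, 8, 8))   # stand-in coefficients
    print(f"~{estimated_capacity_bits(toy_blocks) / 8:.0f} bytes of payload")
```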

Book ChapterDOI
10 Jul 2006
TL;DR: A novel steganalysis scheme is presented that effectively detects advanced JPEG steganography and outperforms existing steganalyzers in attacking OutGuess, F5, and MB1.
Abstract: In this paper, a novel steganalysis scheme is presented to effectively detect the advanced JPEG steganography. For this purpose, we first choose to work on JPEG 2-D arrays formed from the magnitudes of quantized block DCT coefficients. Difference JPEG 2-D arrays along horizontal, vertical, and diagonal directions are then used to enhance changes caused by JPEG steganography. Markov process is applied to modeling these difference JPEG 2-D arrays so as to utilize the second order statistics for steganalysis. In addition to the utilization of difference JPEG 2-D arrays, a thresholding technique is developed to greatly reduce the dimensionality of transition probability matrices, i.e., the dimensionality of feature vectors, thus making the computational complexity of the proposed scheme manageable. The experimental works are presented to demonstrate that the proposed scheme has outperformed the existing steganalyzers in attacking OutGuess, F5, and MB1.

375 citations
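To make the feature construction above concrete, the sketch below computes one of the ingredients it describes: a horizontal difference array over the magnitudes of quantized block DCT coefficients, clipped to [-T, T], and the resulting one-step transition probability matrix. The paper uses several directions and sets T = 4; restricting to one direction and the function name are choices made here for brevity.

```python
# Sketch of one direction of the Markov feature construction: horizontal
# difference array of |quantized DCT coefficients|, thresholded to [-T, T],
# then a one-step transition probability matrix over consecutive differences.
import numpy as np

T = 4  # thresholding parameter; differences are clipped to [-T, T]

def horizontal_transition_matrix(abs_dct_array):
    """abs_dct_array: 2-D array of magnitudes of quantized block DCT coefficients."""
    d = abs_dct_array[:, :-1] - abs_dct_array[:, 1:]       # horizontal difference array
    d = np.clip(d, -T, T)
    prev, nxt = d[:, :-1].ravel(), d[:, 1:].ravel()        # consecutive difference pairs
    tpm = np.zeros((2 * T + 1, 2 * T + 1))
    for i, j in zip(prev, nxt):
        tpm[i + T, j + T] += 1
    row_sums = tpm.sum(axis=1, keepdims=True)
    return np.divide(tpm, row_sums, out=np.zeros_like(tpm), where=row_sums > 0)

if __name__ == "__main__":
    demo = np.abs(np.random.default_rng(1).integers(-5, 6, size=(64, 64)))
    print(horizontal_transition_matrix(demo).shape)        # (9, 9) features for one direction
```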

Journal ArticleDOI
Zhigang Fan, R. L. de Queiroz
TL;DR: A fast and efficient method is provided to determine whether an image has been previously JPEG compressed, and a method for the maximum likelihood estimation of JPEG quantization steps is developed.
Abstract: Sometimes image processing units inherit images in raster bitmap format only, so that processing must be carried out without knowledge of past operations that may compromise image quality (e.g., compression). To carry out further processing, it is useful to know not only whether the image has been previously JPEG compressed, but also what quantization table was used. This is the case, for example, if one wants to remove JPEG artifacts or for JPEG re-compression. In this paper, a fast and efficient method is provided to determine whether an image has been previously JPEG compressed. After detecting a compression signature, we estimate the compression parameters. Specifically, we develop a method for the maximum likelihood estimation of JPEG quantization steps. The quantizer estimation method is very robust: only sporadically is an estimated quantizer step size off, and when it is, it is off by one value.

373 citations
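As a rough illustration of the idea behind the estimator described above (block-DCT coefficients of a previously JPEG-compressed image cluster at multiples of the quantization step), the sketch below uses a naive stand-in: round the coefficients of one frequency band and take the greatest common divisor of the non-zero values. Unlike the paper's maximum-likelihood estimator, this toy version assumes the round-trip noise stays below half a step and breaks down when it does not.

```python
# Naive stand-in for quantization-step estimation on one DCT frequency band.
# Assumes coefficients are noisy multiples of an unknown step Q, with noise small
# enough that rounding recovers exact multiples. Not the paper's ML estimator.
import math
from functools import reduce

def naive_quant_step(coeffs):
    """coeffs: iterable of block-DCT coefficients for a single (u, v) frequency."""
    ints = [abs(int(round(c))) for c in coeffs if abs(round(c)) >= 1]
    if not ints:
        return None                      # band quantized entirely to zero: step unknown
    return reduce(math.gcd, ints)

if __name__ == "__main__":
    # Coefficients generated as multiples of Q = 12 plus small round-trip noise.
    multipliers, noise = [-3, 1, 2, 5, -1, 4], [0.2, -0.3, 0.1, 0.0, -0.2, 0.4]
    samples = [12 * k + n for k, n in zip(multipliers, noise)]
    print(naive_quant_step(samples))     # -> 12
```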

Journal ArticleDOI
01 Mar 2002
TL;DR: A novel steganographic method based on the Joint Photographic Experts Group (JPEG) format is proposed; it has a larger message capacity than Jpeg-Jsteg, and the quality of its stego-images is acceptable.
Abstract: In this paper, a novel steganographic method based on the Joint Photographic Experts Group (JPEG) format is proposed. The proposed method first modifies the quantization table. Next, the secret message is hidden in the cover-image by modifying the middle-frequency part of its quantized DCT coefficients. Finally, a JPEG stego-image is generated. JPEG is a standard image format that is widely used on the Internet, so a stego-image is unlikely to arouse suspicion if data hiding is applied to a JPEG image. We compare our method with the JPEG hiding tool Jpeg-Jsteg. From the experimental results, we find that the proposed method has a larger message capacity than Jpeg-Jsteg and that the quality of the stego-images it produces is acceptable. In addition, our method provides the same security level as Jpeg-Jsteg.

366 citations
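The abstract above hides message bits in the middle-frequency quantized DCT coefficients after adjusting the quantization table. The sketch below illustrates only the simpler half of that idea: replacing the least-significant bits of a fixed set of mid-frequency positions in an 8x8 block of quantized coefficients. The position list and function names are hypothetical, and the quantization-table modification that gives the paper its extra capacity is not shown.

```python
# Generic illustration of hiding bits in the middle-frequency band of a quantized
# 8x8 DCT block. NOT the paper's scheme (which also rescales the quantization
# table); the mid-frequency position list below is a hypothetical choice.
import numpy as np

MID_FREQ_POSITIONS = [(1, 3), (2, 2), (3, 1), (1, 4), (2, 3), (3, 2), (4, 1), (2, 4), (4, 2)]

def embed_bits(quantized_block, bits):
    """Replace the LSB of each selected coefficient with one message bit."""
    block = np.array(quantized_block, dtype=int)
    for (u, v), bit in zip(MID_FREQ_POSITIONS, bits):
        block[u, v] = (block[u, v] & ~1) | bit   # clear the LSB, then set it to the bit
    return block

def extract_bits(stego_block, count):
    """Read the message bits back from the same positions."""
    block = np.asarray(stego_block, dtype=int)
    return [int(block[u, v]) & 1 for (u, v) in MID_FREQ_POSITIONS[:count]]

if __name__ == "__main__":
    blk = np.zeros((8, 8), dtype=int)
    blk[0, 0] = 80                               # DC coefficient of a flat toy block
    stego = embed_bits(blk, [1, 0, 1, 1])
    assert extract_bits(stego, 4) == [1, 0, 1, 1]
```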


Network Information
Related Topics (5)

Topic                          Papers    Citations    Relatedness
Image segmentation             79.6K     1.8M         82%
Feature (computer vision)      128.2K    1.7M         82%
Feature extraction             111.8K    2.1M         82%
Image processing               229.9K    3.5M         80%
Convolutional neural network   74.7K     2M           79%
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15