scispace - formally typeset
Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over the lifetime, 2415 publications have been published within this topic receiving 51110 citations. The topic is also known as: Lossless JPEG & .jls.


Papers
Proceedings ArticleDOI
TL;DR: A variant of the JPEG baseline image compression algorithm, optimized for images that were generated by a JPEG decompressor, which inverts the computational steps of one particular JPEG decompression implementation (Independent JPEG Group, IJG) and uses interval arithmetic and an iterative process to infer the possible values of intermediate results during decompression.
Abstract: We present a variant of the JPEG baseline image compression algorithm optimized for images that were generated by a JPEG decompressor. It inverts the computational steps of one particular JPEG decompressor implementation (Independent JPEG Group, IJG) and uses interval arithmetic and an iterative process to infer the possible values of intermediate results during decompression, which are not directly evident from the decompressor output due to rounding. We applied our exact recompressor to a large database of images, each compressed at ten different quality factors. At the default IJG quality factor of 75, our implementation reconstructed the exact quantized transform coefficients in 96% of the 64-pixel image blocks. For blocks where exact reconstruction is not feasible, our implementation can output transform-coefficient intervals, each guaranteed to contain the respective original value. Where different JPEG images decompress to the same result, we can output all possible bit-streams. At quality factors 90 and above, exact recompression becomes infeasible due to combinatorial explosion, but 68% of blocks still recompressed exactly.
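The core trick described above, inverting a rounding step with interval arithmetic, can be illustrated with a toy model (this is not the paper's actual algorithm; `scale` stands in for the IDCT step and `q` for the quantization step): if the decoder emitted pixel `p = round(scale * q * k)` for an unknown integer coefficient `k`, then `scale * q * k` must lie in `[p - 0.5, p + 0.5)`, which constrains `k` to a small set of candidates.

```python
def consistent_coeffs(pixel, scale, q, k_range=range(-64, 65)):
    """Return every integer coefficient k that could have produced
    `pixel`, assuming the decoder computed pixel = round(scale * q * k).
    Rounding maps the half-open interval [pixel-0.5, pixel+0.5) to pixel,
    so we keep each k whose dequantized, scaled value lands inside it."""
    lo, hi = pixel - 0.5, pixel + 0.5
    return [k for k in k_range if lo <= scale * q * k < hi]
```

When the interval contains a single integer (for example, `consistent_coeffs(8, scale=0.7, q=4)` returns `[3]`), recompression of that coefficient is exact, as in the 96% of blocks above; when several integers fit, the recompressor can only report the interval, mirroring the paper's fallback output.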

8 citations

Proceedings ArticleDOI
26 Oct 1997
TL;DR: BACIC's compressed files are slightly smaller than JBIG's, half the size of G3's, and at least thirty percent smaller than lossless JPEG's (when lossless JPEG uses Huffman coding) for reduced-grayscale images with fewer than 7 bits/pixel.
Abstract: BACIC is a new method of lossless bi-level image compression introduced to replace JBIG and G3, the current standards for bi-level and facsimile image compression. This paper applies the BACIC (block arithmetic coding for image compression) algorithm to reduced-grayscale and full-grayscale image compression. BACIC's compressed files are slightly smaller than JBIG's, half the size of G3's, and at least thirty percent smaller than lossless JPEG's (when lossless JPEG uses Huffman coding) for reduced-grayscale images with fewer than 7 bits/pixel.
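BACIC's block arithmetic coder itself is not reproduced here, but the idea shared by JBIG-family bi-level coders, an adaptive probability model conditioned on a template of already-seen neighbour pixels, can be sketched. The function below computes the ideal arithmetic-code length in bits rather than an actual bitstream, and its 3-pixel template is a simplification of the templates such coders actually use.

```python
import math

def context_code_length(img):
    """Estimate the coded size (in bits) of a bi-level image (lists of
    0/1 rows) under an adaptive per-context probability model. The
    context is 3 previously seen neighbours (west, north, north-west),
    with out-of-frame pixels taken as 0; probabilities use
    Laplace-smoothed counts. Returns the ideal arithmetic-code length,
    i.e. the sum of -log2(p) over all pixels."""
    counts = {}  # context -> [count of 0s, count of 1s]
    bits = 0.0
    for y in range(len(img)):
        for x in range(len(img[y])):
            ctx = (img[y][x - 1] if x else 0,
                   img[y - 1][x] if y else 0,
                   img[y - 1][x - 1] if x and y else 0)
            c = counts.setdefault(ctx, [1, 1])  # Laplace prior
            bits += -math.log2(c[img[y][x]] / (c[0] + c[1]))
            c[img[y][x]] += 1
    return bits
```

On structured bi-level content the estimated size falls far below the raw 1 bit/pixel, which is the effect the comparisons above measure.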

8 citations

Proceedings ArticleDOI
02 Jul 2007
TL;DR: The experimental results demonstrate that the proposed scheme outperforms existing steganalysis techniques in attacking the modern JPEG steganographic schemes F5, Outguess, MB1, and MB2.
Abstract: This paper presents a new steganalysis scheme to attack JPEG steganography. The 360-dimensional feature vectors sensitive to the data-embedding process are derived from multidirectional Markov models in the JPEG coefficient domain. Class-wise non-principal components analysis (CNPCA) is proposed to classify steganography in the high-dimensional feature-vector space. The experimental results demonstrate that the proposed scheme outperforms existing steganalysis techniques in attacking the modern JPEG steganographic schemes F5, Outguess, MB1, and MB2.
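One plausible construction of such Markov features, sketched below for a single direction (this illustrates the general technique, not the paper's exact 360-dimensional vector): clip the quantized JPEG coefficients to [-T, T] and estimate the transition probabilities between horizontally adjacent values; combining several scan directions gives the "multidirectional" model.

```python
import numpy as np

def markov_features(coeffs, T=4):
    """Transition-probability features from a 2-D integer array of
    quantized JPEG coefficients (horizontal direction only). Values are
    clipped to [-T, T], then P(next = j | current = i) is estimated over
    horizontally adjacent pairs, yielding a (2T+1)^2 feature vector."""
    c = np.clip(coeffs, -T, T)
    cur, nxt = c[:, :-1].ravel() + T, c[:, 1:].ravel() + T
    m = np.zeros((2 * T + 1, 2 * T + 1))
    np.add.at(m, (cur, nxt), 1)              # count transitions i -> j
    row = m.sum(axis=1, keepdims=True)
    return (m / np.where(row == 0, 1, row)).ravel()
```

A steganalyzer then trains a classifier (here, the paper's CNPCA) on these vectors extracted from cover and stego images.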

8 citations

Proceedings ArticleDOI
01 Dec 2012
TL;DR: In this paper, the authors present an implementation of JPEG compression on a field-programmable gate array (FPGA) that operates as the data are streamed from the camera; the goal was to minimize the logic resources of the FPGA and the latency at each stage of compression.
Abstract: This paper presents an implementation of JPEG compression on a field-programmable gate array as the data are streamed from the camera. The goal was to minimise the logic resources of the FPGA and the latency at each stage of compression. The modules of these architectures are fully pipelined to enable continuous operation on streamed data. The architectures, described in Handel-C, are detailed in this paper, and the compliance of each JPEG module was validated using MATLAB. The resulting JPEG compressor has a latency of 8 rows of image readout plus 154 clock cycles.
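The 8-row latency quoted above follows directly from block-based processing: an 8x8 DCT cannot start until eight full camera rows have been buffered. A toy software model of that streaming front end (illustrative only, not the paper's Handel-C design):

```python
def stream_blocks(rows, width, n=8):
    """Group a row-by-row pixel stream into n x n blocks for a streaming
    JPEG front end. Nothing can be emitted until n full rows have
    arrived, which is the source of the n-row latency before the first
    block-transform can run."""
    buf = []
    for row in rows:
        buf.append(row)
        if len(buf) == n:                      # n rows buffered: emit blocks
            for x in range(0, width, n):
                yield [r[x:x + n] for r in buf]
            buf = []
```

In hardware this buffer is a bank of row memories; the extra 154 clock cycles quoted above come from the pipelined stages downstream of it.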

7 citations

Proceedings ArticleDOI
02 Nov 1997
TL;DR: A distortion-computation function D(C) is defined as the minimum expected distortion in computing some quantity (using an algorithm from a predefined set of algorithms) while using no more than C computational units.
Abstract: A distortion-computation function D(C) is defined as the minimum expected distortion in computing some quantity (using an algorithm from a predefined set of algorithms) while using no more than C computational units. When the computational problem is to encode at rate R, this gives slices of a computation-rate-distortion surface. This framework is used in the analysis of a family of JPEG coders that use output-pruned DCT calculations in place of some full DCT calculations. For encoding the Lena image at 0.5 bits/pixel, this yields a 30% reduction in complexity while lowering the PSNR by only 0.4 dB. The decoding complexity can be similarly reduced.
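An output-pruned DCT, the mechanism behind the complexity saving above, never computes the high-frequency coefficients. The sketch below models only the *output* of such a transform by zeroing the discarded coefficients; in a real pruned implementation the saving comes from restructuring the DCT flow graph so those outputs are never computed at all.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2 / n)

def pruned_dct2(block, keep=4):
    """2-D DCT of an n x n block keeping only the top-left keep x keep
    coefficients, i.e. the outputs an output-pruned DCT would produce."""
    D = dct_matrix(len(block))
    c = D @ block @ D.T
    out = np.zeros_like(c)
    out[:keep, :keep] = c[:keep, :keep]
    return out
```

The intuition for the small PSNR penalty: at low rates such as 0.5 bits/pixel, coarse quantization zeroes most high-frequency coefficients anyway, so skipping their computation changes the encoded output very little.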

7 citations


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  21
2022  40
2021  5
2020  2
2019  8
2018  15