Topic
Lossless JPEG
About: Lossless JPEG is a research topic. Over its lifetime, 2415 publications have been published within this topic, receiving 51110 citations. The topic is also known as Lossless JPEG and .jls.
Papers published on a yearly basis
Papers
TL;DR: A variant of the JPEG baseline image compression algorithm, optimized for images that were generated by a JPEG decompressor, that inverts the computational steps of one particular JPEG decompression implementation (Independent JPEG Group, IJG) and uses interval arithmetic and an iterative process to infer the possible values of intermediate results during decompression.
Abstract: We present a variant of the JPEG baseline image compression algorithm optimized for images that were generated
by a JPEG decompressor. It inverts the computational steps of one particular JPEG decompressor implementation
(Independent JPEG Group, IJG), and uses interval arithmetic and an iterative process to infer the possible
values of intermediate results during the decompression, which are not directly evident from the decompressor
output due to rounding. We applied our exact recompressor on a large database of images, each compressed at
ten different quality factors. At the default IJG quality factor 75, our implementation reconstructed the exact
quantized transform coefficients in 96% of the 64-pixel image blocks. For blocks where exact reconstruction
is not feasible, our implementation can output transform-coefficient intervals, each guaranteed to contain the
respective original value. Where different JPEG images decompress to the same result, we can output all possible
bit-streams. At quality factors 90 and above, exact recompression becomes infeasible due to combinatorial
explosion, though 68% of blocks still recompressed exactly.
8 citations
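The interval-arithmetic idea behind this exact recompressor can be illustrated with a minimal sketch (illustrative only; the actual IJG-specific inversion is far more involved): every intermediate value known only up to rounding is tracked as an interval guaranteed to contain the true pre-rounding value, and the integer candidates consistent with that interval are enumerated.

```python
import math

# Minimal interval-arithmetic sketch (illustrative; not the IJG-specific inversion).
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Interval addition: bounds add component-wise.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def scale(self, k):
        # Multiply the interval by a constant, keeping bounds ordered.
        a, b = self.lo * k, self.hi * k
        return Interval(min(a, b), max(a, b))

def from_rounded(r):
    # A value that rounded to the integer r must lie in [r - 0.5, r + 0.5].
    return Interval(r - 0.5, r + 0.5)

def candidate_ints(iv):
    # All integers consistent with the interval: the possible original values.
    return list(range(math.ceil(iv.lo), math.floor(iv.hi) + 1))

# Example (hypothetical numbers): a decoded value of 104 came from scaling a
# coefficient by 1/8; the pre-rounding value is in [103.5, 104.5], so the
# coefficient lies in [828.0, 836.0], leaving several integer candidates.
print(candidate_ints(from_rounded(104).scale(8)))
```

Propagating such intervals through each inverted decompression step, and iterating as candidates are eliminated, is what lets the recompressor either pin down the exact quantized coefficient or output a guaranteed enclosing interval.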
26 Oct 1997
TL;DR: BACIC's compressed files are slightly smaller than JBIG's, half the size of G3's, and at least thirty percent smaller than lossless JPEG's (when lossless JPEG uses Huffman coding) for reduced grayscale images with fewer than 7 bits/pixel.
Abstract: BACIC is a new method of lossless bi-level image compression introduced to replace JBIG and G3, the current standards for bi-level and facsimile image compression. This paper applies the BACIC (block arithmetic coding for image compression) algorithm to reduced-grayscale and full-grayscale image compression. BACIC's compressed files are slightly smaller than JBIG's, half the size of G3's, and at least thirty percent smaller than lossless JPEG's (when lossless JPEG uses Huffman coding) for reduced grayscale images with fewer than 7 bits/pixel.
8 citations
02 Jul 2007
TL;DR: The experimental results have demonstrated that the proposed scheme outperforms the existing steganalysis techniques in attacking modern JPEG steganographic schemes: F5, Outguess, MB1 and MB2.
Abstract: This paper presents a new steganalysis scheme to attack JPEG steganography. The 360-dimensional feature vectors sensitive to the data-embedding process are derived from multidirectional Markov models in the JPEG coefficient domain. Class-wise non-principal components analysis (CNPCA) is proposed to classify steganography in the high-dimensional feature-vector space. The experimental results demonstrate that the proposed scheme outperforms existing steganalysis techniques in attacking modern JPEG steganographic schemes: F5, Outguess, MB1 and MB2.
8 citations
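A minimal sketch of the kind of Markov-model feature used in such steganalysis (the paper's exact 360-dimensional construction combines several directions and models; the clipping threshold T below is an assumption): take differences between neighbouring quantized DCT coefficients, clip them to a small range, and estimate an empirical transition-probability matrix whose entries serve as features.

```python
import numpy as np

def markov_features(coeffs, T=4):
    """Horizontal Markov transition-probability features from a 2-D array of
    quantized JPEG DCT coefficients (illustrative sketch only)."""
    # Differences between horizontally adjacent coefficients, clipped to
    # [-T, T] so the transition matrix stays small.
    d = np.clip(coeffs[:, :-1] - coeffs[:, 1:], -T, T)
    size = 2 * T + 1
    M = np.zeros((size, size))
    # Count transitions d[i, j] -> d[i, j+1].
    for row in d:
        for a, b in zip(row[:-1], row[1:]):
            M[a + T, b + T] += 1
    # Normalise rows into empirical conditional probabilities
    # (guarding against empty rows).
    return M / np.maximum(M.sum(axis=1, keepdims=True), 1)
```

Flattening this (2T+1)-by-(2T+1) matrix yields 81 features per direction for T = 4; combining several directions and models gives a high-dimensional vector of the kind the paper classifies with CNPCA.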
01 Dec 2012
TL;DR: In this paper, the authors present an implementation of JPEG compression on a field-programmable gate array as the data are streamed from the camera; the goal was to minimise the logic resources of the FPGA and the latency at each stage of compression.
Abstract: This paper presents an implementation of JPEG compression on a field-programmable gate array as the data are streamed from the camera. The goal was to minimise the logic resources of the FPGA and the latency at each stage of compression. The modules of these architectures are fully pipelined to enable continuous operation on streamed data. The architectures, described in Handel-C, are detailed in this paper, and the compliance of each JPEG module was validated using MATLAB. The resulting JPEG compressor has a latency of 8 rows of image readout plus 154 clock cycles.
7 citations
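The quoted latency can be turned into a concrete cycle count. Assuming one pixel is read out per clock cycle (an assumption; the excerpt does not state the readout rate), a hypothetical helper computes the total:

```python
def jpeg_latency_cycles(width, rows=8, extra_cycles=154):
    # Latency model from the figure quoted above: 8 rows of image readout
    # plus 154 clock cycles, assuming one pixel per clock (an assumption).
    return rows * width + extra_cycles

# E.g. for a 640-pixel-wide image: 8 * 640 + 154 = 5274 cycles.
print(jpeg_latency_cycles(640))
```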
02 Nov 1997
TL;DR: A distortion-computation function D(C) is defined as the minimum expected distortion in computing some quantity, using an algorithm from a predefined set of algorithms, while using no more than C computational units.
Abstract: A distortion-computation function D(C) is defined as the minimum expected distortion in computing some quantity-using an algorithm from a predefined set of algorithms-while using no more than C computational units. When the computational problem is to encode at rate R, this gives slices of a computation-rate-distortion surface. This framework is used in the analysis of a family of JPEG coders that use output-pruned DCT calculations in place of some full DCT calculations. For encoding the Lena image at 0.5 bits/pixel, this yields a 30% reduction in complexity while lowering the PSNR by only 0.4 dB. The decoding complexity can be similarly reduced.
7 citations
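The distortion-computation function can be sketched as a simple minimisation over a finite algorithm set: D(C) is the least expected distortion among algorithms whose computational cost does not exceed C. The (cost, distortion) pairs below are made-up illustrative numbers, not values from the paper.

```python
def distortion_computation(algorithms, C):
    # algorithms: list of (cost, expected_distortion) pairs for a predefined
    # family of coders, e.g. JPEG encoders with increasingly pruned DCTs.
    # D(C) = min distortion over algorithms with cost <= C; infinite if the
    # budget admits no algorithm at all.
    feasible = [d for (cost, d) in algorithms if cost <= C]
    return min(feasible) if feasible else float("inf")

# Hypothetical (cost, expected distortion) pairs; higher-cost coders here
# achieve lower distortion, tracing out a decreasing D(C) curve.
algs = [(100, 0.8), (70, 1.0), (40, 1.6), (25, 2.9)]
print(distortion_computation(algs, 75))  # prints 1.0: best coder within budget 75
```

Sweeping C while holding the encoding rate R fixed traces one slice of the computation-rate-distortion surface described in the paper.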