Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as Lossless JPEG or .jls.


Papers
Proceedings ArticleDOI
03 Aug 1994
TL;DR: The first stage of a two-stage lossless data compression algorithm consists of a lossless adaptive predictor; the second stage employs arithmetic coding.
Abstract: This paper describes the first stage of a two-stage lossless data compression algorithm. The first stage consists of a lossless adaptive predictor. The term lossless implies that the original data can be recovered exactly. The second stage employs arithmetic coding. Results are presented for a seismic database.

30 citations
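Below is a minimal Python sketch of the two-stage idea described in the entry above: an adaptive integer predictor produces residuals that can be inverted exactly, and those residuals are then handed to an entropy coder. The NLMS predictor, its order and step size, and the use of zlib as a stand-in for the paper's arithmetic coder are illustrative assumptions, not the authors' design.

```python
# Sketch of a two-stage lossless pipeline: (1) an adaptive (NLMS) integer
# predictor yielding exact residuals, (2) a stand-in entropy coder (zlib).
# Predictor order, step size, and zlib are illustrative assumptions.
import zlib
import numpy as np

def nlms_residuals(x, order=3, mu=0.5, eps=1e-8):
    """First stage: adaptive linear prediction, rounded to integers."""
    w = np.zeros(order)                       # predictor weights
    res = np.empty_like(x)
    for n in range(len(x)):
        ctx = x[max(0, n - order):n][::-1].astype(float)
        ctx = np.pad(ctx, (0, order - len(ctx)))
        pred = int(round(w @ ctx))
        res[n] = x[n] - pred                  # integer residual -> lossless
        w += mu * (x[n] - pred) * ctx / (ctx @ ctx + eps)   # NLMS update
    return res

def nlms_reconstruct(res, order=3, mu=0.5, eps=1e-8):
    """Inverse of the first stage: replays the same predictor updates."""
    w = np.zeros(order)
    x = np.empty_like(res)
    for n in range(len(res)):
        ctx = x[max(0, n - order):n][::-1].astype(float)
        ctx = np.pad(ctx, (0, order - len(ctx)))
        pred = int(round(w @ ctx))
        x[n] = res[n] + pred
        w += mu * (x[n] - pred) * ctx / (ctx @ ctx + eps)
    return x

# Synthetic stand-in for a seismic trace: a bounded random walk.
sig = np.cumsum(np.random.default_rng(0).integers(-5, 6, 10_000)).astype(np.int32)
res = nlms_residuals(sig)
assert np.array_equal(nlms_reconstruct(res), sig)        # exact recovery
print(len(zlib.compress(res.tobytes())) / sig.nbytes)    # stage-2 stand-in ratio
```

Because the decoder replays exactly the same weight updates on the reconstructed samples, prediction and reconstruction stay in lockstep, which is what makes the first stage lossless.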

Proceedings ArticleDOI
30 Oct 2009
TL;DR: The impact of using different lossless compression algorithms on the compression ratios and timings when processing various biometric sample data is investigated.
Abstract: The impact of using different lossless compression algorithms when compressing biometric iris sample data from several public iris databases is investigated. In particular, we relate the application of dedicated lossless image codecs like lossless JPEG, JPEG-LS, PNG, and GIF, lossless variants of lossy codecs like JPEG2000, JPEG XR, and SPIHT, and a few general purpose compression schemes to rectilinear iris imagery. The results are discussed in the light of the recent ISO/IEC FDIS 19794-6 and ANSI/NIST-ITL 1-2011 standards and the IREX recommendations.

30 citations
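A comparison of this kind can be prototyped in a few lines. The sketch below (an assumption on my part, not the authors' benchmark harness) measures lossless compression ratios of codecs available through Pillow plus a general-purpose compressor on an 8-bit grayscale stand-in image; dedicated codecs such as JPEG-LS, lossless JPEG 2000, JPEG XR, and SPIHT would need their own encoders.

```python
# Measure lossless compression ratios (compressed / raw) for several codecs
# on a synthetic 8-bit grayscale image.  The image, file formats, and codec
# set are illustrative assumptions, not the paper's iris databases.
import io, gzip
import numpy as np
from PIL import Image

def ratio(img, **save_kwargs):
    """Compressed size divided by raw size for one Pillow-supported codec."""
    buf = io.BytesIO()
    img.save(buf, **save_kwargs)
    return buf.getbuffer().nbytes / (img.width * img.height)

# Synthetic stand-in for an iris sample image (one byte per pixel).
rng = np.random.default_rng(1)
arr = rng.normal(128, 20, (480, 640)).clip(0, 255).astype(np.uint8)
img = Image.fromarray(arr, mode="L")

results = {
    "PNG":           ratio(img, format="PNG", optimize=True),
    "WebP lossless": ratio(img, format="WEBP", lossless=True),
    "TIFF LZW":      ratio(img, format="TIFF", compression="tiff_lzw"),
    "gzip":          len(gzip.compress(arr.tobytes())) / arr.nbytes,
}
for name, r in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name:14s} {r:.3f}")   # lower is better
```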

Proceedings ArticleDOI
28 Mar 2000
TL;DR: A simple parallel algorithm for decoding a Huffman encoded file is presented, exploiting the tendency of Huffman codes to resynchronize quickly in most cases.
Abstract: A simple parallel algorithm for decoding a Huffman encoded file is presented, exploiting the tendency of Huffman codes to resynchronize quickly in most cases. An extension to JPEG decoding is mentioned.

29 citations
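The property the parallel decoder exploits can be seen with a toy prefix code: a decode started at a wrong bit offset usually snaps back onto the true symbol boundaries after a few symbols, so independent threads can start decoding mid-stream and later discard only their short misaligned prefixes. The code table, the 3-bit offset, and the symbol distribution below are illustrative assumptions, not the paper's setup.

```python
# Demonstrate Huffman self-resynchronization: a decode begun at a wrong bit
# offset soon lands on the same symbol boundaries as the correct decode.
import random

# A small prefix-free (Huffman-style) code table.
CODE = {"a": "0", "b": "10", "c": "110", "d": "111"}
DECODE = {v: k for k, v in CODE.items()}

def encode(symbols):
    return "".join(CODE[s] for s in symbols)

def boundaries(bits, start):
    """Bit positions at which a decode started at `start` completes a symbol."""
    pos, marks, cur = start, [], ""
    while pos < len(bits):
        cur += bits[pos]
        pos += 1
        if cur in DECODE:
            marks.append(pos)
            cur = ""
    return marks

random.seed(0)
bits = encode(random.choices("abcd", weights=[8, 4, 2, 2], k=2000))

true_marks = set(boundaries(bits, 0))
shifted = boundaries(bits, 3)             # decode begun 3 bits too late
sync_at = next(p for p in shifted if p in true_marks)
print(f"resynchronized at bit {sync_at} (started 3 bits off)")
```

Once a shifted decode hits a true boundary it stays aligned from then on, since decoding is deterministic; a parallel decoder therefore only pays for the few symbols emitted before that point.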

Patent
Andrew V. Kadatch1
10 Mar 2008
TL;DR: In this patent, a lossless pixel palettization scheme is proposed to locally compress portions of at least a two-dimensional image, allowing for efficient data transfers without loss of image information.
Abstract: The present invention leverages a lossless pixel palettization scheme to locally compress portions of at least a two-dimensional image. This provides a lossless compression means with a compression ratio comparable with lossy compression means, allowing for efficient data transfers without loss of image information. By utilizing locally-adaptive palettization, two-dimensional pixel information can be exploited to increase compression performance. In one instance of the present invention, a locally-adaptive, lossless palettization scheme is utilized in conjunction with a one-dimensional compression scheme to yield a further increase in compression ratio. This allows for the exploitation of two-dimensional data information along with the further compression of information reduced to one dimension.

29 citations
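A rough sketch of the locally-adaptive idea, assuming a fixed tile size and zlib as the one-dimensional back end (neither is specified by the patent summary above): each tile gets its own palette, pixels become small palette indices, and the flattened indices plus palettes are compressed as a 1-D stream.

```python
# Locally-adaptive lossless palettization sketch: per-tile palettes, pixels
# replaced by palette indices, then 1-D compression of the packed stream.
# Tile size, zlib, and the grayscale test image are illustrative assumptions.
import zlib
import numpy as np

def palettize_tiles(img, tile=16):
    """Return (palettes, index arrays) per tile; exactly invertible."""
    palettes, indices = [], []
    for y in range(0, img.shape[0], tile):
        for x in range(0, img.shape[1], tile):
            block = img[y:y + tile, x:x + tile]
            pal, idx = np.unique(block, return_inverse=True)  # local palette
            palettes.append(pal.astype(np.uint8))
            indices.append(idx.astype(np.uint8))   # <=256 entries per 16x16 tile
    return palettes, indices

rng = np.random.default_rng(2)
# Piecewise-flat test image: few distinct values per neighborhood.
img = (rng.integers(0, 4, (256, 256)) * 60).astype(np.uint8)

palettes, indices = palettize_tiles(img)
payload = b"".join(p.tobytes() for p in palettes) + \
          b"".join(i.tobytes() for i in indices)
print(len(zlib.compress(payload)) / img.nbytes)    # compressed / raw size

# Losslessness check for one tile: palette[index] reproduces the pixels.
block = img[:16, :16]
pal, idx = np.unique(block, return_inverse=True)
assert np.array_equal(pal[idx].reshape(block.shape), block)
```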

Journal ArticleDOI
TL;DR: This paper develops a novel forensic technique that is able to detect chains of operators applied to an image and derives an accurate mathematical framework to fully characterize the probabilistic distributions of the discrete cosine transform coefficients of the quantized and filtered image.
Abstract: Powerful image editing software is nowadays capable of creating sophisticated and visually compelling fake photographs, thus posing serious issues to the trustworthiness of digital contents as a true representation of reality. Digital image forensics has emerged to help regain some trust in digital images by providing valuable aids in learning the history of an image. Unfortunately, in real scenarios, its application is limited, since multiple processing operators are likely to be applied, which alters the characteristic footprints exploited by current forensic tools. In this paper, we develop a novel forensic technique that is able to detect chains of operators applied to an image. In particular, we study the combination of Joint Photographic Experts Group compression and full-frame linear filtering, and derive an accurate mathematical framework to fully characterize the probabilistic distributions of the discrete cosine transform (DCT) coefficients of the quantized and filtered image. We then exploit such knowledge to define a set of features from the DCT distribution and build an effective classifier able to jointly disclose the quality factor of the applied compression and the filter kernel. Extensive experimental analysis illustrates the efficiency and versatility of the proposed approach, which effectively overcomes the state-of-the-art.

29 citations
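As a loose illustration of the DCT-domain statistics such a detector works from, the sketch below collects one 8x8 block-DCT subband over an image and turns it into a normalized histogram; JPEG quantization leaves comb-like gaps in this histogram, and subsequent filtering reshapes them, which is the kind of footprint a classifier can be trained on. The subband choice, histogram bins, and scipy usage are assumptions for illustration, not the paper's feature set or model.

```python
# Build a simple DCT-coefficient histogram feature from 8x8 image blocks.
# Subband (0, 1), bin edges, and the random test image are assumptions.
import numpy as np
from scipy.fft import dctn

def block_dct_coeffs(img, freq=(0, 1)):
    """Collect one DCT subband's coefficients over all 8x8 blocks."""
    h, w = (d - d % 8 for d in img.shape)
    coeffs = []
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            block = img[y:y + 8, x:x + 8].astype(float) - 128.0
            coeffs.append(dctn(block, norm="ortho")[freq])
    return np.array(coeffs)

def histogram_feature(coeffs, bins=np.arange(-60.5, 61.5, 1.0)):
    """Normalized coefficient histogram: quantization leaves periodic peaks."""
    hist, _ = np.histogram(coeffs, bins=bins)
    return hist / max(hist.sum(), 1)

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (128, 128)).astype(np.uint8)

feat = histogram_feature(block_dct_coeffs(img))
print(feat.shape, feat[:5])   # feature vector a classifier could be trained on
```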


Network Information
Related Topics (5)

Topic                            Papers    Citations   Related
Image segmentation               79.6K     1.8M        82%
Feature (computer vision)        128.2K    1.7M        82%
Feature extraction               111.8K    2.1M        82%
Image processing                 229.9K    3.5M        80%
Convolutional neural network     74.7K     2M          79%
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15