scispace - formally typeset
Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2415 publications have been published within this topic, receiving 51110 citations. The topic is also known as .jls.


Papers
Proceedings ArticleDOI
01 Aug 2015
TL;DR: The Error Level Analysis (ELA) technique was evaluated against different types of image tampering; in the experiments, ELA proved reliable for detecting JPEG compression, image splicing and image retouching forgery.
Abstract: The advancement of digital image tampering has encouraged studies in the image forensics field. Image tampering can be found across various image formats, such as Joint Photographic Experts Group (JPEG). JPEG is the most common format supported by devices and applications. Therefore, researchers have been studying the implementation of the JPEG algorithm in image forensics. In this paper, the Error Level Analysis (ELA) technique was evaluated with different types of image tampering, including JPEG compression, image splicing, copy-move and image retouching. In the experiment, ELA showed reliability with JPEG compression, image splicing and image retouching forgery.
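The ELA idea in the abstract above can be sketched in a few lines: re-save the image as JPEG at a known quality and look at the per-pixel difference, since pasted-in or retouched regions tend to compress at a different error level than their surroundings. The helper name and the quality setting below are illustrative assumptions, not details from the paper.

```python
# Minimal Error Level Analysis (ELA) sketch using Pillow.
from io import BytesIO
from PIL import Image, ImageChops

def ela(image, quality=95):
    """Re-save `image` as JPEG at `quality` and return the per-pixel
    absolute difference; tampered regions often stand out because
    their JPEG error level differs from the rest of the image."""
    buf = BytesIO()
    image.convert("RGB").save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    return ImageChops.difference(image.convert("RGB"), resaved)

# Usage on a synthetic image (a real analysis would load a suspect JPEG).
original = Image.new("RGB", (64, 64), (120, 80, 200))
diff = ela(original)
```

In practice the difference image is brightness-amplified before visual inspection, a step omitted here for brevity.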

15 citations

Proceedings ArticleDOI
12 May 1998
TL;DR: A rate-distortion optimized JPEG compliant progressive encoder is presented that produces a sequence of bit scans, ordered in terms of decreasing importance, and can achieve precise rate/distortion control.
Abstract: Among the different modes of operations allowed in the current JPEG standard, the sequential and progressive modes are the most widely used. While the sequential JPEG mode yields essentially the same level of compression performance for most encoder implementations, the performance of progressive JPEG depends highly upon the designed encoder structure. This is due to the flexibility the standard leaves open in designing progressive JPEG encoders. In this paper, a rate-distortion optimized JPEG compliant progressive encoder is presented that produces a sequence of bit scans, ordered in terms of decreasing importance. Our encoder outperforms a baseline sequential JPEG encoder in terms of compression, significantly at medium bit rates, and substantially at low and high bit rates. Moreover, unlike baseline JPEG encoders, ours can achieve precise rate/distortion control. Good rate-distortion performance at low bit rates and precise rate control, provided by our JPEG compliant progressive encoder, are two highly desired features currently sought for JPEG-2000.
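The "sequence of bit scans, ordered in terms of decreasing importance" can be illustrated with a toy greedy rule: score each candidate scan by distortion reduction per bit spent and emit the steepest rate-distortion slopes first. The scan names and numbers below are invented for illustration; the paper's actual rate-distortion optimization is more involved.

```python
# Toy rate-distortion ordering of progressive JPEG scans (illustrative only).

def order_scans(scans):
    """scans: list of (name, bits, distortion_reduction) tuples.
    Returns scan names sorted by distortion reduction per bit,
    most 'important' scan first."""
    return [name for name, bits, dd in
            sorted(scans, key=lambda s: s[2] / s[1], reverse=True)]

# Hypothetical candidate scans with made-up rate/distortion figures.
candidates = [
    ("DC refinement", 1000, 900.0),   # slope 0.9
    ("AC band 1-5",   4000, 4800.0),  # slope 1.2
    ("AC band 6-63",  8000, 1600.0),  # slope 0.2
]
print(order_scans(candidates))  # ['AC band 1-5', 'DC refinement', 'AC band 6-63']
```

Truncating the ordered scan sequence at any prefix then gives approximate rate control, which is the property the abstract highlights.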

15 citations

Proceedings ArticleDOI
01 Nov 2013
TL;DR: Under an infinite variance assumption, the expression of the optimal detector is derived together with a practical approximation formula based on multidimensional Fourier series, which outperforms existing state-of-the-art detectors for nonaligned double JPEG compression.
Abstract: In this paper, we investigate the problem of deciding whether a multidimensional signal has been quantized according to a given lattice or not. Under an infinite variance assumption, we derive the expression of the optimal detector, together with a practical approximation formula based on multidimensional Fourier series. As a forensic case study, the proposed detector is applied to the detection of nonaligned double JPEG compression. Results on both synthetic signals and real JPEG images show interesting properties of the proposed detector. Namely, the detector outperforms existing state-of-the-art detectors for nonaligned double JPEG compression. The application of the proposed scheme to other forensic problems seems a natural extension of this work.
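A minimal one-dimensional illustration of the quantization-detection idea, assuming a scalar lattice step·Z (the paper treats general multidimensional lattices and derives an optimal detector): the magnitude of the first Fourier coefficient of the data's distribution modulo the step is near 1 for lattice-quantized samples and near 0 for smooth ones.

```python
import numpy as np

def quantization_score(x, step):
    """Magnitude of the first Fourier coefficient of the empirical
    distribution of x modulo `step`. Near 1 if x sits on the lattice
    step*Z, near 0 for a smooth (e.g. uniform) distribution."""
    return abs(np.mean(np.exp(2j * np.pi * x / step)))

rng = np.random.default_rng(0)
smooth = rng.uniform(0, 100, 10_000)
quantized = np.round(smooth / 4.0) * 4.0   # quantized with step 4

print(quantization_score(smooth, 4.0))     # close to 0
print(quantization_score(quantized, 4.0))  # close to 1
```

Thresholding this statistic gives a simple quantization detector; in the double-JPEG setting the same periodicity test would be applied to DCT coefficients.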

15 citations

Journal ArticleDOI
TL;DR: This paper proposes straightforward extensions to the JPEG2000 image compression standard which allow for the efficient coding of floating-point data, and test results show that the proposed lossless methods have raw compression performance that is competitive with, and sometimes exceeds, current state-of-the-art methods.
Abstract: Many scientific applications require that image data be stored in floating-point format due to the large dynamic range of the data. These applications pose a problem if the data needs to be compressed, since modern image compression standards, such as JPEG2000, are only defined to operate on fixed-point or integer data. This paper proposes straightforward extensions to the JPEG2000 image compression standard which allow for the efficient coding of floating-point data. These extensions maintain desirable properties of JPEG2000, such as lossless and rate-distortion optimal lossy decompression from the same coded bit stream, scalable embedded bit streams, error resilience, and implementation on low-memory hardware. Although the proposed methods can be used for both lossy and lossless compression, the discussion in this paper focuses on, and the test results are limited to, the lossless case. Test results on real image data show that the proposed lossless methods have raw compression performance that is competitive with, and sometimes exceeds, current state-of-the-art methods.
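One standard way to feed floating-point samples to an integer codec losslessly is the order-preserving sign-flip bit mapping sketched below. This is a common trick and an assumption here, not necessarily the exact extension the paper proposes.

```python
import numpy as np

def float32_to_ordered_uint32(x):
    """Bijective, order-preserving map from float32 bit patterns to
    uint32: flip all bits of negative values, flip only the sign bit
    of non-negative ones. The resulting integers can be fed to any
    integer lossless coder."""
    u = x.astype(np.float32).view(np.uint32)
    return np.where(u & 0x80000000, ~u, u | 0x80000000).astype(np.uint32)

def ordered_uint32_to_float32(u):
    """Exact inverse of the map above (bit-exact round trip)."""
    v = np.where(u & 0x80000000, u & 0x7FFFFFFF, ~u).astype(np.uint32)
    return v.view(np.float32)
```

Because the mapping is monotone in the float ordering, neighboring float values stay neighbors as integers, which keeps the transformed data compressible.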

15 citations

Proceedings ArticleDOI
01 Sep 2012
TL;DR: A new texture-based compression approach that relies on new texture similarity metrics and is able to exploit texture redundancies for significant compression gains without loss of visual quality, even though there may be visible differences from the original image (structurally lossless).
Abstract: We propose a new texture-based compression approach that relies on new texture similarity metrics and is able to exploit texture redundancies for significant compression gains without loss of visual quality, even though there may be visible differences from the original image (structurally lossless). Existing techniques rely on point-by-point metrics that cannot account for the stochastic and repetitive nature of textures. The main idea is to encode selected blocks of textures - as well as smooth blocks and blocks containing boundaries between smooth and/or textured regions - by pointing to previously occurring (already encoded) blocks of similar textures; blocks that are not encoded in this way are encoded by a baseline method, such as JPEG. Experimental results with natural images demonstrate the advantages of the proposed approach.
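The block-pointer idea above can be sketched as follows, with plain MSE standing in for the paper's texture similarity metrics; the block size, threshold, and "literal block means baseline JPEG coding" shortcut are all illustrative assumptions.

```python
import numpy as np

def encode_blocks(image, block=8, threshold=10.0):
    """Scan a grayscale image in raster order; each block either points
    to an earlier similar block ('copy', index into previously encoded
    blocks) or is stored verbatim ('literal', pixels) as a stand-in
    for baseline JPEG coding of that block."""
    h, w = image.shape
    seen, stream = [], []
    for y in range(0, h, block):
        for x in range(0, w, block):
            b = image[y:y+block, x:x+block].astype(np.float64)
            # MSE is a placeholder for a structural texture-similarity metric.
            match = next((i for i, s in enumerate(seen)
                          if np.mean((b - s) ** 2) <= threshold), None)
            if match is not None:
                stream.append(("copy", match))
            else:
                stream.append(("literal", b))
                seen.append(b)
    return stream

# Usage: a flat image needs one literal block; the rest become pointers.
img = np.zeros((16, 16))
stream = encode_blocks(img)
```

A real implementation would replace MSE with a metric tolerant to the stochastic variation within a texture, which is exactly where the paper's contribution lies.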

15 citations


Network Information
Related Topics (5)
- Image segmentation: 79.6K papers, 1.8M citations (82% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (82% related)
- Feature extraction: 111.8K papers, 2.1M citations (82% related)
- Image processing: 229.9K papers, 3.5M citations (80% related)
- Convolutional neural network: 74.7K papers, 2M citations (79% related)
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  21
2022  40
2021  5
2020  2
2019  8
2018  15