scispace - formally typeset
Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as Lossless JPEG; the associated file extension is .jls.


Papers
Journal ArticleDOI
TL;DR: A new JPEG-based codec design method for face images is presented, together with its application to face recognition; the proposed codec outperforms the standard JPEG codec.
Abstract: This paper proposes a new codec design method based on JPEG for face images and presents its application to face recognition. The quantization table is designed using R-D optimization on the Yale face database. Fast codec design is also considered so that the codec can be used in embedded systems. The proposed codec achieves better compression rates than the JPEG codec for face images. In face recognition experiments using linear discriminant analysis (LDA), the proposed codec also outperforms the JPEG codec.

32 citations

Proceedings ArticleDOI
29 Nov 2011
TL;DR: A forensic algorithm is presented that discriminates between original and forged regions in JPEG images, under the hypothesis that the tampered image has undergone a non-aligned double JPEG compression (NA-JPEG).
Abstract: In this paper, we present a forensic algorithm to discriminate between original and forged regions in JPEG images, under the hypothesis that the tampered image presents a non-aligned double JPEG compression (NA-JPEG). Unlike previous approaches, the proposed algorithm does not need to manually select a suspect region to test the presence or the absence of NA-JPEG artifacts. Based on a new statistical model, the probability for each 8 × 8 DCT block to be forged is automatically derived. Experimental results, considering different forensic scenarios, demonstrate the validity of the proposed approach.

31 citations
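The grid-alignment cue that NA-JPEG detection relies on can be illustrated with a small numpy sketch. This is not the paper's statistical model; it is a minimal simulation (function names such as `block_dct` and `quantization_residual` are mine) showing that after JPEG-style quantization, the 8x8 block-DCT coefficients sit near multiples of the quantization step only when the analysis grid matches the compression grid:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (the transform used on JPEG blocks)
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    M[0] /= np.sqrt(2)
    return M

def block_dct(img, ox, oy, D=dct_matrix()):
    # 8x8 block DCT of `img`, with the block grid shifted by (ox, oy)
    h, w = img.shape
    crop = img[oy:oy + (h - oy) // 8 * 8, ox:ox + (w - ox) // 8 * 8]
    blocks = crop.reshape(crop.shape[0] // 8, 8, crop.shape[1] // 8, 8).swapaxes(1, 2)
    return D @ blocks @ D.T

def quantization_residual(img, ox, oy, step=16.0):
    # Distance of coefficients from multiples of `step`: near zero only
    # when (ox, oy) matches the grid of a previous compression
    C = block_dct(img, ox, oy)
    return float(np.mean(np.abs(C - step * np.round(C / step))))

# Simulate one JPEG-like compression on the aligned (0, 0) grid
rng = np.random.default_rng(0)
orig = rng.uniform(0.0, 255.0, (64, 64))
D = dct_matrix()
C = D @ orig.reshape(8, 8, 8, 8).swapaxes(1, 2) @ D.T
rec = (D.T @ (16.0 * np.round(C / 16.0)) @ D).swapaxes(1, 2).reshape(64, 64)

aligned = quantization_residual(rec, 0, 0)   # tiny: grid matches
shifted = quantization_residual(rec, 4, 4)   # large: grid misaligned
```

Scanning all 64 offsets and taking the per-block minimum of such a residual is one simple way to localize the shifted grid of an NA-JPEG region; the paper instead derives a per-block forgery probability from a statistical model.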

Book ChapterDOI
28 Aug 2012
TL;DR: A method is proposed that automatically and efficiently discriminates single- and double-compressed regions based on the JPEG ghost principle; experiments show that its detection results are highly competitive with state-of-the-art methods.
Abstract: We present a method for automating the detection of so-called JPEG ghosts. JPEG ghosts can be used for discriminating single and double JPEG compression, which is a common cue for image manipulation detection. The JPEG ghost scheme is particularly well-suited for non-technical experts, but the manual search for such ghosts can be both tedious and error-prone. In this paper, we propose a method that automatically and efficiently discriminates single- and double-compressed regions based on the JPEG ghost principle. Experiments show that the detection results are highly competitive with state-of-the-art methods, for both aligned and shifted JPEG grids in double JPEG compression.

31 citations
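The ghost principle itself can be sketched in a few lines of numpy (this is a simplification of my own, not the paper's detector): requantizing already-quantized DCT coefficients introduces near-zero error exactly when the second quantization step matches the first, which is the dip ("ghost") one searches for:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for block-DCT coefficients of an image (Laplacian-like spread)
coeffs = rng.laplace(0.0, 20.0, size=10_000)

q0 = 12.0                            # quantization step of the first compression
once = q0 * np.round(coeffs / q0)    # singly-compressed coefficients

def ghost_error(data, step):
    # Mean squared error that a second quantization with `step` would add
    return float(np.mean((data - step * np.round(data / step)) ** 2))

# Divisors of q0 also give zero error, so we only scan steps above q0 / 2
steps = np.arange(7, 21)
errors = [ghost_error(once, float(s)) for s in steps]
best = int(steps[int(np.argmin(errors))])   # the ghost dip sits at q0 = 12
```

Computing such an error map per spatial region, over a range of recompression qualities, reveals regions whose prior compression history differs from the rest of the image; the paper automates exactly this search.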

Journal ArticleDOI
TL;DR: This paper describes how to compress floating-point coordinates with predictive coding in a completely lossless manner; compression results are reported for the popular parallelogram predictor, although the approach works with any prediction scheme.
Abstract: The geometric data sets found in scientific and industrial applications are often very detailed. Storing them using standard uncompressed formats results in large files that are expensive to store and slow to load and transmit. Many efficient mesh compression techniques have been proposed, but scientists and engineers often refrain from using them because they modify the mesh data. While connectivity is encoded in a lossless manner, the floating-point coordinates associated with the vertices are quantized onto a uniform integer grid for efficient predictive compression. Although a fine enough grid can usually represent the data with sufficient precision, the original floating-point values will change, regardless of grid resolution. In this paper we describe how to compress floating-point coordinates using predictive coding in a completely lossless manner. The initial quantization step is omitted and predictions are calculated in floating-point. The predicted and the actual floating-point values are then broken up into sign, exponent, and mantissa and their corrections are compressed separately with context-based arithmetic coding. As the quality of the predictions varies with the exponent, we use the exponent to switch between different arithmetic contexts. Although we report compression results using the popular parallelogram predictor, our approach works with any prediction scheme. The achieved bit-rates for lossless floating-point compression nicely complement those resulting from uniformly quantizing with different precisions.

31 citations
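The sign/exponent/mantissa decomposition the abstract describes can be sketched with Python's stdlib `struct` module (single-precision shown; the paper's actual pipeline, per-field correction coding with context-based arithmetic coding, is omitted, and `parallelogram_predict` is my naming):

```python
import struct

def split_float(x):
    # Reinterpret an IEEE-754 single as its three bit fields
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF       # 8-bit biased exponent
    mantissa = bits & 0x7FFFFF           # 23-bit fraction
    return sign, exponent, mantissa

def join_float(sign, exponent, mantissa):
    # Exact inverse: the round trip is lossless for every single value
    bits = (sign << 31) | (exponent << 23) | mantissa
    return struct.unpack("<f", struct.pack("<I", bits))[0]

def parallelogram_predict(v0, v1, v2):
    # Predict the fourth vertex of a parallelogram spanned by the
    # triangle (v0, v1, v2), completing it across the edge (v1, v2)
    return v1 + v2 - v0
```

Because prediction happens in floating point and the corrections are coded per bit field, no quantization grid is ever introduced, which is what makes the scheme lossless, unlike the usual quantize-then-predict pipeline the abstract contrasts it with.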

Proceedings ArticleDOI
08 Aug 2000
TL;DR: Concepts are introduced that optimize the image compression ratio by exploiting additional information about a signal's properties, achieving further gains in image compression.
Abstract: Introduces concepts that optimize the image compression ratio by utilizing information about a signal's properties and their uses. This additional information about the image is used to achieve further gains in image compression. The techniques developed in this work build on the ubiquitous JPEG still image compression standard [ISO94] for compression of continuous-tone grayscale and color images. This paper is based on a region-based variable-quantization JPEG software codec that was developed, tested, and compared with other image compression techniques. The application, named JPEGTool, has a graphical user interface (GUI) and runs under Microsoft Windows 95. The paper briefly discusses the standard JPEG implementation and software extensions to the standard. Region selection techniques and algorithms that complement variable quantization are presented, in addition to a brief discussion of the theory and implementation of variable quantization schemes. The paper includes generalized criteria for image compression performance and specific results obtained with JPEGTool.

31 citations
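The core idea of region-based variable quantization can be sketched as scaling the standard JPEG luminance table (Annex K of the JPEG specification) by a per-region factor. This is a minimal illustration under my own simplified scaling rule, not JPEGTool's implementation:

```python
import numpy as np

# Standard JPEG luminance quantization table (Annex K of the JPEG spec)
BASE_Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
], dtype=np.float64)

def scaled_table(factor):
    # Larger factor -> coarser quantization -> fewer bits, lower quality
    return np.clip(np.round(BASE_Q * factor), 1, 255)

def quantize(dct_block, table):
    return np.round(dct_block / table)

rng = np.random.default_rng(2)
block = rng.laplace(0.0, 30.0, (8, 8))           # stand-in DCT coefficients

roi = quantize(block, scaled_table(0.5))         # region of interest: fine
background = quantize(block, scaled_table(2.0))  # background: coarse
nz_roi = int(np.count_nonzero(roi))
nz_bg = int(np.count_nonzero(background))        # coarser table zeroes more
```

Selecting the scale factor per region (for example, fine tables inside a user-drawn region of interest and coarse ones elsewhere) trades bits for quality exactly where it matters, which is the gain the paper measures with JPEGTool.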


Network Information
Related Topics (5)
- Image segmentation: 79.6K papers, 1.8M citations (82% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (82% related)
- Feature extraction: 111.8K papers, 2.1M citations (82% related)
- Image processing: 229.9K papers, 3.5M citations (80% related)
- Convolutional neural network: 74.7K papers, 2M citations (79% related)
Performance Metrics
Number of papers in the topic in previous years:

Year  Papers
2023  21
2022  40
2021  5
2020  2
2019  8
2018  15