
Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as .jls.


Papers
Posted Content
TL;DR: This paper presents a CNN solution that uses raw DCT (discrete cosine transform) coefficients from JPEG images as input, designed to reveal whether a JPEG-format image has been doubly compressed.
Abstract: Detection of double JPEG compression is important for forensic analysis. A few methods have been proposed based on convolutional neural networks (CNNs), but these methods only accept inputs from pre-processed data, such as histogram features and/or decompressed images. In this paper, we present a CNN solution that uses raw DCT (discrete cosine transform) coefficients from JPEG images as input. Considering the DCT sub-band nature of JPEG, a multiple-branch CNN structure has been designed to reveal whether a JPEG-format image has been doubly compressed. Compared with previous methods, the proposed method provides end-to-end detection capability. Extensive experiments have been carried out to demonstrate the effectiveness of the proposed network.
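
As a rough illustration of the multiple-branch idea, here is a minimal PyTorch sketch. It assumes the 64 DCT sub-bands of each 8x8 block are stacked as input channels; the branch count, sub-band grouping, and layer sizes are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MultiBranchDCTNet(nn.Module):
    """Multi-branch CNN over raw JPEG DCT coefficients (illustrative).

    Input shape (B, 64, H/8, W/8): the 64 DCT sub-bands of each 8x8
    block stacked as channels. Grouping and widths are assumptions.
    """
    def __init__(self, num_branches=4, bands_per_branch=16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(bands_per_branch, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            for _ in range(num_branches)
        ])
        # Two classes: singly vs. doubly compressed.
        self.classifier = nn.Linear(num_branches * 32, 2)

    def forward(self, dct):
        # Split the 64 sub-band channels into groups, one per branch.
        chunks = torch.chunk(dct, len(self.branches), dim=1)
        feats = [branch(c).flatten(1) for branch, c in zip(self.branches, chunks)]
        return self.classifier(torch.cat(feats, dim=1))

logits = MultiBranchDCTNet()(torch.randn(2, 64, 32, 32))  # shape (2, 2)
```

Feeding quantized DCT coefficients directly, rather than histograms or decompressed pixels, is what gives the approach its end-to-end character.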

29 citations

Patent
31 Aug 1995
TL;DR: In this paper, the authors proposed a two-pass approach that can compress an arbitrary image to a predetermined fixed-size file, based on the average sum of the absolute values of the quantized DCT coefficients per block.
Abstract: The present invention is a fully JPEG-compliant two-pass approach that can compress an arbitrary image to a predetermined fixed-size file. The compression coding device and method according to the present invention estimates an activity metric based on the average sum of the absolute values of the quantized DCT coefficients per block. Given the activity metric, a mathematical model relating the image activity to the JPEG Q-factor for a given value of the target compression ratio provides an estimated Q-factor value that yields the design target ratio. This mathematical model is developed during a calibration phase, which is executed once, offline, for a given image capturing device. The fact that our activity metric is based on the quantized DCT coefficients allows for an efficient implementation of the presented coding method in either speed- or memory-bound systems.
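
A minimal sketch of the two quantities the abstract describes, assuming the first pass's quantized coefficients are available as 8x8 blocks. The log-linear model form and the coefficients a and b below are hypothetical placeholders for the device-specific calibration the patent fits offline.

```python
import numpy as np

def activity_metric(quantized_dct_blocks):
    """Average sum of |quantized DCT coefficients| per 8x8 block.

    quantized_dct_blocks: array of shape (num_blocks, 8, 8) holding the
    quantized coefficients produced by the first compression pass.
    """
    return np.abs(quantized_dct_blocks).sum(axis=(1, 2)).mean()

def estimate_q_factor(activity, target_ratio, a=10.0, b=50.0):
    """Map activity to a JPEG Q-factor for a target compression ratio.

    Hypothetical log-linear model; the patent's calibration phase fits
    the actual device-specific relationship once, offline.
    """
    q = a * np.log(activity / target_ratio) + b
    return float(np.clip(q, 1, 100))
```

A second pass then re-encodes the image with the estimated Q-factor, which is what makes the scheme two-pass while remaining JPEG-compliant.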

29 citations

Proceedings ArticleDOI
17 May 2004
TL;DR: The paper describes the basic elements of the codec, points out envisaged applications, and gives an outline of the standardization process.
Abstract: Lossless coding is to become the latest extension of the MPEG-4 audio standard. In response to a call for proposals, many companies have submitted lossless audio codecs for evaluation. The codec of the Technical University of Berlin was chosen as reference model for MPEG-4 audio lossless coding (ALS), attaining working draft status in July 2003. The encoder is based on linear prediction, which enables high compression even with moderate complexity, while the corresponding decoder is straightforward. The paper describes the basic elements of the codec, points out envisaged applications, and gives an outline of the standardization process.
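
To make the encoder/decoder asymmetry concrete, here is a toy integer linear-prediction round trip in Python (NumPy). Real MPEG-4 ALS adds adaptive prediction orders, quantized coefficients, block switching, and Rice coding of the residual, none of which are modeled here.

```python
import numpy as np

def lpc_residual(samples, coeffs):
    """Prediction stage: residual[n] = samples[n] - round(prediction).

    Rounding keeps the residual integer-valued, so the decoder can
    invert the step exactly; this is what makes the scheme lossless.
    """
    order = len(coeffs)
    residual = samples.copy()
    for n in range(order, len(samples)):
        pred = int(np.round(np.dot(coeffs, samples[n - order:n][::-1])))
        residual[n] = samples[n] - pred
    return residual

def lpc_reconstruct(residual, coeffs):
    """Decoder: re-run the same predictor over already-decoded samples."""
    order = len(coeffs)
    samples = residual.copy()
    for n in range(order, len(samples)):
        pred = int(np.round(np.dot(coeffs, samples[n - order:n][::-1])))
        samples[n] = residual[n] + pred
    return samples

x = np.array([0, 3, 5, 4, 6, 8, 7, 9], dtype=np.int64)
c = np.array([1.0])  # order-1 predictor: predict the previous sample
assert np.array_equal(lpc_reconstruct(lpc_residual(x, c), c), x)
```

The decoder is straightforward in exactly the sense the abstract notes: it reuses the predictor and simply adds the residual back.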

29 citations

Journal ArticleDOI
02 Oct 2006
TL;DR: In this article, blind classifiers are constructed for detecting steganography in JPEG images and assigning stego images to six popular JPEG embedding algorithms, using 23 calibrated DCT features calculated from the luminance component.
Abstract: The goal of forensic steganalysis is to detect the presence of embedded data and to eventually extract the secret message. A necessary step towards extracting the data is determining the steganographic algorithm used to embed the data. In the paper, we construct blind classifiers capable of detecting steganography in JPEG images and assigning stego images to six popular JPEG embedding algorithms. The classifiers are support vector machines that use 23 calibrated DCT features calculated from the luminance component.
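
A compressed sketch of that pipeline, assuming the 23-dimensional calibrated feature vectors are already extracted. The calibration step follows the standard crop-and-recompress idea, and the data below is random placeholder material, not the paper's dataset.

```python
import numpy as np
from sklearn.svm import SVC

def calibrated_features(stego_feats, reference_feats):
    """Calibration: subtract the features of a cropped-and-recompressed
    reference image, which approximates the cover-image statistics."""
    return stego_feats - reference_feats

# Placeholder data standing in for 23-D calibrated DCT features:
# label 0 = cover, labels 1-6 = the six JPEG embedding algorithms.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 23))
y = rng.integers(0, 7, size=200)

# SVC handles the multi-class assignment via one-vs-one internally.
clf = SVC(kernel="rbf").fit(X, y)
predicted_algorithm = clf.predict(X[:1])
```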

29 citations

Journal ArticleDOI
TL;DR: It is found that the compression efficiency of the neural-network-based predictive techniques is significantly improved by using the error modeling schemes, and the bits per sample required for EEG compression with error modeling and entropy coding lie in the range of 2.92 to 6.62, which indicates a saving of 0.3 to 0.7 bits.
Abstract: Two-stage lossless data compression methods involving predictors and encoders are well known. This paper discusses the application of context-based error modeling techniques for neural network predictors used for the compression of EEG signals. Error modeling improves the performance of a compression algorithm by removing the statistical redundancy that exists among the error signals after the prediction stage. In this paper, experiments are carried out using human EEG signals recorded under various physiological conditions to evaluate the effect of context-based error modeling on EEG compression. It is found that the compression efficiency of the neural-network-based predictive techniques is significantly improved by using the error modeling schemes. It is shown that the bits per sample required for EEG compression with error modeling and entropy coding lie in the range of 2.92 to 6.62, which indicates a saving of 0.3 to 0.7 bits compared to the compression scheme without error modeling.
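
One simple way to realize context-based error modeling is sketched below, assuming integer prediction errors are already available from the first (predictor) stage. The context function (quantized recent error magnitude) and the per-context entropy estimate are illustrative stand-ins for the paper's scheme, and the synthetic Laplacian errors are not EEG data.

```python
import numpy as np

def context_id(recent_errors, num_contexts=4):
    """Quantize recent error magnitude into a small context index
    (a simple stand-in for the paper's context selection)."""
    return min(int(np.mean(np.abs(recent_errors))), num_contexts - 1)

def bits_per_sample(errors, k=2, num_contexts=4):
    """Empirical conditional entropy of prediction errors given the
    context, i.e. the coding cost an ideal entropy coder with
    per-context statistics would approach."""
    buckets = [[] for _ in range(num_contexts)]
    for n in range(k, len(errors)):
        buckets[context_id(errors[n - k:n], num_contexts)].append(errors[n])
    total_bits = 0.0
    for bucket in buckets:
        if not bucket:
            continue
        _, counts = np.unique(bucket, return_counts=True)
        p = counts / counts.sum()
        total_bits += -(p * np.log2(p)).sum() * counts.sum()
    return total_bits / max(len(errors) - k, 1)

errors = np.round(np.random.default_rng(0).laplace(0, 2, 1000)).astype(int)
print(bits_per_sample(errors))  # estimated coding cost in bits per sample
```

Conditioning the entropy coder on such contexts is what removes the residual redundancy among the error signals that the abstract refers to.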

29 citations


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance Metrics
Number of papers in the topic in previous years:

Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15