
Showing papers on "Lossless JPEG published in 2008"


Journal ArticleDOI
TL;DR: A method for the detection of double JPEG compression and a maximum-likelihood estimator of the primary quality factor are presented, essential for construction of accurate targeted and blind steganalysis methods for JPEG images.
Abstract: This paper presents a method for the detection of double JPEG compression and a maximum-likelihood estimator of the primary quality factor. These methods are essential for construction of accurate targeted and blind steganalysis methods for JPEG images. The proposed methods use support vector machine classifiers with feature vectors formed by histograms of low-frequency discrete cosine transformation coefficients. The performance of the algorithms is compared to selected prior art.
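To make the feature construction concrete, here is a minimal sketch of histogram features over a few low-frequency DCT modes, assuming the quantized coefficients have already been extracted from the JPEG file into an array of 8×8 blocks; the selected modes, value range, and normalisation are illustrative choices, not the authors' exact parameters:

import numpy as np

def lowfreq_dct_histograms(blocks, modes=((0, 1), (1, 0), (1, 1), (0, 2), (2, 0)), max_abs=15):
    # blocks: array of shape (num_blocks, 8, 8) holding quantized DCT coefficients.
    # Returns one normalised histogram per selected low-frequency mode, concatenated.
    edges = np.arange(-max_abs, max_abs + 2)
    feats = []
    for r, c in modes:
        vals = np.clip(blocks[:, r, c], -max_abs, max_abs)
        hist, _ = np.histogram(vals, bins=edges)
        feats.append(hist / max(len(vals), 1))
    return np.concatenate(feats)

Feature vectors of this kind would then be used to train a classifier (the paper uses support vector machines) on singly versus doubly compressed examples.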

284 citations


Proceedings ArticleDOI
18 May 2008
TL;DR: An effective Markov process (MP) based JPEG steganalysis scheme, which utilizes both the intrablock and interblock correlations among JPEG coefficients, is presented.
Abstract: JPEG image steganalysis has attracted increasing attention recently. In this paper, we present an effective Markov process (MP) based JPEG steganalysis scheme, which utilizes both the intrablock and interblock correlations among JPEG coefficients. We compute a transition probability matrix for each difference JPEG 2-D array to utilize the intrablock correlation, and "averaged" transition probability matrices for the difference mode 2-D arrays to utilize the interblock correlation. All the elements of these matrices are used as features for steganalysis. Experiments over an image database of 7,560 JPEG images demonstrate that this new approach greatly improves JPEG steganalysis capability and outperforms prior art.
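As an illustration of the intrablock part of such a feature set, the sketch below computes one thresholded transition probability matrix for a horizontal difference array; the threshold value and the restriction to a single direction are simplifications of the paper's full feature construction:

import numpy as np

def transition_probability_matrix(diff_array, T=4):
    # diff_array: 2-D integer array of differences between neighbouring
    # quantized DCT coefficient magnitudes (one direction only here).
    d = np.clip(np.asarray(diff_array, dtype=int), -T, T)
    cur = d[:, :-1].ravel() + T          # current state, shifted to the range 0..2T
    nxt = d[:, 1:].ravel() + T           # next state along the row
    m = np.zeros((2 * T + 1, 2 * T + 1))
    np.add.at(m, (cur, nxt), 1)          # count transitions
    return m / np.maximum(m.sum(axis=1, keepdims=True), 1)   # row-normalised probabilities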

248 citations


Proceedings ArticleDOI
05 Nov 2008
TL;DR: The probabilities of the first digits of quantized DCT (discrete cosine transform) coefficients from individual AC (alternating current) modes are used to detect doubly compressed JPEG images, and combining the MBFDF with a multi-class classification strategy can identify the quality factor of the primary JPEG compression.
Abstract: In this paper, we utilize the probabilities of the first digits of quantized DCT (discrete cosine transform) coefficients from individual AC (alternating current) modes to detect doubly compressed JPEG images. Our proposed features, named mode-based first digit features (MBFDF), have been shown to outperform all previous methods in discriminating doubly compressed JPEG images from singly compressed JPEG images. Furthermore, combining the MBFDF with a multi-class classification strategy can be exploited to identify the quality factor of the primary JPEG compression, thus successfully revealing the double JPEG compression history of a given JPEG image.
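The mode-based first-digit features can be sketched as follows; keeping the first 20 AC modes in the conventional zigzag order and using absolute coefficient values are illustrative assumptions:

import numpy as np

def mode_first_digit_features(blocks, num_modes=20):
    # blocks: array of shape (num_blocks, 8, 8) of quantized DCT coefficients.
    # Returns, for each selected AC mode, the probabilities of first digits 1..9.
    zigzag = [(0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3), (1, 2), (2, 1),
              (3, 0), (4, 0), (3, 1), (2, 2), (1, 3), (0, 4), (0, 5), (1, 4),
              (2, 3), (3, 2), (4, 1), (5, 0)][:num_modes]
    feats = []
    for r, c in zigzag:
        vals = np.abs(blocks[:, r, c].astype(int))
        vals = vals[vals > 0]                       # first digit is undefined for zeros
        if vals.size == 0:
            feats.extend([0.0] * 9)
            continue
        first_digits = np.array([int(str(v)[0]) for v in vals])
        hist = np.bincount(first_digits, minlength=10)[1:10]
        feats.extend(hist / vals.size)
    return np.array(feats)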

181 citations


Proceedings ArticleDOI
TL;DR: A set of color spaces that allow reversible mapping between red-green-blue and luma-chroma representations in integer arithmetic can improve coding gain by over 0.5 dB with respect to the popular YCbCr transform, while achieving much lower computational complexity.
Abstract: This paper reviews a set of color spaces that allow reversible mapping between red-green-blue and luma-chroma representations in integer arithmetic. The YCoCg transform and its reversible form YCoCg-R can improve coding gain by over 0.5 dB with respect to the popular YCbCr transform, while achieving much lower computational complexity. We also present extensions of the YCoCg transform for four-channel CMYK pixel data. Thanks to their reversibility under integer arithmetic, these transforms are useful for both lossy and lossless compression. Versions of these transforms are used in the HD Photo image coding technology (which is the basis for the upcoming JPEG XR standard) and in recent editions of the H.264/MPEG-4 AVC video coding standard. Keywords: image coding, color transforms, lossless coding, YCoCg, JPEG, JPEG XR, HD Photo.
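The reversible YCoCg-R transform mentioned above has a simple lifting structure; the following sketch is the commonly published integer form (it works on plain Python integers or NumPy integer arrays, and the round trip is exact):

def rgb_to_ycocg_r(r, g, b):
    co = r - b
    t = b + (co >> 1)       # arithmetic shift implements floor division by 2
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

For example, rgb_to_ycocg_r(100, 50, 30) gives (57, 70, -15), and ycocg_r_to_rgb(57, 70, -15) returns exactly (100, 50, 30).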

114 citations


Proceedings ArticleDOI
01 Dec 2008
TL;DR: Experiments demonstrate that the proposed machine-learning-based scheme for distinguishing between double and single JPEG compressed images outperforms prior art.
Abstract: Double JPEG compression detection is of significance in digital forensics. We propose an effective machine-learning-based scheme to distinguish between double and single JPEG compressed images. First, difference JPEG 2-D arrays, i.e., the differences between the magnitudes of the JPEG coefficient 2-D array of a given JPEG image and its shifted versions along various directions, are used to enhance double JPEG compression artifacts. A Markov random process is then applied to model the difference 2-D arrays so as to utilize their second-order statistics. In addition, a thresholding technique is used to reduce the size of the transition probability matrices that characterize the Markov random processes. All elements of these matrices are collected as features for double JPEG compression detection. A support vector machine is employed as the classifier. Experiments demonstrate that the proposed scheme outperforms prior art.

103 citations


Proceedings ArticleDOI
12 May 2008
TL;DR: Shifted double JPEG compression (SD-JPEG) is formulated as a noisy convolutive mixing model of the kind studied in blind source separation (BSS); in the noise-free condition, the model can be solved by directly applying the independent component analysis (ICA) method with minor constraints on the content of natural images.
Abstract: The artifacts introduced by JPEG recompression have been demonstrated to be useful in passive image authentication. In this paper, we focus on the shifted double JPEG problem, aiming to identify whether a given JPEG image has ever been compressed twice with inconsistent block segmentation. We formulate shifted double JPEG compression (SD-JPEG) as a noisy convolutive mixing model of the kind studied in blind source separation (BSS). In the noise-free condition, the model can be solved by directly applying the independent component analysis (ICA) method with minor constraints on the content of natural images. To achieve robust identification under noisy conditions, the asymmetry of the independent value map (IVM) is exploited to obtain a normalized criterion of independence. We generate a total of 13 features to fully represent the asymmetric characteristics of the independent value map and then feed them to a support vector machine (SVM) classifier. Experimental results on a set of 1000 images, with various parameter settings, demonstrate the effectiveness of our method.

67 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed steganographic method has superior performance both in capacity and security, and is practical for the application of secret communication.

66 citations


Journal ArticleDOI
TL;DR: The proposed compression algorithm is based on JPEG 2000 and provides better near-lossless compression performance than 3D-CALIC and, in some cases, better than JPEG 2000.
Abstract: We propose a compression algorithm for hyperspectral images featuring both lossy and near-lossless compression. The algorithm is based on JPEG 2000 and provides better near-lossless compression performance than 3D-CALIC. We also show that its effect on the results of selected applications is negligible and, in some cases, better than JPEG 2000.

50 citations


Proceedings ArticleDOI
TL;DR: An overview of the key ideas behind the transform design in JPEG XR is provided, and how the transform is constructed from simple building blocks is described.
Abstract: JPEG XR is a draft international standard undergoing standardization within the JPEG committee, based on a Microsoft technology known as HD Photo. One of the key innovations in the draft JPEG XR standard is its integer-reversible hierarchical lapped transform. The transform can provide both bit-exact lossless and lossy compression in the same signal flow path. The transform requires only a small memory footprint while providing the compression benefits of a larger block transform. The hierarchical nature of the transform naturally provides three levels of multi-resolution signal representation. Its small dynamic range expansion, use of only integer arithmetic and its amenability to parallelized implementation lead to reduced computational complexity. This paper provides an overview of the key ideas behind the transform design in JPEG XR, and describes how the transform is constructed from simple building blocks.
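The hierarchical lapped transform itself is specified by the draft standard and is not reproduced here; as a toy illustration of the kind of integer-reversible building block such transforms are assembled from, the lifting pair below maps two integers to an average/difference pair and inverts exactly:

def lift_forward(a, b):
    h = a - b              # difference
    l = b + (h >> 1)       # floor of the average, computed without losing information
    return l, h

def lift_inverse(l, h):
    b = l - (h >> 1)
    a = b + h
    return a, b

Cascading steps of this kind keeps the whole transform invertible in integer arithmetic, which is what allows lossless and lossy coding to share one signal flow path.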

47 citations


Proceedings ArticleDOI
01 Dec 2008
TL;DR: In this article, the authors propose an improvement to the reversible data hiding method for JPEG images proposed by Xuan et al. They found that blocks located in noisy parts of the image are not suitable for embedding data, and propose a method to judge whether a block of 8×8 DCT coefficients lies in a smooth part of the image by using the DC coefficients of neighboring blocks.
Abstract: This paper presents an improved method of the reversible data hiding for JPEG images proposed by Xuan et al. The conventional method embeds data into the quantized 8×8 block DCT coefficients of the JPEG image. We found that blocks located in noisy parts of the image are not suitable for embedding data. The proposed method can judge whether a block of 8×8 DCT coefficients is located in a smooth part of the image by using the DC coefficients of neighboring blocks, and thus avoids embedding in noisy parts. This results in better performance in terms of capacity-distortion behavior.
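A minimal sketch of the smooth-block test, assuming a 2-D array holding the DC coefficient of every 8×8 block is available; the use of the four immediate neighbours and the threshold value are illustrative assumptions rather than the paper's exact criterion:

import numpy as np

def block_is_smooth(dc_map, i, j, threshold=8):
    # dc_map: 2-D array of per-block DC coefficients; (i, j) indexes a block.
    neighbours = []
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < dc_map.shape[0] and 0 <= nj < dc_map.shape[1]:
            neighbours.append(int(dc_map[ni, nj]))
    if len(neighbours) < 2:
        return False                      # be conservative at image borders
    return max(neighbours) - min(neighbours) <= threshold

Embedding would then be restricted to blocks for which block_is_smooth(...) returns True.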

45 citations


Proceedings ArticleDOI
05 Nov 2008
TL;DR: This paper proposes a new approach to analysing the blocking periodicity by developing a linear dependency model of pixel differences, constructing a probability map of each pixel's conformance to this model, and finally extracting a peak window from the Fourier spectrum of the probability map.
Abstract: Since the JPEG image format has become a popular image compression standard, tampering detection in JPEG images now plays an important role. The artifacts introduced by lossy JPEG compression can be seen as an inherent signature of compressed images. In this paper, we propose a new approach to analysing the blocking periodicity by 1) developing a linear dependency model of pixel differences, 2) constructing a probability map of each pixel's conformance to this model, and 3) extracting a peak window from the Fourier spectrum of the probability map. We show that, for singly and doubly compressed images, the energy distributions of their peaks behave very differently. We exploit this property and derive statistical features from peak windows to classify whether an image has been tampered with by cropping and recompression. Experimental results demonstrate the validity of the proposed approach.

Journal ArticleDOI
29 Oct 2008
TL;DR: Experiments show that the proposed method outperforms other methods in terms of capacity and security, and a theoretical analysis of the histogram characteristics after steganography proves that PM1 applied to JPEG images preserves first-order statistical properties.
Abstract: Plus-minus 1 (PM1) is an improvement over least-significant-bit (LSB)-based steganography techniques; it not only foils typical attacks against LSB-based techniques but also provides high capacity. However, how to apply it to JPEG images has not been addressed in the literature. In this paper, PM1 steganography in JPEG images using a genetic algorithm (GA) is proposed, in which the GA is used to optimize performance, such as minimizing blockiness. A theoretical analysis of the histogram characteristics after steganography is discussed in detail, proving that PM1 applied to JPEG images preserves first-order statistical properties. Experiments show that the proposed method outperforms other methods in terms of capacity and security.
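The ±1 embedding operation at the heart of PM1 can be sketched as below; the genetic-algorithm search that decides where to embed so as to minimise blockiness is omitted, and skipping zero-valued coefficients is a common assumption rather than a detail taken from the paper:

import random

def pm1_embed(coeffs, bits):
    # coeffs: 1-D sequence of quantized AC coefficients (a modified copy is returned).
    # bits:   message bits (0/1); one bit is carried per usable coefficient.
    out = list(coeffs)
    k = 0
    for idx in range(len(out)):
        if k >= len(bits):
            break
        if out[idx] == 0:                 # zeros are skipped (assumption)
            continue
        if (out[idx] & 1) != bits[k]:     # parity mismatch: adjust by +/-1
            delta = random.choice((-1, 1))
            if out[idx] + delta == 0:     # never turn a coefficient into zero
                delta = -delta
            out[idx] += delta
        k += 1
    return out

Extraction simply reads the parity (coefficient & 1) of the non-zero coefficients in the same order.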

Journal ArticleDOI
TL;DR: This is the first time a JPEG-LS implementation has offered such high-speed encoding; the experimental results show that encoding is performed at high speed, as expected, and is able to serve real-time applications.
Abstract: A new design approach to create an efficient high-performance JPEG-LS encoder is proposed in this paper. The proposed implementation compresses the image data with the lossless mode of JPEG-LS. When valuable image content must be acquired in real time, lossless compression is essential; it is important to critical applications such as the acquisition of medical images and the transmission of high-definition, high-resolution images from space (satellites). The contribution of the paper is an efficient pipelined JPEG-LS encoder, which requires significantly lower encoding time than any other available JPEG-LS hardware or software implementation. The experimental results show that encoding is performed at high speed, as expected, and is able to serve real-time applications. This is the first time a JPEG-LS implementation has offered such high-speed encoding.
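The lossless mode used here is standard JPEG-LS, whose core prediction step is the median edge detector (MED); the plain-Python version below shows only that predictor, not the paper's pipelined hardware design:

def med_predict(a, b, c):
    # a = left neighbour, b = upper neighbour, c = upper-left neighbour
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

The encoder then codes the prediction residual with context modelling and Golomb codes, as specified by JPEG-LS.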

Proceedings ArticleDOI
04 Mar 2008
TL;DR: Experimental results show that the proposed approach can provide a higher information-hiding capacity than the Jpeg-Jsteg and Chang et al. methods, which are based on conventional blocks of 8×8 pixels.
Abstract: The two most important aspects of any image-based steganographic system are the quality of the stego-image and the capacity of the cover image. This paper proposes a novel, high-capacity steganographic approach based on the discrete cosine transform (DCT) and JPEG compression. The JPEG technique divides the input image into non-overlapping blocks of 8×8 pixels and applies the DCT; our proposed method instead divides the cover image into non-overlapping blocks of 16×16 pixels. For each quantized DCT block, the two least-significant bits (2-LSBs) of each middle-frequency coefficient are modified to embed two secret bits. Our aim is to investigate the data-hiding efficiency of using larger blocks for JPEG compression. Our experimental results show that the proposed approach provides a higher information-hiding capacity than the Jpeg-Jsteg and Chang et al. methods, which are based on conventional 8×8-pixel blocks. Furthermore, the produced stego-images are almost identical to the original cover images.
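The embedding step itself is plain 2-LSB replacement; the sketch below assumes the caller supplies the quantized coefficients of one block and a list of (row, column) positions designated as middle-frequency, since the exact coefficient band used by the authors is not spelled out here:

def embed_two_lsbs(block, positions, bits):
    # block: 2-D array/list of quantized DCT coefficients (modified in place).
    # bits:  message bits; two bits are written per selected coefficient.
    k = 0
    for r, c in positions:
        if k + 2 > len(bits):
            break
        two = (bits[k] << 1) | bits[k + 1]
        block[r][c] = (block[r][c] & ~3) | two    # replace the two least-significant bits
        k += 2
    return block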

Proceedings ArticleDOI
TL;DR: In this article, the authors present methods for detection of double-compression in JPEG images and for estimation of the primary quantization matrix, which is lost during recompression.
Abstract: A JPEG image is double-compressed if it underwent JPEG compression twice, each time with a different quantization matrix but with the same 8 × 8 grid. Some popular steganographic algorithms (Jsteg, F5, OutGuess) naturally produce such double-compressed stego images. Because double-compression may significantly change the statistics of DCT coefficients, it negatively influences the accuracy of some steganalysis methods developed under the assumption that the stego image was only single-compressed. This paper presents methods for detection of double-compression in JPEGs and for estimation of the primary quantization matrix, which is lost during recompression. The proposed methods are essential for construction of accurate targeted and blind steganalysis methods for JPEG images, especially those based on calibration. Both methods rely on support vector machine classifiers with feature vectors formed by histograms of low-frequency DCT coefficients.

Patent
Andrew V. Kadatch1
10 Mar 2008
TL;DR: In this paper, a lossless pixel palettization scheme was proposed to locally compress portions of at least a two-dimensional image, allowing for efficient data transfers without loss of image information.
Abstract: The present invention leverages a lossless pixel palettization scheme to locally compress portions of at least a two-dimensional image. This provides a lossless compression means with a compression ratio comparable with lossy compression means, allowing for efficient data transfers without loss of image information. By utilizing locally-adaptive palettization, two-dimensional pixel information can be exploited to increase compression performance. In one instance of the present invention, a locally-adaptive, lossless palettization scheme is utilized in conjunction with a one-dimensional compression scheme to yield a further increase in compression ratio. This allows for the exploitation of two-dimensional data information along with the further compression of information reduced to one dimension.
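A minimal sketch of the locally adaptive palettization idea, under the assumption that a tile is palettized only when its number of distinct pixel values is small; entropy coding of the palette and of the index stream (the one-dimensional compression mentioned above) is left out:

def palettize_tile(tile, max_colors=64):
    # tile: list of rows, each a list of hashable pixel values (e.g. RGB tuples).
    # Returns (palette, index_map) for lossless reconstruction, or None when the
    # tile has too many distinct colours and another coding mode should be used.
    palette, lookup, index_map = [], {}, []
    for row in tile:
        index_row = []
        for px in row:
            if px not in lookup:
                if len(palette) >= max_colors:
                    return None
                lookup[px] = len(palette)
                palette.append(px)
            index_row.append(lookup[px])
        index_map.append(index_row)
    return palette, index_map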

Journal ArticleDOI
TL;DR: According to the simulation results, the proposed design can encode 44.2 M samples/s and can be used for digital photography applications to achieve low computation, low storage, and high dynamic range features.
Abstract: To satisfy high-quality image compression requirements, the new JPEG XR compression standard has been introduced. This paper presents the analysis and VLSI architecture design of a JPEG XR encoder capable of smoothly encoding 4:4:4 1920×1080 high-definition photos. According to the simulation results, the proposed design achieves a throughput of 44.2 M samples/s. The design can be used for digital photography applications to achieve low computation, low storage, and high dynamic range features.


Proceedings ArticleDOI
12 Dec 2008
TL;DR: Objective measurements performed by the author indicate that the modified encoder, while staying backwards compatible to the current standard proposition, improves visual performance significantly, and the performance of themodified encoder is similar to JPEG.
Abstract: Microsoft's recently proposed new image compression codec HDPhoto is currently undergoing ISO standardization as JPEG-XR. Even though performance measurements carried out by the JPEG committee indicated that the PSNR performance of HDPhoto is competitive, the visual performance of HDPhoto showed notable deficits, both in subjective and objective tests. This paper introduces various techniques that improve the visual performance of HDPhoto without leaving the current codestream definition. Objective measurements performed by the author indicate that the modified encoder, while staying backwards compatible to the current standard proposition, improves visual performance significantly, and the performance of the modified encoder is similar to JPEG.

Proceedings Article
01 Dec 2008
TL;DR: This paper proposes a transcoding scheme that compresses existing JPEG files without any loss of quality; results indicate that the additional coding-rate reductions obtained by the proposed scheme are 18-28% for monochrome JPEG images.
Abstract: This paper proposes a transcoding scheme that compresses existing JPEG files without any loss of quality. In this scheme, H.264-like block-adaptive intra prediction is employed to exploit inter-block correlations of quantized DCT coefficients stored in the JPEG file. The prediction is performed in the spatial domain of each 8×8-pel block, but the corresponding prediction residuals are calculated in the DCT domain to ensure lossless reconstruction of the original coefficients. Moreover, block-based classification is carried out to allow accurate modeling of the probability density functions (PDFs) of the prediction residuals. A multisymbol arithmetic coder along with the PDF model is used for entropy coding of the prediction residual of each DCT coefficient. Simulation results indicate that the additional reductions in coding rate obtained by the proposed scheme are 18–28% for monochrome JPEG images.
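The lossless trick is that the intra prediction is formed in the pixel domain but subtracted in the quantized-DCT domain. A hedged sketch follows, assuming SciPy is available; the exact DCT and rounding used for the prediction only need to match between encoder and decoder for the original coefficients to be recovered bit-exactly:

import numpy as np
from scipy.fft import dctn

def prediction_residual(q_coeffs, pred_block, quant_table):
    # q_coeffs:    8x8 quantized DCT coefficients read from the JPEG file
    # pred_block:  8x8 spatial-domain intra prediction for this block
    # quant_table: 8x8 quantization table from the JPEG file
    pred_dct = dctn(pred_block - 128.0, norm='ortho')         # level shift + 2-D DCT
    pred_q = np.round(pred_dct / quant_table).astype(int)
    return q_coeffs - pred_q                                  # residual to entropy-code

def reconstruct_coefficients(residual, pred_block, quant_table):
    pred_dct = dctn(pred_block - 128.0, norm='ortho')
    pred_q = np.round(pred_dct / quant_table).astype(int)
    return residual + pred_q                                  # exact original coefficients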

Proceedings ArticleDOI
12 Dec 2008
TL;DR: An image compression scheme is proposed, utilising wavelet-based image segmentation and texture analysis, and patch-based texture synthesis, and this has been incorporated into a JPEG framework.
Abstract: An image compression scheme is proposed, utilising wavelet-based image segmentation and texture analysis, and patch-based texture synthesis. This has been incorporated into a JPEG framework. Homogeneous textured regions are identified and removed prior to transform coding. These regions are then replaced at the decoder by synthesis from marked samples, and colour matched to ensure similarity to the original. Experimental results on natural images show bitrate savings of over 18% compared with JPEG for little change in measured visual quality.

Journal ArticleDOI
TL;DR: A novel filter design framework based on the Daubechies 9/7 filter is proposed, which employs chaos evolution programming (CEP) to optimize the wavelet filter both for universal images and for each specific image.

Journal Article
TL;DR: An adaptive algorithm whose compression ratio is adjustable by a parameter to meet the requirements of different environments is proposed, and experimental results show that the improved algorithm is more efficient than the traditional one for remote transmission.
Abstract: Traditional lossless compression algorithms aim for a higher compression ratio, but they cannot meet the requirements of the network environment. This paper proposes an adaptive algorithm whose compression ratio is adjustable by a parameter to meet the requirements of different environments. Experimental results show that the improved algorithm is more efficient than the traditional one for remote transmission.

Proceedings ArticleDOI
23 Apr 2008
TL;DR: Simulation results show that the proposed chaotic watermarking scheme for authentication of JPEG images can directly localize tampering of the watermarked JPEG images and is well suited for integrity authentication of electronic images on the Internet.
Abstract: With the development of computer networks and digital techniques, electronic images are easily created, edited, reproduced, and distributed. Unfortunately, illegal copying and malicious tampering are also facilitated. Fragile watermarking is currently an active research area for authenticating the veracity and integrity of electronic content. In this paper, a chaotic watermarking scheme for authentication of widely used JPEG images is proposed. The quantized DCT (discrete cosine transform) coefficients obtained after entropy decoding are mapped to the initial values of a chaotic system, and the watermark information generated by chaotic iteration is embedded in the JPEG compressed domain. Thanks to the chaotic map's high sensitivity to initial values, very accurate localization of malicious tampering of JPEG images is achieved. Because the watermark is embedded by directly modifying the quantized DCT coefficients, the proposed method prevents tamper detection from being invalidated by JPEG re-quantization. Furthermore, because the scheme avoids the large amount of computation required for full decoding and re-encoding, low complexity and high extraction speed are obtained. Simulation results show that the proposed scheme can directly localize tampering of the watermarked JPEG images, and the tamper localization is highly sensitive, so the scheme is well suited for integrity authentication of electronic images on the Internet.
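The watermark bits can be produced by iterating a chaotic map seeded from the block's coefficients; the logistic map and the particular way of turning coefficients into an initial value below are illustrative choices rather than the authors' exact construction:

def chaotic_watermark_bits(q_coeffs, n_bits, r=3.99):
    # q_coeffs: iterable of quantized DCT coefficients of one block.
    s = sum(abs(int(v)) for v in q_coeffs)
    x = (s % 1000) / 1000.0 or 0.5        # seed x0 in (0, 1); avoid exactly 0
    bits = []
    for _ in range(n_bits):
        x = r * x * (1.0 - x)             # logistic map iteration
        bits.append(1 if x > 0.5 else 0)
    return bits

Because even a small change to the coefficients changes the seed, the generated bit sequence changes drastically, which is what enables fine-grained tamper localization.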

Journal ArticleDOI
TL;DR: Experimental results have demonstrated that the proposed watermark technique successfully survives JPEG 2000 compression, progressive transmission, and principal attacks.
Abstract: A new region of interest (ROI)-based watermarking method for JPEG 2000 is presented. The watermark is embedded into the host image based on the characteristics of the ROI to protect rights to the images. This scheme integrates the watermarking process with JPEG 2000 compression procedures. Experimental results have demonstrated that the proposed watermark technique successfully survives JPEG 2000 compression, progressive transmission, and principal attacks.

01 Oct 2008
TL;DR: This memo describes an RTP payload format for the ISO/IEC International Standard 15444-1 | ITU-T Rec. T.800, better known as JPEG 2000.
Abstract: This memo describes an RTP payload format for the ISO/IEC International Standard 15444-1 | ITU-T Rec. T.800, better known as JPEG 2000. JPEG 2000 features are considered in the design of this payload format. JPEG 2000 is a truly scalable compression technology allowing applications to encode once and decode many different ways. The JPEG 2000 video stream is formed by extending from a single image to a series of JPEG 2000 images. [STANDARDS-TRACK]

Proceedings ArticleDOI
TL;DR: This paper explores several encoder-side techniques aimed at improving the compression performance of encoding for the draft JPEG XR standard, and discusses techniques for achieving better compression performance according to each metric.
Abstract: This paper explores several encoder-side techniques aimed at improving the compression performance of encoding for the draft JPEG XR standard. Though the syntax and decoding process are fixed by the standard, significant variation in encoder design and some variation in decoder design are possible. For a variety of selected quality metrics, the paper discusses techniques for achieving better compression performance according to each metric. As a basic reference encoder and decoder for the discussion and modifications, the publicly available Microsoft HD Photo DPK (Device Porting Kit) 1.0, on which the draft JPEG XR standard was based, was used. The quality metrics considered include simple mathematical objective metrics (PSNR and L∞) as well as pseudo-perceptual metrics (single-scale and multi-scale MSSIM). Keywords: image coding, JPEG, JPEG 2000, JPEG XR, HD Photo.

Journal ArticleDOI
TL;DR: Comparison of diagnostic performances of 2 different image compression methods indicates that image compression with typical compression algorithms at rates yielding storage sizes of around 50 kB is sufficient even for the challenging task of radiographic detection of non-cavitated carious approximal lesions.
Abstract: The study compared the diagnostic performances of 2 different image compression methods: JPEG (discrete cosine transform; Joint Photographic Experts Group compression standard) versus JPEG2000 (discrete wavelet transform).

Proceedings ArticleDOI
01 Dec 2008
TL;DR: This paper presents a new method for encoding multiwavelet-decomposed images by defining coefficients suitable for the SPIHT algorithm, which gives better compression performance than existing methods in many cases.
Abstract: Advances in wavelet transforms and quantization methods have produced algorithms capable of surpassing existing image compression standards such as the Joint Photographic Experts Group (JPEG) algorithm. The existing compression methods for the JPEG standards use DCT with arithmetic coding and DWT with Huffman coding. The DCT uses a single kernel, whereas wavelets offer a choice of filters depending on the application. The wavelet-based set partitioning in hierarchical trees (SPIHT) algorithm gives better compression. For best performance in image compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry, but they cannot simultaneously possess all of these properties. The relatively new field of multiwavelets offers more design options and can combine all desirable transform features. However, there are some limitations in using the SPIHT algorithm for multiwavelet coefficients. This paper presents a new method for encoding multiwavelet-decomposed images by defining coefficients suitable for the SPIHT algorithm, giving better compression performance than existing methods in many cases.

Proceedings ArticleDOI
20 Dec 2008
TL;DR: The statistical difference in the sub-band DWT (discrete wavelet transform) coefficient histograms between single and double JPEG 2000 compression is analyzed and a scheme to discriminate between them is devised.
Abstract: Double image compression might occur if the image has been tampered with or has had secret data embedded in it. It is essential to detect double compression for image forensics and blind steganalysis. This paper analyzes the statistical difference in the sub-band DWT (discrete wavelet transform) coefficient histograms between single and double JPEG 2000 compression and devises a scheme to discriminate between them. The experiments demonstrate that the proposed approach achieves effective and accurate detection of double JPEG 2000 compression.
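A minimal sketch of the sub-band histogram features, assuming PyWavelets is available; the wavelet, decomposition depth, and histogram range are illustrative assumptions (JPEG 2000 itself uses the 9/7 or 5/3 filters), and a classifier would be trained on these features to separate single from double compression:

import numpy as np
import pywt

def subband_histogram_features(image, wavelet='bior4.4', level=3, bins=32):
    # image: 2-D greyscale array. Returns concatenated, normalised histograms
    # of the coefficients in every detail sub-band of a multi-level 2-D DWT.
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=level)
    feats = []
    for detail in coeffs[1:]:             # skip the approximation sub-band
        for band in detail:               # horizontal, vertical, diagonal details
            hist, _ = np.histogram(band, bins=bins, range=(-50.0, 50.0))
            feats.append(hist / band.size)
    return np.concatenate(feats)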