
Showing papers on "Lossless JPEG published in 2009"


Journal ArticleDOI
TL;DR: JPEG XR is the newest image coding standard from the JPEG committee and achieves high image quality, on par with JPEG 2000, while requiring low computational resources and storage capacity.
Abstract: JPEG XR is the newest image coding standard from the JPEG committee. It primarily targets the representation of continuous-tone still images such as photographic images and achieves high image quality, on par with JPEG 2000, while requiring low computational resources and storage capacity. Moreover, it effectively addresses the needs of emerging high dynamic range imagery applications by including support for a wide range of image representation formats.

163 citations


Journal ArticleDOI
TL;DR: A quantitative comparison between the energy costs associated with direct transmission of uncompressed images and sensor platform-based JPEG compression followed by transmission of the compressed image data is presented.
Abstract: One of the most important goals of current and future sensor networks is energy-efficient communication of images. This paper presents a quantitative comparison between the energy costs associated with 1) direct transmission of uncompressed images and 2) sensor platform-based JPEG compression followed by transmission of the compressed image data. JPEG compression computations are mapped onto various resource-constrained platforms using a design environment that allows computation using the minimum integer and fractional bit-widths needed in view of other approximations inherent in the compression process and choice of image quality parameters. Advanced applications of JPEG, such as region of interest coding and successive/progressive transmission, are also examined. Detailed experimental results examining the tradeoffs in processor resources, processing/transmission time, bandwidth utilization, image quality, and overall energy consumption are presented.
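
As a rough illustration of the trade-off the paper quantifies, the sketch below compares the two strategies with a toy energy model. The per-bit radio cost and per-bit compression cost are assumed placeholder values, not measurements from the paper.

# Back-of-the-envelope comparison: transmit raw vs. compress-then-transmit.
# All constants are illustrative assumptions, not figures from the paper.
E_TX_PER_BIT = 220e-9    # J per transmitted bit (assumed radio cost)
E_CPU_PER_BIT = 30e-9    # J of JPEG-compression work per input bit (assumed)

def energy_raw(num_pixels, bits_per_pixel=8):
    """Energy to send the uncompressed image."""
    return num_pixels * bits_per_pixel * E_TX_PER_BIT

def energy_compressed(num_pixels, bits_per_pixel=8, compression_ratio=10.0):
    """Energy to JPEG-compress on the sensor node and send the smaller bitstream."""
    raw_bits = num_pixels * bits_per_pixel
    return raw_bits * E_CPU_PER_BIT + (raw_bits / compression_ratio) * E_TX_PER_BIT

if __name__ == "__main__":
    n = 320 * 240
    print(f"raw:        {energy_raw(n):.4f} J")
    print(f"compressed: {energy_compressed(n):.4f} J")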

103 citations


Journal ArticleDOI
TL;DR: An iterative algorithm is presented to jointly optimize run-length coding, Huffman coding, and quantization table selection that results in a compressed bitstream completely compatible with existing JPEG and MPEG decoders, but is also computationally efficient.
Abstract: To maximize rate distortion performance while remaining faithful to the JPEG syntax, the joint optimization of the Huffman tables, quantization step sizes, and DCT indices of a JPEG encoder is investigated. Given Huffman tables and quantization step sizes, an efficient graph-based algorithm is first proposed to find the optimal DCT indices in the form of run-size pairs. Based on this graph-based algorithm, an iterative algorithm is then presented to jointly optimize run-length coding, Huffman coding, and quantization table selection. The proposed iterative algorithm not only results in a compressed bitstream completely compatible with existing JPEG and MPEG decoders, but is also computationally efficient. Furthermore, when tested over standard test images, it achieves the best JPEG compression results, to the extent that its own JPEG compression performance even exceeds the quoted PSNR results of some state-of-the-art wavelet-based image coders such as Shapiro's embedded zerotree wavelet algorithm at the common bit rates under comparison. Both the graph-based algorithm and the iterative algorithm can be applied to application areas such as web image acceleration, digital camera image compression, MPEG frame optimization, and transcoding, etc.
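
The joint optimization is driven by a Lagrangian rate-distortion cost. The sketch below illustrates that cost for choosing a uniform quantization step on one coefficient block; the rate term is only a crude magnitude-category proxy (an assumption made for brevity), not the paper's actual run-size/Huffman rate, and the lambda value is arbitrary.

import numpy as np

def lagrangian_cost(coeffs, q_step, lam):
    """J = D + lambda*R for one block, with a crude rate proxy.

    D is the squared reconstruction error after uniform quantization;
    R is approximated by the sum of magnitude-category sizes
    (log2 of |level|+1), not by real run-size Huffman code lengths.
    """
    levels = np.round(coeffs / q_step)
    recon = levels * q_step
    distortion = np.sum((coeffs - recon) ** 2)
    rate = np.sum(np.log2(np.abs(levels) + 1.0))
    return distortion + lam * rate

def pick_quantizer(coeffs, candidate_steps, lam=50.0):
    """Choose the step size with the smallest Lagrangian cost."""
    costs = [lagrangian_cost(coeffs, q, lam) for q in candidate_steps]
    return candidate_steps[int(np.argmin(costs))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.laplace(scale=20.0, size=(8, 8))   # stand-in for DCT coefficients
    print("chosen step:", pick_quantizer(block, [4, 8, 16, 32]))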

91 citations


Proceedings ArticleDOI
01 Dec 2009
TL;DR: This paper investigates the important case of resampling detection in re-compressed JPEG images and shows how blocking artifacts of the previous compression step can help to increase the otherwise drastically reduced detection performance in JPEG compressed images.
Abstract: Resampling detection has become a standard tool in digital image forensics. This paper investigates the important case of resampling detection in re-compressed JPEG images. We show how blocking artifacts of the previous compression step can help to increase the otherwise drastically reduced detection performance in JPEG compressed images. We give a formulation on how affine transformations of JPEG compressed images affect state-of-the-art resampling detectors and derive a new efficient detection variant, which better suits this relevant detection scenario. The principal appropriateness of using JPEG pre-compression artifacts for the detection of resampling in re-compressed images is backed with experimental evidence on a large image set and for a variety of different JPEG qualities.
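
A minimal sketch of how blocking artifacts of a previous JPEG compression can be measured: compare pixel differences that straddle the 8x8 grid with differences elsewhere. This is a generic blockiness score for illustration, not the detector derived in the paper.

import numpy as np

def blockiness(gray):
    """Ratio of mean absolute differences across 8x8 grid boundaries versus
    elsewhere; values well above 1 suggest JPEG blocking artifacts."""
    gray = np.asarray(gray, dtype=np.float64)
    dh = np.abs(np.diff(gray, axis=1))           # horizontal neighbour differences
    dv = np.abs(np.diff(gray, axis=0))           # vertical neighbour differences
    cols = np.arange(dh.shape[1])
    rows = np.arange(dv.shape[0])
    on_grid_h = (cols % 8) == 7                  # differences straddling a block border
    on_grid_v = (rows % 8) == 7
    at_border = dh[:, on_grid_h].mean() + dv[on_grid_v, :].mean()
    elsewhere = dh[:, ~on_grid_h].mean() + dv[~on_grid_v, :].mean()
    return at_border / elsewhere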

86 citations


Journal ArticleDOI
TL;DR: A modified version of the embedded block coder with optimized truncation (EBCOT), tailored according to the characteristics of the data, encodes the residual data generated after prediction to provide resolution and quality scalability.
Abstract: We propose a novel symmetry-based technique for scalable lossless compression of 3D medical image data. The proposed method employs the 2D integer wavelet transform to decorrelate the data and an intraband prediction method to reduce the energy of the sub-bands by exploiting the anatomical symmetries typically present in structural medical images. A modified version of the embedded block coder with optimized truncation (EBCOT), tailored according to the characteristics of the data, encodes the residual data generated after prediction to provide resolution and quality scalability. Performance evaluations on a wide range of real 3D medical images show an average improvement of 15% in lossless compression ratios when compared to other state-of-the-art lossless compression methods that also provide resolution and quality scalability, including 3D-JPEG2000, JPEG2000, and H.264/AVC intra-coding.
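
For readers unfamiliar with integer wavelet transforms, the sketch below shows a one-level, one-dimensional LeGall 5/3 integer lifting transform with an exact integer inverse, which is the property that makes lossless coding possible. It is a generic example, not the specific 2D transform or symmetry-based prediction used in the paper.

import numpy as np

def lift53_forward(x):
    """One level of the LeGall 5/3 integer lifting transform (1-D).

    Integer-to-integer and exactly invertible (periodic boundary handling).
    Assumes len(x) is even.
    """
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    d = odd - ((even + np.roll(even, -1)) >> 1)      # predict step: detail coefficients
    s = even + ((d + np.roll(d, 1) + 2) >> 2)        # update step: approximation coefficients
    return s, d

def lift53_inverse(s, d):
    even = s - ((d + np.roll(d, 1) + 2) >> 2)
    odd = d + ((even + np.roll(even, -1)) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

if __name__ == "__main__":
    sig = np.random.default_rng(1).integers(0, 256, size=64)
    s, d = lift53_forward(sig)
    assert np.array_equal(lift53_inverse(s, d), sig)   # lossless round trip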

86 citations


Book ChapterDOI
03 Sep 2009
TL;DR: It is demonstrated that it is even possible to beat the quality of the much more advanced JPEG 2000 standard when one uses subdivisions on rectangles and a number of additional optimisations.
Abstract: Although widely used standards such as JPEG and JPEG 2000 exist in the literature, lossy image compression is still a subject of ongoing research. Galic et al. (2008) have shown that compression based on edge-enhancing anisotropic diffusion can outperform JPEG for medium to high compression ratios when the interpolation points are chosen as vertices of an adaptive triangulation. In this paper we demonstrate that it is even possible to beat the quality of the much more advanced JPEG 2000 standard when one uses subdivisions on rectangles and a number of additional optimisations. They include improved entropy coding, brightness rescaling, diffusivity optimisation, and interpolation swapping. Experiments on classical test images are presented that illustrate the potential of our approach.
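
The underlying idea is to store only a sparse set of pixels and reconstruct the rest by diffusion. The sketch below uses plain homogeneous (linear) diffusion with a Jacobi iteration purely to illustrate that inpainting step; the paper relies on edge-enhancing anisotropic diffusion, which preserves edges far better, together with optimised point selection and the other listed refinements.

import numpy as np

def diffusion_reconstruct(values, mask, n_iter=2000):
    """Fill in an image from sparse stored pixels by homogeneous diffusion.

    `values` holds the kept pixel values, `mask` is True where a pixel was
    stored.  Linear diffusion only; the paper uses edge-enhancing
    anisotropic diffusion.
    """
    u = np.where(mask, values, values[mask].mean()).astype(np.float64)
    for _ in range(n_iter):
        # Jacobi step: each unknown pixel moves toward its 4-neighbour mean.
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(mask, values, avg)
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    img = np.tile(np.linspace(0, 255, 64), (64, 1))   # smooth test image
    mask = rng.random(img.shape) < 0.05               # keep roughly 5% of the pixels
    rec = diffusion_reconstruct(img, mask)
    print("mean abs error:", np.abs(rec - img).mean())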

56 citations


Proceedings ArticleDOI
12 Sep 2009
TL;DR: In this paper, QR (Quick Response) bar code and image processing techniques are used to construct a nested steganography scheme, and simulations show that the scheme is robust to JPEG attacks.
Abstract: In this paper, QR (Quick Response) bar code and image processing techniques are used to construct a nested steganography scheme. Lossless data are embedded into a cover image: the data are text that is first encoded by the QR barcode, so the extracted data show no distortion with respect to the original data. Since the extracted text must be lossless, the error correction rate of the QR encoding has to be designed carefully; we found a 25% error correction rate suitable for our goal. Simulations show that our scheme is robust to JPEG attacks.
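
As a point of reference only, the sketch below shows plain spatial-domain LSB embedding of a bit payload (for instance, a flattened QR module matrix) into a cover image. This is not the authors' nested scheme: plain LSB embedding does not survive JPEG compression, which is exactly the robustness problem the QR error correction is meant to address.

import numpy as np

def embed_bits_lsb(cover, bits):
    """Embed a flat 0/1 array into the least significant bits of a cover image.

    Generic LSB illustration only, not the paper's JPEG-robust scheme.
    """
    flat = cover.astype(np.uint8).ravel().copy()
    if bits.size > flat.size:
        raise ValueError("payload too large for cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits.astype(np.uint8)
    return flat.reshape(cover.shape)

def extract_bits_lsb(stego, n_bits):
    return stego.ravel()[:n_bits] & 1

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    payload = rng.integers(0, 2, size=200, dtype=np.uint8)
    stego = embed_bits_lsb(cover, payload)
    assert np.array_equal(extract_bits_lsb(stego, payload.size), payload)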

37 citations


Proceedings ArticleDOI
19 Oct 2009
TL;DR: A novel method of JPEG steganalysis is proposed: based on an observation of a bi-variate generalized Gaussian distribution in the Discrete Cosine Transform (DCT) domain, neighboring joint density features are extracted on both an intra-block and an inter-block basis.
Abstract: Detection of information hiding in JPEG images is actively studied in the steganalysis community because JPEG is a widely used compression standard and several steganographic systems have been designed for covert communication in JPEG images. In this paper, we propose a novel method of JPEG steganalysis. Based on an observation of a bi-variate generalized Gaussian distribution in the Discrete Cosine Transform (DCT) domain, neighboring joint density features on both the intra-block and inter-block levels are extracted. Support Vector Machines (SVMs) are applied for detection. Experimental results indicate that this new method markedly improves on the current state of the art in detecting several steganographic systems in JPEG images. Our study also shows that it is more accurate to evaluate detection performance in terms of both image complexity and information hiding ratio.
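
A simplified version of the neighboring joint density idea is sketched below: for each 8x8 block of quantized DCT coefficients, the joint histogram of horizontally adjacent absolute values (clipped to a small threshold) is accumulated and normalised into a feature matrix. The paper also uses vertical neighbours and inter-block (same-position) pairs; the threshold t is an assumed parameter.

import numpy as np

def intra_block_joint_density(dct_blocks, t=4):
    """Joint density of horizontally adjacent |quantized DCT| pairs inside each
    8x8 block, with magnitudes clipped to [0, t]; returns a (t+1) x (t+1) matrix."""
    mags = np.minimum(np.abs(dct_blocks), t)           # shape (n_blocks, 8, 8)
    left = mags[:, :, :-1].ravel()
    right = mags[:, :, 1:].ravel()
    hist, _, _ = np.histogram2d(left, right, bins=np.arange(t + 2) - 0.5)
    return hist / hist.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    blocks = rng.integers(-8, 9, size=(100, 8, 8))     # stand-in for quantized DCT data
    print(intra_block_joint_density(blocks).shape)     # (5, 5)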

33 citations


Journal ArticleDOI
TL;DR: A JPEG decompression algorithm which is designed to produce substantially higher quality images from the same standard JPEG encodings by incorporating a document image model into the decoding process which accounts for the wide variety of content in modern complex color documents.
Abstract: The JPEG standard is one of the most prevalent image compression schemes in use today. While JPEG was designed for use with natural images, it is also widely used for the encoding of raster documents. Unfortunately, JPEG's characteristic blocking and ringing artifacts can severely degrade the quality of text and graphics in complex documents. We propose a JPEG decompression algorithm which is designed to produce substantially higher quality images from the same standard JPEG encodings. The method works by incorporating a document image model into the decoding process which accounts for the wide variety of content in modern complex color documents. It first segments the JPEG-encoded document into regions corresponding to background, text, and picture content. The regions corresponding to text and background are then decoded using maximum a posteriori (MAP) estimation. Most importantly, the MAP reconstruction of the text regions uses a model which accounts for the spatial characteristics of text and graphics. Our experimental comparisons to the baseline JPEG decoding, as well as to three other decoding schemes, demonstrate that our method substantially improves the quality of decoded images, both visually and as measured by PSNR.

30 citations


Proceedings ArticleDOI
23 Oct 2009
TL;DR: The study shows that the proposed method for detecting resized and spliced JPEG images, which are widely used in image forgery, is highly effective, and that detection performance is related to both image complexity and resize scale factor.
Abstract: Today's ubiquitous digital media are easily tampered with, e.g., by removing objects from or adding objects to images without leaving any obvious clues. JPEG is one of the most widely used standards for digital images and can be easily doctored. It is therefore necessary to have reliable methods to detect forgery in JPEG images for applications in law enforcement, forensics, etc. In this paper, based on the correlation of neighboring Discrete Cosine Transform (DCT) coefficients, we propose a method to detect resized JPEG images and spliced images, which are widely used in image forgery. In detail, the neighboring joint density features of the DCT coefficients are extracted; then Support Vector Machines (SVM) are applied to the features for detection. To improve the evaluation of JPEG resize detection, we utilize the shape parameter of the generalized Gaussian distribution (GGD) of DCT coefficients to measure image complexity. The study shows that our method is highly effective in detecting resizing and splicing forgery in JPEG images. In the detection of resized JPEG images, the performance is related to both image complexity and resize scale factor. At the same scale factor, the detection performance for high image complexity is, as can be expected, lower than that for low image complexity.
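
The image-complexity measure relies on fitting a generalized Gaussian distribution to DCT coefficients. Below is the textbook moment-matching estimator of the GGD shape parameter, solved with a simple root finder; it is offered as a plausible way to obtain that parameter, not as the authors' exact procedure.

import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

def ggd_shape(x):
    """Estimate the generalized Gaussian shape parameter by moment matching.

    Solves Gamma(2/b)^2 / (Gamma(1/b)*Gamma(3/b)) = (E|x|)^2 / E[x^2] for b.
    Standard estimator; the bracket [0.05, 10] is an assumed search range.
    """
    x = np.asarray(x, dtype=np.float64).ravel()
    r = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    rho = lambda b: gamma(2.0 / b) ** 2 / (gamma(1.0 / b) * gamma(3.0 / b))
    return brentq(lambda b: rho(b) - r, 0.05, 10.0)

if __name__ == "__main__":
    samples = np.random.default_rng(3).laplace(size=100_000)
    print(f"estimated shape ~ {ggd_shape(samples):.2f}  (a Laplacian has shape 1)")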

30 citations


Proceedings ArticleDOI
30 Oct 2009
TL;DR: The impact of using different lossless compression algorithms on the compression ratios and timings when processing various biometric sample data is investigated.
Abstract: The impact of using different lossless compression algorithms when compressing biometric iris sample data from several public iris databases is investigated. In particular, we relate the application of dedicated lossless image codecs like lossless JPEG, JPEG-LS, PNG, and GIF, lossless variants of lossy codecs like JPEG2000, JPEG XR, and SPIHT, and a few general purpose compression schemes to rectilinear iris imagery. The results are discussed in the light of the recent ISO/IEC FDIS 19794-6 and ANSI/NIST-ITL 1-2011 standards and the IREX recommendations.

Proceedings ArticleDOI
07 Nov 2009
TL;DR: A novel yet counter-intuitive technique to “denoise” JPEG images by adding Gaussian noise: a suitable amount of Gaussian noise is added to a resized and JPEG compressed image so that the periodicity due to JPEG compression is suppressed while that due to resizing is retained.
Abstract: A common problem affecting most image resizing detection algorithms is that they are susceptible to JPEG compression. This is because JPEG introduces periodic artifacts, as it works on 8×8 blocks. We propose a novel yet counter-intuitive technique to “denoise” JPEG images by adding Gaussian noise. We add a suitable amount of Gaussian noise to a resized and JPEG compressed image so that the periodicity due to JPEG compression is suppressed while that due to resizing is retained. The controlled Gaussian noise addition works better than median filtering and weighted-averaging-based filtering for suppressing the JPEG-induced periodicity.
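
The sketch below illustrates the idea on a single luminance channel: a crude row-wise periodicity score picks up spectral energy at multiples of 1/8 in a second-difference residual, and adding a small amount of Gaussian noise lowers it. Both the score and the noise standard deviation are assumptions for illustration, not the measures or parameters used in the paper.

import numpy as np

def periodicity_strength(gray, period=8):
    """Crude score: spectral energy of the row-wise second-difference residual
    at multiples of 1/period, relative to the mean spectrum level."""
    resid = np.diff(np.asarray(gray, dtype=np.float64), n=2, axis=1)
    spec = np.abs(np.fft.rfft(resid, axis=1)).mean(axis=0)
    freqs = np.fft.rfftfreq(resid.shape[1])
    targets = np.arange(1, period // 2 + 1) / period        # 1/8, 2/8, 3/8, 4/8
    peak_bins = [int(np.argmin(np.abs(freqs - f))) for f in targets]
    return spec[peak_bins].mean() / spec.mean()

def add_denoising_noise(gray, sigma=2.0, seed=0):
    """'Denoising' in the paper's counter-intuitive sense: add Gaussian noise
    (sigma is an assumed value) so that JPEG-induced periodicity is drowned out."""
    noisy = gray + np.random.default_rng(seed).normal(0.0, sigma, np.shape(gray))
    return np.clip(noisy, 0, 255)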

Journal ArticleDOI
01 Jul 2009
TL;DR: Two major contributions are presented that enhance previous work for compression of functional MRI (fMRI) data: a new multiframe motion compensation process that employs 4-D search, variable-size block matching, and bidirectional prediction, and a new context-based adaptive binary arithmetic coder designed for lossless compression of the residual and motion vector data.
Abstract: We recently proposed a method for lossless compression of 4-D medical images based on the advanced video coding standard (H.264/AVC). In this paper, we present two major contributions that enhance our previous work for compression of functional MRI (fMRI) data: (1) a new multiframe motion compensation process that employs 4-D search, variable-size block matching, and bidirectional prediction; and (2) a new context-based adaptive binary arithmetic coder designed for lossless compression of the residual and motion vector data. We validate our method on real fMRI sequences of various resolutions and compare the performance to two state-of-the-art methods: 4D-JPEG2000 and H.264/AVC. Quantitative results demonstrate that our proposed technique significantly outperforms current state of the art with an average compression ratio improvement of 13%.

Journal ArticleDOI
TL;DR: An improved calibration-based universal JPEG steganalysis, where the microscopic and macroscopic calibrations are combined to calibrate the local and global distribution of the quantized BDCT coefficients of the test image.
Abstract: For steganalysis of JPEG images, features derived in the embedding domain appear to achieve preferable performance. However, with existing JPEG steganography, the minor changes due to the hidden secret data are not easy to detect directly from the quantized block DCT (BDCT) coefficients, because the energy of the carrier image is much larger than that of the hidden signal. In this paper, we present an improved calibration-based universal JPEG steganalysis, where microscopic and macroscopic calibrations are combined to calibrate the local and global distribution of the quantized BDCT coefficients of the test image. All features in our method are generated from the difference signal between the quantized BDCT coefficients of the test image and its corresponding microscopic calibrated image, or calculated as the difference between the signal extracted from the test image and that from its corresponding macroscopic calibrated image. The extracted features are thus more effective for classification. Moreover, through the use of Markov empirical transition matrices, both magnitude and sign dependencies along row-scanning and column-scanning patterns, existing in intra-block and inter-block quantized BDCT coefficients, are employed in our method. Experimental results demonstrate that our proposed scheme outperforms the most effective JPEG steganalyzers presented to date.

Journal ArticleDOI
TL;DR: In this paper, the authors explore the conditions under which primary quantization coefficients can be identified, and hence used for image source identification, matching a small range of potential source cameras to an image.
Abstract: The choice of Quantization Table in a JPEG image has previously been shown to be an effective discriminator of digital image cameras by manufacturer and model series. When a photograph is recompressed for transmission or storage, however, the image undergoes a secondary stage of quantization. It is possible, however, to identify primary quantization artifacts in the image coefficients, provided that certain image and quantization conditions are met. This paper explores the conditions under which primary quantization coefficients can be identified, and hence can be used for image source identification. Forensic applications include matching a small range of potential source cameras to an image.

01 Jan 2009
TL;DR: The process of JPEG image compression and the use of linear algebra as part of this process are outlined and each step of the process used by JPEG as a sample image is compressed is demonstrated.
Abstract: This paper will outline the process of JPEG image compression and the use of linear algebra as part of this process. It will introduce the reasons for image compression and clearly demonstrate each step of the process used by JPEG as a sample image is compressed.
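
In the linear-algebra view the paper takes, the 2D DCT of an 8x8 block A is just C A Cᵀ for an orthonormal matrix C, and quantization is element-wise division and rounding. The sketch below follows that view using the example luminance quantization table from the JPEG specification; the quality_scale parameter is a simplification of JPEG's actual quality scaling.

import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis as an n x n matrix C, so the 2-D DCT of a
    block A is simply C @ A @ C.T."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos((2 * k[None, :] + 1) * k[:, None] * np.pi / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

# Example luminance quantization table from the JPEG specification (Annex K).
Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=np.float64)

def jpeg_block_roundtrip(block, quality_scale=1.0):
    """Level-shift, forward DCT, quantize, dequantize, inverse DCT for one 8x8 block."""
    C = dct_matrix()
    coeffs = C @ (block - 128.0) @ C.T
    levels = np.round(coeffs / (Q_LUMA * quality_scale))
    return C.T @ (levels * Q_LUMA * quality_scale) @ C + 128.0

if __name__ == "__main__":
    block = np.random.default_rng(5).integers(0, 256, size=(8, 8)).astype(np.float64)
    recon = jpeg_block_roundtrip(block)
    print("max abs reconstruction error:", np.max(np.abs(block - recon)))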

Proceedings ArticleDOI
Xing Xie, Qianqing Qin
15 May 2009
TL;DR: A new lossless floating-point seismic data compression method based on a differential predictor and adaptive context-based arithmetic coding (DPACAC) is presented, which can achieve higher compression ratios compared with other lossless algorithms.
Abstract: Modern seismic exploration produces vast amounts of data that may be transferred between computers as well as to and from storage devices. Compression algorithms are therefore desirable to make seismic data processing more efficient in terms of storage and transmission bandwidth. Most current algorithms used for floating-point seismic data compression are based on wavelets or the LCT (local cosine transform) and may result in information loss. In this paper, a new lossless floating-point seismic data compression method based on a differential predictor and adaptive context-based arithmetic coding (DPACAC) is presented, which can achieve higher compression ratios compared with other lossless algorithms.

Proceedings ArticleDOI
14 Mar 2009
TL;DR: A comparative study of JPEG and SPIHT compression algorithms is presented, and it is shown that SPIHT-based compression achieves better results than JPEG at all compression ratios.
Abstract: In this paper, a comparative study of JPEG and SPIHT compression algorithms is presented. A set of objective picture quality measures, such as Peak Signal to Noise Ratio (PSNR), Maximum Difference (MD), Least Mean Square Error (LMSE), Structural Similarity Index (SSIM), and Picture Quality Scale (PQS), is used to measure picture quality, and the comparison is based on the results of these quality measures. Different kinds of standard test images are assessed at different compression ratios. SPIHT-based compression achieves better results than JPEG at all compression ratios.
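
Two of the listed quality measures are straightforward to state exactly; a minimal sketch of PSNR and Maximum Difference is given below (LMSE, SSIM, and PQS require more machinery and are omitted here).

import numpy as np

def psnr(original, distorted, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((np.asarray(original, dtype=np.float64) -
                   np.asarray(distorted, dtype=np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def max_difference(original, distorted):
    """Maximum absolute pixel difference (the MD measure)."""
    return np.max(np.abs(np.asarray(original, dtype=np.float64) -
                         np.asarray(distorted, dtype=np.float64)))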

Proceedings ArticleDOI
16 Dec 2009
TL;DR: An FPGA-based high-speed, low-complexity, and low-memory implementation of a JPEG decoder is presented, allowing multiple image blocks to be decompressed simultaneously.
Abstract: The JPEG standard (ISO/IEC 10918-1, ITU-T Recommendation T.81) defines compression techniques for image data. As a consequence, it allows image data to be stored and transferred with a considerably reduced demand for storage space and bandwidth. Of the four processes provided in the JPEG standard, only one, the baseline process, is widely used. In this paper, an FPGA-based high-speed, low-complexity, and low-memory implementation of a JPEG decoder is presented. The pipelined implementation of the system allows multiple image blocks to be decompressed simultaneously. The hardware decoder is designed to operate at 100 MHz on an Altera Cyclone II or Xilinx Spartan 3E FPGA or equivalent. The decoder is capable of decoding baseline JPEG color and grayscale images, and can also downscale the image by 8. The decoder is designed to meet industrial needs; the JFIF, DCF, and EXIF standards are implemented in the design.

Journal ArticleDOI
TL;DR: A new method of combining an integer wavelet transform with DPCM to compress medical images is discussed; it is simple and practical and achieves a high compression ratio in lossless medical image compression.
Abstract: To improve on the low efficiency of classical lossless compression, a highly efficient lossless image compression method is presented, and its theory and algorithm implementation are introduced. The basic approach to lossless compression of medical images is then briefly described. After analyzing and implementing differential pulse code modulation (DPCM) in lossless compression, a new method combining an integer wavelet transform with DPCM to compress medical images is discussed. The analysis and simulation results show that this new method is simple and practical; moreover, it achieves a high compression ratio in lossless medical image compression.
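
DPCM itself is simple: predict each pixel from an already-coded neighbour and store the residual. The sketch below shows a row-wise left-neighbour predictor with a lossless round trip; it illustrates only the prediction stage, not the integer wavelet transform or the entropy coding the paper combines it with.

import numpy as np

def dpcm_encode(img):
    """Row-wise DPCM: keep the first column, store horizontal differences."""
    img = np.asarray(img, dtype=np.int32)
    resid = img.copy()
    resid[:, 1:] = img[:, 1:] - img[:, :-1]   # predict each pixel from its left neighbour
    return resid

def dpcm_decode(resid):
    return np.cumsum(resid, axis=1).astype(np.int32)

if __name__ == "__main__":
    x = np.random.default_rng(4).integers(0, 256, size=(4, 6))
    assert np.array_equal(dpcm_decode(dpcm_encode(x)), x)   # lossless round trip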

Proceedings ArticleDOI
06 Jun 2009
TL;DR: Considering the temporal and spatial redundancies of electrocardiogram (ECG) data, an automatic lossless compression algorithm based on K-means clustering is proposed, and evaluation results show that the proposed algorithm has better compression performance.
Abstract: Lossless compression of large-scale dynamic electrocardiogram (ECG) recordings is of great significance for telemedicine storage and transmission. Considering the temporal and spatial redundancies of ECG data, an automatic lossless compression algorithm based on K-means clustering is proposed. The algorithm was evaluated on the MIT-BIH arrhythmia database, and the results show that it achieves better compression performance.

Proceedings ArticleDOI
TL;DR: Techniques and results focusing on exploring the capabilities of the spatially adaptive quantization syntax of the emerging JPEG XR standard are presented.
Abstract: JPEG XR, a new international standard for image coding, was approved as ITU-T Recommendation T.832 in March 2009, and as ISO/IEC international standard 29199-2 in July 2009. JPEG XR was designed based on Microsoft coding technology known as HD Photo. Since JPEG XR is an emerging new specification, exploration of advanced encoding techniques for JPEG XR is an important area of study. In order to advance understanding of JPEG XR and its capabilities, the development of enhanced encoding techniques for optimization of encoded JPEG XR perceptual image quality is particularly valuable. This paper presents techniques and results focusing on exploring the capabilities of the spatially adaptive quantization syntax of the emerging JPEG XR standard.

Proceedings ArticleDOI
19 Apr 2009
TL;DR: The experimental results show that the proposed technique produces better average lossless compression results than several other compression methods, including JPEG2000, JPEG-LS and JBIG, in a publicly available medical image database containing images from several modalities.
Abstract: This paper describes a lossless compression method for medical images that produces an embedded bit-stream, allowing progressive lossy-to-lossless decoding with L-infinity oriented rate-distortion. The experimental results show that the proposed technique produces better average lossless compression results than several other compression methods, including JPEG2000, JPEG-LS and JBIG, in a publicly available medical image database containing images from several modalities.

Proceedings ArticleDOI
06 May 2009
TL;DR: A simple lossy-to-lossless bit-plane coding of still images is presented to integrate several functionality extensions including selective tile partitioning, progressive transmission, ROI transmission, accuracy scalability, and others.
Abstract: A simple lossy-to-lossless bit-plane coding of still images is presented to integrate several functionality extensions including selective tile partitioning, progressive transmission, ROI transmission, accuracy scalability, and others. The mean squared error between the original image and a decoded image at any progression level is known prior to encoding/decoding. The proposed bit-plane codec is competitive with JPEG-LS and JPEG 2000 in the lossless compression of 8-bit grayscale and 24-bit color images. The codec outperforms the existing standards in 8-bit color-quantized image compression.
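
The lossy-to-lossless behaviour comes from transmitting bit planes from most to least significant; the sketch below shows plain bit-plane decomposition of an unsigned image and why truncation gives a known error bound (decoding only the first k of n planes bounds the per-pixel error by 2^(n-k) - 1). Real bit-plane coders entropy-code each plane rather than storing it raw; this is an illustration of the principle, not the proposed codec.

import numpy as np

def bit_planes(img, n_bits=8):
    """Split an unsigned-integer image into bit planes, most significant first."""
    img = np.asarray(img, dtype=np.uint16)
    return [((img >> b) & 1).astype(np.uint8) for b in range(n_bits - 1, -1, -1)]

def reconstruct(planes, n_bits=8):
    """Rebuild the image from (a prefix of all) planes received so far."""
    out = np.zeros(planes[0].shape, dtype=np.uint16)
    for i, p in enumerate(planes):
        out |= p.astype(np.uint16) << (n_bits - 1 - i)
    return out

if __name__ == "__main__":
    img = np.random.default_rng(8).integers(0, 256, size=(16, 16))
    planes = bit_planes(img)
    assert np.array_equal(reconstruct(planes), img.astype(np.uint16))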

Proceedings ArticleDOI
28 Jun 2009
TL;DR: Almost lossless analog compression for analog memoryless sources in an information-theoretic framework, in which the compressor is not constrained to linear transformations but it satisfies various regularity conditions such as Lipschitz continuity.
Abstract: In Shannon theory, lossless source coding deals with the optimal compression of discrete sources. Compressed sensing is a lossless coding strategy for analog sources by means of multiplication by real-valued matrices. In this paper we study almost lossless analog compression for analog memoryless sources in an information-theoretic framework, in which the compressor is not constrained to linear transformations but is required to satisfy various regularity conditions such as Lipschitz continuity. The fundamental limit is shown to be the information dimension proposed by Rényi in 1959.
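
For reference, the Rényi information dimension of a real-valued random variable X is usually defined via its quantized versions (this is the standard definition, not quoted from the paper):

d(X) \;=\; \lim_{m \to \infty} \frac{H\!\left(\lfloor m X \rfloor\right)}{\log m},

provided the limit exists. For a memoryless source whose marginal is a mixture of a discrete part and an absolutely continuous part with weight \gamma on the continuous part, d(X) = \gamma, which is why the fundamental compression rate reflects how "analog" the source really is.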

Patent
22 Jun 2009
TL;DR: In this paper, a system and method for compressed domain compression improve compression gains in an encoded image, such as a Joint Photographic Experts Group (JPEG)-encoded image or encoded video, without fully decoding and re-encoding the compressed images or video.
Abstract: A system and method for compressed domain compression improve compression gains in an encoded image, such as a Joint Photographic Experts Group (JPEG)-encoded image, or in encoded video, such as Motion Joint Photographic Experts Group (Motion JPEG)-encoded video, without fully decoding and re-encoding the compressed images or video.

Proceedings ArticleDOI
25 May 2009
TL;DR: This paper proposes a 4:4:4 lossless JPEG XR encoder design that can be used in digital photography applications to achieve low computational complexity, low storage, and high dynamic range.
Abstract: With the rapid progress of sensors, display devices, and computing engines, image applications are everywhere. High quality, high compression rates, and low computational cost are important factors in consumer electronics. In this paper, we propose a 4:4:4 lossless JPEG XR encoder design. In the JPEG XR encoder, entropy coding is a critical module and the most computationally intensive part; we therefore propose a well-defined timing schedule for a pipelined architecture to speed up entropy encoding. This design can be used in digital photography applications to achieve low computational complexity, low storage, and high dynamic range.

Proceedings ArticleDOI
17 Nov 2009
TL;DR: This paper finds that there is no dependency in intra-macroblock data, so all encoding processes, including entropy coding, can safely be pipelined; the proposed fully-pipelined architecture achieves 100 Mpixel/s at 125 MHz, which previous works could not achieve.
Abstract: JPEG XR is an emerging image coding standard, based on HD Photo developed by Microsoft. It offers compression performance twice as high as that of the de facto image coding system, JPEG, and also has an advantage over JPEG 2000 in terms of computational cost. JPEG XR is expected to become widespread for many devices, including embedded systems, in the near future. In this paper, we propose a novel architecture for JPEG XR encoding. In previous architectures, entropy coding was the throughput bottleneck because it was implemented as a sequential algorithm to handle data with dependencies. We found that there is no dependency in intra-macroblock data, so we can safely pipeline all the encoding processes, including entropy coding. The proposed fully-pipelined architecture achieves 100 Mpixel/s at 125 MHz, which could not be achieved by previous works.

Proceedings Article
01 Aug 2009
TL;DR: Three modifications of this coder are proposed to extend its capabilities to the lossless coding of colour images: decorrelation of the components using reversible colour transforms, an adaptive decorrelation of the components, and a classification between the image components.
Abstract: Next generations of still image codecs should not only be efficient in terms of compression ratio, but also offer other functionalities such as scalability, lossy and lossless capabilities, region-of-interest coding, etc. In previous works, we have proposed a scalable compression method called LAR, for Locally Adaptive Resolution, that covers these requirements. In particular, the Interleaved S+P scheme offers an efficient means of compressing images. In this paper, three modifications of this coder are proposed to extend its capabilities to the lossless coding of colour images. Firstly, decorrelation of the image components is introduced by using reversible colour transforms. Secondly, an adaptive decorrelation of the components is introduced. Finally, a classification between the image components is introduced. Results are then discussed and compared to the state of the art, revealing the high compression performance of our coding solution.
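
One concrete example of a reversible colour transform is the integer RCT used in JPEG 2000, sketched below; it maps integers to integers and is exactly invertible, so it can precede lossless coding. It is given here as an illustration of the class of transforms the paper evaluates, not necessarily the one the LAR coder finally adopts.

import numpy as np

def rct_forward(r, g, b):
    """Reversible colour transform (the integer RCT of JPEG 2000)."""
    r, g, b = (np.asarray(c, dtype=np.int64) for c in (r, g, b))
    y = (r + 2 * g + b) >> 2    # integer luma approximation
    cb = b - g
    cr = r - g
    return y, cb, cr

def rct_inverse(y, cb, cr):
    g = y - ((cb + cr) >> 2)
    return cr + g, g, cb + g    # r, g, b

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    r, g, b = (rng.integers(0, 256, size=(8, 8)) for _ in range(3))
    y, cb, cr = rct_forward(r, g, b)
    r2, g2, b2 = rct_inverse(y, cb, cr)
    assert np.array_equal(r, r2) and np.array_equal(g, g2) and np.array_equal(b, b2)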

Patent
09 Oct 2009
TL;DR: In this article, the authors introduce various techniques that improve the visual performance of JPEG XR without leaving the current codestream definition, and the modified encoder, while staying backwards compatible to the current standard proposition, improves visual performance significantly.
Abstract: Microsoft's recently proposed new image compression codec JPEG XR is currently undergoing ISO standardization as JPEG-XR. Even though performance measurements carried out by the JPEG committee indicated that the PSNR performance of JPEG XR is competitive, the visual performance of JPEG XR showed notable deficits, both in subjective and objective tests. This paper introduces various techniques that improve the visual performance of JPEG XR without leaving the current codestream definition. Objective measurements performed by the author indicate that the modified encoder, while staying backwards compatible with the current standard proposition, improves visual performance significantly, and that the performance of the modified encoder is similar to JPEG.