
Showing papers on "Lossless JPEG published in 2003"


Journal ArticleDOI
Zhigang Fan1, R.L. de Queiroz1
TL;DR: A fast and efficient method is provided to determine whether an image has been previously JPEG compressed, and a method for the maximum likelihood estimation of JPEG quantization steps is developed.
Abstract: Sometimes image processing units inherit images in raster bitmap format only, so that processing is to be carried out without knowledge of past operations that may compromise image quality (e.g., compression). To carry out further processing, it is useful to know not only whether the image has been previously JPEG compressed, but also what quantization table was used. This is the case, for example, if one wants to remove JPEG artifacts or re-compress with JPEG. In this paper, a fast and efficient method is provided to determine whether an image has been previously JPEG compressed. After detecting a compression signature, we estimate the compression parameters. Specifically, we develop a method for maximum likelihood estimation of the JPEG quantization steps. The quantizer estimation method is very robust: an estimated quantizer step size is only sporadically off, and when it is, it is off by a single value.

373 citations
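The core idea above, that the DCT coefficients of a previously JPEG-compressed image cluster at multiples of the quantization step, can be illustrated with a short sketch. This is a crude histogram-periodicity heuristic, not the paper's maximum-likelihood estimator; the input file name and the 0.95 threshold are arbitrary assumptions.

```python
# Sketch: estimate a JPEG quantization step from DCT coefficient clustering.
import numpy as np
from scipy.fft import dctn
from PIL import Image

def estimate_step(img: np.ndarray, u: int, v: int, qmax: int = 32) -> int:
    h, w = img.shape[0] - img.shape[0] % 8, img.shape[1] - img.shape[1] % 8
    coeffs = []
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            block = img[i:i + 8, j:j + 8].astype(np.float64) - 128.0
            coeffs.append(dctn(block, norm="ortho")[u, v])
    c = np.rint(coeffs).astype(int)
    c = c[c != 0]
    if c.size == 0:
        return 1                       # coefficient always zero: step cannot be recovered
    for q in range(qmax, 1, -1):       # prefer the largest step that explains the data
        r = np.mod(c, q)
        dist = np.minimum(r, q - r)    # distance to the nearest multiple of q
        if np.mean(dist <= 1) > 0.95:  # tolerate +-1 rounding noise from decompression
            return q
    return 1                           # no periodicity found: likely never JPEG compressed

img = np.asarray(Image.open("decoded.png").convert("L"))   # hypothetical raster-only input
print(estimate_step(img, 0, 1))
```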


Jan Lukás1
01 Jan 2003
TL;DR: It is explained how double-compression detection techniques and primary quantization matrix estimators can be used in the steganalysis of JPEG files and in digital forensic analysis for the detection of digital forgeries.
Abstract: In this report, we present a method for estimating the primary quantization matrix from a double-compressed JPEG image. We first identify characteristic features that occur in the DCT histograms of individual coefficients due to double compression. Then, we present three different approaches that estimate the original quantization matrix from double-compressed images. Finally, the most successful of them, a neural network classifier, is discussed and its performance and reliability are evaluated in a series of experiments on various databases of double-compressed images. We also explain how double-compression detection techniques and primary quantization matrix estimators can be used in the steganalysis of JPEG files and in digital forensic analysis for the detection of digital forgeries.

353 citations
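The characteristic double-compression feature the report builds on can be reproduced with a toy simulation: requantizing already-quantized coefficients leaves periodic gaps in the histogram that betray the primary step. The Laplacian model and the step values below are illustrative assumptions; the neural-network estimator itself is not reproduced.

```python
# Toy illustration of the double-quantization artifact in DCT coefficient histograms.
import numpy as np

rng = np.random.default_rng(0)
c = np.round(rng.laplace(0.0, 8.0, 100_000))             # stand-in for one DCT coefficient

single = np.round(c / 3).astype(int)                      # compressed once, step 3
double = np.round(np.round(c / 5) * 5 / 3).astype(int)    # step 5 first, then recompressed with 3

h_single = np.bincount(np.abs(single), minlength=15)[:15]
h_double = np.bincount(np.abs(double), minlength=15)[:15]
print("single:", h_single)                                # smooth, monotonically decaying
print("double:", h_double)                                # bins 1, 4, 6, 9, ... are empty:
                                                          # the gaps reveal the primary step of 5
```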


Journal ArticleDOI
TL;DR: This paper reviews state-of-the-art 3-D wavelet coders for medical volumetric data and proposes new compression methods that, unlike 3-D DCT-based techniques, provide lossless coding together with the quality and resolution scalability required for medical applications.
Abstract: Several techniques based on the three-dimensional (3-D) discrete cosine transform (DCT) have been proposed for volumetric data coding. These techniques fail to provide lossless coding coupled with quality and resolution scalability, which is a significant drawback for medical applications. This paper gives an overview of several state-of-the-art 3-D wavelet coders that do meet these requirements and proposes new compression methods exploiting the quadtree and block-based coding concepts, layered zero-coding principles, and context-based arithmetic coding. Additionally, a new 3-D DCT-based coding scheme is designed and used for benchmarking. The proposed wavelet-based coding algorithms produce embedded data streams that can be decoded up to the lossless level and support the desired set of functionality constraints. Moreover, objective and subjective quality evaluation on various medical volumetric datasets shows that the proposed algorithms provide competitive lossy and lossless compression results when compared with the state-of-the-art.

176 citations


Journal ArticleDOI
TL;DR: It is shown how down-sampling an image to a low resolution, then using JPEG at the lower resolution, and subsequently interpolating the result to the original resolution can improve the overall PSNR performance of the compression process.
Abstract: The most popular lossy image compression method used on the Internet is the JPEG standard. JPEG's good compression performance and low computational and memory complexity make it an attractive method for natural image compression. Nevertheless, at low bit rates that imply lower quality, JPEG introduces disturbing artifacts. It is known that, at low bit rates, a down-sampled image, when JPEG compressed, visually beats the high-resolution image compressed via JPEG to the same number of bits. Motivated by this idea, we show how down-sampling an image to a low resolution, using JPEG at the lower resolution, and subsequently interpolating the result to the original resolution can improve the overall PSNR performance of the compression process. We give an analytical model and a numerical analysis of the down-sampling, compression and up-sampling process that make explicit the possible quality/compression trade-offs. We show that the image auto-correlation can provide a good estimate for establishing the down-sampling factor that achieves optimal performance. Given a specific budget of bits, we determine the down-sampling factor necessary to get the best possible recovered image in terms of PSNR.

168 citations
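A minimal sketch of the down-sample / JPEG / up-sample pipeline studied above, using Pillow. The quality settings and the input file name are assumptions; in practice one would tune them so both variants spend roughly the same number of bits before comparing PSNR.

```python
import io
import numpy as np
from PIL import Image

def jpeg_roundtrip(img: Image.Image, quality: int):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return Image.open(io.BytesIO(buf.getvalue())), len(buf.getvalue())

def psnr(a: Image.Image, b: Image.Image) -> float:
    mse = np.mean((np.asarray(a, dtype=float) - np.asarray(b, dtype=float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

img = Image.open("input.png").convert("L")             # hypothetical test image
w, h = img.size

direct, n_direct = jpeg_roundtrip(img, quality=10)     # JPEG at full resolution, low quality

small = img.resize((w // 2, h // 2), Image.BICUBIC)    # down-sample by 2
coded, n_down = jpeg_roundtrip(small, quality=35)      # spend the bits at lower resolution
restored = coded.resize((w, h), Image.BICUBIC)         # interpolate back to full size

print(f"direct : {n_direct} bytes, PSNR {psnr(img, direct):.2f} dB")
print(f"down/up: {n_down} bytes, PSNR {psnr(img, restored):.2f} dB")
```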


Journal ArticleDOI
TL;DR: A simple parallel algorithm for decoding a Huffman encoded file is presented, exploiting the tendency of Huffman codes to resynchronize quickly, i.e., to recover quickly from possible decoding errors in most cases.
Abstract: A simple parallel algorithm for decoding a Huffman encoded file is presented, exploiting the tendency of Huffman codes to resynchronize quickly, i.e., to recover quickly from possible decoding errors in most cases. The average number of bits that have to be processed until synchronization is analyzed and shows good agreement with empirical data. As Huffman coding is also part of the JPEG image compression standard, the suggested algorithm is then adapted to the parallel decoding of JPEG files.

116 citations
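The self-synchronization property the parallel decoder relies on is easy to observe: start decoding at the wrong bit offset and count how many bits pass before the decoder falls back onto a true codeword boundary. The four-symbol code and the symbol probabilities below are arbitrary stand-ins.

```python
import random

CODE = {"a": "0", "b": "10", "c": "110", "d": "111"}   # toy prefix-free Huffman code
DECODE = {v: k for k, v in CODE.items()}

random.seed(1)
symbols = random.choices("abcd", weights=[8, 4, 2, 2], k=2000)
bitstream = "".join(CODE[s] for s in symbols)

boundaries, pos = set(), 0                             # true codeword start positions
for s in symbols:
    boundaries.add(pos)
    pos += len(CODE[s])

p, buf = 1, ""                                         # start one bit late, as a parallel
while True:                                            # worker at a block boundary might
    buf += bitstream[p]
    p += 1
    if buf in DECODE:                                  # decoder reached a codeword boundary
        buf = ""
        if p in boundaries:                            # ...which is also a true boundary:
            break                                      # the decoder has resynchronized
print("back in sync after", p - 1, "bits")
```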


Journal ArticleDOI
TL;DR: A compression technique based on motion compensation, optimal three-dimensional (3-D) linear prediction and context-based Golomb-Rice entropy coding is proposed and compared with 3-D extensions of the JPEG-LS standard for still image compression.
Abstract: We consider the problem of lossless compression of video by taking temporal information into account. Lossless video compression is an interesting option in the production and contribution chain. We propose a compression technique based on motion compensation, optimal three-dimensional (3-D) linear prediction and context-based Golomb-Rice (1966, 1979) entropy coding. The proposed technique is compared with 3-D extensions of the JPEG-LS standard for still image compression. A compression gain of about 0.8 bit/pel with respect to static JPEG-LS, applied on a frame-by-frame basis, is achievable at a reasonable computational complexity.

106 citations
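A minimal, non-adaptive version of the Golomb-Rice entropy coder named above, applied to signed prediction residuals. The residual values and the parameter k are illustrative, and the context modelling of the actual scheme is omitted.

```python
def rice_encode(values, k):
    """Golomb-Rice code: unary quotient + k-bit remainder, after mapping signed to unsigned."""
    bits = []
    for v in values:
        u = 2 * v if v >= 0 else -2 * v - 1            # zigzag map: sign interleaving
        q, r = u >> k, u & ((1 << k) - 1)
        bits += [1] * q + [0] + [int(b) for b in format(r, f"0{k}b")]
    return bits

def rice_decode(bits, k, count):
    out, pos = [], 0
    for _ in range(count):
        q = 0
        while bits[pos] == 1:                          # unary part
            q += 1
            pos += 1
        pos += 1                                       # skip the terminating 0
        r = int("".join(map(str, bits[pos:pos + k])), 2)
        pos += k
        u = (q << k) | r
        out.append(u // 2 if u % 2 == 0 else -(u + 1) // 2)   # undo the zigzag map
    return out

residuals = [0, -1, 3, 2, -4, 0, 1]                    # typical small prediction residuals
assert rice_decode(rice_encode(residuals, k=2), k=2, count=len(residuals)) == residuals
```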


Patent
Wei-ge Chen1, Chao He1
03 Sep 2003
TL;DR: Mixed lossless audio compression is proposed for a unified scheme that combines lossy and lossless compression within the same audio signal; the mixed lossless coding can also be applied to frames that exhibit poor lossy compression performance.
Abstract: A mixed lossless audio compression has application to a unified lossy and lossless audio compression scheme that combines lossy and lossless audio compression within the same audio signal. The mixed lossless compression codes a transition frame between lossy and lossless coding frames to produce seamless transitions. The mixed lossless coding performs a lapped transform and inverse lapped transform to produce an appropriately windowed and folded pseudo-time domain frame, which can then be losslessly coded. The mixed lossless coding can also be applied to frames that exhibit poor lossy compression performance.

97 citations


Patent
Wei-ge Chen1, Chao He1
14 Jul 2003
TL;DR: A unified lossy and lossless audio compression scheme is proposed that combines lossy and lossless audio compression within the same audio signal and employs mixed lossless coding of a transition frame between lossy and lossless coding frames to produce seamless transitions.
Abstract: A unified lossy and lossless audio compression scheme combines lossy and lossless audio compression within the same audio signal. This approach employs mixed lossless coding of a transition frame between lossy and lossless coding frames to produce seamless transitions. The mixed lossless coding performs a lapped transform and inverse lapped transform to produce an appropriately windowed and folded pseudo-time domain frame, which can then be losslessly coded. The mixed lossless coding can also be applied to frames that exhibit poor lossy compression performance.

87 citations


Journal ArticleDOI
TL;DR: The results show how model observers can be successfully used to perform automated evaluation and optimization of diagnostic performance in clinically relevant visual tasks using real anatomic backgrounds.
Abstract: We compared the ability of three model observers (nonprewhitening matched filter with an eye filter, Hotelling and channelized Hotelling) in predicting the effect of JPEG and wavelet-Crewcode image compression on human visual detection of a simulated lesion in single frame digital x-ray coronary angiograms. All three model observers predicted the JPEG superiority present in human performance, although the nonprewhitening matched filter with an eye filter (NPWE) and the channelized Hotelling models were better predictors than the Hotelling model. The commonly used root mean square error and related peak signal to noise ratio metrics incorrectly predicted a JPEG inferiority. A particular image discrimination/perceptual difference model correctly predicted a JPEG advantage at low compression ratios but incorrectly predicted a JPEG inferiority at high compression ratios. In the second part of the paper, the NPWE model was used to perform automated simulated annealing optimization of the quantization matrix of the JPEG algorithm at 25:1 compression ratio. A subsequent psychophysical study resulted in improved human detection performance for images compressed with the NPWE optimized quantization matrix over the JPEG default quantization matrix. Together, our results show how model observers can be successfully used to perform automated evaluation and optimization of diagnostic performance in clinically relevant visual tasks using real anatomic backgrounds.

86 citations
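A bare-bones model observer in the spirit of those compared above: a nonprewhitening matched filter applied to synthetic signal-present and signal-absent images, summarized by a detectability index. The Gaussian-blob "lesion", the white-noise backgrounds, and the omission of the eye filter are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
y, x = np.mgrid[:n, :n]
signal = np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / (2 * 3.0 ** 2))  # hypothetical lesion

def statistic(signal_present: bool) -> float:
    background = rng.normal(0.0, 1.0, (n, n))              # stand-in for anatomical background
    image = background + (signal if signal_present else 0.0)
    return float(np.sum(signal * image))                    # NPW test statistic: template correlation

absent = np.array([statistic(False) for _ in range(500)])
present = np.array([statistic(True) for _ in range(500)])
d_prime = (present.mean() - absent.mean()) / np.sqrt(0.5 * (present.var() + absent.var()))
print(f"detectability d' = {d_prime:.2f}")                  # would be compared across codecs
```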


Proceedings ArticleDOI
06 Jul 2003
TL;DR: A novel content-based image authentication framework which embeds the authentication information into the host image using a lossless data hiding approach and can tolerate JPEG compression to a certain extent while rejecting common tampering to the image.
Abstract: In this paper, we present a novel content-based image authentication framework which embeds the authentication information into the host image using a lossless data hiding approach. In this framework, the features of a target image are first extracted and signed using the digital signature algorithm (DSA). The authentication information is generated from the signature and the features are then inserted into the target image using a lossless data hiding algorithm. In this way, the unperturbed version of the original image can be obtained after the embedded data are extracted. An important advantage of our approach is that it can tolerate JPEG compression to a certain extent while rejecting common tampering with the image. The experimental results show that our framework works well with JPEG quality factors greater than or equal to 80, which is acceptable for most authentication applications.

61 citations
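The signing half of the framework can be sketched with the `cryptography` package: extract a compact feature vector, sign it with DSA, and later verify. The block-mean features and the key size are assumptions, and the lossless embedding of the resulting bits into the host image is omitted.

```python
import numpy as np
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa

image = np.random.randint(0, 256, (256, 256), dtype=np.uint8)     # stand-in target image

# Hypothetical feature choice: 16x16 block means, coarsely quantized so mild JPEG
# compression leaves them unchanged while substantial tampering does not.
features = (image.reshape(16, 16, 16, 16).mean(axis=(1, 3)) // 8).astype(np.uint8).tobytes()

key = dsa.generate_private_key(key_size=2048)
signature = key.sign(features, hashes.SHA256())                   # authentication information
key.public_key().verify(signature, features, hashes.SHA256())     # raises if features changed
print(len(signature), "signature bytes to embed losslessly")
```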


Patent
Chen Wei-Ge1, Chao He1
14 Jul 2003
TL;DR: A lossless audio compression scheme is adapted for use in a unified lossy and lossless audio compression scheme; the adaptation rate of an adaptive filter is varied based on transient detection, for example by increasing the adaptation rate where a transient is detected.
Abstract: A lossless audio compression scheme is adapted for use in a unified lossy and lossless audio compression scheme. In the lossless compression, the adaptation rate of an adaptive filter is varied based on transient detection, such as increasing the adaptation rate where a transient is detected. A multi-channel lossless compression uses an adaptive filter that processes samples from multiple channels in predictive coding of a current sample in a current channel. The lossless compression also encodes using an adaptive filter and Golomb coding with a non-power-of-two divisor.

01 Jan 2003
TL;DR: This thesis studies lossless data compression, a field in which the amount of stored and transmitted data grows fast while its contents often exhibit much redundancy, and presents an improved compression algorithm based on the Burrows-Wheeler transform.
Abstract: Contemporary computers process and store huge amounts of data, and some parts of these data are superfluous. Data compression is a process that reduces the data size by removing the excess information. Why is a shorter data sequence often more suitable? The answer is simple: it reduces costs. A full-length movie of high quality could occupy a vast part of a hard disk; the compressed movie can be stored on a single CD-ROM. Large amounts of data are transmitted by telecommunication satellites; without compression, we would have to launch many more satellites than we do to transmit the same number of television programs. The capacity of Internet links is also limited, and several methods reduce the immense amount of transmitted data. Some of them, such as mirror or proxy servers, are solutions that minimise the number of long-distance transmissions; the other methods reduce the size of data by compressing them. Multimedia is a field in which data of vast sizes are processed. The sizes of text documents and application files also grow rapidly. Another type of data for which compression is useful is database tables: nowadays, the amount of information stored in databases grows fast, while their contents often exhibit much redundancy. Data compression methods can be classified in several ways. One of the most important criteria of classification is whether the compression algorithm removes parts of the data that cannot be recovered during decompression. The algorithms that irreversibly remove some parts of the data are called lossy, while the others are called lossless. The lossy algorithms are usually used when a perfect consistency with the original data is not necessary after decompression. Such a situation occurs, for example, in compression of video or picture data. If the recipient of the video …
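A toy front end for the Burrows-Wheeler approach the thesis improves on: the transform clusters equal symbols, and a move-to-front pass then turns that clustering into a stream of small integers that an entropy coder compresses well. The sorted-rotations construction and the sentinel byte are simplifications (real implementations use suffix sorting), and the thesis's weighted refinements are not reproduced.

```python
def bwt(data: bytes) -> bytes:
    """Burrows-Wheeler transform via sorted rotations (toy version with a sentinel)."""
    s = data + b"\x00"                          # assumes the sentinel never occurs in data
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return bytes(r[-1] for r in rotations)

def move_to_front(data: bytes) -> list:
    """Recently seen bytes get small indices, which the entropy coder then exploits."""
    table = list(range(256))
    out = []
    for b in data:
        i = table.index(b)
        out.append(i)
        table.insert(0, table.pop(i))
    return out

text = b"abracadabra abracadabra abracadabra"
ranks = move_to_front(bwt(text))
print(f"{sum(r <= 2 for r in ranks) / len(ranks):.0%} of MTF ranks are 0, 1 or 2")
```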

Journal ArticleDOI
G. Lakhani1
TL;DR: A minor modification to the Huffman coding of the JPEG baseline compression algorithm is presented; three implementations are described, all of which move the end-of-block marker up into the middle of the DCT block and use it to indicate band boundaries.
Abstract: It is a well observed characteristic that when a DCT block is traversed in the zigzag order, the AC coefficients generally decrease in size and the run-lengths of zero coefficients increase in number. This article presents a minor modification to the Huffman coding of the JPEG baseline compression algorithm to exploit this redundancy. For this purpose, DCT blocks are divided into bands so that each band can be coded using a separate code table. Three implementations are presented, all of which move the end-of-block marker up into the middle of the DCT block and use it to indicate the band boundaries. Experimental results are presented to compare the reduction in code size obtained by our methods with the JPEG sequential-mode Huffman coding and arithmetic coding methods. The average code reduction relative to the total image code size for one of our methods is 4%. Our methods can also be used for progressive image transmission, and hence experimental results are also given to compare them with two-, three-, and four-band implementations of the JPEG spectral selection method.
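The redundancy the modification exploits, coefficients shrinking and zero runs growing along the zigzag scan, can be made concrete by splitting a scanned block into bands and locating each band's own end-of-block position. The band boundaries below are hypothetical; the paper's three implementations differ in how the relocated EOB markers are coded.

```python
import numpy as np

# JPEG zigzag scan order for an 8x8 block
ZIGZAG = sorted(((i, j) for i in range(8) for j in range(8)),
                key=lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 else p[1]))

def band_eob_positions(block, bounds=(1, 6, 15, 28, 64)):   # bounds are illustrative only
    scan = [block[i, j] for i, j in ZIGZAG]
    bands = [scan[lo:hi] for lo, hi in zip(bounds[:-1], bounds[1:])]
    # index of the last nonzero coefficient in each band, or -1 if the band is all zeros
    return [max((k for k, v in enumerate(b) if v != 0), default=-1) for b in bands]

q = np.zeros((8, 8), dtype=int)
q[0, 0], q[0, 1], q[1, 0], q[2, 1] = 50, -3, 2, 1           # typical sparse quantized block
print(band_eob_positions(q))                                # high-frequency bands end early
```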

Journal ArticleDOI
TL;DR: An evaluation of the compatibility of the JPEG 2000 and JPEG standards with watermarking, assessing the performance of both standards under various conditions.

Journal ArticleDOI
TL;DR: From the results, it can be found that there is a significant decrease in image quality when compression exceeds 35-fold and the accuracy of image classification deteriorates dramatically; when the compression ratio is smaller than 35-fold, however, the deterioration of classification accuracy is linear.
Abstract: With the improvement of spatial resolution, data volume has become an increasingly significant concern, as the sheer volume of data is expensive and inefficient in terms of data transmission, proce...

Proceedings ArticleDOI
26 Mar 2003
TL;DR: The study shows that JPEG2000 compression is more acceptable than, and superior to, the more conventional JPEG standard for lossy compression of medical images.
Abstract: Due to the constraints on bandwidth and storage capacity, medical images must be compressed before transmission and storage. However, when the image is compressed, especially at lower bit rates, the image fidelity is reduced, a situation which cannot be tolerated in the medical field. The paper studies the compression performance of the new JPEG2000 and the more conventional JPEG standards. The parameters compared include the compression efficiency, peak signal-to-noise ratio (PSNR), picture quality scale (PQS), and mean opinion score (MOS). Three types of medical images are used - X-ray, magnetic resonance imaging (MRI) and ultrasound. Overall, the study shows that JPEG2000 compression is more acceptable than, and superior to, JPEG in lossy compression.

Journal ArticleDOI
TL;DR: DTs are seen to offer significant improvement in performance over the fixed-architecture TSBN and in a coding comparison the DT achieves 0.294 bits per pixel (bpp) compression compared to 0.378 bpp for lossless JPEG on images of seven colours.

Journal ArticleDOI
TL;DR: The results indicate that the level-embedded compression incurs only a small penalty in compression efficiency over non- scalable lossless compression, while offering the significant benefit of level-scalability.
Abstract: A level-embedded lossless compression method for continuous-tone still images is presented. Level (bit-plane) scalability is achieved by separating the image into two layers before compression and excellent compression performance is obtained by exploiting both spatial and inter-level correlations. A comparison of the proposed scheme with a number of scalable and non-scalable lossless image compression algorithms is performed to benchmark its performance. The results indicate that the level-embedded compression incurs only a small penalty in compression efficiency over non-scalable lossless compression, while offering the significant benefit of level-scalability.
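The level-embedded idea, decode the top bit-planes first and refine with the rest, can be sketched by splitting each pixel into a coarse layer and a refinement layer. The 4/4 bit split is an arbitrary choice, and the inter-level context modelling that gives the paper its compression efficiency is omitted.

```python
import numpy as np

def split_levels(img: np.ndarray, low_bits: int = 4):
    coarse = img >> low_bits                      # top bit-planes: usable on their own
    refine = img & ((1 << low_bits) - 1)          # remaining bit-planes
    return coarse, refine

def merge_levels(coarse: np.ndarray, refine: np.ndarray, low_bits: int = 4) -> np.ndarray:
    return (coarse << low_bits) | refine          # exact reconstruction: lossless overall

img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
coarse, refine = split_levels(img)
assert np.array_equal(merge_levels(coarse, refine), img)
print("coarse layer range:", coarse.min(), "-", coarse.max())
```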

Journal Article
TL;DR: The paper describes the basic elements of the codec, points out envisaged applications, and gives an outline of the standardization process.
Abstract: Lossless coding will become the latest extension of the MPEG-4 audio standard. In response to a call for proposals, many companies have submitted lossless audio codecs for evaluation. The codec of the Technical University of Berlin was chosen as reference model for MPEG-4 Audio Lossless Coding, attaining working draft status in July 2003. The encoder is based on linear prediction, which enables high compression even with moderate complexity, while the corresponding decoder is straightforward. The paper describes the basic elements of the codec, points out envisaged applications, and gives an outline of the standardization process.

Journal ArticleDOI
TL;DR: A set of quantitative measurements related to medical image quality parameters is proposed for compression assessment; it provides information regarding the type of loss and offers cost and time benefits, together with the advantage of adapting test images to the requirements of a given imaging modality and clinical study.

Proceedings ArticleDOI
16 Jun 2003
TL;DR: This work focuses on the adaptive linear predictors used in lossless image coding and proposes a new linear prediction method; linear prediction has proven to give very good results as a decorrelation tool in lossless image compression.
Abstract: Natural, continuous tone images have the very important property of high correlation of adjacent pixels. This property is cleverly exploited in lossless image compression where, prior to the statistical modeling and entropy coding step, predictive coding is used as a decorrelation tool. The use of prediction for the current pixel also reduces the cost of the applied statistical model for entropy coding. Linear prediction, where the predicted value is a linear function of previously encoded pixels (causal template), has proven to give very good results as a decorrelation tool in lossless image compression. We concentrate on adaptive linear predictors used in lossless image coding and propose a new linear prediction method.
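A minimal adaptive causal linear predictor in the spirit the abstract describes: three previously coded neighbours (W, N, NW) feed a linear predictor whose weights are updated pixel by pixel with an LMS rule. The template, step size and test image are assumptions; this is not the predictor the paper proposes.

```python
import numpy as np

def lms_residuals(img: np.ndarray, mu: float = 1e-6) -> np.ndarray:
    img = img.astype(np.float64)
    w = np.array([1 / 3, 1 / 3, 1 / 3])                 # initial weights for W, N, NW
    res = np.zeros_like(img)
    for i in range(1, img.shape[0]):
        for j in range(1, img.shape[1]):
            ctx = np.array([img[i, j - 1], img[i - 1, j], img[i - 1, j - 1]])
            e = img[i, j] - w @ ctx                     # prediction error to be entropy coded
            res[i, j] = e
            w += mu * e * ctx                           # LMS adaptation of the predictor
    return res

rng = np.random.default_rng(0)
img = np.cumsum(rng.integers(-2, 3, (64, 64)), axis=1) + 128.0   # smooth synthetic image
res = lms_residuals(img)
print("residual std:", res[1:, 1:].std(), " image std:", img.std())
```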

01 Jan 2003
TL;DR: This paper presents lossless compression of volumetric medical images with an improved 3-D SPIHT algorithm that searches on asymmetric trees, which makes it easy to apply different numbers of decompositions between the transaxial and axial dimensions.
Abstract: This paper presents a lossless compression of volumetric medical images with the improved 3-D SPIHT algorithm that searches on asymmetric trees. The tree structure links wavelet coefficients produced by three-dimensional reversible integer wavelet transforms. Experiments show that lossless compression with the improved 3-D SPIHT gives an improvement of about 42% on average over two-dimensional techniques and is superior to prior results of three-dimensional techniques. In addition, we can easily apply different numbers of decompositions between the transaxial and axial dimensions, which is a desirable function when the coding unit of a group of slices is limited in size.
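The reversible integer transform underneath schemes like this can be sketched with integer Haar (S-transform) lifting steps applied along each axis of a volume: because each step is exactly invertible, the full 3-D decomposition is lossless, and more levels can be applied along the transaxial axes than along the slice axis. The single-level decomposition and the volume size below are assumptions, and the SPIHT coder itself is not reproduced.

```python
import numpy as np

def s_fwd(x: np.ndarray, axis: int) -> np.ndarray:
    """Integer Haar (S-transform) along one axis; even length assumed. Exactly invertible."""
    a = np.take(x, np.arange(0, x.shape[axis], 2), axis=axis)
    b = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)
    d = a - b                                   # detail
    s = b + (d >> 1)                            # approximation = floor((a + b) / 2)
    return np.concatenate([s, d], axis=axis)

def s_inv(y: np.ndarray, axis: int) -> np.ndarray:
    half = y.shape[axis] // 2
    s = np.take(y, np.arange(half), axis=axis)
    d = np.take(y, np.arange(half, 2 * half), axis=axis)
    b = s - (d >> 1)
    a = d + b
    out = np.empty_like(y)
    even = [slice(None)] * y.ndim
    odd = [slice(None)] * y.ndim
    even[axis], odd[axis] = slice(0, None, 2), slice(1, None, 2)
    out[tuple(even)], out[tuple(odd)] = a, b    # re-interleave the two phases
    return out

vol = np.random.default_rng(0).integers(0, 4096, (16, 32, 32)).astype(np.int64)
y = vol
for ax in (0, 1, 2):                            # one level per axis; depths may differ per axis
    y = s_fwd(y, ax)
for ax in (2, 1, 0):
    y = s_inv(y, ax)
assert np.array_equal(y, vol)                   # reconstruction is bit exact (lossless)
```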

Proceedings ArticleDOI
24 Nov 2003
TL;DR: This work exploits the near-periodic discrete cosine transform coefficient histograms of previously JPEG-compressed images to estimate their compression history, and shows that JPEG recompression performed by exploiting the estimated CH strikes an excellent file-size versus distortion tradeoff.
Abstract: We routinely encounter digital color images that were previously JPEG-compressed. We aim to retrieve the various settings - termed JPEG compression history (CH) - employed during previous JPEG operations. This information is often discarded en-route to the image's current representation. The discrete cosine transform coefficient histograms of previously JPEG-compressed images exhibit near-periodic behavior due to quantization. We propose a statistical approach to exploit this structure and thereby estimate the image's CH. Using simulations, we first demonstrate the accuracy of our estimation. Further, we show that JPEG recompression performed by exploiting the estimated CH strikes an excellent file-size versus distortion tradeoff.

Proceedings ArticleDOI
21 Jul 2003
TL;DR: The SAMVQ outperforms JPEG 2000 by 17 dB of PSNR at the same compression ratios and is superior in the preservation of both spatial and spectral features.
Abstract: This paper evaluates and compares JPEG 2000 and Successive Approximation Multi-stage Vector Quantization (SAMVQ) compression algorithms for hyperspectral imagery. PSNR was used to measure the statistical performance of the two compression algorithms. The SAMVQ outperforms JPEG 2000 by 17 dB of PSNR at the same compression ratios. The preservation of both spatial and spectral features was evaluated qualitatively and quantitatively. The SAMVQ outperforms JPEG 2000 in both spatial and spectral features preservation.

Proceedings ArticleDOI
25 Jun 2003
TL;DR: A new error resilient coding scheme for JPEG image transmission based on data embedding and side-match vector quantization (VQ) is proposed that can recover high-quality JPEG images from the corresponding corrupted images up to a block loss rate of 30%.
Abstract: For an entropy-coded Joint Photographic Experts Group (JPEG) image, a transmission error in a codeword will not only affect the underlying codeword but may also affect subsequent codewords, resulting in great degradation of the received image. In this study, a new error resilient coding scheme for JPEG image transmission based on data embedding and side-match vector quantization (VQ) is proposed. To cope with the synchronization problem, the restart capability of JPEG images is enabled. The objective of the proposed scheme is to recover high-quality JPEG images from the corresponding corrupted images. At the encoder, the important data (the codebook index) for each Y (U or V) block in a JPEG image are extracted and embedded into another "masking" Y (U or V) block in the image by the odd-even data embedding scheme. At the decoder, after all the corrupted blocks within a JPEG image are detected and located, if the important data for a corrupted block can be extracted correctly from the corresponding "masking" block, the extracted important data are used to conceal the corrupted block; otherwise, the side-match VQ technique is employed to conceal the corrupted block. Based on the simulation results, the performance of the proposed scheme is better than that of five existing approaches used for comparison. The proposed scheme can recover high-quality JPEG images from the corresponding corrupted images up to a block loss rate of 30%.
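The odd-even embedding step at the heart of the scheme can be sketched in a few lines: one payload bit is carried by the parity of a quantized coefficient in the "masking" block. The coefficients and bits below are placeholders; the VQ index payload, block interleaving and side-match concealment are not shown.

```python
def embed_bit(coeff: int, bit: int) -> int:
    """Force the parity of a quantized coefficient to match the payload bit."""
    if coeff % 2 == bit:
        return coeff
    return coeff + 1 if coeff >= 0 else coeff - 1     # nudge away from zero

def extract_bit(coeff: int) -> int:
    return coeff % 2

payload = [1, 0, 1, 1, 0]                             # e.g. bits of a VQ index to protect
coeffs = [12, -5, 7, 0, 3]                            # coefficients of a "masking" block
marked = [embed_bit(c, b) for c, b in zip(coeffs, payload)]
assert [extract_bit(c) for c in marked] == payload
```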

Journal Article
TL;DR: An estimation of the potential benefits of JPEG 2000 relative to the JPEG standard with respect to compression quality is presented, and in particular with regard to the requirements of the graphic arts industry.
Abstract: The upcoming JPEG 2000 image compression standard is expected to replace the well-established JPEG format. It offers a wide range of data compressions, from lossless up to highest rates of lossy compression, in conjunction with a variety of useful new features which will presumably increase its popularity. Confident press reports suggest breakthroughs in image compression prospects. However, in the final analysis the most decisive factor in JPEG 2000's commercial acceptance will be its visual quality attributes. Accordingly, this article presents extensive estimations of the visually perceived image quality of JPEG 2000 compressions in comparison with JPEG. The study focuses exclusively on lossy data compression and its default settings. From the viewpoint of quality assessments commonly used in graphic arts industry, we present different types of interactive quality ratings applied to large data sets for characterizing image distortions at varying compression rates. In addition, the results are compared with PSNR considerations. The objective of this study is to arrive at an estimation of the potential benefits of JPEG 2000 relative to the JPEG standard with respect to compression quality, and in particular with regard to the requirements of the graphic arts industry.

Journal Article
TL;DR: The paper first gives an overview of the JPEG2000 algorithm, then explains how regions of interest are coded, and finally presents region-of-interest coding results.
Abstract: JPEG2000 is intended not only to provide subjective image quality superior to existing standards such as JPEG, but also to provide features and functionalities that current standards can either not address efficiently or not address at all. An important feature is that a region of interest can be encoded with higher quality or losslessly. The paper first gives an overview of the JPEG2000 algorithm, then explains how regions of interest are coded, and finally presents region-of-interest coding results.

Proceedings ArticleDOI
24 Nov 2003
TL;DR: An algorithm is proposed that allows for embedded coding in the L∞ sense, i.e., progressive near-lossless as well as lossless image compression, with JPEG2000 used as the basis for the lossy layer; this approach allows for better image quality and compression performance for large tolerance values than algorithms based on predictive coding.
Abstract: In this paper, we propose an algorithm that allows for embedded coding in the L∞ sense, i.e., progressive near-lossless as well as lossless image compression. The method is based on a lossy plus near-lossless refinement layered compression scheme. As a basis for the lossy layer we use the JPEG2000 standard. We show that this approach allows for better image quality and compression performance for large tolerance values than algorithms based on predictive coding. The compression performance of the algorithm in the lossless mode is about the same as that of the JPEG2000 standard. Another advantage of this technique is that it allows for all the benefits and functionality of the lossy compression algorithm (JPEG2000) at low bit rates.
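The near-lossless refinement layer can be illustrated with the standard residual-quantization argument: if the residual between the original and the lossy layer is quantized with step 2δ+1, every reconstructed pixel is within δ of the original (δ = 0 degenerates to lossless). The synthetic "lossy layer" below is a stand-in for a decoded JPEG2000 base layer.

```python
import numpy as np

def refine(original: np.ndarray, lossy: np.ndarray, delta: int):
    """Quantize the residual so that the final reconstruction error is bounded by delta."""
    step = 2 * delta + 1
    residual = original.astype(np.int64) - lossy.astype(np.int64)
    q = np.round(residual / step).astype(np.int64)    # indices sent in the refinement layer
    reconstruction = lossy.astype(np.int64) + q * step
    return q, reconstruction

rng = np.random.default_rng(0)
original = rng.integers(0, 256, (64, 64))
lossy = np.clip(original + rng.integers(-10, 11, original.shape), 0, 255)  # stand-in layer
q, recon = refine(original, lossy, delta=2)
assert np.abs(recon - original).max() <= 2            # L-infinity bound is guaranteed
```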

Proceedings ArticleDOI
15 Dec 2003
TL;DR: The performance of the L-HLT is found to be better than that of the DCT for near-lossless image compression and better than that of the wavelet-based filter bank approach for lossless compression.
Abstract: It is shown that the discrete Hartley transform (DHT) of length N = 4 can be used to perform an integer-to-integer transformation. This behavior of the DHT can be used to compute a 2-D separable lossless Hartley-like transform (L-HLT). The performance of the L-HLT is found to be better than that of the DCT for near-lossless image compression and better than that of the wavelet-based filter bank approach for lossless compression.
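The integer-to-integer behaviour the paper exploits is easy to verify for N = 4: the cas kernel takes only the values ±1, so the transform matrix is integer valued, and since H·H = 4I the forward/inverse round trip is exact in integer arithmetic. The full separable 2-D L-HLT construction is not reproduced here.

```python
import numpy as np

N = 4
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
# cas(2*pi*k*n/N) = cos(.) + sin(.) equals +1 or -1 at every entry when N = 4
H = np.rint(np.cos(2 * np.pi * k * n / N) + np.sin(2 * np.pi * k * n / N)).astype(int)

x = np.array([13, -7, 42, 5])        # arbitrary integer samples
X = H @ x                            # forward DHT: integers in, integers out
x_back = (H @ X) // N                # H @ H = N * I, so the division by N is exact
assert np.array_equal(x_back, x)
print(H)
```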

Proceedings ArticleDOI
24 Nov 2003
TL;DR: This paper proposes an efficient method for lossless coding of grey-level edge images, which are frequently used in image processing; for this type of image it provides 20-78% better compression than the best available methods and 19% better compression than JPEG-LS (arithmetic version).
Abstract: This paper proposes an efficient method for lossless coding of gray level edge images, which are frequently used in image processing. The method consists of analyzing the edge image in connected sets of pixels by means of chain coding, as a function of their starting points, relative intensities and displacements within these sets. In order to avoid redundant information, a DPCM coder is used. The pixel intensities, the costly information, are then coded by an arithmetic encoder, and the rest (coordinates, displacements and movements) by a Huffman encoder, which is less demanding in terms of hardware and software than the former. For this type of image, the proposed method provides lossless compression rates higher than the commonly used lossless methods. Results show that the proposed method can provide 20-78% better compression than the best available methods, and 19% better compression than JPEG-LS (arithmetic version).
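The chain-coding front end described above can be sketched as follows: a connected set of edge pixels is stored as a starting point plus a sequence of 8-direction Freeman codes, and it is these codes (together with DPCM'd intensities) that are then entropy coded. The direction ordering and the example path are arbitrary.

```python
# 8-connected Freeman chain directions as (dy, dx); the ordering is an arbitrary convention
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(path):
    """Encode a connected pixel path as its starting point plus direction symbols."""
    codes = [DIRS.index((y1 - y0, x1 - x0))
             for (y0, x0), (y1, x1) in zip(path, path[1:])]
    return path[0], codes

start, codes = chain_code([(10, 10), (10, 11), (9, 12), (9, 13), (10, 14)])
print(start, codes)      # -> (10, 10) [0, 1, 0, 7]; these symbols go to the entropy coder
```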
Abstract: This paper proposes an efficient method for lossless coding of gray level edge images, which are frequently used in image processing. The method consists of analyzing the edge image in connected sets of pixels by means of chain coding as a function of its starting points, their relative intensities and their displacements within these sets. In order to avoid the redundant information a DPCM coder is used. Then the pixel intensities, the costly information, are coded by an arithmetic encoder, and the rest (coordinates, displacements and movements) by a Huffman encoder, which is less demanding in terms of hardware and software than the first one. For this type of images, the proposed method provides lossless compression rates higher than the commonly used lossless methods. Results show that the proposed method can provide for this type of images 20-78% better compression when compared with the best available methods, and 19% better compression when compared with JPEG-LS (arithmetic version).