Showing papers on "JPEG published in 2004"


Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and its performance is demonstrated against both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
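For readers who want the formula at a glance, the index combines luminance, contrast, and structure comparisons into one expression. The sketch below computes a single global SSIM value with NumPy; the published method applies the same expression inside a sliding window and averages the results, so this is an illustration of the formula rather than the authors' exact implementation.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """One global SSIM value between two grayscale images.

    The published index is computed in a local sliding window and then
    averaged; this global variant only illustrates the formula.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (k1 * data_range) ** 2   # stabilizes the luminance term
    c2 = (k2 * data_range) ** 2   # stabilizes the contrast/structure term

    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()

    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

# Identical images score 1.0; added noise lowers the score.
img = np.tile(np.arange(64.0), (64, 1))
print(global_ssim(img, img))          # 1.0
print(global_ssim(img, img + np.random.default_rng(0).normal(0, 8, img.shape)))
```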

40,609 citations


Book ChapterDOI
23 May 2004
TL;DR: In this article, a feature-based steganalytic method for JPEG images is proposed, where the features are calculated as an L1 norm of the difference between a specific macroscopic functional calculated from the stego image and the same functional obtained from a decompressed, cropped, and recompressed stego image.
Abstract: In this paper, we introduce a new feature-based steganalytic method for JPEG images and use it as a benchmark for comparing JPEG steganographic algorithms and evaluating their embedding mechanisms. The detection method is a linear classifier trained on feature vectors corresponding to cover and stego images. In contrast to previous blind approaches, the features are calculated as an L1 norm of the difference between a specific macroscopic functional calculated from the stego image and the same functional obtained from a decompressed, cropped, and recompressed stego image. The functionals are built from marginal and joint statistics of DCT coefficients. Because the features are calculated directly from DCT coefficients, conclusions can be drawn about the impact of embedding modifications on detectability. Three different steganographic paradigms are tested and compared. Experimental results reveal new facts about current steganographic methods for JPEGs and new design principles for more secure JPEG steganography.
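The calibration step (decompress, crop a few pixels to break the 8x8 grid alignment, recompress, and compare a macroscopic functional) can be prototyped as below. The global block-DCT histogram used here is only a stand-in for the paper's marginal and joint DCT statistics, and the crop size and requantization quality are assumptions made for illustration.

```python
import numpy as np
from io import BytesIO
from PIL import Image
from scipy.fft import dctn

def block_dct_histogram(pixels, bins=np.arange(-50.5, 51.5)):
    """Histogram of 8x8 block-DCT coefficients, a simple stand-in for the
    marginal DCT statistics used as functionals in the paper."""
    h, w = (d - d % 8 for d in pixels.shape)
    blocks = [dctn(pixels[i:i + 8, j:j + 8], norm='ortho').ravel()
              for i in range(0, h, 8) for j in range(0, w, 8)]
    hist, _ = np.histogram(np.concatenate(blocks), bins=bins, density=True)
    return hist

def calibrated_feature(jpeg_path, quality=75, crop=4):
    """L1 distance between the functional of the stego image and of its
    decompressed, cropped, and recompressed (calibrated) version."""
    stego = np.asarray(Image.open(jpeg_path).convert('L'), dtype=np.float64)

    # Cropping by a few pixels desynchronizes the JPEG 8x8 grid, so the
    # recompressed image approximates the statistics of the cover image.
    # Ideally `quality` should match the stego file's own quality factor.
    buf = BytesIO()
    Image.fromarray(stego[crop:, crop:].astype(np.uint8)).save(
        buf, format='JPEG', quality=quality)
    buf.seek(0)
    calibrated = np.asarray(Image.open(buf).convert('L'), dtype=np.float64)

    return np.abs(block_dct_histogram(stego)
                  - block_dct_histogram(calibrated)).sum()
```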

508 citations


Book
18 Oct 2004
TL;DR: This book provides a guide to data compression techniques, covering source coding, the JPEG and JPEG 2000 standards, the discrete wavelet transform, and VLSI architectures for both the discrete wavelet transform and JPEG 2000 coding.
Abstract: Contents: Preface; 1. Introduction to Data Compression; 2. Source Coding Algorithms; 3. JPEG - Still Image Compression Standard; 4. Introduction to Discrete Wavelet Transform; 5. VLSI Architectures for Discrete Wavelet Transforms; 6. JPEG 2000 Standard; 7. Coding Algorithms in JPEG 2000; 8. Code Stream Organization and File Format; 9. VLSI Architectures for JPEG 2000; 10. Beyond Part 1 of JPEG 2000; Index; About the Authors.

347 citations


Proceedings ArticleDOI
20 Sep 2004
TL;DR: A new approach to passive-warden steganography is introduced in which the sender embeds the secret message into a certain subset of the cover object without having to share the selection channel with the recipient.
Abstract: In this paper, we introduce a new approach to passive-warden steganography in which the sender embeds the secret message into a certain subset of the cover object without having to share the selection channel with the recipient. An appropriate information-theoretical model for this communication is writing in memory with (a large number of) defective cells [1]. We describe a simple variable-rate random linear code for this channel (the "wet paper" code) and use it to develop a new steganographic methodology for digital media files - Perturbed Quantization. In Perturbed Quantization, the sender hides data while processing the cover object with an information-reducing operation, such as lossy compression, downsampling, A/D conversion, etc. The sender uses the cover object before processing as side information to confine the embedding changes to those elements of the processed cover object whose values are the most "uncertain". This informed-sender embedding and uninformed-recipient message extraction improves steganographic security because an attacker cannot easily determine from the processed stego object the location of embedding changes. A heuristic argument, supported by blind steganalysis [2], is presented that a specific case of Perturbed Quantization for JPEG images is significantly less detectable than current JPEG steganographic methods.
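The "writing in memory with defective cells" idea can be demonstrated with a toy GF(2) example: sender and recipient share a pseudorandom binary matrix D; the sender solves D x = m by modifying only the positions it knows to be changeable ("dry"), and the recipient recovers m = D x without knowing which positions those were. This is a generic fixed-size sketch, not the paper's variable-rate code construction.

```python
import numpy as np

def solve_gf2(A, b):
    """Return one solution of A x = b over GF(2), or None if inconsistent."""
    A, b = A.copy() % 2, b.copy() % 2
    rows, cols = A.shape
    pivots, r = [], 0
    for c in range(cols):
        if r >= rows:
            break
        hits = np.nonzero(A[r:, c])[0]
        if hits.size == 0:
            continue
        p = hits[0] + r
        A[[r, p]], b[[r, p]] = A[[p, r]], b[[p, r]]     # swap pivot row up
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] ^= A[r]
                b[i] ^= b[r]
        pivots.append(c)
        r += 1
    if np.any(b[r:]):                                   # inconsistent system
        return None
    x = np.zeros(cols, dtype=np.uint8)
    x[pivots] = b[:len(pivots)]                         # free variables stay 0
    return x

rng = np.random.default_rng(1)
n, k = 60, 20                                    # cover bits, message bits
v = rng.integers(0, 2, n, dtype=np.uint8)        # cover bits (e.g. LSBs)
dry = rng.choice(n, 35, replace=False)           # positions the sender may change
m = rng.integers(0, 2, k, dtype=np.uint8)        # secret message
D = rng.integers(0, 2, (k, n), dtype=np.uint8)   # shared pseudorandom matrix

# Sender: restrict the change vector to the dry positions so that D x = m.
delta = solve_gf2(D[:, dry], m ^ (D @ v % 2))
assert delta is not None, "re-draw D or allow more dry positions"
x = v.copy()
x[dry] ^= delta

# Recipient: extracts the message without knowing which positions were dry.
assert np.array_equal(D @ x % 2, m)
```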

174 citations


Journal Article
TL;DR: The results show that some objective measures correlate well with the perceived picture quality for a given compression algorithm but are not reliable for an evaluation across different algorithms; objective measures are identified that serve well in all tested image compression systems.
Abstract: This paper investigates a set of objective picture quality measures for application in still image compression systems and emphasizes the correlation of these measures with subjective picture quality measures. Picture quality is measured using nine different objective picture quality measures and subjectively using the Mean Opinion Score (MOS) as a measure of perceived picture quality. The correlation between each objective measure and MOS is found. The effects of different image compression algorithms, image contents and compression ratios are assessed. Our results show that some objective measures correlate well with the perceived picture quality for a given compression algorithm but they are not reliable for an evaluation across different algorithms. So, we compared objective picture quality measures across different algorithms and we found measures which serve well in all tested image compression systems. Keywords: correlation, JPEG, JPEG2000, objective assessment, picture quality measures, SPIHT. With the increasing use of multimedia technologies, image compression requires higher performance. To address the needs and requirements of multimedia and Internet applications, many efficient image compression techniques, with considerably different features, have recently been developed. Image compression techniques exploit a common characteristic of most images: neighboring picture elements (pixels, pels) are highly correlated [1]. This means that a typical still image contains a large amount of spatial redundancy in plain areas where adjacent pixels have almost the same values. In addition, a still image can contain subjective redundancy, which is determined by the properties of the human visual system (HVS). The HVS presents some tolerance to distortion depending upon the image content and viewing conditions. Consequently, pixels need not always be reproduced exactly as originated, and the HVS will not detect the difference between the original image and the reproduced image [2]. The redundancy (both statistical and subjective) can be removed to achieve compression of the image data. The basic measures for the performance of a compression system are picture quality and compression ratio (defined as the ratio between original data size and compressed data size). In a lossy compression scheme, the image compression algorithm should achieve a trade-off between compression ratio and picture quality: higher compression ratios will produce lower picture quality and vice versa. The evaluation of lossless image compression techniques is a simple task where compression ratio and execution time are employed as standard criteria, since the picture quality before and after compression is unchanged. On the contrary, the evaluation of lossy techniques is a difficult task because of inherent drawbacks associated with both objective and subjective measures of picture quality. Objective measures of picture quality do not correlate well with subjective quality measures [3], [4]. Subjective assessment of picture quality is a time-consuming process and the results of measurements should be processed very carefully. In many applications (photos, medical images where loss is tolerated, network applications, World Wide Web, etc.) it is very important to choose the image compression system which gives the best subjective quality, but the quality has to be evaluated objectively. Therefore, it is important to use an objective picture quality measure which has a high correlation with subjective picture quality.
In this paper we attempt to evaluate and compare objective and subjective picture quality measures. As test images we used images with different spatial and frequency characteristics. Images are coded using the JPEG, JPEG2000 and SPIHT compression algorithms. The paper is structured as follows. In Section 2 we define picture quality measures. In Section 3 we briefly present the image compression systems used in our experiment. In Section 4 we evaluate statistical and frequency properties of the test images. Section 5 contains numerical results of the picture quality measures; in this section we analyze the correlation of objective measures with subjective grades and we propose objective measures which should be used in relation to each image compression system, as well as objective measures which are suitable for the comparison of picture quality between different compression systems.
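The core computation behind Section 5 is simply the correlation between each objective measure and the MOS values over the coded test images. A minimal sketch with hypothetical numbers (the measure and the values below are illustrative, not the paper's data):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical data: one objective score and one MOS value per coded image.
psnr = np.array([28.1, 30.4, 32.9, 35.2, 37.8, 40.1])   # dB, illustrative only
mos = np.array([1.8, 2.4, 3.1, 3.7, 4.2, 4.6])          # 1 (bad) .. 5 (excellent)

# Linear correlation gauges prediction accuracy; rank correlation, monotonicity.
print("Pearson :", pearsonr(psnr, mos)[0])
print("Spearman:", spearmanr(psnr, mos)[0])
```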

129 citations


Journal ArticleDOI
TL;DR: Experiments with visually impaired patients show improved perceived image quality at moderate levels of enhancement but rejection of the artifacts caused by higher levels of enhancement, suggesting a need for further research in this area.
Abstract: An image enhancement algorithm for low-vision patients was developed for images compressed using the JPEG standard. The proposed algorithm enhances the images in the discrete cosine transform domain by weighting the quantization table in the decoder. Our specific implementation increases the contrast at all bands of frequencies by an equal factor. The enhancement algorithm has four advantages: 1) low computational cost; 2) suitability for real-time application; 3) ease of adjustment by end-users (for example, adjusting a single parameter); and 4) less severe block artifacts compared with conventional (post-compression) enhancements. Experiments with visually impaired patients show improved perceived image quality at moderate levels of enhancement but rejection of artifacts caused by higher levels of enhancement.
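Because a baseline JPEG decoder reconstructs each block as the inverse DCT of coefficient times quantization-table entry, weighting the dequantization table scales whole frequency bands at essentially no extra cost. The per-block sketch below follows that idea; the single contrast parameter and the choice to leave the DC term unscaled follow the abstract, while the placeholder quantization table is an assumption.

```python
import numpy as np
from scipy.fft import idctn

def enhanced_block(quantized_coefs, quant_table, contrast=1.3):
    """Decode one 8x8 block, boosting every AC band by an equal contrast
    factor; contrast=1.0 reduces to standard JPEG dequantization."""
    weights = np.full((8, 8), float(contrast))
    weights[0, 0] = 1.0                  # leave DC alone so mean level is preserved
    dequantized = quantized_coefs * quant_table * weights
    return idctn(dequantized, norm='ortho')

# Placeholder table; in a real decoder use the image's own luminance table.
quant_table = np.full((8, 8), 16.0)
coefs = np.zeros((8, 8)); coefs[0, 0], coefs[0, 1] = 50, -12
print(enhanced_block(coefs, quant_table).round(1))
```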

107 citations


Journal ArticleDOI
01 Dec 2004
TL;DR: A fragile watermarking scheme is proposed that implicitly watermarks all the coefficients by registering the zero-valued coefficients with a key-generated binary sequence to create the watermark, and by involving the unwatermarkable coefficients in the embedding process of the embeddable ones.
Abstract: It is a common practice in transform-domain fragile watermarking schemes for authentication purposes to watermark some selected transform coefficients so as to minimise embedding distortion. The author points out that leaving most of the coefficients unmarked results in a wide-open security gap for attacks to be mounted on them. A fragile watermarking scheme is proposed to implicitly watermark all the coefficients by registering the zero-valued coefficients with a key-generated binary sequence to create the watermark and involving the unwatermarkable coefficients during the embedding process of the embeddable ones. Non-deterministic dependence is established by involving some of the unwatermarkable coefficients selected according to the watermark from a nine-neighbourhood system in order to thwart different attacks, such as cover-up, vector quantisation and transplantation. No hashing is needed in establishing the non-deterministic dependence.

104 citations


Proceedings ArticleDOI
07 Aug 2004
TL;DR: This paper proposes a simple approach to HDR encoding that parallels the evolution of color television from its grayscale beginnings; various published global and local tone-mapping operators are used to generate the tone-mapped foreground images.
Abstract: The transition from traditional 24-bit RGB to high dynamic range (HDR) images is hindered by excessively large file formats with no backwards compatibility. In this paper, we propose a simple approach to HDR encoding that parallels the evolution of color television from its grayscale beginnings. A tone-mapped version of each HDR original is accompanied by restorative information carried in a subband of a standard 24-bit RGB format. This subband contains a compressed ratio image, which when multiplied by the tone-mapped foreground, recovers the HDR original. The tone-mapped image data may be compressed, permitting the composite to be delivered in a standard JPEG wrapper. To naive software, the image looks like any other, and displays as a tone-mapped version of the original. To HDR-enabled software, the foreground image is merely a tone-mapping suggestion, as the original pixel data are available by decoding the information in the subband. We present specifics of the method and the results of encoding a series of synthetic and natural HDR images, using various published global and local tone-mapping operators to generate the foreground images. Errors are visible in only a very small percentage of the pixels after decoding, and the technique requires only a modest amount of additional space for the subband data, independent of image size.
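The layering can be prototyped in a few lines: keep a tone-mapped 8-bit image for naive viewers and a log-encoded ratio image as the restorative subband, and recover HDR by multiplication. The 8-bit log quantization used here is an illustrative stand-in for the paper's compressed subband.

```python
import numpy as np

def encode_subband(hdr_lum, tm_lum, eps=1e-6):
    """Ratio image RI = HDR / tone-mapped, log-encoded into 8 bits."""
    ri = np.log((hdr_lum + eps) / (tm_lum + eps))
    lo, hi = float(ri.min()), float(ri.max())
    ri8 = np.round(255 * (ri - lo) / (hi - lo + eps)).astype(np.uint8)
    return ri8, (lo, hi)        # ri8 is what would travel in the JPEG subband

def decode_hdr(tm_lum, ri8, lo_hi, eps=1e-6):
    """Naive software shows tm_lum; HDR-aware software multiplies it back up."""
    lo, hi = lo_hi
    ri = np.exp(ri8.astype(np.float64) / 255 * (hi - lo) + lo)
    return (tm_lum + eps) * ri - eps

# Round trip on synthetic luminance data; errors come only from quantization.
hdr = np.exp(np.random.default_rng(0).uniform(0, 10, (64, 64)))
tm = hdr / (1.0 + hdr)                                  # toy tone mapping
ri8, bounds = encode_subband(hdr, tm)
print(np.median(np.abs(decode_hdr(tm, ri8, bounds) - hdr) / hdr))  # small relative error
```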

98 citations


Proceedings ArticleDOI
05 Apr 2004
TL;DR: A semi-fragile watermarking scheme is presented that embeds a watermark in the quantized DCT domain; it is tolerant to JPEG compression down to a pre-determined lowest quality factor, but is sensitive to all other malicious attacks, in either the spatial or transform domain.
Abstract: With the increasing popularity of JPEG images, a need arises to devise effective watermarking techniques which consider JPEG compression as an acceptable manipulation. In this paper, we present a semi-fragile watermarking scheme which embeds a watermark in the quantized DCT domain. It is tolerant to JPEG compression to a pre-determined lowest quality factor, but is sensitive to all other malicious attacks, either in spatial or transform domains. Feature codes are extracted based on the relative sign and magnitudes of coefficients, and these are invariant due to an important property of JPEG compression. The employment of a nine-neighborhood mechanism ensures that non-deterministic block-wise dependence is achieved. Analysis and experimental results are provided to support the effectiveness of the scheme.

Journal ArticleDOI
TL;DR: A new image fusion technique based on a contrast measure defined in the DCT domain is presented, and it is shown that there is no difference in visual quality between the fused image obtained by the algorithm and that obtained by a wavelet-transform-based image fusion technique.

Journal ArticleDOI
TL;DR: The text data is encrypted before being interleaved with images in the frequency domain to ensure greater security, and the graphical signals are also interleaved with the image.


Journal ArticleDOI
TL;DR: A novel high-capacity data hiding method based on JPEG is proposed that can achieve an impressively high embedding capacity of around 20% of the compressed image size with little noticeable degradation of image quality.
Abstract: JPEG is the most popular file format for digital images. However, up to the present time, there seem to have been very few data hiding techniques that take the JPEG image into account. In this paper, we propose a novel high-capacity data hiding method based on JPEG. The proposed method employs a capacity table to estimate the number of bits that can be hidden in each DCT component so that significant distortions in the stego-image can be avoided. The capacity table is derived from the JPEG default quantization table and the Human Visual System (HVS). Then, the adaptive least-significant-bit (LSB) substitution technique is employed to process each quantized DCT coefficient. The proposed data hiding method enables us to control the level of embedding capacity by using a capacity factor. According to our experimental results, our new scheme can achieve an impressively high embedding capacity of around 20% of the compressed image size with little noticeable degradation of image quality.
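The embedding loop amounts to adaptive LSB substitution on the quantized DCT coefficients, with the number of hidden bits per coefficient read from a capacity table. In the sketch below, the capacity rule (more bits where the quantization step is coarse, scaled by a capacity factor, none in the DC term) is only a rough stand-in for the paper's HVS-derived table, and the sign handling is simplified.

```python
import numpy as np

def embed_block(coefs, quant_table, bitstream, capacity_factor=1.0):
    """Hide bits from `bitstream` (an iterator of 0/1 ints) in one 8x8 block
    of quantized DCT coefficients by adaptive LSB substitution."""
    # Illustrative capacity table: allow more bits where the quantizer step is
    # large (distortion is perceptually masked); never touch the DC term.
    capacity = np.clip(np.floor(capacity_factor * np.log2(quant_table)), 0, 4)
    capacity = capacity.astype(int)
    capacity[0, 0] = 0

    stego = np.array(coefs, dtype=int)
    for (i, j), nbits in np.ndenumerate(capacity):
        if nbits == 0:
            continue
        bits = [next(bitstream, None) for _ in range(nbits)]
        if None in bits:                     # message exhausted
            break
        payload = int("".join(map(str, bits)), 2)
        stego[i, j] = (int(coefs[i, j]) & ~((1 << nbits) - 1)) | payload
    return stego

# Toy usage: a flat quantization table and a short random message.
rng = np.random.default_rng(2)
block = rng.integers(-30, 30, (8, 8))
message = iter(rng.integers(0, 2, 64).tolist())
print(embed_block(block, np.full((8, 8), 16.0), message))
```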

Book ChapterDOI
13 Sep 2004
TL;DR: A method to attack the model-based steganography scheme on the basis of first order statistics is presented, which shows a good detection ratio for a large test set of typical JPEG images and possible implications for improved embedding functions are discussed.
Abstract: The recent approach of a model-based framework for steganography fruitfully contributes to the discussion on the security of steganography. In addition, the first proposal for an embedding algorithm constructed under the model-based paradigm reached remarkable performance in terms of capacity and security. In this paper, we review the emergence of model-based steganography in the context of decent steganalysis as well as from theoretical considerations, before we present a method to attack the above-mentioned scheme on the basis of first-order statistics. Experimental results show a good detection ratio for a large test set of typical JPEG images. The attack is successful because of weaknesses in the model and does not put into question the generalised theoretical framework of model-based steganography. We therefore discuss possible implications for improved embedding functions.

Journal ArticleDOI
TL;DR: It is shown that a well-known and widely-available MPEG-2 scheme can be a good alternative for II compression, and several scanning topologies along the elemental image sequences are introduced.
Abstract: In this paper, we discuss the compression results of full color 3D Integral Images (II) by MPEG-2 (Motion Picture Experts Group). II is a popular three-dimensional image and video recording and display technique. The huge size of II data has become a practical issue for the storage and transmission of 3D scenes. MPEG is a standard coded representation of moving pictures. We model the elemental images in II as consecutive frames of a moving picture, so the MPEG scheme can be applied to take advantage of the high cross-correlations between elemental images. We also introduce several scanning topologies along the elemental image sequences and investigate their performance with different numbers of pictures in a GOP (Group of Pictures). Experimental results are presented to compare the image quality of MPEG-2 and baseline JPEG at the same compression rate. We show that the well-known and widely available MPEG-2 scheme can be a good alternative for II compression.

Journal ArticleDOI
TL;DR: The methodology presented here is based on the known SCAN formal language for data accessing and processing; it produces a lossless compression ratio of 1.88 for the standard Lenna image, while the hiding part is able to embed digital information at 12.5% of the size of the original image.

Proceedings ArticleDOI
02 Nov 2004
TL;DR: It is shown that the MOS predictions by the proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images.
Abstract: This paper describes an objective comparison of the image quality of different encoders. Our approach is based on estimating the visual impact of compression artifacts on perceived quality. We present a tool that measures these artifacts in an image and uses them to compute a prediction of the Mean Opinion Score (MOS) obtained in subjective experiments. We show that the MOS predictions by our proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images. For the encoder comparison, we compress a set of 29 test images with two JPEG encoders (Adobe Photoshop and IrfanView) and three JPEG2000 encoders (JasPer, Kakadu, and IrfanView) at various compression ratios. We compute blockiness, blur, and MOS predictions as well as PSNR of the compressed images. Our results show that the IrfanView JPEG encoder produces consistently better images than the Adobe Photoshop JPEG encoder at the same data rate. The differences between the JPEG2000 encoders in our test are less pronounced; JasPer comes out as the best codec, closely followed by IrfanView and Kakadu. Comparing the JPEG- and JPEG2000-encoding quality of IrfanView, we find that JPEG has a slight edge at low compression ratios, while JPEG2000 is the clear winner at medium and high compression ratios.
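A typical ingredient of such no-reference tools is a blockiness estimate that compares luminance jumps across the 8x8 block boundaries with jumps inside blocks; the particular normalization below is a generic illustration, not the paper's metric.

```python
import numpy as np

def blockiness(gray, block=8):
    """Ratio of mean horizontal luminance jumps across block-grid columns to
    the mean jumps elsewhere; values well above 1 suggest visible blocking.
    (A full measure would also average in the vertical direction.)"""
    g = np.asarray(gray, dtype=np.float64)
    diff = np.abs(np.diff(g, axis=1))            # neighbour differences, per row
    on_grid = diff[:, block - 1::block]          # columns crossing an 8-pixel boundary
    mask = np.ones(diff.shape[1], dtype=bool)
    mask[block - 1::block] = False
    off_grid = diff[:, mask]
    return on_grid.mean() / (off_grid.mean() + 1e-12)

# Pure noise has no preferential jumps on the grid, so the ratio is near 1.
noise = np.random.default_rng(0).uniform(0, 255, (64, 64))
print(blockiness(noise))
```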

Proceedings ArticleDOI
05 Jan 2004
TL;DR: A semifragile watermarking scheme for image authentication is proposed that addresses the issue of protecting images from illegal manipulations and modifications; it is tolerant of lossy compression such as JPEG, but malicious changes to the image will cause watermark detection to fail.
Abstract: In this paper, a semifragile watermarking scheme for image authentication is proposed, which addresses the issue of protecting images from illegal manipulations and modifications. The scheme extracts a signature (watermark) from the original image and inserts this signature back into the image, avoiding additional signature files. Error correction coding (ECC) is used to encode the signatures that are extracted from the image. To increase the security of this scheme, the user's private key is employed for encryption and decryption of the watermark during the watermark extraction and insertion procedures. Experimental results show that if there is no change in the obtained image, the watermark will be correctly extracted and the image will thus pass the authentication system. This scheme is tolerant of lossy compression such as JPEG, but malicious changes to the image will result in a breach of watermark detection. In addition, this scheme can locate the exact blocks that have been illegally modified.

DOI
14 Jul 2004
TL;DR: An image encryption algorithm combined with JPEG encoding is proposed that supports direct bit-rate control or recompression, meaning that the encrypted image can still be decrypted correctly even if its compression ratio has been changed.
Abstract: Image encryption is a suitable method to protect image data. Encryption algorithms based on position confusion and pixel substitution change the compression ratio greatly. In this paper, an image encryption algorithm combined with JPEG encoding is proposed. In the luminance and chrominance planes, the DCT blocks are confused by pseudo-random SFCs (space-filling curves). In each DCT block, the DCT coefficients are confused according to different frequency bands and their signs are encrypted by a chaotic stream cipher. The security of the cryptosystem against brute-force attack and known-plaintext attack is also analyzed. Experimental results show that the algorithm is of high security and low cost. Moreover, it supports direct bit-rate control or recompression, which means that the encrypted image can still be decrypted correctly even if its compression ratio has been changed. These advantages make it suitable for image transmission over networks.
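Sign encryption with a chaotic stream cipher can be sketched with a logistic map: the secret key is the map's initial state, and a coefficient's sign is flipped wherever the keystream bit is 1, so running the same operation again decrypts. The logistic-map parameters below are illustrative, and the space-filling-curve block confusion is omitted.

```python
import numpy as np

def logistic_keystream(x0, n, r=3.99, burn_in=100):
    """Binary keystream from the logistic map x <- r*x*(1-x); the secret key
    is the initial state x0 in (0, 1)."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    bits = np.empty(n, dtype=np.int8)
    for i in range(n):
        x = r * x * (1 - x)
        bits[i] = 1 if x > 0.5 else 0
    return bits

def flip_signs(dct_coefs, key=0.31415926):
    """Flip the sign of each coefficient where the keystream bit is 1.
    Applying the function twice with the same key restores the original."""
    flat = np.asarray(dct_coefs, dtype=np.float64).ravel()
    ks = logistic_keystream(key, flat.size)
    return (flat * np.where(ks == 1, -1.0, 1.0)).reshape(np.shape(dct_coefs))

block = np.random.default_rng(3).integers(-20, 20, (8, 8))
assert np.array_equal(flip_signs(flip_signs(block)), block.astype(np.float64))
```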

Proceedings ArticleDOI
16 Aug 2004
TL;DR: Lossless and lossy compression algorithms for microarray images originally digitized at 16 bpp (bits per pixel) are proposed that achieve an average of 9.5-11.5 bpp (lossless) and 4.6-6.7 bpp (lossy), based on a completely automatic gridding procedure of the image.
Abstract: With the recent explosion of interest in microarray technology, massive amounts of microarray images are currently being produced. The storage and the transmission of this type of data are becoming increasingly challenging. Here we propose lossless and lossy compression algorithms for microarray images originally digitized at 16 bpp (bits per pixel) that achieve an average of 9.5-11.5 bpp (lossless) and 4.6-6.7 bpp (lossy, with a PSNR of 63 dB). The lossy compression is applied only to the background of the image, thereby preserving the regions of interest. The methods are based on a completely automatic gridding procedure of the image.

Patent
Cha Zhang, Jin Li, Yunnan Wu
18 Oct 2004
TL;DR: In this article, a selective cutting and pasting process is used to divide the image data into stripes that are then used to form a set of multi-perspective panoramas.
Abstract: Rebinning methods and arrangements are provided that significantly improve the 3D wavelet compression performance of image-based rendering data, such as, e.g., concentric mosaic image data. Through what is essentially a selective cutting-and-pasting process, the image data is divided into stripes that are then used to form a set of multi-perspective panoramas. The rebinning process greatly improves the performance of the cross-shot filtering, and thus improves the transform and coding efficiency of 3D wavelet codecs. While the region of support after rebinning may cease to be rectangular in some cases, a padding scheme and an arbitrary-shape wavelet coder can be implemented to encode the resulting data volume of the smart rebinning. With an arbitrary-shape wavelet codec, the rebinning outperforms MPEG-2 by 3.7 dB, outperforms a direct 3D wavelet coder by 4.3 dB, and outperforms a reference block coder (RBC) by 3.2 dB on certain tested concentric mosaic image scenes. Hence, the rebinning process nearly quadruples the compression ratio for selected scenes. Additional methods and arrangements are provided that include selectively dividing the image data into slits and rebinning the slits into a huge 2D array, which is then compressed using conventional still image codecs such as JPEG.

Proceedings ArticleDOI
22 Jun 2004
TL;DR: The proposed methods are the first examples of lossless embedding methods that preserve the file size for image formats that use lossless compression.
Abstract: In lossless watermarking, it is possible to completely remove the embedding distortion from the watermarked image and recover an exact copy of the original unwatermarked image. Lossless watermarks have found applications in fragile authentication, integrity protection, and metadata embedding, and are especially important for medical and military images. Frequently, lossless embedding disproportionately increases the file size for image formats that contain lossless compression (RLE BMP, GIF, JPEG, PNG, etc.). This partially negates the advantage of embedding information as opposed to appending it. In this paper, we introduce lossless watermarking techniques that preserve the file size. The formats addressed are RLE-encoded bitmaps and sequentially encoded JPEG images. The lossless embedding for the RLE BMP format is designed in such a manner as to guarantee that the message extraction and original image reconstruction are insensitive to different RLE encoders, image palette reshuffling, as well as to removing or adding duplicate palette colors. The performance of both methods is demonstrated on test images by showing the capacity, distortion, and embedding rate. The proposed methods are the first examples of lossless embedding methods that preserve the file size for image formats that use lossless compression.

Proceedings ArticleDOI
A. Al, B.P. Rao, Sudhir S. Kudva, S. Babu, D. Sumam, Ajit V. Rao
01 Jan 2004
TL;DR: This paper investigates the scope of the intraframe coder of H.264 for image coding and compares the quality and the complexity of its decoder with the commonly used image codecs (JPEG and JPEG2000).
Abstract: The recently proposed H.264 video coding standard offers significant coding gains over previously defined standards. An enhanced intra-frame prediction algorithm has been proposed in H.264 for efficient compression of I-frames. This paper investigates the scope of the intraframe coder of H.264 for image coding. We compare the quality of this coder and the complexity of its decoder with the commonly used image codecs (JPEG and JPEG2000). Our results demonstrate that H.264 has a strong potential as an alternative to JPEG and JPEG2000.

Journal ArticleDOI
TL;DR: It is shown that the encoding-decoding process results in a nonhomogeneous image degradation in the geometrically corrected image, due to the different amount of information associated with each pixel.
Abstract: The paper presents an analysis of the effects of lossy compression algorithms applied to images affected by geometrical distortion. It is shown that the encoding-decoding process results in a nonhomogeneous image degradation in the geometrically corrected image, due to the different amount of information associated with each pixel. A distortion measure named the quadtree distortion map (QDM), able to quantify this aspect, is proposed. Furthermore, the QDM is exploited to achieve adaptive compression of geometrically distorted pictures, in order to ensure a uniform quality on the final image. Tests are performed using the JPEG and JPEG2000 coding standards in order to quantitatively and qualitatively assess the performance of the proposed method.

Proceedings ArticleDOI
14 Sep 2004
TL;DR: The experimental results show that the proposed steganographic method can provide a high information hiding capacity and successfully control the compression ratio and distortion of the stego-image.
Abstract: In this paper, a novel steganographic method based on JPEG is proposed. We take advantage of the quantization error resulting from processing the JPEG-compressed image with two different scaling factors. One of the scaling factors is used to control the bit rate of the stego-image while the other is used to guarantee the quality of the stego-image. Our experimental results show that the proposed steganographic method can provide a high information hiding capacity and successfully control the compression ratio and distortion of the stego-image.

Proceedings ArticleDOI
01 Jan 2004
TL;DR: Experiments show that the performance of JPEG with the integer reversible DCT is very close to that of the original standard JPEG for lossy image coding, and more importantly, with the transform, it can compress images losslessly.
Abstract: JPEG, an international image coding standard based on the DCT and a Huffman entropy coder, is still popular in image compression applications although it is lossy. JPEG-LS, standardized for lossless image compression, however, employs an encoding technique different from JPEG. This paper presents an integer reversible implementation that makes JPEG lossless. It uses the framework of JPEG, and simply converts the DCT and the color transform to be integer reversible. The integer DCT is implemented by factoring the float DCT matrix into a series of elementary reversible matrices, each of which is directly integer reversible. Our integer DCT integrates lossy and lossless schemes nicely, and it supports both lossy and lossless compression by the same method. Our JPEG can be used as a replacement for the standard JPEG in either encoding or decoding or both. Experiments show that the performance of JPEG with our integer reversible DCT is very close to that of the original standard JPEG for lossy image coding, and more importantly, with our transform, it can compress images losslessly.
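The standard way to make such a factorization integer-reversible is to realize each plane rotation as three lifting (shear) steps with rounding: the inverse repeats the same rounded steps in reverse order, so no information is lost. The 2-point example below illustrates the principle; it is a generic sketch, not the paper's full 8-point DCT factorization.

```python
import math

def lift_rotate(a, b, theta):
    """Integer-to-integer approximation of the rotation [c -s; s c]
    realized as three lifting steps with rounding."""
    p = (math.cos(theta) - 1) / math.sin(theta)   # shear parameters of the
    u = math.sin(theta)                           # S1 * S2 * S1 factorization
    a = a + round(p * b)
    b = b + round(u * a)
    a = a + round(p * b)
    return a, b

def lift_rotate_inv(a, b, theta):
    """Exact inverse: undo the lifting steps in reverse order; the rounded
    quantities are recomputed from the same inputs, so the map is lossless."""
    p = (math.cos(theta) - 1) / math.sin(theta)
    u = math.sin(theta)
    a = a - round(p * b)
    b = b - round(u * a)
    a = a - round(p * b)
    return a, b

# Perfect reconstruction on integers, while approximating the float rotation.
x, y = 37, -12
fwd = lift_rotate(x, y, math.pi / 8)
assert lift_rotate_inv(*fwd, math.pi / 8) == (x, y)
print(fwd, (x * math.cos(math.pi / 8) - y * math.sin(math.pi / 8),
            x * math.sin(math.pi / 8) + y * math.cos(math.pi / 8)))
```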

Journal ArticleDOI
TL;DR: A new method of feature extraction is proposed in order to improve the efficiency of retrieving Joint Photographic Experts Group (JPEG) compressed images; each retrieved image is given a rank defining its similarity to the query image.

Proceedings ArticleDOI
24 Oct 2004
TL;DR: This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner and presents an unequal power allocation scheme as a simple application of the model.
Abstract: The need for efficient joint source-channel coding is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical joint source-channel coding schemes is a distortion model to measure the quality of compressed digital multimedia such as images and videos. Unfortunately, models for estimating the distortion due to quantization and channel bit errors in a combined fashion do not appear to be available for practical image or video coding standards. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to both quantization and channel bit errors. Important compression techniques such as Huffman coding, DPCM coding, and run-length coding are included in the model. Examples show that the distortion in terms of peak signal to noise ratio can be predicted within a 2 dB maximum error.