
Showing papers on "Quantization (image processing)" published in 2007


Proceedings ArticleDOI
17 Jun 2007
TL;DR: To improve query performance, this work adds an efficient spatial verification stage to re-rank the results returned from the bag-of-words model and shows that this consistently improves search quality, though by less of a margin when the visual vocabulary is large.
Abstract: In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale" image corpora.
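
As a rough illustration of the bag-of-words side of this pipeline, the sketch below quantizes local descriptors against a pre-built visual vocabulary and forms a tf-idf weighted histogram. It is a minimal stand-in: it uses a single KD-tree (SciPy) rather than the paper's randomized trees, and `vocab`, `idf`, and the input descriptors are assumed to come from elsewhere.

```python
# Hedged sketch: nearest-word quantization of local descriptors into a
# bag-of-visual-words histogram. Not the paper's randomized-tree quantizer.
import numpy as np
from scipy.spatial import cKDTree

def bag_of_words(descriptors, vocab, idf=None):
    """Map descriptors (n x d) to an L2-normalised tf-idf histogram (k,)."""
    tree = cKDTree(vocab)                    # index the k x d visual vocabulary
    _, words = tree.query(descriptors, k=1)  # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    if idf is not None:                      # idf weights learned on the corpus
        hist *= idf
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Ranking is then cosine similarity between query and database histograms;
# the paper re-ranks the top results with a RANSAC-style spatial check.
```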

3,242 citations


Journal ArticleDOI
TL;DR: Unlike other existing chaos-based pseudo-random number generators, the proposed keystream generator not only achieves a very fast throughput, but also passes the statistical tests of an up-to-date test suite even under quantization.
Abstract: In this paper, a fast chaos-based image encryption system with a stream cipher structure is proposed. In order to achieve a fast throughput and facilitate hardware realization, 32-bit precision representation with fixed point arithmetic is assumed. The major core of the encryption system is a pseudo-random keystream generator based on a cascade of chaotic maps, serving the purpose of sequence generation and random mixing. Unlike other existing chaos-based pseudo-random number generators, the proposed keystream generator not only achieves a very fast throughput, but also passes the statistical tests of an up-to-date test suite even under quantization. The overall design of the image encryption system is explained, and a detailed cryptanalysis is given and compared with some existing schemes.
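
The sketch below only illustrates the fixed-point flavour of such a generator: a single logistic map iterated in 32-bit (Q0.32) arithmetic, with the low byte of each state taken as keystream. It is not the paper's cascade of maps and is not a secure cipher; the warm-up length and byte extraction are arbitrary choices.

```python
# Hedged sketch: 32-bit fixed-point chaotic keystream. Illustrative only;
# NOT a secure cipher and not the paper's cascade design.
ONE = 1 << 32                      # Q0.32 scale: x in [0, 1) maps to [0, 2**32)

def logistic_step(x):
    """One iteration of x -> 4x(1-x) in 32-bit fixed point."""
    return (4 * x * (ONE - x) >> 32) & (ONE - 1)

def keystream(seed, nbytes, warmup=64):
    """Generate `nbytes` keystream bytes from a 32-bit seed."""
    x = seed & (ONE - 1)
    for _ in range(warmup):        # discard transient iterations
        x = logistic_step(x)
    out = bytearray()
    for _ in range(nbytes):
        x = logistic_step(x)
        out.append(x & 0xFF)       # take the low byte of each state
    return bytes(out)

# Stream-cipher style encryption would XOR pixel bytes with the keystream:
# cipher = bytes(p ^ k for p, k in zip(img_bytes, keystream(0x1234, len(img_bytes))))
```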

425 citations


Proceedings ArticleDOI
02 Jul 2007
TL;DR: A passive approach to detect digital forgeries by checking inconsistencies in blocking artifacts, based on the quantization table estimated from the power spectrum of the DCT coefficient histogram, is described.
Abstract: Digital images can be forged easily with today's widely available image processing software. In this paper, we describe a passive approach to detect digital forgeries by checking inconsistencies in blocking artifacts. Given a digital image, we find that the blocking artifacts introduced during JPEG compression can be used as a "natural authentication code". A blocking artifact measure is then proposed based on the quantization table estimated from the power spectrum of the DCT coefficient histogram. Experimental results demonstrate the validity of the proposed approach.
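
A rough sketch of the kind of estimation the measure rests on: the histogram of one DCT frequency taken over all 8×8 blocks of a decoded image shows spikes at multiples of the quantization step, which appear as a peak in the histogram's power spectrum. The estimator below is a heavily simplified stand-in (single frequency, crude peak picking), not the paper's method; `gray` is an assumed 2-D luminance array.

```python
# Hedged sketch: estimate one JPEG quantization step from the periodicity of a
# DCT-coefficient histogram. Simplified stand-in, not the paper's estimator.
import numpy as np
from scipy.fft import dctn

def estimate_q_step(gray, u=0, v=1, max_q=64):
    """Estimate the quantization step of DCT frequency (u, v) from a decoded image."""
    h, w = gray.shape[0] // 8 * 8, gray.shape[1] // 8 * 8
    coeffs = []
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            block = gray[i:i+8, j:j+8].astype(float) - 128.0   # JPEG level shift
            coeffs.append(dctn(block, norm='ortho')[u, v])
    coeffs = np.round(coeffs).astype(int)
    lo, hi = coeffs.min(), coeffs.max()
    hist, _ = np.histogram(coeffs, bins=np.arange(lo, hi + 2))
    spectrum = np.abs(np.fft.rfft(hist - hist.mean()))
    # A step q leaves spikes every q bins, i.e. a spectral peak near
    # len(hist)/q; invert the location of the strongest peak.
    peak = np.argmax(spectrum[1:]) + 1
    q = int(round(len(hist) / peak))
    return min(max(q, 1), max_q)
```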

264 citations


Proceedings ArticleDOI
15 Apr 2007
TL;DR: A novel method for the detection of image tampering operations in JPEG images by exploiting the blocking artifact characteristics matrix (BACM) to train a support vector machine (SVM) classifier for recognizing whether an image is an original JPEG image or it has been cropped from another JPEG image and re-saved as a JPEG image.
Abstract: One of the most common practices in image tampering involves cropping a patch from a source and pasting it onto a target. In this paper, we present a novel method for the detection of such tampering operations in JPEG images. The lossy JPEG compression introduces inherent blocking artifacts into the image, and our method exploits such artifacts to serve as a 'watermark' for the detection of image tampering. We develop the blocking artifact characteristics matrix (BACM) and show that, for original JPEG images, the BACM exhibits a regular symmetrical shape; for images that are cropped from another JPEG image and re-saved as JPEG images, the regular symmetrical property of the BACM is destroyed. We fully exploit this property of the BACM and derive representation features from it to train a support vector machine (SVM) classifier for recognizing whether an image is an original JPEG image or has been cropped from another JPEG image and re-saved as a JPEG image. We present experimental results to show the efficacy of our method.

197 citations


Journal ArticleDOI
Xiaojun Qi, Ji Qi
TL;DR: This paper presents a content-based digital image-watermarking scheme, which is robust against a variety of common image-processing attacks and geometric distortions and yields a better performance as compared with some peer systems in the literature.

107 citations


Journal ArticleDOI
01 Feb 2007
TL;DR: A novel evolutionary method called evolutionary group algorithm (EGA) is proposed for complicated time-consuming optimization problems such as finding optimal parameters of content-based image indexing algorithms.
Abstract: Optimization of content-based image indexing and retrieval (CBIR) algorithms is a complicated and time-consuming task since each time a parameter of the indexing algorithm is changed, all images in the database must be indexed again. In this paper, a novel evolutionary method called evolutionary group algorithm (EGA) is proposed for complicated time-consuming optimization problems such as finding optimal parameters of content-based image indexing algorithms. In the new evolutionary algorithm, the image database is partitioned into several smaller subsets, and each subset is used by an updating process as training patterns for each chromosome during evolution. This is in contrast to genetic algorithms that use the whole database as training patterns for evolution. Additionally, for each chromosome, a parameter called age is defined that reflects the progress of the updating process. Similarly, the genes of the proposed chromosomes are divided into two categories: evolutionary genes that participate in evolution and history genes that save previous states of the updating process. Furthermore, a new fitness function is defined which evaluates the fitness of the chromosomes of the current population with different ages in each generation. We used EGA to optimize the quantization thresholds of the wavelet-correlogram algorithm for CBIR. The optimal quantization thresholds computed by EGA significantly improved all the evaluation measures, including average precision, average weighted precision, average recall, and average rank, for the wavelet-correlogram method.

89 citations


Journal ArticleDOI
TL;DR: The experimental results show that the proposed method for embedding a color or a grayscale image in a true color image is a secure steganographic method that provides high hiding capacity and good image quality.

80 citations


Journal ArticleDOI
TL;DR: This paper shows how to design and implement a novel efficient space-frequency quantization (SFQ) compression algorithm using directionlets and shows that the new compression method outperforms the standard SFQ in a rate-distortion sense, both in terms of mean-square error and visual quality, especially in the low-rate compression regime.
Abstract: The standard separable 2-D wavelet transform (WT) has recently achieved a great success in image processing because it provides a sparse representation of smooth images. However, it fails to efficiently capture 1-D discontinuities, like edges or contours. These features, being elongated and characterized by geometrical regularity along different directions, intersect and generate many large magnitude wavelet coefficients. Since contours are very important elements in the visual perception of images, to provide a good visual quality of compressed images, it is fundamental to preserve good reconstruction of these directional features. In our previous work, we proposed a construction of critically sampled perfect reconstruction transforms with directional vanishing moments imposed in the corresponding basis functions along different directions, called directionlets. In this paper, we show how to design and implement a novel efficient space-frequency quantization (SFQ) compression algorithm using directionlets. Our new compression method outperforms the standard SFQ in a rate-distortion sense, both in terms of mean-square error and visual quality, especially in the low-rate compression regime. We also show that our compression method does not increase the order of computational complexity as compared to the standard SFQ algorithm.

72 citations


Journal ArticleDOI
Sha Wang, Dong Zheng, Jiying Zhao, Wa James Tam, Filippo Speranza
TL;DR: A digital watermarking-based image quality evaluation method that can accurately estimate image quality in terms of the classical objective metrics, such as peak signal-to-noise ratio, weighted PSNR, and Watson just noticeable difference (JND), without the need for the original image.
Abstract: As a practical and novel application of watermarking, this paper presents a digital watermarking-based image quality evaluation method that can accurately estimate image quality in terms of the classical objective metrics, such as peak signal-to-noise ratio (PSNR), weighted PSNR (wPSNR), and Watson just noticeable difference (JND), without the need for the original image. In this method, a watermark is embedded into the discrete wavelet transform (DWT) domain of the original image using a quantization method. Considering that different images have different frequency distributions, the vulnerability of the watermark for the image is adjusted using automatic control. After the auto-adjustment, the degradation of the extracted watermark can be used to estimate image quality in terms of the classical metrics with high accuracy. We calculated PSNR, wPSNR, and Watson JND quality measures for JPEG compressed images and compared the values with those estimated using the watermarking-based approach. We found the calculated and estimated measures of quality to be highly correlated, suggesting that the proposed method can provide accurate measures of image quality under JPEG compression. Furthermore, given the similarity between JPEG and MPEG-2, this achievement has paved the road for the practical and accurate quality evaluation of MPEG-2 compressed video. We believe that this achievement is of great importance to video broadcasting.

66 citations


Book ChapterDOI
TL;DR: The chapter presents several examples for understanding the imaging process as a transformation from sample to image and the limits and considerations of quantitative analysis and the concept of digitally correcting the images.
Abstract: Publisher Summary The chapter discusses quantitative analysis of digital microscope images and presents several exercises as examples to explain the concepts. The chapter also presents the basic concepts in quantitative analysis for imaging, but these concepts rest on a well-established foundation of signal theory and quantitative data analysis. The chapter presents several examples for understanding the imaging process as a transformation from sample to image and the limits and considerations of quantitative analysis. The chapter introduces the concept of digitally correcting the images and also focuses on some of the more critical types of data transformation and some of the frequently encountered issues in quantization. Image processing represents a form of data processing. There are many examples of data processing, such as fitting the data to a theoretical curve. In all these cases, it is critical that care is taken during all steps of transformation, processing, and quantization.

63 citations


Journal ArticleDOI
TL;DR: A bit-rate estimation function and a distortion measure are proposed by modeling the transform coefficients with spatial-domain variance and can reduce the computation complexity for the R-D optimized mode-decision by using the new cost function evolving from the simplified transform-domain R-D model.
Abstract: The rate-distortion (R-D) optimization technique plays an important role in optimizing video encoders. Modeling the rate and distortion functions accurately in acceptable complexity helps to make the optimization more practical. In this paper, we propose a bit-rate estimation function and a distortion measure by modeling the transform coefficients with spatial-domain variance. Furthermore, with quantization-based thresholding to determine the number, the absolute sum, and the squared sum of transform coefficients, the simplified transform-domain R-D measurement is introduced. The proposed algorithms can reduce the computation complexity for the R-D optimized mode-decision by using the new cost function evolving from the simplified transform-domain R-D model. Based on the proposed estimations, a rate-control scheme in the macroblock layer is also proposed to improve the coding efficiency

Journal ArticleDOI
TL;DR: An adaptive quantization scheme based on a boundary adaptation procedure followed by online quadrant tree decomposition is proposed, enabling a low-power yet robust and compact image compression processor integrated with a digital CMOS image sensor.
Abstract: The recent emergence of new applications in the area of wireless video sensor networks and ultra-low-power biomedical applications (such as the wireless camera pill) has created new design challenges and frontiers requiring extensive research work. In such applications, it is often required to capture a large amount of data and process it in real time while the hardware is constrained to take very little physical space and to consume very little power. This is only possible using custom single-chip solutions integrating an image sensor and hardware-friendly image compression algorithms. This paper proposes an adaptive quantization scheme based on a boundary adaptation procedure followed by online quadrant tree decomposition, enabling a low-power yet robust and compact image compression processor integrated with a digital CMOS image sensor. The image sensor chip has been implemented using 0.35-μm CMOS technology and operates at 3.3 V. Simulation and experimental results show compression figures corresponding to 0.6-0.8 bit per pixel, while maintaining reasonable peak signal-to-noise ratio levels and very low operating power consumption. In addition, the proposed compression processor is expected to benefit significantly from higher resolution, megapixel CMOS imaging technology.
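
The quadrant tree decomposition at the heart of the scheme can be pictured with the offline sketch below: blocks that pass a uniformity test are kept whole, others are split into four quadrants recursively. The threshold, minimum block size, and uniformity test are illustrative guesses; the chip performs this online in hardware after its boundary-adaptation step.

```python
# Hedged sketch: quadrant-tree (quadtree) decomposition driven by a simple
# uniformity test. `img` is a square 2-D numpy array whose side is a power of two.
import numpy as np

def quadtree(img, x=0, y=0, size=None, thresh=10.0, min_size=4):
    """Return a list of (x, y, size, mean) leaves covering the image."""
    if size is None:
        size = img.shape[0]
    block = img[y:y+size, x:x+size]
    # Keep the block whole if it is nearly uniform or already minimal.
    if size <= min_size or block.max() - block.min() <= thresh:
        return [(x, y, size, float(block.mean()))]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree(img, x + dx, y + dy, half, thresh, min_size)
    return leaves

# Smooth regions collapse into a few large leaves (cheap to code), while
# detailed regions are split down to small blocks that keep more bits.
```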

Patent
09 Apr 2007
TL;DR: In this article, a video encoder identifies portions of a video picture that contain DC shift blocks and adjusts quantization (e.g., by selecting a smaller quantization step size) to reduce contouring artifacts when the picture is reconstructed.
Abstract: A video encoder identifies one or more portions of a video picture that contain DC shift blocks and adjusts quantization (e.g., by selecting a smaller quantization step size) to reduce contouring artifacts when the picture is reconstructed. The encoder can identify the portion(s) of the picture that contain DC shift blocks by identifying one or more gradient slope regions in the picture and analyzing quantization effects on DC coefficients in the gradient slope region(s). The encoder can select a coarser quantization step size for a high-texture picture portion.

Journal ArticleDOI
TL;DR: A fragile watermarking scheme in the wavelet transform domain that is sensitive to all kinds of manipulations and has the ability to localize the tampered regions and put up resistance to the so-called vector quantization attack, Holliman-Memon attack, collage attack, and transplantation attack is proposed.
Abstract: We propose a fragile watermarking scheme in the wavelet transform domain that is sensitive to all kinds of manipulations and has the ability to localize the tampered regions. To achieve high transparency (i.e., low embedding distortion) while providing protection to all coefficients, the embedder involves all the coefficients within a hierarchical neighborhood of each sparsely selected watermarkable coefficient during the watermark embedding process. The way the nonwatermarkable coefficients are involved in the embedding process is content-dependent and nondeterministic, which allows the proposed scheme to put up resistance to the so-called vector quantization attack, Holliman-Memon attack, collage attack, and transplantation attack.

Journal ArticleDOI
TL;DR: A fast palette design scheme based on the K-means algorithm for color image quantization that requires a lower computational cost than comparable schemes while keeping approximately the same image quality.
Abstract: We propose a fast palette design scheme based on the K-means algorithm for color image quantization. To accelerate the K-means algorithm for palette design, the use of stable flags for palette entries is introduced. If the squared Euclidean distances incurred by the same palette entry in two successive rounds are quite similar, the palette entry is classified as stable. The clustering process does not work on these stable palette entries, cutting down the required computational cost. The experimental results reveal that the proposed algorithm requires a lower computational cost than the comparative schemes while keeping approximately the same image quality.
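
A minimal sketch of the stable-flag idea, under assumed tolerances: a palette entry whose within-cluster squared error barely changes between two rounds is frozen, and pixels already assigned to frozen entries are not revisited in later rounds. The stability test and tolerance below are guesses, not the authors' settings.

```python
# Hedged sketch: K-means palette design with "stable" palette entries.
import numpy as np

def kmeans_palette(pixels, k=16, rounds=20, tol=1e-3, seed=0):
    """pixels: (n, 3) float RGB array. Returns (palette (k, 3), labels (n,))."""
    rng = np.random.default_rng(seed)
    palette = pixels[rng.choice(len(pixels), k, replace=False)].copy()
    labels = np.zeros(len(pixels), dtype=int)
    prev_sse = np.full(k, np.inf)
    stable = np.zeros(k, dtype=bool)
    for _ in range(rounds):
        active = ~stable[labels]                 # only revisit pixels of unstable entries
        if active.any():
            d = ((pixels[active][:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
            labels[active] = d.argmin(axis=1)
        sse = np.zeros(k)
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                if not stable[j]:                # frozen entries keep their colour
                    palette[j] = members.mean(axis=0)
                sse[j] = ((members - palette[j]) ** 2).sum()
        # Freeze entries whose within-cluster error barely changed this round.
        stable |= np.abs(prev_sse - sse) <= tol * np.maximum(sse, 1.0)
        prev_sse = sse
        if stable.all():
            break
    return palette, labels
```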

Patent
20 Mar 2007
TL;DR: In this paper, a finer quantization control according to the property of an image within a macro-block is performed by selecting fine and coarse quantization parameters respectively for corresponding sub-blocks if a plurality of images having different properties coexist within the macroblock.
Abstract: To allow a finer quantization control according to the property of an image within a macroblock, quantization parameter values are allowed to be changed in units of sub-blocks equal to or smaller than the macroblock in a similar manner as in motion compensation and orthogonal transform processes. A finer-tuned quantization control is performed, for example, by selecting fine and coarse quantization parameters respectively for corresponding sub-blocks if a plurality of images having different properties coexist within the macroblock.

Journal ArticleDOI
TL;DR: It is found that the image contrast and the average gray level play important roles in image compression and quality evaluation and in the future, the image gray level and contrast effect should be considered in developing new objective metrics.
Abstract: Previous studies have shown that Joint Photographic Experts Group (JPEG) 2000 compression is better than JPEG at higher compression ratios. However, some findings revealed that this is not valid at lower levels. In this study, the quality of compressed medical images in this compression-ratio range (∼20), including computed radiography, computed tomography head and body, mammographic, and magnetic resonance T1 and T2 images, was estimated using both a pixel-based metric (peak signal to noise ratio) and two 8 × 8 window-based metrics [Q index and Moran peak ratio (MPR)]. To diminish the effects of blocking artifacts from JPEG, jump windows were used in both window-based metrics. Comparing the image quality indices between jump and sliding windows, the results showed that blocking artifacts were produced by JPEG compression, even at low compression ratios. However, even after the blocking artifacts were omitted in JPEG compressed images, JPEG 2000 outperformed JPEG at low compression levels. We found in this study that the image contrast and the average gray level play important roles in image compression and quality evaluation. There were drawbacks in all the metrics we used. In the future, the image gray level and contrast effect should be considered in developing new objective metrics.

Journal ArticleDOI
TL;DR: A hybrid scheme using both discrete wavelet transform (DWT) and discrete cosine Transform (DCT) for medical image compression is presented, achieving better compression than obtained from either technique alone.
Abstract: With the development of communication technology, the applications and services of health telematics are growing. In view of the increasingly important role played by digital medical imaging in modern health care, it is necessary for large amounts of image data to be economically stored and/or transmitted. There is a need for the development of image compression systems that combine a high compression ratio with preservation of critical information. During the past decade, wavelets have been a significant development in the field of image compression. In this paper, a hybrid scheme using both the discrete wavelet transform (DWT) and discrete cosine transform (DCT) for medical image compression is presented. DCT is applied to the DWT details, which generally have zero mean and small variance, thereby achieving better compression than obtained from either technique alone. The results of the hybrid scheme are compared with JPEG and the set partitioning in hierarchical trees (SPIHT) coder and it is found that the performa...
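
A minimal sketch of the hybrid idea, assuming PyWavelets and SciPy are available: one DWT level, a DCT on each detail subband, and uniform quantization. The wavelet and step sizes are illustrative choices, not the paper's settings, and entropy coding is omitted.

```python
# Hedged sketch: DWT on the image, DCT on the detail subbands, uniform quantization.
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def hybrid_encode(img, wavelet='bior4.4', q_detail=8.0, q_approx=2.0):
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)
    approx_q = np.round(cA / q_approx)                       # approximation kept in DWT domain
    details_q = [np.round(dctn(d, norm='ortho') / q_detail)  # details compacted further by DCT
                 for d in (cH, cV, cD)]
    return approx_q, details_q

def hybrid_decode(approx_q, details_q, wavelet='bior4.4', q_detail=8.0, q_approx=2.0):
    cA = approx_q * q_approx
    cH, cV, cD = [idctn(d * q_detail, norm='ortho') for d in details_q]
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)
```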

Journal ArticleDOI
TL;DR: Noise shaping reduces power dissipation below that of a conventional digital imager while the need for a peripheral DSP is eliminated.
Abstract: Image compression algorithms employ computationally expensive spatial convolutional transforms. The CMOS image sensor performs spatially compressing image quantization on the focal plane yielding digital output at a rate proportional to the mere information rate of the video. A bank of column-parallel first-order incremental ΔΣ-modulated analog-to-digital converters (ADCs) performs column-wise distributed focal-plane oversampling of up to eight adjacent pixels and concurrent weighted average quantization. Number of samples per pixel and switched-capacitor sampling sequence order set the amplitude and sign of the pixel coefficient, respectively. A simple digital delay and adder loop performs spatial accumulation over up to eight adjacent ADC outputs during readout. This amounts to computing a two-dimensional block matrix transform with up to 8×8-pixel programmable kernel in parallel for all columns. Noise shaping reduces power dissipation below that of a conventional digital imager while the need for a peripheral DSP is eliminated. A 128×128 active pixel array integrated with a bank of 128 ΔΣ-modulated ADCs was fabricated in a 0.35-μm CMOS technology. The 3.1 mm × 1.9 mm prototype captures 8-bit digital video at 30 frames/s and yields 4 GMACS projected computational throughput when scaled to HDTV 1080i resolution in discrete cosine transform (DCT) compression.

Patent
Cheng Chang, Chih-Lung Lin
05 Jun 2007
TL;DR: In this article, a video encoder adaptively selects a delta QP for a B-picture based on spatial complexity, temporal complexity, whether differential quantization is active, whether the B- picture is available as a reference picture, or some combination or subset of these or other factors.
Abstract: Techniques and tools for adaptive selection of picture quantization parameters (“QPs”) for predicted pictures are described. For example, a video encoder adaptively selects a delta QP for a B-picture based on spatial complexity, temporal complexity, whether differential quantization is active, whether the B-picture is available as a reference picture, or some combination or subset of these or other factors. The delta QP can then be used to adjust the picture QP for the B-picture (e.g., to reduce bit rate for the B-picture without appreciably reducing the perceived quality of a video sequence).

Journal ArticleDOI
TL;DR: This work presents an optical adaptation of the JPEG compression technique for binary, gray-level, and color images, using the principle of coherent optics to improve the quality of the compressed image while minimizing the time required for compression.

Journal ArticleDOI
TL;DR: The improved rate-control scheme has significantly increased the average peak signal-to-noise ratio by up to 1.53 dB, reduced the variation of buffer level, improved the perceptual quality of the reconstructed video, and reduced the computation complexity.
Abstract: This paper points out some defects in the techniques used in H.264 rate control and presents several new algorithms to improve them. The improved algorithm has the following main features: 1) the bits allocated to each P-frame are proportional to the local motion in it, i.e., more bits are allocated to a frame if the local motion in it is stronger; 2) the quantization parameter for an I-frame is chosen based on a new bit allocation scheme for I-frames; 3) the quantization parameter calculation is based on a simple encoding complexity prediction scheme, which is more robust and of lower complexity than the quadratic model used by H.264 in low bit rate video coding. Experimental results and analysis show that the improved rate-control scheme has significantly increased the average peak signal-to-noise ratio by up to 1.53 dB, reduced the variation of buffer level, improved the perceptual quality of the reconstructed video, and reduced the computation complexity.
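
Feature 1) can be pictured with the small sketch below, which distributes a GOP bit budget across P-frames in proportion to a simple motion measure (mean absolute frame difference). The real rate controller uses its own motion measure and complexity models; this only illustrates the proportional allocation.

```python
# Hedged sketch: allocate more bits to P-frames with stronger local motion.
import numpy as np

def allocate_pframe_bits(frames, gop_budget_bits):
    """frames: list of 2-D luma arrays for one GOP (I-frame first)."""
    motion = np.array([np.abs(frames[i].astype(float) - frames[i-1].astype(float)).mean()
                       for i in range(1, len(frames))])
    if motion.sum() > 0:
        weights = motion / motion.sum()
    else:
        weights = np.full(len(motion), 1.0 / max(len(motion), 1))
    return (weights * gop_budget_bits).round().astype(int)   # one budget per P-frame
```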

Journal ArticleDOI
TL;DR: The proposed FSSD algorithm is based on the theoretical equivalence of the SSDs in the spatial and transform domains and determines the distortion in the integer cosine transform domain using an iterative table-lookup quantization process, avoiding the inverse quantization/transform and pixel reconstruction processes with nearly no rate-distortion performance degradation.
Abstract: In H.264/AVC, rate-distortion optimization for mode decision plays a significant role in achieving its outstanding performance in terms of both compression efficiency and video quality. However, this mode decision process also introduces extremely high complexity into the encoding process, especially the computation of the sum of squared differences (SSD) between the original and reconstructed image blocks. In this paper, fast SSD (FSSD) algorithms are proposed to reduce the complexity of the rate-distortion cost function implementation. The proposed FSSD algorithm is based on the theoretical equivalence of the SSDs in the spatial and transform domains and determines the distortion in the integer cosine transform domain using an iterative table-lookup quantization process. This approach avoids the inverse quantization/transform and pixel reconstruction processes with nearly no rate-distortion performance degradation. In addition, the FSSD can also be used with efficient bit rate estimation algorithms to further reduce the cost function complexity. Experimental results show that the new FSSD can save up to 15% of total encoding time with less than 0.1% coding performance degradation, and up to 30% with negligible degradation when combined with a conventional bit rate estimation algorithm.
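
The equivalence the FSSD rests on is Parseval's relation: for an orthonormal transform, the SSD between two blocks is identical in the spatial and transform domains. The snippet below demonstrates this with the orthonormal DCT; H.264's integer transform is only a scaled version of such a transform, which is why the actual algorithm folds the scaling into its lookup tables.

```python
# Hedged sketch: SSD is preserved by an orthonormal transform (Parseval).
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
orig = rng.integers(0, 256, (4, 4)).astype(float)
recon = orig + rng.normal(0, 2, (4, 4))          # a slightly distorted block

ssd_spatial = ((orig - recon) ** 2).sum()
ssd_transform = ((dctn(orig, norm='ortho') - dctn(recon, norm='ortho')) ** 2).sum()
assert np.isclose(ssd_spatial, ssd_transform)    # identical up to round-off
```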

Journal ArticleDOI
Yair Wiseman
TL;DR: A replacement of the traditional Huffman compression used by JPEG by the Burrows-Wheeler compression will yield a better compression ratio, and if the image is synthetic, even a poor quality image can be compressed better.
Abstract: Recently, the use of the Burrows-Wheeler method for data compression has expanded. A method of enhancing the compression efficiency of the common JPEG standard is presented in this paper, exploiting the Burrows-Wheeler compression technique. The paper suggests replacing the traditional Huffman compression used by JPEG with Burrows-Wheeler compression. For high quality images, this replacement yields a better compression ratio. If the image is synthetic, even a poor quality image can be compressed better.

Journal ArticleDOI
TL;DR: This paper proposes a mesh streaming method based on the JPEG 2000 standard and integrates it into an existing multimedia streaming server, so that the method can directly benefit from current image and video streaming technologies.
Abstract: For PCs and even mobile devices, video and image streaming technologies, such as H.264 and JPEG/JPEG 2000, are already mature. However, streaming technology for 3D models, or so-called mesh data, is still far from practical use. Therefore, in this paper, we propose a mesh streaming method based on the JPEG 2000 standard and integrate it into an existing multimedia streaming server, so that our mesh streaming method can directly benefit from current image and video streaming technologies. In this method, the mesh data of a 3D model is first converted into a JPEG 2000 image, and then, based on the JPEG 2000 streaming technique, the mesh data can be transmitted over the Internet as a mesh stream. Furthermore, we also extend this mesh streaming method to deforming meshes, as the extension from a JPEG 2000 image to a Motion JPEG 2000 video, so that our method can transmit not only a static 3D model but also a 3D animation model. To increase the usability of our method, the mesh stream can also be inserted into an X3D scene as an extension node of X3D. Moreover, since this method is based on the JPEG 2000 standard, our system is well suited to be integrated into any existing client-server or peer-to-peer multimedia streaming system.

Proceedings ArticleDOI
28 May 2007
TL;DR: Synthetic speckle images generated by this method are visually and theoretically very close to real ultrasonograms.
Abstract: This paper introduces a novel method to simulate B-mode medical ultrasound speckle in synthetic images. Our approach takes into account both the ultrasound image formation model and the speckle formation model. The algorithm first modifies the geometry of an ideal noiseless image to match that of a sectoral B-mode ultrasonogram, by subsampling a grid of pixels to simulate the acquisition and quantization steps of image formation. Then, speckle is added by simulating a random walk in the plane of the complex amplitude, according to the Burckhardt speckle formation model. We finally interpolate the noisy subsampled pixels in order to fill the space introduced by the sampling step and recover a complete image, as would a real ultrasonograph. Synthetic speckle images generated by this method are visually and theoretically very close to real ultrasonograms.
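
The speckle step can be sketched as a per-pixel random phasor sum, in the spirit of the Burckhardt model the paper cites; the geometry resampling and sectoral scan conversion are omitted here, and the number of scatterers is an arbitrary choice.

```python
# Hedged sketch: speckle as a random walk in the complex-amplitude plane.
import numpy as np

def add_speckle(ideal, n_scatterers=20, seed=0):
    """ideal: noiseless 2-D image with values in [0, 1]. Returns an envelope image."""
    rng = np.random.default_rng(seed)
    h, w = ideal.shape
    # Each phasor has a uniformly random phase; its amplitude is tied to the
    # local (noiseless) reflectivity so bright regions stay brighter on average.
    amp = np.sqrt(np.maximum(ideal, 0.0) / n_scatterers)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_scatterers, h, w))
    resultant = (amp[None, :, :] * np.exp(1j * phases)).sum(axis=0)
    return np.abs(resultant)        # fully developed speckle envelope
```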

Journal ArticleDOI
TL;DR: It is shown that JPEG-based PQ data hiding distorts linear dependencies of rows/columns of pixel values, and the proposed features can be exploited within a simple classifier for the steganalysis of PQ.
Abstract: Perturbed quantization (PQ) data hiding is almost undetectable with current steganalysis methods. We briefly describe PQ and propose singular value decomposition (SVD)-based features for the steganalysis of JPEG-based PQ data hiding in images. We show that JPEG-based PQ data hiding distorts linear dependencies of rows/columns of pixel values, and the proposed features can be exploited within a simple classifier for the steganalysis of PQ. The proposed steganalyzer detects PQ embedding on relatively smooth stego images with 70% detection accuracy on average for different embedding rates.
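
A hedged sketch of SVD-based features in this spirit: block-wise singular-value spectra, whose small values reflect how close rows/columns are to being linearly dependent. The block size and normalization are guesses, not the paper's feature definition.

```python
# Hedged sketch: block-wise singular values as steganalysis features.
import numpy as np

def svd_features(gray, block=32):
    """Return the average normalised singular-value spectrum over image blocks."""
    h, w = gray.shape[0] // block * block, gray.shape[1] // block * block
    spectra = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            s = np.linalg.svd(gray[i:i+block, j:j+block].astype(float), compute_uv=False)
            spectra.append(s / (s.sum() + 1e-12))    # scale-invariant spectrum
    return np.mean(spectra, axis=0)                  # one feature vector per image

# Feature vectors from cover and stego training images can then be fed to any
# simple classifier, e.g. a linear SVM.
```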

Proceedings ArticleDOI
15 Apr 2007
TL;DR: This work shows how a perceptual model that scales linearly with amplitude scaling can be used to provide robustness to amplitude scaling, reduce the perceptual distortion at the embedder, and significantly improve the robustness to re-quantization.
Abstract: Spread transform dither modulation (STDM) is a form of quantization index modulation (QIM) that is more robust to re-quantization. However, the robustness of STDM to JPEG compression is still very poor and it remains very sensitive to amplitude scaling. Here, we show how a perceptual model that scales linearly with amplitude scaling can be used to (i) provide robustness to amplitude scaling, (ii) reduce the perceptual distortion at the embedder and (iii) significantly improve the robustness to re-quantization.
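
For reference, plain STDM embedding and detection of a single bit look roughly like the sketch below; the paper's contribution, a perceptual model that makes the quantization step scale with the signal amplitude, is not reproduced here (the step `delta` is simply fixed).

```python
# Hedged sketch: plain spread-transform dither modulation (STDM) for one bit.
import numpy as np

def stdm_embed(x, bit, u, delta):
    """x: host coefficient vector; u: spread vector; bit: 0 or 1."""
    u = u / np.linalg.norm(u)
    p = x @ u                                      # spread-transform projection
    dither = (-delta / 4.0, delta / 4.0)[bit]      # dither lattice offset per bit
    p_q = delta * np.round((p - dither) / delta) + dither
    return x + (p_q - p) * u                       # move only along the spread direction

def stdm_detect(y, u, delta):
    u = u / np.linalg.norm(u)
    p = y @ u
    # Decide which dithered lattice the projection is closer to.
    d0 = np.abs(p - (delta * np.round((p + delta / 4) / delta) - delta / 4))
    d1 = np.abs(p - (delta * np.round((p - delta / 4) / delta) + delta / 4))
    return int(d1 < d0)
```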

Journal ArticleDOI
TL;DR: A comparison of subjective image quality between JPEG and JPEG 2000 to establish whether JPEG 2000 does indeed demonstrate significant improvements in visual quality; a particular focus of this work is the inherent scene dependency of the two algorithms and their influence on subjective image quality results.
Abstract: The original JPEG compression standard is efficient at low to medium levels of compression with relatively low levels of loss in visual image quality and has found widespread use in the imaging industry. Excessive compression using JPEG, however, results in well-known artifacts such as "blocking" and "ringing," and the variation in image quality as a result of differing scene content is well documented. JPEG 2000 has been developed to improve on JPEG in terms of functionality and image quality at lower bit rates. One of the more fundamental changes is the use of a discrete wavelet transform instead of a discrete cosine transform, which provides several advantages both in terms of the way in which the image is encoded and overall image quality. This study involves a comparison of subjective image quality between JPEG and JPEG 2000 to establish whether JPEG 2000 does indeed demonstrate significant improvements in visual quality. A particular focus of this work is the inherent scene dependency of the two algorithms and their influence on subjective image quality results. Further work on the characterization of scene content is carried out in a connected study [S. Triantaphillidou, E. Allen, and R. E. Jacobson, "Image quality comparison between JPEG and JPEG2000. II. Scene dependency, scene analysis, and classification"].

Proceedings ArticleDOI
25 Apr 2007
TL;DR: A quantitative comparison between the energy costs associated with direct transmission of uncompressed images and sensor platform-based JPEG compression followed by transmission of the compressed image data is presented.
Abstract: Energy-efficient image communication is one of the most important goals for a large class of current and future sensor network applications. This paper presents a quantitative comparison between the energy costs associated with 1) direct transmission of uncompressed images and 2) sensor platform-based JPEG compression followed by transmission of the compressed image data. JPEG compression computations are mapped onto various resource-constrained sensor platforms using a design environment that allows computation using the minimum integer and fractional bit-widths needed in view of other approximations inherent in the compression process and choice of image quality parameters. Detailed experimental results examining the tradeoffs in processor resources, processing/transmission time, bandwidth utilization, image quality, and overall energy consumption are presented.