
Showing papers on "Quantization (image processing)" published in 2010


Book ChapterDOI
05 Sep 2010
TL;DR: It is shown that the new QC members outperform state-of-the-art distances for these tasks while having a short running time, and the experimental results show that both the cross-bin property and the normalization are important.
Abstract: We present a new histogram distance family, the Quadratic-Chi (QC). QC members are Quadratic-Form distances with a cross-bin χ2-like normalization. The cross-bin χ2-like normalization reduces the effect of large bins having undue influence. Normalization was shown to be helpful in many cases, where the χ2 histogram distance outperformed the L2 norm. However, χ2 is sensitive to quantization effects, such as those caused by light changes, shape deformations, etc. The Quadratic-Form part of QC members takes care of cross-bin relationships (e.g., red and orange), alleviating the quantization problem. We present two new cross-bin histogram distance properties, Similarity-Matrix-Quantization-Invariance and Sparseness-Invariance, and show that QC distances have these properties. We also show experimentally that they boost performance. The computation time complexity of QC distances is linear in the number of non-zero entries in the bin-similarity matrix and histograms, and it can easily be parallelized. We present results for image retrieval using the Scale Invariant Feature Transform (SIFT) and color image descriptors. In addition, we present results for shape classification using Shape Context (SC) and Inner Distance Shape Context (IDSC). We show that the new QC members outperform state-of-the-art distances for these tasks, while having a short running time. The experimental results show that both the cross-bin property and the normalization are important.
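For readers who want to experiment with the idea, the sketch below implements a Quadratic-Chi-style distance in NumPy. The bin-similarity matrix, the normalization exponent m = 0.5, and the zero-division convention are illustrative assumptions, not necessarily the authors' exact settings.

```python
import numpy as np

def quadratic_chi(P, Q, A, m=0.5):
    """Cross-bin chi^2-like Quadratic-Form distance (QC-family sketch).

    P, Q : 1-D histograms of equal length, A : bin-similarity matrix,
    m    : normalization exponent in [0, 1).
    """
    Z = (P + Q) @ A                 # per-bin normalization factors
    Z = np.where(Z == 0, 1.0, Z)    # treat 0/0 terms as 0 by neutralizing the divisor
    D = (P - Q) / (Z ** m)          # normalized differences
    return np.sqrt(max(D @ A @ D, 0.0))

# Toy example: three bins, where bins 0 and 1 are considered similar.
A = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
P = np.array([1.0, 0.0, 0.0])
Q = np.array([0.0, 1.0, 0.0])
print(quadratic_chi(P, Q, A))       # smaller than with A = identity
```

With the identity as similarity matrix the same code reduces to an ordinary bin-to-bin χ2-like distance, which is why the toy example reports a smaller value once the off-diagonal similarity between the two "close" bins is present.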

273 citations


Journal ArticleDOI
TL;DR: The new JPEG error analysis method can reliably detect JPEG image blocks which are as small as 8 × 8 pixels and compressed with quality factors as high as 98, which is important for analyzing and locating small tampered regions within a composite image.
Abstract: JPEG is one of the most extensively used image formats. Understanding the inherent characteristics of JPEG may play a useful role in digital image forensics. In this paper, we introduce JPEG error analysis to the study of image forensics. The main errors of JPEG include quantization, rounding, and truncation errors. Through theoretically analyzing the effects of these errors on single and double JPEG compression, we have developed three novel schemes for image forensics including identifying whether a bitmap image has previously been JPEG compressed, estimating the quantization steps of a JPEG image, and detecting the quantization table of a JPEG image. Extensive experimental results show that our new methods significantly outperform existing techniques especially for the images of small sizes. We also show that the new method can reliably detect JPEG image blocks which are as small as 8 × 8 pixels and compressed with quality factors as high as 98. This performance is important for analyzing and locating small tampered regions within a composite image.
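The paper derives its estimators from a formal error analysis; as a much simpler, hedged illustration of the underlying idea, the sketch below guesses the quantization step of one DCT frequency of a decompressed image by searching for the largest candidate step whose multiples still explain the observed coefficients (the frequency index, search range, and tolerance are arbitrary choices, not the authors' method).

```python
import numpy as np
from scipy.fft import dctn

def estimate_q_step(gray, u=2, v=3, max_q=64):
    """Heuristic: largest step q whose multiples still fit the (u, v) DCT
    coefficients recomputed from a decompressed image."""
    h, w = (s - s % 8 for s in gray.shape)
    coeffs = []
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            block = dctn(gray[i:i+8, j:j+8] - 128.0, norm='ortho')
            coeffs.append(block[u, v])
    c = np.asarray(coeffs)
    noise = np.mean(np.abs(c - np.round(c)))        # error floor at step 1
    for q in range(max_q, 1, -1):
        err = np.mean(np.abs(c - q * np.round(c / q)))
        if err < 3.0 * noise + 0.5:                 # multiples of q explain the data
            return q
    return 1
```

Real estimators instead exploit the periodic structure of the coefficient histogram, which is more robust than this distance-to-multiples heuristic.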

260 citations


Journal ArticleDOI
TL;DR: Simulation results demonstrate that the embedded watermarks can be almost fully extracted from images compressed with very high compression ratio.

207 citations


Journal ArticleDOI
TL;DR: A foveation model as well as a foveated JND (FJND) model in which the spatial and temporal JND models are enhanced to account for the relationship between visibility and eccentricity is described.
Abstract: Traditional video compression methods remove spatial and temporal redundancy based on the signal statistical correlation. However, to reach higher compression ratios without perceptually degrading the reconstructed signal, the properties of the human visual system (HVS) need to be better exploited. Research effort has been dedicated to modeling the spatial and temporal just-noticeable-distortion (JND) based on the sensitivity of the HVS to luminance contrast, and accounting for spatial and temporal masking effects. This paper describes a foveation model as well as a foveated JND (FJND) model in which the spatial and temporal JND models are enhanced to account for the relationship between visibility and eccentricity. Since the visual acuity decreases when the distance from the fovea increases, the visibility threshold increases with increased eccentricity. The proposed FJND model is then used for macroblock (MB) quantization adjustment in H.264/advanced video coding (AVC). For each MB, the quantization parameter is optimized based on its FJND information. The Lagrange multiplier in the rate-distortion optimization is adapted so that the MB noticeable distortion is minimized. The performance of the FJND model has been assessed with various comparisons and subjective visual tests. It has been shown that the proposed FJND model can increase the visual quality versus rate performance of the H.264/AVC video coding scheme.

194 citations


Journal ArticleDOI
TL;DR: A novel Chinese Remainder Theorem (CRT)-based technique for digital watermarking in the Discrete Cosine Transform (DCT) domain that is robust to several common attacks is proposed and compared with recently proposed Singular Value Decomposition (SVD)-based and spatial-domain CRT-based watermarking schemes.

187 citations


Journal ArticleDOI
TL;DR: This algorithm is based on the observation that in the process of recompressing a JPEG image with the same quantization matrix over and over again, the number of different JPEG coefficients will monotonically decrease in general.
Abstract: Detection of double joint photographic experts group (JPEG) compression is of great significance in the field of digital forensics. Some successful approaches have been presented for detecting double JPEG compression when the primary compression and the secondary compression have different quantization matrixes. However, when the primary compression and the secondary compression have the same quantization matrix, no detection method has been reported yet. In this paper, we present a method which can detect double JPEG compression with the same quantization matrix. Our algorithm is based on the observation that, in the process of recompressing a JPEG image with the same quantization matrix over and over again, the number of different JPEG coefficients, i.e., the quantized discrete cosine transform coefficients, between two sequential versions will monotonically decrease in general. For example, the number of different JPEG coefficients between the singly and doubly compressed images is generally larger than the number of different JPEG coefficients between the corresponding doubly and triply compressed images. Via a novel random perturbation strategy implemented on the JPEG coefficients of the recompressed test image, we can find a "proper" random perturbation ratio. For different images, this universal "proper" ratio will generate a dynamically changing threshold, which can be utilized to discriminate between singly compressed and doubly compressed images. Furthermore, our method has the potential to detect triple JPEG compression, quadruple JPEG compression, and so on.
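A self-contained simulation of the observation is easy to write. The sketch below uses a toy blockwise DCT quantizer rather than a real JPEG codec, with a flat quantization table chosen only for illustration, and counts how many quantized coefficients change between successive recompressions.

```python
import numpy as np
from scipy.fft import dctn, idctn

Q = np.full((8, 8), 16.0)          # toy quantization table (assumption)

def jpeg_round_trip(img):
    """One quantize/dequantize/reconstruct cycle on 8x8 blocks; returns
    (reconstructed image, quantized DCT coefficients)."""
    out = np.empty_like(img, dtype=float)
    coeffs = np.empty_like(img, dtype=float)
    for i in range(0, img.shape[0], 8):
        for j in range(0, img.shape[1], 8):
            d = dctn(img[i:i+8, j:j+8] - 128.0, norm='ortho')
            q = np.round(d / Q)
            coeffs[i:i+8, j:j+8] = q
            out[i:i+8, j:j+8] = np.clip(np.round(idctn(q * Q, norm='ortho')) + 128, 0, 255)
    return out, coeffs

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)
prev_coeffs, x = None, img
for k in range(1, 5):
    x, c = jpeg_round_trip(x)
    if prev_coeffs is not None:
        print(f'differing coefficients between pass {k-1} and {k}:',
              int(np.sum(c != prev_coeffs)))   # tends to decrease with k
    prev_coeffs = c
```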

171 citations


Journal ArticleDOI
TL;DR: This work proposes an approach to perform lossy compression on a single node based on a differential pulse code modulation scheme with quantization of the differences between consecutive samples, and discusses how this approach outperforms LTC, a lossy compression algorithm purposely designed to be embedded in sensor nodes, in terms of compression rate and complexity.

115 citations


Proceedings ArticleDOI
14 Mar 2010
TL;DR: It is shown how the proper addition of noise to an image's discrete cosine transform coefficients can sufficiently remove quantization artifacts which act as indicators of JPEG compression while introducing an acceptable level of distortion.
Abstract: The widespread availability of photo editing software has made it easy to create visually convincing digital image forgeries. To address this problem, there has been much recent work in the field of digital image forensics. There has been little work, however, in the field of anti-forensics, which seeks to develop a set of techniques designed to fool current forensic methodologies. In this work, we present a technique for disguising an image's JPEG compression history. An image's JPEG compression history can be used to provide evidence of image manipulation, supply information about the camera used to generate an image, and identify forged regions within an image. We show how the proper addition of noise to an image's discrete cosine transform coefficients can sufficiently remove quantization artifacts which act as indicators of JPEG compression while introducing an acceptable level of distortion. Simulation results are provided to verify the efficacy of this anti-forensic technique.
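As a hedged illustration of the general idea (the paper shapes its dither from a model of the pre-compression coefficient distribution, whereas this toy uses plain uniform dither), the snippet below shows how adding noise inside each quantization bin removes the comb-shaped coefficient histogram that betrays JPEG compression.

```python
import numpy as np

rng = np.random.default_rng(1)

q = 10.0                                        # assumed quantization step
true = rng.laplace(scale=12.0, size=100_000)    # stand-in AC coefficient model
quantized = q * np.round(true / q)              # comb-shaped histogram (JPEG artifact)
dithered = quantized + rng.uniform(-q / 2, q / 2, size=quantized.shape)

# The quantized histogram has mass only at multiples of q; the dithered one
# no longer shows those gaps (the paper instead shapes the dither so the
# result matches a Laplacian model of the original coefficients).
for name, x in [('quantized', quantized), ('dithered', dithered)]:
    hist, _ = np.histogram(x, bins=np.arange(-40.5, 41.5, 1.0))
    print(name, 'empty unit-width bins:', int(np.sum(hist == 0)))
```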

114 citations


Journal ArticleDOI
TL;DR: An effective video watermarking method based on a pseudo-3-D discrete cosine transform (DCT) and quantization index modulation (QIM) is proposed that can survive filtering, compression, luminance-change, and noise attacks with good invisibility and robustness.
Abstract: The increasing popularity of the Internet means that digital multimedia can be transmitted more rapidly and easily, and people have become very aware of media ownership. Digital watermarking is an efficient and promising means to protect intellectual property. Given the attention paid to intellectual property in the information era, protecting personal ownership is extremely important and requires a practical scheme. In this paper, we propose an effective video watermarking method based on a pseudo-3-D discrete cosine transform (DCT) and quantization index modulation (QIM) that is robust against several attacks. The watermark is mainly inserted into the uncompressed domain by adjusting the correlation between DCT coefficients of the selected blocks, and the watermark extraction is blind. This approach consists of a pseudo-3-D DCT, watermark embedding, and extraction. The pseudo-3-D DCT, which applies the DCT transformation twice, is first utilized to calculate the embedding factor and to obtain the useful messages. Using QIM, we embed the watermark into the quantization regions of the successive raw frames in the uncompressed domain and record the relative information to create a secret embedding key. This secret embedding key is then applied during extraction. Experimental results demonstrate that the proposed method can survive filtering, compression, luminance change, and noise attacks with good invisibility and robustness.
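Quantization index modulation itself is a standard building block; the sketch below gives a generic scalar QIM embed/extract pair on stand-in coefficients. The step size, dither offsets, and attack model are illustrative, and the paper's pseudo-3-D DCT coefficient selection is not reproduced.

```python
import numpy as np

def qim_embed(x, bits, delta=8.0):
    """Embed one bit per coefficient using two interleaved quantizers."""
    d = np.where(np.asarray(bits) == 0, -delta / 4, delta / 4)
    return delta * np.round((x - d) / delta) + d

def qim_extract(y, delta=8.0):
    """Recover bits by choosing the closer of the two quantizer lattices."""
    err0 = np.abs(y - (delta * np.round((y + delta / 4) / delta) - delta / 4))
    err1 = np.abs(y - (delta * np.round((y - delta / 4) / delta) + delta / 4))
    return (err1 < err0).astype(int)

rng = np.random.default_rng(2)
coeffs = rng.normal(0, 20, size=16)             # stand-in DCT coefficients
bits = rng.integers(0, 2, size=16)
marked = qim_embed(coeffs, bits, delta=8.0)
noisy = marked + rng.normal(0, 0.5, size=16)    # mild attack
assert np.array_equal(qim_extract(noisy, delta=8.0), bits)
```

Larger delta tolerates stronger attacks at the cost of more embedding distortion, which is the basic trade-off any QIM-based watermark has to balance.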

103 citations


Journal ArticleDOI
TL;DR: This work shows that combining the LBP difference filters with the GMM-based density estimator outperforms the classical LBP approach and its codebook extensions and extends this texture descriptor to achieve full invariance to rotation.
Abstract: Texture classification generally requires the analysis of patterns in local pixel neighborhoods. Statistically, the underlying processes are comprehensively described by their joint probability density functions (jPDFs). Even for small neighborhoods, however, stable estimation of jPDFs by joint histograms (jHSTs) is often infeasible, since the number of entries in the jHST exceeds by far the number of pixels in a typical texture region. Moreover, evaluation of distance functions between jHSTs is often computationally prohibitive. Practically, the number of entries in a jHST is therefore reduced by considering only two-pixel patterns, leading to 2D-jHSTs known as cooccurrence matrices, or by quantization of the gray levels in local patterns to only two gray levels, yielding local binary patterns (LBPs). Both approaches result in a loss of information. We introduce here a framework for supervised texture classification which reduces or avoids this information loss. Local texture neighborhoods are first filtered by a filter bank. Without further quantization, the jPDF of the filter responses is then described parametrically by Gaussian mixture models (GMMs). We show that the parameters of the GMMs can be reliably estimated from small image regions. Moreover, distances between the thus modelled jPDFs of different texture patterns can be computed efficiently in closed form from their model parameters. We furthermore extend this texture descriptor to achieve full invariance to rotation. We evaluate the framework for different filter banks on the Brodatz texture set. We first show that combining the LBP difference filters with the GMM-based density estimator outperforms the classical LBP approach and its codebook extensions. When replacing these rather elementary difference filters with the wavelet frame transform (WFT), the performance of the framework on all 111 Brodatz textures exceeds that obtained more recently with the spin image and RIFT descriptors of Lazebnik et al.
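The paper uses its own closed-form distance between the fitted models; as a hedged stand-in, the snippet below fits GMMs to filter-response vectors with scikit-learn and evaluates the standard closed-form L2 distance between two Gaussian mixtures, which needs only Gaussian density evaluations at the component means.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import multivariate_normal

def fit_gmm(responses, k=3, seed=0):
    """responses: (n_pixels, n_filters) filter-response vectors of one region."""
    return GaussianMixture(n_components=k, covariance_type='full',
                           random_state=seed).fit(responses)

def gmm_cross_term(a, b):
    """Integral of f_a * f_b for two GMMs (closed form for Gaussians)."""
    total = 0.0
    for wi, mi, Si in zip(a.weights_, a.means_, a.covariances_):
        for wj, mj, Sj in zip(b.weights_, b.means_, b.covariances_):
            total += wi * wj * multivariate_normal.pdf(mi, mean=mj, cov=Si + Sj)
    return total

def gmm_l2_distance(a, b):
    """Closed-form L2 distance between the densities modelled by two GMMs."""
    d2 = gmm_cross_term(a, a) - 2 * gmm_cross_term(a, b) + gmm_cross_term(b, b)
    return np.sqrt(max(d2, 0.0))

rng = np.random.default_rng(3)
texture1 = rng.normal(0, 1, (2000, 4))          # stand-in filter responses
texture2 = rng.normal(0.5, 1.2, (2000, 4))
g1, g2 = fit_gmm(texture1), fit_gmm(texture2)
print(gmm_l2_distance(g1, g1), gmm_l2_distance(g1, g2))   # self-distance is ~0
```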

96 citations


Journal ArticleDOI
TL;DR: Experimental results show that the embedded watermark is invisible and robust to attacks, and the resilience of the watermarking algorithm against a series of nine different attacks is tested for different videos.

Proceedings ArticleDOI
03 Dec 2010
TL;DR: By using an anti-forensic operation capable of removing blocking artifacts from a previously JPEG compressed image, this paper is able to fool forensic methods designed to detect evidence of JPEG compression in decoded images, determine an image's origin, detect double JPEG compression, and identify cut-and-paste image forgeries.
Abstract: Recently, a number of digital image forensic techniques have been developed which are capable of identifying an image's origin, tracing its processing history, and detecting image forgeries. Though these techniques are capable of identifying standard image manipulations, they do not address the possibility that anti-forensic operations may be designed and used to hide evidence of image tampering. In this paper, we propose an anti-forensic operation capable of removing blocking artifacts from a previously JPEG compressed image. Furthermore, we show that by using this operation along with another anti-forensic operation which we recently proposed, we are able to fool forensic methods designed to detect evidence of JPEG compression in decoded images, determine an image's origin, detect double JPEG compression, and identify cut-and-paste image forgeries.

Journal ArticleDOI
TL;DR: A design technique for (near) subthreshold operation that achieves ultra low energy dissipation at throughputs of up to 100 MB/s suitable for digital consumer electronic applications and is largely applicable to designing other sound/graphic and streaming processors.
Abstract: We present a design technique for (near) subthreshold operation that achieves ultra-low energy dissipation at throughputs of up to 100 MB/s, suitable for digital consumer electronic applications. Our approach employs i) architecture-level parallelism to compensate for throughput degradation, ii) a configurable V T balancer to mitigate the V T mismatch of nMOS and pMOS transistors operating in sub/near threshold, and iii) a finger-structured parallel transistor that exploits V T mismatch to improve current drivability. Additionally, we describe the selection procedure of the standard cells and how they were modified for higher reliability in the subthreshold regime. All these concepts are demonstrated using SubJPEG, a 1.4 ×1.4 mm2 65 nm CMOS standard-V T multi-standard JPEG co-processor. Measurement results of the discrete cosine transform (DCT) and quantization processing engines, operating in the subthreshold regime, show an energy dissipation of only 0.75 pJ per cycle with a supply voltage of 0.4 V at 2.5 MHz. This leads to an 8.3× energy reduction when compared to using a 1.2 V nominal supply. In the near-threshold regime the energy dissipation is 1.0 pJ per cycle with a 0.45 V supply voltage at 4.5 MHz. The system throughput can meet the 15 fps 640 × 480 pixel VGA standard. Our methodology is largely applicable to designing other sound/graphic and streaming processors.

Book ChapterDOI
25 Apr 2010
TL;DR: A set of domain-specific lossless compression schemes that achieve over 40× compression of fragments, outperforming bzip2 by over 6×, is introduced, and the study of using 'lossy' quality values is initiated.
Abstract: With the advent of next generation sequencing technologies, the cost of sequencing whole genomes is poised to go below $1000 per human individual in a few years. As more and more genomes are sequenced, analysis methods are undergoing rapid development, making it tempting to store sequencing data for long periods of time so that the data can be re-analyzed with the latest techniques. The challenging open research problems, huge influx of data, and rapidly improving analysis techniques have created the need to store and transfer very large volumes of data. Compression can be achieved at many levels, including the trace level (compressing image data), the sequence level (compressing a genomic sequence), and the fragment level (compressing a set of short, redundant fragment reads, along with quality values on the base calls). We focus on fragment-level compression, which is the pressing need today. Our paper makes two contributions, implemented in a tool, SlimGene. First, we introduce a set of domain-specific lossless compression schemes that achieve over 40× compression of fragments, outperforming bzip2 by over 6×. Including quality values, we show a 5× compression using less running time than bzip2. Second, given the discrepancy between the compression factor obtained with and without quality values, we initiate the study of using 'lossy' quality values. Specifically, we show that a lossy quality value quantization results in 14× compression but has minimal impact on downstream applications like SNP calling that use the quality values. Discrepancies between SNP calls made on the lossy and lossless versions of the data are limited to low coverage areas where even the SNP calls made on the lossless version are marginal.
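As a hedged illustration of lossy quality-value quantization (the bin boundaries and representative values below are invented for the example, not the ones evaluated in the paper), mapping Phred scores to a handful of levels is enough to shrink the quality track dramatically before generic compression.

```python
# Hypothetical bins: (upper bound inclusive, representative value).
BINS = [(9, 6), (19, 15), (29, 25), (41, 37)]

def quantize_quals(phred_scores):
    """Map each Phred quality score to its bin's representative value."""
    out = []
    for q in phred_scores:
        for upper, rep in BINS:
            if q <= upper:
                out.append(rep)
                break
        else:
            out.append(BINS[-1][1])
    return out

reads_quals = [2, 11, 17, 24, 33, 40]
print(quantize_quals(reads_quals))   # [6, 15, 15, 25, 37, 37]
# Fewer distinct symbols -> much better downstream (e.g. bzip2) compression,
# at the cost of slightly coarser inputs to SNP callers.
```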

Book ChapterDOI
21 Sep 2010
TL;DR: This paper proposes a method to embed bits into selected components of the compressed data such that it does not require decompression of the JPEG images and introduces very little change to the original JPEG files.
Abstract: When JPEG images are used as cover objects for data hiding, many existing techniques require the images to be fully or partially decompressed before embedding. This makes practical application of these methods limited. In this paper, we investigate ways to hide data in the compressed domain directly and efficiently, such that both the original content and the embedded message can be recovered at the same time during decompression. We propose a method to embed bits into selected components of the compressed data such that it does not require decompression of the JPEG images and introduces very little change to the original JPEG files. The proposed method can be implemented efficiently and it is possible to perform embedding and detection in a single pass, so that JPEG streams can be processed in real-time without waiting for the end of the data.

Patent
05 Oct 2010
TL;DR: In this paper, a system and method for generating a second reduced size digital image from a first digital image was proposed, the method including iteratively compressing the first image to an extent determined by a quality measure comprising at least a blockiness measure quantifying added artifactual edges along coding block boundaries of the second image.
Abstract: A system and method for generating a second reduced size digital image from a first digital image, the method including iteratively compressing the first digital image to an extent determined by a quality measure comprising at least a blockiness measure quantifying added artifactual edges along coding block boundaries of the second image and/or use of a quantization matrix generated by computing a weighted average of the quantization matrix of the first digital image and a scaled second quantization matrix.
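The claim text is dense, so the sketch below illustrates, under stated assumptions, two ingredients it names: a simple blockiness score comparing gradients across 8x8 block boundaries with gradients inside blocks, and a weighted average of the first image's quantization matrix with a scaled second matrix. Both the gradient-based score and the blending weights are illustrative choices, not the claimed method.

```python
import numpy as np

def blockiness(gray, block=8):
    """Artifactual-edge score: boundary-column gradients vs. interior gradients."""
    diff = np.abs(np.diff(gray.astype(float), axis=1))   # horizontal gradients
    cols = np.arange(diff.shape[1])
    at_boundary = (cols % block) == (block - 1)          # columns straddling blocks
    return diff[:, at_boundary].mean() / (diff[:, ~at_boundary].mean() + 1e-9)

def blended_qtable(q_first, q_second, scale=1.5, weight=0.5):
    """Weighted average of the first image's table and a scaled second table."""
    return np.round(weight * q_first + (1 - weight) * scale * q_second).astype(int)

rng = np.random.default_rng(4)
img = rng.integers(0, 256, (64, 64))
print('blockiness score:', round(blockiness(img), 3))    # ~1.0 for block-free content
```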

Proceedings ArticleDOI
TL;DR: Three features based on the observation that re-quantization induces periodic artifacts and introduces discontinuities in the signal histogram are introduced and a system to detect JPEG re-compression is proposed.
Abstract: Re-quantization commonly occurs when digital multimedia content is being tampered with. Detecting re-quantization is therefore an important element for assessing the authenticity of digital multimedia content. In this paper, we introduce three features based on the observation that re-quantization (i) induces periodic artifacts and (ii) introduces discontinuities in the signal histogram. After validating the discriminative potential of these features with synthetic signals, we propose a system to detect JPEG re-compression. Both linear (FLD) and non-linear (SVM) classifications are investigated. Experimental results clearly demonstrate the ability of the proposed features to detect JPEG re-compression, as well as their competitiveness compared to prior approaches to achieve the same goal.
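As a hedged, one-dimensional illustration of the histogram-discontinuity observation (not the paper's actual features), the snippet below quantizes a signal once and twice and counts the empty bins that re-quantization leaves on the final quantization grid.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(0.0, 30.0, 200_000)

def grid_counts(v, q):
    """Histogram of a quantized signal on its own grid (one bin per multiple of q)."""
    idx = np.round(v / q).astype(int)
    return np.bincount(idx - idx.min())

def empty_bins(counts):
    """Discontinuities: empty bins inside the occupied range of the histogram."""
    return int(np.sum(counts == 0))

once  = 3 * np.round(x / 3)                        # quantized once with step 3
twice = 3 * np.round(5 * np.round(x / 5) / 3)      # step 5, then re-quantized to 3
print('gaps after one quantization :', empty_bins(grid_counts(once, 3)))    # few
print('gaps after re-quantization  :', empty_bins(grid_counts(twice, 3)))   # many
```

The periodic pattern of filled and empty bins is exactly the kind of artifact a classifier (FLD or SVM) can be trained on.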

Journal ArticleDOI
TL;DR: A numerical approximation method for the discrete cosine transform based on round-off techniques with low arithmetic complexity is introduced, which results in comparable or better performance when compared to the usual DCT-based methodology.
Abstract: Discrete transforms play an important role in digital signal processing. In particular, due to its transform domain energy compaction properties, the discrete cosine transform (DCT) is pivotal in many image processing problems. This paper introduces a numerical approximation method for the DCT based on round-off techniques. The proposed method is a multiplierless technique with low arithmetic complexity. Emphasis was given to approximating the 8-point DCT. A fast algorithm for the introduced 8-point approximate transform was derived. An application in image compression was examined. In several scenarios, the utilization of the proposed method for image compression resulted in comparable or better performances, when compared to the usual DCT-based methodology.
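As a hedged illustration of the round-off idea (this is not the paper's exact approximation matrix), the snippet below rounds a scaled 8-point DCT matrix to multiplierless entries in {-1, 0, 1} and compares the row-normalized result with the exact DCT on a random block; in a codec, the row-scaling factors would typically be folded into the quantization step.

```python
import numpy as np

N = 8
k = np.arange(N)
# Exact orthonormal 8-point DCT-II matrix.
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

# Round-off approximation with entries restricted to {-1, 0, 1} (illustrative).
T = np.round(2.0 * C)
S = np.diag(1.0 / np.sqrt(np.diag(T @ T.T)))   # per-row scaling, absorbed by quantization

rng = np.random.default_rng(6)
x = rng.uniform(-128, 128, size=N)
exact = C @ x
approx = (S @ T) @ x
print('max |exact - approx|  :', float(np.max(np.abs(exact - approx))))
print('multiplierless entries:', np.unique(T))   # only -1, 0 and 1 appear
```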

Journal ArticleDOI
TL;DR: This paper presents the criterion satisfied by an optimal transform of a JPEG2000 compatible compression scheme, under high resolution quantization hypothesis and without the Gaussianity assumption, and introduces two variants of the compression scheme and the associated criteria minimized by optimal transforms.

Journal ArticleDOI
TL;DR: Experimental results show that the BPS scheme, aimed at exploiting calibration-induced data correlation, is effective on Airborne Visible/Infrared Imaging Spectrometer 1997 images where such artifacts are significant and outperforms all other schemes under comparison in this category.
Abstract: In this letter, an efficient lossless compression scheme for hyperspectral images is presented. The proposed scheme uses a two-stage predictor. The stage-1 predictor takes advantage of spatial data correlation and formulates the derivation of a spectral domain predictor as a process of Wiener filtering. The stage-2 predictor takes the prediction from the stage-1 predictor as an initial value and conducts a backward pixel search (BPS) scheme on the current band for the final prediction value. Experimental results show that the BPS scheme, aimed at exploiting calibration-induced data correlation, is effective on Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) 1997 images where such artifacts are significant. The proposed work outperforms all other schemes under comparison in this category. For the newer Consultative Committee for Space Data Systems images where calibration-induced artifacts are minimized, the BPS scheme does not help, and the stage-1 predictor alone achieves better compression performance.
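A hedged sketch of the stage-1 idea follows: each pixel of the current band is predicted from the co-located pixels of a few previous bands using least-squares (Wiener-style) weights. For simplicity the weights are fitted on the whole band, whereas a real lossless coder would estimate them causally from already-decoded data, and the stage-2 backward pixel search is omitted.

```python
import numpy as np

def spectral_predict(cube, band, n_prev=3):
    """Predict one band of a hyperspectral cube (bands, rows, cols) from the
    co-located pixels of the previous n_prev bands via least squares."""
    prev = cube[band - n_prev:band]                      # (n_prev, H, W)
    X = prev.reshape(n_prev, -1).T.astype(float)         # (H*W, n_prev)
    X = np.hstack([X, np.ones((X.shape[0], 1))])         # affine term
    y = cube[band].reshape(-1).astype(float)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)            # Wiener-style weights
    pred = (X @ w).reshape(cube[band].shape)
    return pred, y.reshape(pred.shape) - pred            # prediction + residual

rng = np.random.default_rng(7)
base = rng.normal(0, 50, (32, 32))
cube = np.stack([base * (1 + 0.05 * b) + rng.normal(0, 2, (32, 32))
                 for b in range(6)])                      # strongly correlated bands
pred, resid = spectral_predict(cube, band=5)
print('band std:', round(cube[5].std(), 1), ' residual std:', round(resid.std(), 1))
```

The residual has far less energy than the band itself, which is what makes the subsequent entropy coding effective.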

Journal ArticleDOI
01 Mar 2010
TL;DR: A two-objective evolutionary algorithm is applied to generate a family of optimal quantization tables which produce different trade-offs between image compression and quality.
Abstract: The JPEG algorithm is one of the most used tools for compressing images. The main factor affecting the performance of the JPEG compression is the quantization process, which exploits the values contained in two tables, called quantization tables. The compression ratio and the quality of the decoded images are determined by these values. Thus, the correct choice of the quantization tables is crucial to the performance of the JPEG algorithm. In this paper, a two-objective evolutionary algorithm is applied to generate a family of optimal quantization tables which produce different trade-offs between image compression and quality. Compression is measured in terms of difference in percentage between the sizes of the original and compressed images, whereas quality is computed as mean squared error between the reconstructed and the original images. We discuss the application of the proposed approach to well-known benchmark images and show how the quantization tables determined by our method improve the performance of the JPEG algorithm with respect to the default tables suggested in Annex K of the JPEG standard.
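A hedged sketch of how one candidate quantization table might be scored inside such a search: quantize 8x8 DCT blocks with the table, use the count of non-zero quantized coefficients as a crude rate proxy (the paper measures the actual file-size difference), and use the reconstruction MSE as the quality objective. A two-objective evolutionary algorithm would then look for tables that trade these two numbers off along a Pareto front.

```python
import numpy as np
from scipy.fft import dctn, idctn

def evaluate_qtable(gray, qtable):
    """Return (rate proxy, MSE) for one candidate 8x8 quantization table."""
    gray = gray.astype(float)
    nonzeros, sq_err, n = 0, 0.0, 0
    for i in range(0, gray.shape[0] - 7, 8):
        for j in range(0, gray.shape[1] - 7, 8):
            block = gray[i:i+8, j:j+8] - 128.0
            q = np.round(dctn(block, norm='ortho') / qtable)
            rec = idctn(q * qtable, norm='ortho')
            nonzeros += int(np.count_nonzero(q))
            sq_err += float(np.sum((rec - block) ** 2))
            n += 64
    return nonzeros, sq_err / n

rng = np.random.default_rng(8)
img = rng.integers(0, 256, (64, 64))
coarse = np.full((8, 8), 40.0)
fine = np.full((8, 8), 8.0)
print('coarse table:', evaluate_qtable(img, coarse))   # fewer nonzeros, higher MSE
print('fine table  :', evaluate_qtable(img, fine))     # more nonzeros, lower MSE
```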

Proceedings ArticleDOI
15 Jun 2010
TL;DR: The architecture and VHDL design of a 2-D DCT, combined with quantization and zig-zag arrangement, are described in this paper; the design is targeted at a low-cost Spartan-3E XC3S500 FPGA.
Abstract: The two-dimensional DCT plays an important role in JPEG image compression. The architecture and VHDL design of a 2-D DCT, combined with quantization and zig-zag arrangement, are described in this paper. The architecture is used in JPEG image compression. The DCT calculation used in this paper is based on a scaled DCT. The output of the DCT module needs to be multiplied by post-scaler values to obtain the real DCT coefficients, and post-scaling is done together with quantization. The 2-D DCT is computed by combining two 1-D DCTs connected by a transpose buffer. This design is aimed at implementation on a low-cost Spartan-3E XC3S500 FPGA. The 2-D DCT architecture uses 3174 gates, 1145 slices, 21 I/O pins, and 11 multipliers of one Xilinx Spartan-3E XC3S500E FPGA and reaches an operating frequency of 84.81 MHz. One input block with 8×8 elements of 8 bits each is processed in 2470 ns, and the pipeline latency is 123 clock cycles.
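The row-column decomposition used by the hardware is easy to verify in software. The check below is a numerical illustration, not the VHDL pipeline: a 1-D DCT is applied to the rows, the block is transposed (the role of the transpose buffer), the 1-D DCT is applied again, and the result is compared with a direct 2-D DCT.

```python
import numpy as np
from scipy.fft import dct, dctn

rng = np.random.default_rng(9)
block = rng.uniform(-128, 128, (8, 8))

# Row-column method: 1-D DCT on rows, transpose (the transpose buffer),
# 1-D DCT on rows again, transpose back.
stage1 = dct(block, axis=1, norm='ortho')
stage2 = dct(stage1.T, axis=1, norm='ortho').T

direct = dctn(block, norm='ortho')          # direct 2-D DCT for comparison
assert np.allclose(stage2, direct)
print('row-column result matches the direct 2-D DCT')
```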

Book ChapterDOI
10 Nov 2010
TL;DR: A new method based on the firefly algorithm to construct the codebook of vector quantization is proposed; the reconstructed images are of higher quality than those generated by the LBG and PSO-LBG algorithms and are not significantly different from those of the HBMO-LBG algorithm.
Abstract: Vector quantization (VQ) is a powerful technique in digital image compression applications. Traditional, widely used methods such as the Linde-Buzo-Gray (LBG) algorithm tend to produce only locally optimal codebooks. This paper proposes a new method based on the firefly algorithm to construct the codebook of vector quantization. The proposed method uses the LBG method to initialize the firefly algorithm and is called the FF-LBG algorithm. The FF-LBG algorithm is compared with three other methods: the LBG, PSO-LBG, and HBMO-LBG algorithms. Experimental results show that the computation of the proposed FF-LBG algorithm is faster than that of the PSO-LBG and HBMO-LBG algorithms. Furthermore, the reconstructed images are of higher quality than those generated by the LBG and PSO-LBG algorithms and are not significantly different from those of the HBMO-LBG algorithm.
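For context, a compact version of the LBG (generalized Lloyd) iteration that FF-LBG uses as its starting point is sketched below on random training vectors; the firefly update itself is not reproduced, and the initialization and block size are arbitrary.

```python
import numpy as np

def lbg(train, codebook_size=8, iters=20, seed=0):
    """Generalized Lloyd / LBG codebook training on (n_vectors, dim) data."""
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(len(train), codebook_size, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(train[:, None, :] - codebook[None, :, :], axis=2)
        nearest = d.argmin(axis=1)                      # assign vectors to codewords
        for k in range(codebook_size):
            members = train[nearest == k]
            if len(members):                            # keep empty cells unchanged
                codebook[k] = members.mean(axis=0)      # centroid update
    return codebook

rng = np.random.default_rng(10)
blocks = rng.normal(0, 1, (1000, 16))                   # e.g. 4x4 image blocks
cb = lbg(blocks)
print('codebook shape:', cb.shape)                       # (8, 16)
```

Metaheuristics such as PSO, HBMO, or the firefly algorithm then perturb and recombine such codebooks to escape the local optima the plain LBG iteration gets stuck in.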

Proceedings ArticleDOI
03 Dec 2010
TL;DR: This paper proposes an anti-forensic technique capable of removing artifacts indicative of wavelet-based image compression from an image, and shows that this technique is capable of fooling current forensic image compression detection algorithms 100% of the time.
Abstract: Because digital images can be modified with relative ease, considerable effort has been spent developing image forensic algorithms capable of tracing an image's processing history. In contrast to this, relatively little consideration has been given to anti-forensic operations designed to mislead forensic techniques. In this paper, we propose an anti-forensic technique capable of removing artifacts indicative of wavelet-based image compression from an image. Our technique operates by adding anti-forensic dither to a previously compressed image's wavelet coefficients so that the anti-forensically modified wavelet coefficient distribution matches a model of the coefficient distribution before compression. Simulation results show that our algorithm is capable of fooling current forensic image compression detection algorithms 100% of the time.

Journal ArticleDOI
TL;DR: It is shown that the quality factor in a JPEG image can serve as an embedding space, and the ability to embed a message in a JPEG image by managing JPEG quantization tables (QTs) is discussed; the scheme can be used as a tool for secret communication.
Abstract: Protecting the privacy of information exchanged through the media has been a topic researched by many people. Up to now, cryptography has always had the ultimate role in protecting secrecy between the sender and the intended receiver. Nowadays, however, steganography techniques are increasingly used alongside cryptography to add a further protective layer to the hidden data. In this letter, we show that the quality factor in a JPEG image can be an embedding space, and we discuss the ability to embed a message in a JPEG image by managing JPEG quantization tables (QTs). In combination with some permutation algorithms, this scheme can be used as a tool for secret communication. The proposed method achieves satisfactory decoded results with this straightforward JPEG double compression strategy.

Book ChapterDOI
01 Oct 2010
TL;DR: An algorithm is proposed which can locate tampered region(s) in a losslessly compressed tampered image when its unchanged region is the output of a JPEG decompressor, and results prove the effectiveness of the proposed method.
Abstract: With the availability of various digital image editing tools, seeing is no longer believing. In this paper, we focus on tampered region localization for image forensics. We propose an algorithm which can locate tampered region(s) in a losslessly compressed tampered image when its unchanged region is the output of a JPEG decompressor. We find that the tampered region and the unchanged region respond differently to JPEG compression: the tampered region has stronger high-frequency quantization noise than the unchanged region. We employ PCA to separate quantization noise at different spatial frequencies, i.e., low-, medium-, and high-frequency quantization noise, and extract the high-frequency quantization noise for tampered region localization. Post-processing is applied to obtain the final localization result. The experimental results prove the effectiveness of our proposed method.

Journal ArticleDOI
TL;DR: An approach to generate reference data from the original image by encoding different types of blocks into different numbers of bits, reducing the amount of embedded data while maintaining good recovery quality.
Abstract: When hiding information in an image for self-recovery, the amount of embedded data affects both the impact of embedding and the recovery quality. The purpose of this paper is to reduce the amount of embedded data while maintaining good recovery quality. We propose an approach to generate reference data from the original image by encoding different types of blocks into different numbers of bits. In reconstructing the reference image, a fast inpainting method is used to recover the contents of corrupted regions with the aid of the extracted bits.

Journal ArticleDOI
TL;DR: A new spectral lossy compression method is described which can reduce the required memory; adaptively retrieve original images by using only spectral phase information; increase the peak-to-correlation energy (PCE) at the output of the correlator; and be easily employed in major encryption techniques.
Abstract: Using only phase information, this paper describes a new spectral lossy compression method which can: reduce the required memory; adaptively retrieve original images by using only spectral phase information; increase the peak-to-correlation energy (PCE) at the output of the correlator; and be easily employed in major encryption techniques. To increase the compression ratio of the proposed method, an optimal phase coding based on a 'fading grid' is performed. In fact, a variable number of quantization bits is used to quantize the phase information depending on the importance of the spectral phases. The phase information can be classified according to the concept of 'RMS duration'. Many simulations have been carried out, and our experimental results corroborate the performance of the proposed new method.
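A hedged sketch of the core phase-only step, without the fading-grid bit allocation or the correlator: keep only the spectral phase of an image, optionally quantize it with a uniform number of bits, and reconstruct by inverse FFT with a flat magnitude.

```python
import numpy as np

def phase_only_reconstruction(gray, phase_bits=4):
    """Reconstruct an image from (optionally quantized) spectral phase alone."""
    spectrum = np.fft.fft2(gray.astype(float))
    phase = np.angle(spectrum)
    if phase_bits is not None:                      # uniform phase quantization
        levels = 2 ** phase_bits
        step = 2 * np.pi / levels
        phase = np.round(phase / step) * step
    recon = np.real(np.fft.ifft2(np.exp(1j * phase)))   # unit magnitude everywhere
    return recon

rng = np.random.default_rng(11)
img = rng.uniform(0, 255, (64, 64))
recon = phase_only_reconstruction(img, phase_bits=4)
print(recon.shape)   # phase-only images retain structural (edge/shape) information,
                     # which is what correlation-based recognition relies on
```

The fading-grid idea would then spend more bits on the spectrally important phases and fewer on the rest, instead of the flat 4 bits used here.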

Proceedings ArticleDOI
10 May 2010
TL;DR: A novel approach based on the discrete orthogonal Tchebichef moment for efficient image compression is proposed, which incorporates a simplified matrix-based mathematical framework as well as a block-wise reconstruction technique to eliminate possible occurrences of numerical instabilities at higher moment orders.
Abstract: Orthogonal moment functions have long been used in image analysis. This paper proposes a novel approach based on the discrete orthogonal Tchebichef moment for efficient image compression. The method incorporates a simplified mathematical framework using matrices, as well as a block-wise reconstruction technique to eliminate possible occurrences of numerical instabilities at higher moment orders. A comparison between Tchebichef moment compression and JPEG compression is presented. The results show significant advantages for the Tchebichef moment in terms of image quality and compression rate. The Tchebichef moment provides more compact support for the image via sub-block reconstruction for compression. Tchebichef moment compression has clear potential to perform well on a broader domain of real digital images and graphically generated images.

Proceedings ArticleDOI
04 Nov 2010
TL;DR: A combined feature extraction method for face recognition based on DWT and DCT is proposed, which can achieve a higher recognition rate than the traditional PCA algorithm.
Abstract: The discrete wavelet transform has good properties in both the time domain and the frequency domain, which makes it an ideal tool for analyzing non-stationary signals. The discrete cosine transform is one of the approaches used in image compression and can also be used to extract features. This paper proposes a combined feature extraction method based on DWT and DCT for face recognition. First, the original face image is decomposed by a 2-dimensional DWT; then the 2-dimensional DCT is applied to the low-frequency approximation image obtained from the previous step. Finally, using the DCT coefficients, an SVM classifier is built and the face image can be recognized. The experiment carried out on the ORL database shows that the above-mentioned feature extraction method can achieve a higher recognition rate than the traditional PCA algorithm.
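A minimal, hedged version of the described pipeline using PyWavelets, SciPy, and scikit-learn on synthetic data is sketched below. The wavelet, the number of retained DCT coefficients, and the SVM settings are illustrative choices, and no claim is made that this matches the paper's exact configuration or its ORL results.

```python
import numpy as np
import pywt
from scipy.fft import dctn
from sklearn.svm import SVC

def dwt_dct_features(face, keep=10):
    """2-D DWT -> low-frequency approximation -> 2-D DCT -> first keep x keep block."""
    cA, _ = pywt.dwt2(face.astype(float), 'haar')   # low-frequency approximation
    coeffs = dctn(cA, norm='ortho')
    return coeffs[:keep, :keep].ravel()

rng = np.random.default_rng(12)
# Two synthetic 'subjects': different mean patterns plus noise.
faces, labels = [], []
for label, offset in [(0, 0.0), (1, 40.0)]:
    base = rng.uniform(0, 255, (32, 32))
    for _ in range(20):
        faces.append(base + offset + rng.normal(0, 10, (32, 32)))
        labels.append(label)
X = np.array([dwt_dct_features(f) for f in faces])
y = np.array(labels)

clf = SVC(kernel='linear').fit(X[::2], y[::2])       # train on half the samples
print('held-out accuracy:', clf.score(X[1::2], y[1::2]))
```

Keeping only the low-order DCT coefficients of the approximation subband is what gives the compact feature vector the classifier works on.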