
Showing papers on "Discrete cosine transform published in 2014"


Book
26 Jan 2014
TL;DR: An introduction to the algorithms and architectures that form the underpinnings of the image and video compression standards, including JPEG, H.261 and H.263, while fully addressing the architectural considerations involved when implementing these standards.
Abstract: From the Publisher: Image and Video Compression Standards: Algorithms and Architectures, Second Edition presents an introduction to the algorithms and architectures that form the underpinnings of the image and video compression standards, including JPEG (compression of still images), H.261 and H.263 (video teleconferencing), and MPEG-1 and MPEG-2 (video storage and broadcasting). The next generation of audiovisual coding standards, such as MPEG-4 and MPEG-7, are also briefly described. In addition, the book covers the MPEG and Dolby AC-3 audio coding standards and emerging techniques for image and video compression, such as those based on wavelets and vector quantization. Image and Video Compression Standards: Algorithms and Architectures, Second Edition emphasizes the foundations of these standards; namely, techniques such as predictive coding, transform-based coding such as the discrete cosine transform (DCT), motion estimation, motion compensation, and entropy coding, as well as how they are applied in the standards. The implementation details of each standard are avoided; however, the book provides all the material necessary to understand the workings of each of the compression standards, including information that can be used by the reader to evaluate the efficiency of various software and hardware implementations conforming to these standards. Particular emphasis is placed on those algorithms and architectures that have been found to be useful in practical software or hardware implementations. Image and Video Compression Standards: Algorithms and Architectures, Second Edition uniquely covers all major standards (JPEG, MPEG-1, MPEG-2, MPEG-4, H.261, H.263) in a simple and tutorial manner, while fully addressing the architectural considerations involved when implementing these standards. As such, it serves as a valuable reference for the graduate student, researcher or engineer. The book is also used frequently as a text for courses on the subject, in both academic and professional settings.

726 citations


Journal ArticleDOI
TL;DR: A class of new distortion functions known as uniform embedding distortion function (UED) is presented for both side-informed and non-side-informed secure JPEG steganography, which tries to spread the embedding modification uniformly to quantized discrete cosine transform (DCT) coefficients of all possible magnitudes.
Abstract: Steganography is the science and art of covert communication, which aims to hide the secret messages into a cover medium while achieving the least possible statistical detectability. To this end, the framework of minimal distortion embedding is widely adopted in the development of the steganographic system, in which a well-designed distortion function is of vital importance. In this paper, a class of new distortion functions known as uniform embedding distortion function (UED) is presented for both side-informed and non-side-informed secure JPEG steganography. By incorporating the syndrome trellis coding, the best codeword with minimal distortion for a given message is determined with UED, which, instead of random modification, tries to spread the embedding modification uniformly to quantized discrete cosine transform (DCT) coefficients of all possible magnitudes. In this way, less statistical detectability is achieved, owing to the reduction of the average changes of the first- and second-order statistics for DCT coefficients as a whole. The effectiveness of the proposed scheme is verified with evidence obtained from exhaustive experiments using popular steganalyzers with various feature sets on the BOSSbase database. Compared with prior art, the proposed scheme gains favorable performance in terms of secure embedding capacity against steganalysis.
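To make the idea of magnitude-aware embedding costs concrete, here is a toy sketch in Python. It is NOT the paper's exact UED definition: it only illustrates assigning a modification cost to every non-zero quantized DCT coefficient regardless of its magnitude, and it omits the syndrome-trellis coding stage that performs the actual minimal-distortion embedding. The cost formula below is an illustrative assumption.

```python
# Toy illustration of magnitude-aware embedding costs for JPEG steganography.
# Not the paper's UED formula; the syndrome-trellis coding stage is omitted.
import numpy as np

def toy_embedding_cost(quantized_dct):
    """Assign a +/-1 modification cost to each non-zero quantized DCT coefficient."""
    c = np.abs(quantized_dct).astype(float)
    cost = 1.0 / (c + 1.0)              # illustrative: cost decreases slowly with magnitude
    cost[quantized_dct == 0] = np.inf   # zero coefficients are typically left untouched
    return cost

# Example: random quantized coefficients of one 8x8 block
rng = np.random.default_rng(0)
block = rng.integers(-8, 9, size=(8, 8))
print(toy_embedding_cost(block))
```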

245 citations


Journal ArticleDOI
TL;DR: In this article, a blind watermarking algorithm in the DCT domain is presented, using the correlation between two DCT coefficients of adjacent blocks at the same position. The proposed algorithm is tested against different attacks and shows very good robustness under JPEG image compression compared to existing methods.
Abstract: This paper presents a novel blind watermarking algorithm in the DCT domain using the correlation between two DCT coefficients of adjacent blocks in the same position. One DCT coefficient of each block is modified to bring its difference from the adjacent block's coefficient into a specified range. The value used to modify the coefficient is obtained by finding the difference between the DC coefficient and the median of a few low-frequency AC coefficients, and the result is normalized by the DC coefficient. The proposed watermarking algorithm is tested against different attacks. It shows very good robustness under JPEG image compression compared to existing methods, and a good-quality watermark is also extracted after other common image processing operations such as cropping, rotation, brightening, sharpening, and contrast enhancement.
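A simplified sketch of the underlying idea, forcing the difference between co-located DCT coefficients of adjacent 8x8 blocks to encode a bit. The paper derives the modification value adaptively from the DC and median low-frequency AC coefficients; here a fixed strength T and coefficient position are assumed for illustration, so this is not the exact algorithm.

```python
# Simplified sketch of blind embedding via the difference of co-located DCT
# coefficients in adjacent blocks. Fixed strength T is an assumption, not the
# paper's adaptive modification value.
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block_a, block_b, bit, pos=(3, 2), T=20.0):
    """Modify one coefficient of block_a so its difference from block_b encodes `bit`."""
    A = dctn(block_a, norm='ortho')
    B = dctn(block_b, norm='ortho')
    target = T if bit == 1 else -T          # push the difference to +T or -T
    A[pos] += target - (A[pos] - B[pos])
    return idctn(A, norm='ortho'), block_b

def extract_bit(block_a, block_b, pos=(3, 2)):
    A = dctn(block_a, norm='ortho')
    B = dctn(block_b, norm='ortho')
    return 1 if (A[pos] - B[pos]) >= 0 else 0

rng = np.random.default_rng(1)
a, b = rng.uniform(0, 255, (8, 8)), rng.uniform(0, 255, (8, 8))
wa, wb = embed_bit(a, b, bit=1)
print(extract_bit(wa, wb))  # -> 1
```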

206 citations


Journal ArticleDOI
TL;DR: It is found that the proposed architecture involves nearly 14% less area-delay product (ADP) and 19% less energy per sample (EPS) compared to the direct implementation of the reference algorithm, on average, for integer DCT of lengths 4, 8, 16, and 32.
Abstract: In this paper, we present area- and power-efficient architectures for the implementation of integer discrete cosine transform (DCT) of different lengths to be used in High Efficiency Video Coding (HEVC). We show that an efficient constant matrix-multiplication scheme can be used to derive parallel architectures for 1-D integer DCT of different lengths. We also show that the proposed structure could be reusable for DCT of lengths 4, 8, 16, and 32 with a throughput of 32 DCT coefficients per cycle irrespective of the transform size. Moreover, the proposed architecture could be pruned to reduce the complexity of implementation substantially with only a marginal effect on the coding performance. We propose power-efficient structures for folded and full-parallel implementations of 2-D DCT. From the synthesis result, it is found that the proposed architecture involves nearly 14% less area-delay product (ADP) and 19% less energy per sample (EPS) compared to the direct implementation of the reference algorithm, on average, for integer DCT of lengths 4, 8, 16, and 32. Also, an additional 19% saving in ADP and 20% saving in EPS can be achieved by the proposed pruning algorithm with nearly the same throughput rate. The proposed architecture is found to support ultrahigh-definition 7680 × 4320 at 60 frames/s video, which is one of the applications of HEVC.
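As a minimal software reference for the constant matrix-multiplication scheme the architectures are derived from, the sketch below applies the HEVC 4-point integer core transform as a plain matrix product. The intermediate scaling/shift stages of the HEVC specification and all hardware-level considerations are omitted.

```python
# Minimal reference sketch of the HEVC 4-point integer DCT as a constant matrix
# multiplication; HEVC's intermediate shifts/scaling are omitted.
import numpy as np

# HEVC 4-point core transform matrix (integer approximation of the DCT basis)
C4 = np.array([[64,  64,  64,  64],
               [83,  36, -36, -83],
               [64, -64, -64,  64],
               [36, -83,  83, -36]])

def integer_dct_2d(block4x4):
    """2-D 4-point integer DCT: transform rows, then columns."""
    return C4 @ block4x4 @ C4.T

x = np.arange(16).reshape(4, 4)
print(integer_dct_2d(x))
```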

184 citations


Journal ArticleDOI
01 Jan 2014-Optik
TL;DR: In this paper, the authors applied differential evolution (DE) algorithm to balance the tradeoff between robustness and imperceptibility by exploring multiple scaling factors in image watermarking.

168 citations


Journal ArticleDOI
TL;DR: A method for digital watermarking based on discrete wavelet transforms, discrete cosine transforms, and singular value decomposition is proposed and is found to give superior performance in robustness and imperceptibility compared to existing methods suggested by other authors.
Abstract: In this paper, an algorithm for digital watermarking based on the discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD) is proposed. In the embedding process, the host image is decomposed by a first-level DWT. The low-frequency band (LL) is transformed by DCT and SVD. The watermark image is also transformed by DCT and SVD. The S vector of the watermark information is embedded in the S component of the host image. The watermarked image is generated by inverse SVD on the modified S vector and the original U, V vectors, followed by inverse DCT and inverse DWT. The watermark is extracted using an extraction algorithm. The proposed method has been extensively tested against numerous known attacks and has been found to give superior performance in robustness and imperceptibility compared to existing methods suggested by other authors.
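A hedged sketch of the embedding path described above (DWT of the host, DCT and SVD of the LL band, watermark singular values added to host singular values). The scaling factor `alpha`, the Haar wavelet, and the image sizes are illustrative assumptions, not values from the paper.

```python
# Sketch of DWT -> DCT -> SVD watermark embedding; alpha and wavelet choice are assumptions.
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed(host, watermark, alpha=0.05):
    LL, (LH, HL, HH) = pywt.dwt2(host, 'haar')        # first-level DWT
    D = dctn(LL, norm='ortho')                         # DCT of the LL band
    U, S, Vt = np.linalg.svd(D)                        # SVD of host LL-DCT
    Dw = dctn(watermark, norm='ortho')
    Uw, Sw, Vtw = np.linalg.svd(Dw)                    # SVD of DCT-transformed watermark
    S_mod = S + alpha * Sw[:S.size]                    # embed watermark singular values
    D_mod = U @ np.diag(S_mod) @ Vt                    # inverse SVD
    LL_mod = idctn(D_mod, norm='ortho')                # inverse DCT
    return pywt.idwt2((LL_mod, (LH, HL, HH)), 'haar')  # inverse DWT

host = np.random.default_rng(0).uniform(0, 255, (256, 256))
wm = np.random.default_rng(1).uniform(0, 255, (128, 128))
print(embed(host, wm).shape)
```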

115 citations


Journal ArticleDOI
TL;DR: A novel 8-point DCT approximation that requires only 14 addition operations and no multiplications is introduced and is compared to state-of-the-art DCT approximations in terms of both algorithm complexity and peak signal-to-noise ratio.
Abstract: The need for low energy consumption in video processing systems such as HEVC for the multimedia market has led to extensive development of fast algorithms for the efficient approximation of 2-D DCT transforms. The DCT is employed in a multitude of compression standards due to its remarkable energy compaction properties. Multiplier-free approximate DCT transforms have been proposed that offer superior compression performance at very low circuit complexity. Such approximations can be realized in digital VLSI hardware using additions and subtractions only, leading to significant reductions in chip area and power consumption compared to conventional DCTs and integer transforms. In this paper, we introduce a novel 8-point DCT approximation that requires only 14 addition operations and no multiplications. The proposed transform possesses low computational complexity and is compared to state-of-the-art DCT approximations in terms of both algorithm complexity and peak signal-to-noise ratio. The proposed DCT approximation is a candidate for reconfigurable video standards such as HEVC. The proposed transform and several other DCT approximations are mapped to systolic-array digital architectures and physically realized as digital prototype circuits using FPGA technology and mapped to 45 nm CMOS technology.
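To illustrate the class of multiplication-free approximations being compared, the sketch below builds the classical signed DCT (the element-wise sign of the exact 8-point DCT matrix), whose entries are all +/-1, so applying it needs only additions and subtractions. This is NOT the paper's 14-addition transform; it is only an illustrative member of the same family.

```python
# Illustrative multiplication-free DCT approximation: the signed DCT.
# Not the paper's 14-addition transform.
import numpy as np
from scipy.fft import dct

N = 8
C = dct(np.eye(N), norm='ortho', axis=0)   # exact orthonormal 8-point DCT matrix
T = np.sign(C)                             # signed DCT: entries in {+1, -1}

def approx_dct_2d(block):
    """Apply the +/-1 approximation to an 8x8 block (additions/subtractions only)."""
    return T @ block @ T.T

x = np.random.default_rng(0).uniform(0, 255, (8, 8))
print(approx_dct_2d(x)[:2, :2])
```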

112 citations


Journal ArticleDOI
TL;DR: Experimental results confirm the merits of the proposed algorithm: it is free of intra-frame error propagation, improves the quality of marked images, inherits the compression power of HEVC, and offers superior embedding capacity for low-bitrate coding when compared with the two previous algorithms for H.264/AVC.

106 citations


Journal ArticleDOI
TL;DR: The experimental results verify the significant efficiency improvement of the proposed method in output quality and energy consumption, when compared with other fusion techniques in DCT domain.

105 citations


Journal ArticleDOI
TL;DR: This letter presents a no-reference quality assessment algorithm for JPEG compressed images (NJQA); testing on various image-quality databases demonstrates that NJQA is either competitive with or outperforms modern competing methods on JPEG images.
Abstract: This letter presents a no-reference quality assessment algorithm for JPEG compressed images (NJQA). Our method does not specifically aim to measure blockiness. Instead, quality is estimated by first counting the number of zero-valued DCT coefficients within each block, and then using a map, which we call the quality relevance map, to weight these counts. The quality relevance map for an image is a map that indicates which blocks are naturally uniform (or near-uniform) vs. which blocks have been made uniform (or near-uniform) via JPEG compression. Testing on various image-quality databases demonstrates that NJQA is either competitive with or outperforms modern competing methods on JPEG images.
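A hedged sketch of the first step described above: counting zero-valued quantized DCT coefficients in each 8x8 block. The quality-relevance weighting map of NJQA is not reproduced, and a flat quantization step `q` is used as an illustrative stand-in for a real JPEG quantization table.

```python
# Count zero-valued quantized DCT coefficients per 8x8 block (first step only;
# NJQA's quality relevance map and weighting are not reproduced).
import numpy as np
from scipy.fft import dctn

def zero_counts(image, q=16):
    """Return, for each 8x8 block, the number of quantized DCT coefficients equal to zero."""
    H, W = image.shape
    H, W = H - H % 8, W - W % 8                     # drop partial edge blocks
    counts = np.zeros((H // 8, W // 8), dtype=int)
    for i in range(0, H, 8):
        for j in range(0, W, 8):
            block = image[i:i+8, j:j+8].astype(float) - 128.0
            coeffs = np.round(dctn(block, norm='ortho') / q)   # flat quantization step q
            counts[i // 8, j // 8] = np.count_nonzero(coeffs == 0)
    return counts

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
print(zero_counts(img))
```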

104 citations


Posted Content
TL;DR: This article is a survey of lossy image compression using the Discrete Cosine Transform; it covers the JPEG compression algorithm, which is used for full-colour still-image applications, and describes all of its components.
Abstract: Due to the increasing requirements for transmission of images in computer and mobile environments, research in the field of image compression has increased significantly. Image compression plays a crucial role in digital image processing; it is also very important for efficient transmission and storage of images. When we compute the number of bits per image resulting from typical sampling rates and quantization methods, we find that image compression is needed. Therefore, development of efficient techniques for image compression has become necessary. This paper is a survey of lossy image compression using the Discrete Cosine Transform; it covers the JPEG compression algorithm, which is used for full-colour still-image applications, and describes all of its components.
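A minimal sketch of the DCT/quantization core of JPEG that the survey describes: level shift, 8x8 block DCT, division by a quantization table, rounding, and the inverse path. Entropy coding (zig-zag scan, run-length, Huffman) is omitted; the table used is the standard JPEG Annex K luminance table.

```python
# One 8x8 block through the lossy DCT/quantization core of JPEG (entropy coding omitted).
import numpy as np
from scipy.fft import dctn, idctn

Q_LUMA = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

def jpeg_block_roundtrip(block):
    """Forward DCT + quantization, then dequantization + inverse DCT of one 8x8 block."""
    coeffs = dctn(block - 128.0, norm='ortho')
    quantized = np.round(coeffs / Q_LUMA)                 # the lossy step
    return idctn(quantized * Q_LUMA, norm='ortho') + 128.0

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
print(np.abs(jpeg_block_roundtrip(block) - block).max())  # reconstruction error
```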

Journal ArticleDOI
TL;DR: An effective error-based statistical feature extraction scheme is presented that can significantly outperform the state-of-the-art method in detecting double JPEG compression with the same quantization matrix.
Abstract: Detection of double JPEG compression plays an important role in digital image forensics. Some successful approaches have been proposed to detect double JPEG compression when the primary and secondary compressions have different quantization matrices. However, detecting double JPEG compression with the same quantization matrix is still a challenging problem. In this paper, an effective error-based statistical feature extraction scheme is presented to solve this problem. First, a given JPEG file is decompressed to form a reconstructed image. An error image is obtained by computing the differences between the inverse discrete cosine transform coefficients and pixel values in the reconstructed image. Two classes of blocks in the error image, namely, rounding error block and truncation error block, are analyzed. Then, a set of features is proposed to characterize the statistical differences of the error blocks between single and double JPEG compressions. Finally, the support vector machine classifier is employed to identify whether a given JPEG image is doubly compressed or not. Experimental results on three image databases with various quality factors have demonstrated that the proposed method can significantly outperform the state-of-the-art method.
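A hedged sketch of the error image described above: the difference between the real-valued inverse-DCT output of a block and the integer pixel values stored after rounding and truncation to [0, 255]. Blocks whose real-valued output stays inside [0, 255] yield rounding-error blocks; the rest yield truncation-error blocks. The paper's full feature set and SVM classifier are not reproduced.

```python
# Error block between real-valued IDCT output and the stored (rounded/clipped) pixels.
import numpy as np
from scipy.fft import idctn

def block_error(dequantized_coeffs):
    """Return the 8x8 error block and whether it is a rounding or truncation block."""
    real_valued = idctn(dequantized_coeffs, norm='ortho') + 128.0
    stored = np.clip(np.round(real_valued), 0, 255)        # what the decoder writes out
    kind = 'rounding' if real_valued.min() >= 0 and real_valued.max() <= 255 else 'truncation'
    return real_valued - stored, kind

coeffs = np.random.default_rng(0).normal(0, 30, (8, 8))
err, kind = block_error(coeffs)
print(kind, float(np.abs(err).max()))
```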

Journal ArticleDOI
TL;DR: This method promises to be an effective fast TIE solver for quantitative phase imaging applications and is applicable for the case of non-uniform intensity distribution with no extra effort to extract the boundary values from the intensity derivative signals.
Abstract: The transport of intensity equation (TIE) is a two-dimensional second order elliptic partial differential equation that must be solved under appropriate boundary conditions. However, the boundary conditions are difficult to obtain in practice. The fast Fourier transform (FFT) based TIE solutions are widely adopted for its speed and simplicity. However, it implies periodic boundary conditions, which lead to significant boundary artifacts when the imposed assumption is violated. In this work, TIE phase retrieval is considered as an inhomogeneous Neumann boundary value problem with the boundary values experimentally measurable around a hard-edged aperture, without any assumption or prior knowledge about the test object and the setup. The analytic integral solution via Green's function is given, as well as a fast numerical implementation for a rectangular region using the discrete cosine transform. This approach is applicable for the case of non-uniform intensity distribution with no extra effort to extract the boundary values from the intensity derivative signals. Its efficiency and robustness have been verified by several numerical simulations even when the objects are complex and the intensity measurements are noisy. This method promises to be an effective fast TIE solver for quantitative phase imaging applications.
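A minimal sketch of the DCT-based Poisson solver (homogeneous Neumann boundary conditions) that such fast implementations build on, assuming the TIE has been reduced to a Poisson-type equation. The paper's handling of measured inhomogeneous boundary values around the hard-edged aperture and of non-uniform intensity is not reproduced here.

```python
# DCT-based Poisson solver with homogeneous Neumann boundary conditions.
import numpy as np
from scipy.fft import dctn, idctn

def poisson_neumann_dct(f, dx=1.0, dy=1.0):
    """Solve laplacian(phi) = f on a rectangle with Neumann BCs (up to an additive constant)."""
    M, N = f.shape
    F = dctn(f, type=2, norm='ortho')
    m = np.arange(M)[:, None]
    n = np.arange(N)[None, :]
    denom = (2.0 * (np.cos(np.pi * m / M) - 1.0) / dx**2 +
             2.0 * (np.cos(np.pi * n / N) - 1.0) / dy**2)
    denom[0, 0] = 1.0                    # the mean of phi is arbitrary; pin it to zero
    Phi = F / denom
    Phi[0, 0] = 0.0
    return idctn(Phi, type=2, norm='ortho')

f = np.random.default_rng(0).normal(size=(64, 64))
f -= f.mean()                            # compatibility condition for pure Neumann problems
print(poisson_neumann_dct(f).shape)
```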

Journal ArticleDOI
TL;DR: This article carries out a performance evaluation of existing and new compression schemes, considering linear, autoregressive, FFT-/DCT-, and wavelet-based models, by looking at their performance as a function of relevant signal statistics; the results reveal that the DCT-based schemes are the best option in terms of compression efficiency but are inefficient in terms of energy consumption.
Abstract: Lossy temporal compression is key for energy-constrained wireless sensor networks (WSNs), where the imperfect reconstruction of the signal is often acceptable at the data collector, subject to some maximum error tolerance. In this article, we evaluate a number of selected lossy compression methods from the literature and extensively analyze their performance in terms of compression efficiency, computational complexity, and energy consumption. Specifically, we first carry out a performance evaluation of existing and new compression schemes, considering linear, autoregressive, FFT-/DCT- and wavelet-based models, by looking at their performance as a function of relevant signal statistics. Second, we obtain formulas through numerical fittings to gauge their overall energy consumption and signal representation accuracy. Third, we evaluate the benefits that lossy compression methods bring about in interference-limited multihop networks, where the channel access is a source of inefficiency due to collisions and transmission scheduling. Our results reveal that the DCT-based schemes are the best option in terms of compression efficiency but are inefficient in terms of energy consumption. Instead, linear methods lead to substantial savings in terms of energy expenditure while, at the same time, leading to satisfactory compression ratios, reduced network delay, and increased reliability performance.
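A hedged sketch of DCT-based lossy temporal compression for a sensor time series: keep only the K largest-magnitude DCT coefficients (plus their indices) and reconstruct by inverse DCT. This illustrates the compression-efficiency side discussed above; the energy models and the other schemes evaluated in the paper are not reproduced, and K is an illustrative parameter.

```python
# DCT-based lossy compression of a 1-D sensor signal: keep the K largest coefficients.
import numpy as np
from scipy.fft import dct, idct

def compress(signal, K):
    """Return indices and values of the K largest-magnitude DCT coefficients."""
    c = dct(signal, norm='ortho')
    idx = np.argsort(np.abs(c))[-K:]
    return idx, c[idx]

def decompress(idx, values, n):
    c = np.zeros(n)
    c[idx] = values
    return idct(c, norm='ortho')

t = np.linspace(0, 1, 512)
x = np.sin(2 * np.pi * 3 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
idx, vals = compress(x, K=16)
x_hat = decompress(idx, vals, x.size)
print(float(np.sqrt(np.mean((x - x_hat) ** 2))))   # reconstruction RMSE
```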

Journal ArticleDOI
TL;DR: The proposed scheme shows improved adaptive performance and is resistant to several types of attacks in comparison with previous schemes; the adaptive performance refers to the adaptive parameter of the luminance masking, which functions to improve the performance, or robustness, of an image against attacks.
Abstract: This paper proposes an adaptive watermarking scheme for e-government document images. The adaptive scheme combines the discrete cosine transform (DCT) and the singular value decomposition (SVD) using luminance masking. As a core masking model in the human visual system (HVS), luminance masking is implemented to improve noise sensitivity. A genetic algorithm (GA), subsequently, is employed for the optimization of the scaling factor of the masking. Involving a number of steps, the scheme proposed through this study begins by calculating the mask of the host image using luminance masking. It then continues by transforming the mask of each area into the full frequency domain. The watermark image, following this, is embedded by modifying the singular values of the DCT-transformed host image with the singular values of the mask coefficients of the host image and the control parameter of the DCT-transformed watermark image using the genetic algorithm (GA). The use of both the singular values and the control parameter, in this case, is not only to improve the sensitivity of the watermark performance but also to avoid the false-positive problem. The watermark image, afterwards, is extracted from the distorted images. The experimental results show that the proposed scheme has improved adaptive performance and is resistant to several types of attacks in comparison with previous schemes; the adaptive performance refers to the adaptive parameter of the luminance masking, which functions to improve the performance, or robustness, of an image against attacks.

Journal ArticleDOI
TL;DR: A method to reconstruct the fingerprint orientation field by weighted discrete cosine transform (DCT) is proposed; it performs well in smoothing out noise while maintaining orientation details in singular regions, and it is compared with existing methods on the NIST and FVC fingerprint databases.

Journal ArticleDOI
TL;DR: This paper presents a patchwork-based audio watermarking method to resist de-synchronization attacks such as pitch-scaling, time-scaling, and jitter attacks; it also has much higher embedding capacity than comparable methods.
Abstract: This paper presents a patchwork-based audio watermarking method to resist de-synchronization attacks such as pitch-scaling, time-scaling, and jitter attacks. At the embedding stage, the watermarks are embedded into the host audio signal in the discrete cosine transform (DCT) domain. Then, a set of synchronization bits are implanted into the watermarked signal in the logarithmic DCT (LDCT) domain. At the decoding stage, we analyze the received audio signal in the LDCT domain to find the scaling factor imposed by an attack. Then, we modify the received signal to remove the scaling effect, together with the embedded synchronization bits. After that, watermarks are extracted from the modified signal. Simulation results show that at the embedding rate of 10 bps, the proposed method achieves 98.9% detection rate on average under the considered de-synchronization attacks. At the embedding rate of 16 bps, it can still obtain 94.7% detection rate on average. So, the proposed method is much more robust to de-synchronization attacks than other patchwork watermarking methods. Compared with the audio watermarking methods designed for tackling de-synchronization attacks, our method has much higher embedding capacity.
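A simplified sketch of the patchwork principle in the DCT domain: two pseudo-random subsets of mid-frequency coefficients are shifted in opposite directions to embed one bit, and the bit is read back from the mean difference of the two subsets. The paper's logarithmic-DCT synchronization stage and its exact embedding rule are not reproduced; the index range, subset sizes, and strength are illustrative assumptions.

```python
# Patchwork-style embedding of one bit in the DCT domain of an audio frame.
import numpy as np
from scipy.fft import dct, idct

def embed_bit(frame, bit, key=42, strength=0.5):
    c = dct(frame, norm='ortho')
    rng = np.random.default_rng(key)
    idx = rng.permutation(np.arange(100, 400))      # mid-frequency indices (illustrative)
    A, B = idx[:150], idx[150:300]
    s = strength if bit == 1 else -strength
    c[A] += s
    c[B] -= s
    return idct(c, norm='ortho')

def detect_bit(frame, key=42):
    c = dct(frame, norm='ortho')
    rng = np.random.default_rng(key)
    idx = rng.permutation(np.arange(100, 400))
    A, B = idx[:150], idx[150:300]
    return 1 if (c[A].mean() - c[B].mean()) >= 0 else 0

audio = np.random.default_rng(0).normal(size=4096)
print(detect_bit(embed_bit(audio, 1)))   # -> 1
```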

Journal ArticleDOI
TL;DR: A method to detect video tampering and distinguish it from common video processing operations, such as recompression, noise, and brightness increase, using a practical watermarking scheme for real-time authentication of digital video, implemented and evaluated using the H.264/AVC codec.
Abstract: This paper presents a method to detect video tampering and distinguish it from common video processing operations, such as recompression, noise, and brightness increase, using a practical watermarking scheme for real-time authentication of digital video. In our method, the watermark signals represent the macroblock's and frame's indices, and are embedded into the nonzero quantized discrete cosine transform value of blocks, mostly the last nonzero values, enabling our method to detect spatial, temporal, and spatiotemporal tampering. Our method can be easily configured to adjust transparency, robustness, and capacity of the system according to the specific application at hand. In addition, our method takes advantage of content-based cryptography and increases the security of the system. While our method can be applied to any modern video codec, including the recently released high-efficiency video coding standard, we have implemented and evaluated it using the H.264/AVC codec, and we have shown that compared with the existing similar methods, which also embed extra bits inside video frames, our method causes significantly smaller video distortion, leading to a PSNR degradation of about 0.88 dB and structural similarity index decrease of 0.0090 with only 0.05% increase in bitrate, and with the bit correct rate of 0.71 to 0.88 after H.264/AVC recompression.

Journal ArticleDOI
TL;DR: This paper proposes a collection of twelve approximations for the 8-point DCT based on integer functions that are suitable for hardware implementation in dedicated architectures and are assessed in the context of JPEG-like image compression.

Journal ArticleDOI
TL;DR: By formulating the hidden data detection as a hypothesis testing, this paper studies the most powerful likelihood ratio test for the steganalysis of Jsteg algorithm and establishes theoretically its statistical performance.
Abstract: The goal of this paper is to propose a statistical model of quantized discrete cosine transform (DCT) coefficients. It relies on a mathematical framework of studying the image processing pipeline of a typical digital camera instead of fitting empirical data with a variety of popular models proposed in the literature. To highlight the accuracy of the proposed model, this paper exploits it for the detection of hidden information in JPEG images. By formulating the hidden data detection as a hypothesis testing problem, this paper studies the most powerful likelihood ratio test for the steganalysis of the Jsteg algorithm and establishes theoretically its statistical performance. Based on the proposed model of DCT coefficients, a maximum likelihood estimator for the embedding rate is also designed. Numerical results on simulated and real images emphasize the accuracy of the proposed model and the performance of the proposed test.

Journal ArticleDOI
01 Sep 2014-Optik
TL;DR: A robust image hashing with dominant discrete cosine transform (DCT) coefficients is proposed that converts the input image to a normalized image, divides it into non-overlapping blocks, extracts dominant DCT coefficients in the first row/column of each block to construct feature matrices, and finally conducts matrix compression by calculating and quantifying column distances.

Journal ArticleDOI
TL;DR: Watermarking is the most widely used technology in the field of copyright and biological-information protection; in this paper, the authors use quantization-based digital watermarking on the electrocardiogram (ECG) to protect patient rights and information.
Abstract: Watermarking is the most widely used technology in the field of copyright and biological information protection. In this paper, we use quantization based digital watermark encryption technology on the Electrocardiogram (ECG) to protect patient rights and information. Three transform domains, DWT, DCT, and DFT are adopted to implement the quantization based watermarking technique. Although the watermark embedding process is not invertible, the change of the PQRST complexes and amplitude of the ECG signal is very small and so the watermarked data can meet the requirements of physiological diagnostics. In addition, the hidden information can be extracted without knowledge of the original ECG data. In other words, the proposed watermarking scheme is blind. Experimental results verify the efficiency of the proposed scheme.
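A hedged sketch of quantization-based (QIM-style) watermark embedding on transform coefficients, the general technique named above; the paper applies such a scheme in the DWT, DCT, and DFT domains on ECG samples, which is not reproduced exactly here. The step size `delta` and the choice of coefficients are illustrative assumptions.

```python
# QIM-style embedding: quantize selected DCT coefficients to an even/odd lattice per bit.
import numpy as np
from scipy.fft import dct, idct

def qim_embed(signal, bits, delta=0.05):
    """Embed one bit per selected DCT coefficient by forcing quantizer-index parity."""
    c = dct(signal, norm='ortho')
    for k, b in enumerate(bits):
        q = np.round(c[k] / delta)
        if int(q) % 2 != b:              # force quantizer index parity to match the bit
            q += 1
        c[k] = q * delta
    return idct(c, norm='ortho')

def qim_extract(signal, n_bits, delta=0.05):
    c = dct(signal, norm='ortho')
    return [int(np.round(c[k] / delta)) % 2 for k in range(n_bits)]

ecg_like = np.random.default_rng(0).normal(size=1024)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked = qim_embed(ecg_like, bits)
print(qim_extract(marked, len(bits)))    # -> [1, 0, 1, 1, 0, 0, 1, 0]
```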

Journal ArticleDOI
TL;DR: Experimental results show that the proposed JPEG anti-forensic method outperforms the state-of-the-art methods by achieving a better tradeoff between JPEG forensic undetectability and the visual quality of processed images.
Abstract: This paper proposes a JPEG anti-forensic method, which aims at removing from a given image the footprints left by JPEG compression, in both the spatial domain and the DCT domain. With reasonable loss of image quality, the proposed method can defeat existing forensic detectors that attempt to identify traces of the image's JPEG compression history or of JPEG anti-forensic processing. In our framework, a total variation-based deblocking operation is first applied; the partly recovered DCT information is thereafter used to build an adaptive local dithering signal model, which is able to bring the DCT histogram of the processed image close to that of the original one. Then, a perceptual DCT histogram smoothing is carried out by solving a simplified assignment problem, where the cost function is established as the total perceptual quality loss due to the DCT coefficient modification. The second-round deblocking and de-calibration operations successfully bring the image statistics that are used by the JPEG forensic detectors back to normal status. Experimental results show that the proposed method outperforms the state-of-the-art methods by achieving a better tradeoff between JPEG forensic undetectability and the visual quality of processed images. Moreover, the application of the proposed anti-forensic method in disguising double JPEG compression artifacts is proven to be feasible by experiments.

Journal ArticleDOI
TL;DR: An improved phase unwrapping algorithm is proposed that can unwrap the phase more rapidly and accurately than most previous methods, especially for large-grid phase unwrapping where the computational cost is high.

Journal ArticleDOI
TL;DR: A steganographic scheme is proposed based on varying the discrete cosine transform coefficients of an image so that the image recovered from the modified coefficients can be transformed again into the correct data-hiding coefficients.

Journal ArticleDOI
TL;DR: This study is focused on improving the recognition rate and processing time of facial recognition systems by using an AdaBoost-based hypothesis to select a few hundred Gabor features that are potential candidates for expression recognition.
Abstract: This study is focused on improving the recognition rate and processing time of facial recognition systems. First, the skin is detected by pixel-based methods to reduce the search space for the maximum rejection classifier (MRC), which detects the face. The detected face is normalized by a discrete cosine transform (DCT) and down-sampled by a Bessel transform. Gabor feature extraction techniques were utilized to extract thousands of facial features that represent facial deformation patterns. An AdaBoost-based hypothesis is formulated to select a few hundred Gabor features that are potential candidates for expression recognition. The selected features were fed into a support vector machine (SVM) classifier to train it. Average recognition rates of 97.57% and 92.33% are registered on the JAFFE and Yale databases, respectively. The execution time of the proposed method is also significantly lower than that of others. Generally, the proposed method exhibits superior performance compared to other methods.

Journal ArticleDOI
TL;DR: It was concluded from the results that DDCTav performs poorly and DDCTek performs slightly better than DDCTmx; moreover, DDCTek is computationally simple and easily implementable on target hardware.
Abstract: A multi-sensor image fusion algorithm based on a directional Discrete Cosine Transform (DDCT) - Principal Component Analysis (PCA) hybrid technique has been developed and evaluated. The input images were divided into non-overlapping square blocks and the fusion process was carried out on the corresponding blocks. The algorithm works in two stages. In the first stage, modes 0 to 8 are applied to the images to be fused. For each mode, the coefficients from the images to be fused are used in the fusion process; the same procedure is repeated for the other modes. Three different fusion rules are used in the fusion process, viz., (1) averaging the corresponding coefficients (DDCTav), (2) choosing the corresponding frequency band with maximum energy (DDCTek), and (3) choosing the corresponding coefficient with maximum absolute value (DDCTmx) between the images. After this stage, there are eight fused images, one from each mode. In the second stage, these eight fused images are fused using PCA. The performance of these algorithms was compared using fusion quality evaluation metrics such as root mean square error (RMSE), quality index (QI), spatial frequency, and fusion quality index (FQI). It was concluded from the results that DDCTav performs poorly and DDCTek performs slightly better than DDCTmx. Moreover, DDCTek is computationally simple and easily implementable on target hardware. Matlab code has been provided for better understanding.
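A hedged sketch of one of the three fusion rules described above (the DDCTmx maximum-absolute-value rule), written in Python rather than the Matlab code provided with the paper, and applied with an ordinary block DCT instead of the directional DCT modes; the second-stage PCA fusion is also omitted.

```python
# Block-DCT fusion using the max-absolute-coefficient rule (illustrative, non-directional DCT).
import numpy as np
from scipy.fft import dctn, idctn

def fuse_max_abs(img_a, img_b, block=8):
    """Block-wise DCT fusion: for each coefficient keep the input with larger magnitude."""
    H, W = img_a.shape
    out = np.zeros_like(img_a, dtype=float)
    for i in range(0, H - H % block, block):
        for j in range(0, W - W % block, block):
            A = dctn(img_a[i:i+block, j:j+block], norm='ortho')
            B = dctn(img_b[i:i+block, j:j+block], norm='ortho')
            F = np.where(np.abs(A) >= np.abs(B), A, B)
            out[i:i+block, j:j+block] = idctn(F, norm='ortho')
    return out

rng = np.random.default_rng(0)
a, b = rng.uniform(0, 255, (64, 64)), rng.uniform(0, 255, (64, 64))
print(fuse_max_abs(a, b).shape)
```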

Proceedings ArticleDOI
23 May 2014
TL;DR: A new embedding algorithm (NEA) for digital watermarking is proposed and evaluated by comparing its performance with Cox's algorithm; in the near future, the performance of the NEA will be compared with other algorithms such as Gaussian sequence, image fusion, and nonlinear quantization embedding under various attack conditions.
Abstract: The authenticity of content or matter is a crucial factor in solving the problems of copying, modifying, and distributing intellectual property in an illegal way. Watermarking can resolve the stealing problem of intellectual property. This paper considers a robust image watermarking technique based on the discrete wavelet transform (DWT) and discrete cosine transform (DCT), called hybrid watermarking. The hybrid watermarking is performed by two-level, three-level, and four-level DWT followed by the respective DCT on the host image. A new embedding algorithm (NEA) for digital watermarking is proposed in this paper. The simulation results are compared between Cox's additive embedding algorithm and the NEA for an additive white Gaussian noise (AWGN) attack and without attack. Both algorithms use the hybrid watermarking. The NEA gives 3.04 dB and 9.33 dB better peak signal-to-noise ratio (PSNR) compared to Cox's additive algorithm for the 4-level DWT under the AWGN attack and without attack, respectively. Moreover, the NEA extracts the marked image 46 times better than Cox's additive algorithm in 2-level DWT with the AWGN attack. This means the NEA can embed larger marks, and high-quality marks can be extracted from the embedded watermark even under attack conditions. Though the NEA is evaluated in this paper by comparing its performance with Cox's algorithm, the performance of the NEA will be compared with other algorithms such as Gaussian sequence, image fusion, and nonlinear quantization embedding under various attack conditions in the near future.
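A hedged sketch of the comparison baseline only: Cox's additive spread-spectrum rule v_i' = v_i * (1 + alpha * w_i) applied to the largest-magnitude DCT coefficients of the host. The proposed NEA is not specified in the abstract, so it is not reproduced here; `alpha` and the number of marked coefficients are assumptions.

```python
# Cox-style additive spread-spectrum embedding in the global 2-D DCT of the host image.
import numpy as np
from scipy.fft import dctn, idctn

def cox_embed(host, watermark_bits, alpha=0.1, n_coeffs=1000):
    C = dctn(host, norm='ortho')
    flat = C.ravel()
    mags = np.abs(flat)
    mags[0] = -1.0                                    # skip the DC coefficient
    idx = np.argsort(mags)[-n_coeffs:]                # perceptually significant coefficients
    w = np.where(np.asarray(watermark_bits[:n_coeffs]) > 0, 1.0, -1.0)
    flat[idx] = flat[idx] * (1.0 + alpha * w)
    return idctn(flat.reshape(C.shape), norm='ortho')

host = np.random.default_rng(0).uniform(0, 255, (256, 256))
bits = np.random.default_rng(1).integers(0, 2, 1000)
marked = cox_embed(host, bits)
print(float(np.abs(marked - host).mean()))            # average embedding distortion
```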

Journal ArticleDOI
TL;DR: It is shown that DCT hashing has significantly better retrieval accuracy and it is more efficient compared to other popular state-of-the-art hash algorithms.
Abstract: Descriptors such as local binary patterns perform well for face recognition. Searching large databases using such descriptors has been problematic due to the cost of the linear search, and the inadequate performance of existing indexing methods. We present Discrete Cosine Transform (DCT) hashing for creating index structures for face descriptors. Hashes play the role of keywords: an index is created, and queried to find the images most similar to the query image. Common hash suppression is used to improve retrieval efficiency and accuracy. Results are shown on a combination of six publicly available face databases (LFW, FERET, FEI, BioID, Multi-PIE, and RaFD). It is shown that DCT hashing has significantly better retrieval accuracy and it is more efficient compared to other popular state-of-the-art hash algorithms.
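An illustrative sketch of a DCT hash: keep a small block of low-frequency 2-D DCT coefficients of the descriptor image and binarize them against their median to obtain hash bits usable as index keywords. The paper's exact hash construction, common-hash suppression, and index structure are not reproduced; this particular binarization is an assumption.

```python
# Low-frequency DCT hash of a descriptor image (illustrative, not the paper's exact scheme).
import numpy as np
from scipy.fft import dctn

def dct_hash(descriptor_image, k=8):
    C = dctn(descriptor_image.astype(float), norm='ortho')
    low = C[:k, :k].ravel()[1:]            # low-frequency coefficients, DC dropped
    return (low > np.median(low)).astype(np.uint8)

img = np.random.default_rng(0).uniform(0, 255, (64, 64))
print(dct_hash(img))
```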

Journal ArticleDOI
TL;DR: The modeling performance and the data reduction feature of the GMTCM make it a desirable choice for modeling discrete or integer DCT coefficients in real-world image or video applications, as summarized in a few of the authors' further studies on quantization design, entropy coding design, and image understanding and management.
Abstract: The distributions of discrete cosine transform (DCT) coefficients of images are revisited on a per-image basis. To better handle the heavy-tail phenomenon commonly seen in the DCT coefficients, a new model dubbed a transparent composite model (TCM) is proposed and justified for both modeling accuracy and an additional data reduction capability. Given a sequence of the DCT coefficients, a TCM first separates the tail from the main body of the sequence. Then, a uniform distribution is used to model the DCT coefficients in the heavy tail, whereas a different parametric distribution is used to model data in the main body. The separation boundary and other parameters of the TCM can be estimated via maximum likelihood estimation. Efficient online algorithms are proposed for parameter estimation and their convergence is also proved. Experimental results based on Kullback-Leibler divergence and the χ2 test show that for real-valued continuous AC coefficients, the TCM based on the truncated Laplacian offers the best tradeoff between modeling accuracy and complexity. For discrete or integer DCT coefficients, the discrete TCM based on truncated geometric distributions (GMTCM) models the AC coefficients more accurately than pure Laplacian models and generalized Gaussian models in the majority of cases while having simplicity and practicality similar to those of pure Laplacian models. In addition, it is demonstrated that the GMTCM also exhibits a good capability of data reduction or feature extraction: the DCT coefficients in the heavy tail identified by the GMTCM are truly outliers, and these outliers represent an outlier image revealing some unique global features of the image. Overall, the modeling performance and the data reduction feature of the GMTCM make it a desirable choice for modeling discrete or integer DCT coefficients in real-world image or video applications, as summarized in a few of our further studies on quantization design, entropy coding design, and image understanding and management.