
Showing papers on "Digital watermarking published in 2009"


Journal ArticleDOI
TL;DR: This paper presents a reversible (lossless) watermarking algorithm for images that employs prediction errors to embed data and, in most cases, requires no location map.
Abstract: This paper presents a reversible, or lossless, watermarking algorithm for images that in most cases requires no location map. The algorithm employs prediction errors to embed data into an image. A sorting technique is used to order the prediction errors by the magnitude of their local variance. Using the sorted prediction errors and, in the rare cases where it is needed, a reduced-size location map allows more data to be embedded into the image with less distortion. The performance of the proposed reversible watermarking scheme is evaluated on different images and compared with the methods of Kamstra and Heijmans, Thodi and Rodriguez, and Lee et al. The results clearly indicate that the proposed scheme can embed more data with less distortion.

773 citations
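
To make the embedding step concrete, here is a minimal numpy sketch of prediction-error expansion with variance-based sorting, in the spirit of the scheme above. The two-neighbor predictor, the threshold, and the variance window are illustrative assumptions, not the paper's exact construction; the actual predictor, capacity control, and overflow handling are more elaborate.

```python
import numpy as np

def embed_pee(img, bits, threshold=2):
    """Toy prediction-error expansion (PEE) embedder.

    Each candidate pixel is predicted from its left and top neighbors;
    a small prediction error is expanded (doubled) to absorb one bit.
    Candidates are visited in order of increasing local variance, so
    smooth regions, which give small errors and low distortion, are
    used first. The location map and overflow handling are omitted,
    and a real scheme fixes a scan order the decoder can replay."""
    out = img.astype(np.int32).copy()
    h, w = img.shape
    coords = [(i, j) for i in range(1, h) for j in range(1, w)]
    coords.sort(key=lambda c: np.var(img[c[0]-1:c[0]+1, c[1]-1:c[1]+1]))

    k = 0
    for i, j in coords:
        if k >= len(bits):
            break
        pred = (int(img[i-1, j]) + int(img[i, j-1])) // 2
        err = int(img[i, j]) - pred
        if abs(err) < threshold:          # error small enough to expand
            out[i, j] = pred + 2 * err + bits[k]
            k += 1
    return out                            # int32; may leave [0, 255]
```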


Journal ArticleDOI
TL;DR: How photo-response nonuniformity (PRNU) of imaging sensors can be used for a variety of important digital forensic tasks, such as device identification, device linking, recovery of processing history, and detection of digital forgeries is explained.
Abstract: The article explains how photo-response nonuniformity (PRNU) of imaging sensors can be used for a variety of important digital forensic tasks, such as device identification, device linking, recovery of processing history, and detection of digital forgeries. The PRNU is an intrinsic property of all digital imaging sensors due to slight variations among individual pixels in their ability to convert photons to electrons. Consequently, every sensor casts a weak noise-like pattern onto every image it takes. This pattern, which plays the role of a sensor fingerprint, is essentially an unintentional stochastic spread-spectrum watermark that survives processing, such as lossy compression or filtering. This tutorial explains how this fingerprint can be estimated from images taken by the camera and later detected in a given image to establish image origin and integrity. Various forensic tasks are formulated as a two-channel hypothesis testing problem approached using the generalized likelihood ratio test. The introduced forensic methods are briefly illustrated with examples to give the reader a sense of how well they perform.

326 citations
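
As a rough illustration of the PRNU pipeline described above, the sketch below estimates a fingerprint by averaging image-weighted noise residuals and detects it by normalized correlation. The Gaussian filter is a stand-in for the wavelet denoiser used in this literature, and plain correlation stands in for the paper's generalized likelihood ratio test; both are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """Residual = image minus a denoised version of itself; the
    multiplicative PRNU term survives in this residual."""
    return img - gaussian_filter(img, sigma)

def estimate_prnu(images):
    """ML-style fingerprint estimate from several images of one camera:
    weight each residual by its image (PRNU is multiplicative), then
    normalize by the accumulated image energy."""
    num = sum(noise_residual(im) * im for im in images)
    den = sum(im * im for im in images)
    return num / (den + 1e-9)

def prnu_correlation(img, fingerprint):
    """Toy detection statistic: normalized correlation between the
    test image's residual and the expected pattern img * K."""
    r = noise_residual(img).ravel()
    s = (img * fingerprint).ravel()
    r, s = r - r.mean(), s - s.mean()
    return float(r @ s / (np.linalg.norm(r) * np.linalg.norm(s) + 1e-9))
```

Inputs are grayscale float arrays from the same camera; origin would then be decided by thresholding the correlation statistic.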


Journal ArticleDOI
TL;DR: A new semi-blind reference watermarking scheme based on the discrete wavelet transform (DWT) and singular value decomposition (SVD) is proposed for copyright protection and authentication, and it is shown that the scheme also withstands the ambiguity attack.

269 citations


01 Jan 2009
TL;DR: This paper introduces a digital watermarking algorithm based on the Discrete Cosine Transform and the Discrete Wavelet Transform which is invisible and robust to common image processing operations.
Abstract: This paper introduces a digital watermarking algorithm based on the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). In accordance with the characteristics of human vision, the watermark information, after being DCT-transformed, is inserted into the high-frequency band of the wavelet-transformed image. The watermark is then extracted with the help of the original image and the watermarked image. Simulation results show that the algorithm is invisible and robust to common image processing operations.

199 citations


Journal ArticleDOI
01 Mar 2009
TL;DR: The aim of such a knowledge digest is for it to be used for retrieving similar images with either the same findings or differential diagnoses, and it summarizes the symbolic descriptions of the image, the symbolic description of the findings semiology, and the similarity rules that contribute to balancing the importance of previous descriptors when comparing images.
Abstract: To improve medical image sharing in applications such as e-learning or remote diagnosis aid, we propose to make the image more usable by watermarking it with a digest of its associated knowledge. The aim of such a knowledge digest (KD) is for it to be used for retrieving similar images with either the same findings or differential diagnoses. It summarizes the symbolic descriptions of the image, the symbolic descriptions of the findings semiology, and the similarity rules that contribute to balancing the importance of these descriptors when comparing images. Instead of modifying the image file format by adding some extra header information, watermarking is used to embed the KD in the pixel gray-level values of the corresponding images. When shared through open networks, watermarking also helps to convey reliability proofs (integrity and authenticity) of an image and its KD. The value of these new image functionalities is illustrated by updating the distributed users' databases within the framework of an e-learning application demonstrator for endoscopic semiology.

186 citations


Journal ArticleDOI
TL;DR: This paper proposes a content-based watermarking scheme that combines invariant feature extraction with watermark embedding using Tchebichef moments to achieve robustness to common image processing operations as well as blind detection.

156 citations


Proceedings Article
01 Jan 2009
TL;DR: This work proposes a new, non-blind watermarking scheme called RAINBOW that is able to use delays hundreds of times smaller than existing watermarks by eliminating the interference caused by the flow in the blind case, and that generates orders of magnitude lower rates of false errors than passive traffic analysis while using only a few hundred observed packets.
Abstract: Linking network flows is an important problem in intrusion detection as well as anonymity. Passive traffic analysis can link flows but requires long periods of observation to reduce errors. Watermarking techniques allow for better precision and blind detection, but they do so by introducing significant delays to the traffic flow, enabling attacks that detect and remove the mark, while at the same time slowing down legitimate traffic. We propose a new, non-blind watermarking scheme called RAINBOW that is able to use delays hundreds of times smaller than existing watermarks by eliminating the interference caused by the flow in the blind case. As a result, our watermark is invisible to detection, as confirmed by experiments using information-theoretic detection tools. We analyze the error rates of our scheme based on a mathematical model of network traffic and jitter. We also validate the analysis using an implementation running on PlanetLab. We find that our scheme generates orders of magnitude lower rates of false errors than passive traffic analysis, while using only a few hundred observed packets. We also extend our scheme so that it is robust to packet drops and repacketization, and show that flows can still be reliably linked, though at the cost of somewhat longer observation periods.

130 citations
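
A toy numpy sketch of the non-blind idea: because the watermarker records the original inter-packet delays (IPDs), the detector can subtract them out, leaving only the tiny watermark delays plus network jitter. The amplitudes, jitter model, and correlation detector here are illustrative assumptions, not the paper's exact construction (which also handles packet drops and repacketization).

```python
import numpy as np

rng = np.random.default_rng(1)

def embed(ipds, wm, amp=0.005):
    """Add tiny +/- watermark delays (here 5 ms) to the recorded
    inter-packet delays. A real embedder adds a base delay so all
    adjustments stay causal; that detail is omitted here."""
    return ipds + amp * wm

def detect(observed_ipds, recorded_ipds, wm):
    """Non-blind detection: subtracting the recorded IPDs cancels the
    flow's own variability, leaving watermark plus network jitter."""
    diff = observed_ipds - recorded_ipds
    diff = diff - diff.mean()
    return float(diff @ wm / (np.linalg.norm(diff) * np.linalg.norm(wm)))

n = 300                                  # a few hundred observed packets
ipds = rng.exponential(0.05, n)          # flow's own inter-packet delays (s)
wm = rng.choice([-1.0, 1.0], n)          # secret +/- watermark sequence
jitter = rng.normal(0.0, 0.001, n)       # ~1 ms network jitter
observed = embed(ipds, wm) + jitter
print(detect(observed, ipds, wm))        # close to 1.0 -> flows are linked
```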


Journal ArticleDOI
TL;DR: Experimental results show that the proposed method is quite robust under both non-geometric and geometric attacks and can extract the watermark without using the original image or watermark.
Abstract: This paper proposes a blind watermarking algorithm based on maximum wavelet coefficient quantization for copyright protection. The wavelet coefficients are grouped into blocks of different sizes, and blocks are randomly selected from different subbands. We add different energies to the maximum wavelet coefficient, under the constraint that the maximum wavelet coefficient always remains the maximum in its block. Because the watermark is embedded in local maximum coefficients, it can effectively resist attacks. Also, thanks to the block-based design, we can extract the watermark without using the original image or watermark. Experimental results show that the proposed method is quite robust under both non-geometric and geometric attacks.

128 citations
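
The following is a minimal sketch of the block-level idea under stated assumptions: encode each bit in the gap between a block's largest-magnitude wavelet coefficient and the runner-up, keeping the maximum the maximum so extraction is blind. The gap encoding and the detection threshold are our simplifications; the paper's energy assignment and its random block selection across subbands differ.

```python
import numpy as np

def embed_block(block, bit, energy=8.0):
    """Toy max-coefficient embedder: push the block's largest-magnitude
    coefficient to sit `energy` (bit 0) or `2*energy` (bit 1) above the
    runner-up, so it stays the maximum and the gap encodes the bit."""
    b = block.astype(float).ravel()
    mags = np.abs(b)
    idx = int(np.argmax(mags))
    second = np.partition(mags, -2)[-2]
    sign = np.sign(b[idx]) or 1.0        # preserve the coefficient's sign
    b[idx] = sign * (second + energy * (2 if bit else 1))
    return b.reshape(block.shape)

def extract_block(block, energy=8.0):
    """Blind extraction: only the gap between the two largest magnitudes
    is needed, not the original image."""
    mags = np.abs(block).ravel()
    top2 = np.partition(mags, -2)[-2:]
    return 1 if (top2[1] - top2[0]) > 1.5 * energy else 0
```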


Journal ArticleDOI
TL;DR: Experimental results show that the proposed approach can effectively improve the quality of the watermarked image and the robustness of the embedded watermark against various attacks.

117 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed scheme not only provides satisfactory imperceptibility, but also achieves superior tamper detection and localization accuracy under different attacks, such as the cut-and-paste attack and the vector quantization (VQ) attack.
Abstract: In recent years, various fragile watermarking schemes have been proposed for image authentication and integrity verification. In this paper, a novel block-based fragile watermarking scheme for image authentication and tamper proofing is proposed. Realizing that block-wise dependency is a basic requirement for fragile watermarking schemes to withstand counterfeiting attacks, the proposed scheme uses the fuzzy c-means (FCM) clustering technique to create the relationship between image blocks. The effectiveness of the proposed scheme is demonstrated through a series of attack simulations. Experimental results show that the proposed scheme not only provides satisfactory imperceptibility, but also achieves superior tamper detection and localization accuracy under different attacks, such as the cut-and-paste attack and the vector quantization (VQ) attack.

112 citations


Journal ArticleDOI
TL;DR: A novel fragile watermarking scheme with a hierarchical mechanism, in which pixel-derived and block-derived watermark data are carried by the least significant bits of all pixels, capable of recovering the original watermarked version without any error.

Journal ArticleDOI
TL;DR: This paper presents a 3-level RDWT biometric watermarking algorithm to embed the voice biometric MFC coefficients in a color face image of the same individual for increased robustness, security and accuracy.

Journal Article
TL;DR: Compared with current watermarking algorithms based on the joint DWT-DCT transform, the proposed system achieves significantly higher robustness against enhancement and noise-addition attacks.
Abstract: In this paper, a new robust digital image watermarking algorithm based on a joint DWT-DCT transformation is proposed. A binary watermark logo is scrambled by the Arnold cat map and embedded in certain coefficient sets of a 3-level DWT transform of the host image. Then, the DCT transform of each selected DWT sub-band is computed, and PN-sequences representing the watermark bits are embedded in the middle-frequency coefficients of the corresponding DCT block. In the extraction procedure, the watermarked image, which may have been attacked, is pre-filtered by a combination of sharpening and Laplacian-of-Gaussian filters to increase the distinction between the host image and the watermark information. Subsequently, the same procedure as in the embedding process is used to extract the DCT middle-frequency coefficients of each sub-band. Finally, the correlation between the mid-band coefficients and the PN-sequences is calculated to determine the watermark bits. Experimental results show that high imperceptibility is achieved as well as high robustness against common signal processing attacks. Compared with current watermarking algorithms based on the joint DWT-DCT transform, the proposed system achieves significantly higher robustness against enhancement and noise-addition attacks.
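
For reference, a small sketch of the Arnold cat map scrambling step mentioned above, in its usual textbook form (the iteration count can serve as a key):

```python
import numpy as np

def arnold_cat(img, iterations):
    """Scramble a square N x N image with the Arnold cat map
    (x, y) -> ((x + y) mod N, (x + 2y) mod N). The map is periodic,
    so descrambling is simply iterating for the rest of the period."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold cat map needs a square image"
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out
```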

Journal ArticleDOI
TL;DR: It is observed that agglutinative languages are somewhat more amenable to morphosyntax-based natural language watermarking, and that the free word order of a language such as Turkish is an added advantage.

Journal ArticleDOI
TL;DR: A very high-capacity and low-distortion 3D steganography scheme based on a novel multi-layered embedding scheme to hide secret messages in the vertices of 3D polygon models that can provide much higher hiding capacity than other state-of-the-art approaches.
Abstract: In this paper, we present a very high-capacity and low-distortion 3D steganography scheme. Our approach is based on a novel multi-layered embedding scheme that hides secret messages in the vertices of 3D polygon models. Experimental results show that the cover-model distortion remains very small even as the number of hiding layers ranges from 7 to 13. To the best of our knowledge, this approach provides much higher hiding capacity than other state-of-the-art approaches, while satisfying the basic low-distortion and security requirements for steganography on 3D models.

Journal ArticleDOI
03 Feb 2009
TL;DR: An algorithm for estimating the roughness of a 3D mesh, as a local measure of geometric noise on the surface is introduced, which is based on curvature analysis on local windows of the mesh and is independent of the resolution/connectivity of the object.
Abstract: 3D models are subject to a wide variety of processing operations such as compression, simplification, or watermarking, which may introduce geometric artifacts on the shape. The main issue is to maximize the compression/simplification ratio or the watermark strength while minimizing these visual degradations. However, few algorithms exploit the human visual system to hide these degradations, although perceptual attributes could be quite relevant for this task. In particular, the masking effect refers to the fact that one visual pattern can hide the visibility of another. In this context we introduce an algorithm for estimating the roughness of a 3D mesh, as a local measure of geometric noise on the surface. Indeed, a textured (or rough) region can hide geometric distortions much better than a smooth one. Our measure is based on curvature analysis on local windows of the mesh and is independent of the resolution/connectivity of the object. The accuracy and robustness of the measure, together with its relevance to visual masking, have been demonstrated through extensive comparisons with the state of the art and a subjective experiment. Two applications are also presented, in which the roughness is used to guide (and improve) compression and watermarking algorithms, respectively.

Journal ArticleDOI
TL;DR: The performance of a fragile watermarking method based on discrete cosine transform (DCT) has been improved in this paper by using intelligent optimization algorithms (IOA), namely genetic algorithm, differential evolution algorithm, clonal selection algorithm and particle swarm optimization algorithm.

Journal ArticleDOI
TL;DR: Simulation results show that MPM is robust against various common attacks such as noise addition, filtering, echo, and MP3 compression, and that it provides greater robustness and better inaudibility of the watermark insertion than earlier patchwork methods.
Abstract: This paper presents a Multiplicative Patchwork Method (MPM) for audio watermarking. The watermark signal is embedded by selecting two subsets of the host signal features and modifying one subset multiplicatively according to the watermark data, whereas the other subset is left unchanged. The method is implemented in the wavelet domain, and approximation coefficients are used to embed data. In order to achieve error-free detection, the watermark data is inserted only in the frames where the ratio of the energies of the subsets lies between two predefined values. Also, in order to control the inaudibility of the watermark insertion, we use an iterative algorithm to reach a desired quality for the watermarked audio signal. The quality of the watermarked signal is evaluated in each iteration using the Perceptual Evaluation of Audio Quality (PEAQ) method. The probability of error is also derived for the watermarking scheme, and simulation results confirm the validity of the analytical derivations. Simulation results show that MPM is robust against various common attacks such as noise addition, filtering, echo, and MP3 compression. In comparison to the original patchwork method, its modified versions, and some recent methods, MPM provides greater robustness and better inaudibility of the watermark insertion.
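
A minimal sketch of the multiplicative patchwork idea, assuming the wavelet approximation coefficients are already at hand: a keyed mask splits them into subsets A and B, A is scaled by (1 ± α) according to the bit, and detection compares subset energies. The energy-ratio detector and the fixed α are our simplifications; the paper additionally skips frames whose pre-embedding energy ratio falls outside a preset band and tunes the strength iteratively against a PEAQ quality target.

```python
import numpy as np

def embed_mpm(coeffs, bit, alpha=0.02, seed=7):
    """Toy multiplicative patchwork: a keyed mask splits the (wavelet
    approximation) coefficients into subsets A and B; A is scaled by
    (1 + alpha) or (1 - alpha) depending on the bit, B is untouched."""
    rng = np.random.default_rng(seed)          # seed plays the secret key
    mask = rng.random(coeffs.shape) < 0.5
    out = coeffs.astype(float).copy()
    out[mask] *= (1 + alpha) if bit else (1 - alpha)
    return out

def extract_mpm(coeffs, seed=7):
    """Compare subset energies; assumes A and B had roughly equal
    energy before embedding (which the frame-selection rule in the
    paper is there to guarantee)."""
    rng = np.random.default_rng(seed)
    mask = rng.random(coeffs.shape) < 0.5
    ratio = np.sum(coeffs[mask] ** 2) / (np.sum(coeffs[~mask] ** 2) + 1e-12)
    return 1 if ratio > 1.0 else 0
```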

Journal ArticleDOI
TL;DR: This paper presents a lossless watermarking scheme in the sense that the original image can be exactly recovered from the watermarked one, with the purpose of verifying the integrity and authenticity of medical images.
Abstract: This paper presents a lossless watermarking scheme, in the sense that the original image can be exactly recovered from the watermarked one, for the purpose of verifying the integrity and authenticity of medical images. In addition, the scheme has the capability of not introducing any embedding-induced distortion in the region of interest (ROI) of a medical image. Difference expansion of adjacent pixel values is employed to embed several bits. A region of embedding, represented by a polygon, is chosen deliberately to prevent embedding distortion in the ROI. Only the vertex information of the polygon is transmitted to the decoder for reconstructing the embedding region, which improves the embedding capacity considerably. The digital signature of the whole image is embedded for verifying the integrity of the image. An identifier present in the electronic patient record (EPR) is embedded for verifying authenticity by simultaneously processing the watermarked image and the EPR. In combination with a fingerprint system, the patient's fingerprint information is embedded into several image slices and then extracted for verifying authenticity.
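
To illustrate the difference-expansion primitive the scheme builds on (in the style of Tian's method, on a pair of adjacent pixel values; overflow checks and the polygonal-ROI bookkeeping are omitted):

```python
def de_embed(x, y, bit):
    """Embed one bit in a pixel pair by difference expansion:
    keep the integer average, double the difference, append the bit."""
    avg = (x + y) // 2                  # floor average is preserved
    h = 2 * (x - y) + bit               # expanded difference
    return avg + (h + 1) // 2, avg - h // 2

def de_extract(x2, y2):
    """Recover the bit and the original pair exactly."""
    h2 = x2 - y2
    bit, h = h2 & 1, h2 >> 1            # LSB is the bit; halve to restore
    avg = (x2 + y2) // 2
    return avg + (h + 1) // 2, avg - h // 2, bit

assert de_extract(*de_embed(118, 115, 1)) == (118, 115, 1)
```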

Journal ArticleDOI
TL;DR: Practical and theoretical analyses of the security offered by watermarking and data hiding methods based on spread spectrum reveal fundamental limits and bounds on security and provide insight into other properties, such as the impact of the embedding parameters, and the tradeoff between robustness and security.
Abstract: This paper presents both theoretical and practical analyses of the security offered by watermarking and data hiding methods based on spread spectrum. In this context, security is understood as the difficulty of estimating the secret parameters of the embedding function based on the observation of watermarked signals. On the theoretical side, the security is quantified from an information-theoretic point of view by means of the equivocation about the secret parameters. The main results reveal fundamental limits and bounds on security and provide insight into other properties, such as the impact of the embedding parameters, and the tradeoff between robustness and security. On the practical side, workable estimators of the secret parameters are proposed and theoretically analyzed for a variety of scenarios, providing a comparison with previous approaches, and showing that the security of many schemes used in practice can be fairly low.
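
In rough notation (ours, not necessarily the paper's), the setting can be sketched as follows: additive spread-spectrum embedding with a secret carrier, with security measured by the attacker's remaining uncertainty about that carrier after observing watermarked signals.

```latex
% Additive spread-spectrum embedding of a bit m_k \in \{-1,+1\}
% into host x_k using a secret carrier u (the parameter to protect):
%   y_k = x_k + m_k u
% Security against an attacker observing N_o watermarked signals is
% quantified by the equivocation (conditional differential entropy)
% about u; as N_o grows the equivocation shrinks, which is one face
% of the robustness-versus-security tradeoff.
\[
  \mathbf{y}_k = \mathbf{x}_k + m_k \mathbf{u},
  \qquad
  \text{security} \;\longleftrightarrow\;
  h\!\left(\mathbf{u} \,\middle|\, \mathbf{y}_1,\dots,\mathbf{y}_{N_o}\right)
\]
```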

Journal ArticleDOI
TL;DR: An image source coding forensic detector is constructed that identifies which source encoder was applied and what the coding parameters were, along with confidence measures for the result, and simulations show that the proposed system provides trustworthy performance.
Abstract: Recent developments in multimedia processing and network technologies have facilitated the distribution and sharing of multimedia through networks and increased the security demands on multimedia contents. Traditional image content protection schemes use extrinsic approaches, such as watermarking or fingerprinting. However, under many circumstances, extrinsic content protection is not possible. Therefore, there is great interest in developing forensic tools via intrinsic fingerprints to solve these problems. Source coding is a common step of natural image acquisition, so in this paper, we focus on fundamental research on digital image source coder forensics via intrinsic fingerprints. First, we investigate the unique intrinsic fingerprints of many popular image source encoders, including transform-based coding (both discrete cosine transform and discrete wavelet transform based), subband coding, differential image coding, and block processing, as the traces of evidence. Based on the intrinsic fingerprints of image source encoders, we construct an image source coding forensic detector that identifies which source encoder was applied and what the coding parameters were, along with confidence measures for the result. Our simulation results show that the proposed system provides trustworthy performance: for most test cases, the probability of detecting the correct source encoder is over 90%.

Journal ArticleDOI
TL;DR: In this paper, a new rotation and scaling invariant image watermarking scheme is proposed based on rotation invariant feature and image normalization, and the mathematical relationship between fidelity and robustness is established.
Abstract: In this paper, a new rotation and scaling invariant image watermarking scheme is proposed based on rotation invariant features and image normalization. A mathematical model is established to approximate the image based on the mixture generalized Gaussian distribution, which facilitates the analysis of the watermarking processes. Using maximum a posteriori probability based image segmentation, the cover image is segmented into several homogeneous areas. Each region can be represented by a generalized Gaussian distribution, which is critical for the mathematical analysis of the watermarking processes. The rotation invariant features are extracted from the segmented areas and are selected as reference points. Subregions centered at the feature points are used for watermark embedding and extraction. Image normalization is applied to the subregions to achieve scaling invariance. Meanwhile, the watermark embedding and extraction schemes are analyzed mathematically based on the established model. The watermark embedding strength is adjusted adaptively using the noise visibility function, and the probability of error is analyzed mathematically. The mathematical relationship between fidelity and robustness is established. The experimental results show the effectiveness and accuracy of the proposed scheme.

Journal ArticleDOI
TL;DR: The experimental results demonstrate the superiority of the proposed reversible visible watermarking scheme over existing methods; the scheme also adopts data compression to further reduce the recovery-packet size and improve embedding capacity.
Abstract: A reversible (also called lossless, distortion-free, or invertible) visible watermarking scheme is proposed to satisfy the applications, in which the visible watermark is expected to combat copyright piracy but can be removed to losslessly recover the original image. We transparently reveal the watermark image by overlapping it on a user-specified region of the host image through adaptively adjusting the pixel values beneath the watermark, depending on the human visual system-based scaling factors. In order to achieve reversibility, a reconstruction/recovery packet, which is utilized to restore the watermarked area, is reversibly inserted into non-visibly-watermarked region. The packet is established according to the difference image between the original image and its approximate version instead of its visibly watermarked version so as to alleviate its overhead. For the generation of the approximation, we develop a simple prediction technique that makes use of the unaltered neighboring pixels as auxiliary information. The recovery packet is uniquely encoded before hiding so that the original watermark pattern can be reconstructed based on the encoded packet. In this way, the image recovery process is carried out without needing the availability of the watermark. In addition, our method adopts data compression for further reduction in the recovery packet size and improvement in embedding capacity. The experimental results demonstrate the superiority of the proposed scheme compared to the existing methods.

Journal ArticleDOI
TL;DR: This paper surveys the hardware assisted solutions proposed in the literature for watermarking of multimedia objects to achieve low power usage, real-time performance, reliability, and ease of integration with existing consumer electronic devices.

Journal ArticleDOI
TL;DR: A new scaling-based image-adaptive watermarking system has been presented, which exploits human visual model for adapting the watermark data to local properties of the host image and its improved robustness is due to embedding in the low-frequency wavelet coefficients and optimal control of its strength factor from HVS point of view.
Abstract: In this paper, a new scaling-based image-adaptive watermarking system is presented, which exploits a human visual model to adapt the watermark data to the local properties of the host image. Its improved robustness is due to embedding in the low-frequency wavelet coefficients and optimal control of the strength factor from an HVS point of view. A maximum likelihood (ML) decoder aided by channel side information is used. The performance of the proposed scheme is calculated analytically and verified by simulation. Experimental results confirm the imperceptibility of the proposed method and its higher robustness against attacks compared to alternative watermarking methods in the literature.

Journal ArticleDOI
TL;DR: An adjacent-block-based statistical detection method for self-embedding watermarking techniques is proposed to accurately identify tampered blocks, and a theoretical analysis of the tamper detection performance is given.

Proceedings ArticleDOI
07 Sep 2009
TL;DR: This paper proposes an efficient buyer-seller watermarking protocol based on homomorphic public-key cryptosystem and composite signal representation in the encrypted domain and results confirm the efficiency of the proposed solution.
Abstract: Buyer-seller watermarking protocols integrate watermarking techniques with cryptography, for copyright protection, piracy tracing, and privacy protection. In this paper, we propose an efficient buyer-seller watermarking protocol based on homomorphic public-key cryptosystem and composite signal representation in the encrypted domain. A recently proposed composite signal representation allows us to reduce both the computational overhead and the large communication bandwidth which are due to the use of homomorphic public-key encryption schemes. Both complexity analysis and simulation results confirm the efficiency of the proposed solution, suggesting that this technique can be successfully used in practical applications.
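
As a toy illustration of the homomorphic building block (using the python-paillier package `phe` as a stand-in cryptosystem; the paper's protocol and its composite signal representation are considerably more involved): the seller can add an encrypted watermark to host samples without learning the watermark, and only the buyer can decrypt the result.

```python
# pip install phe   (python-paillier, used here as a stand-in cryptosystem)
from phe import paillier

# The buyer generates the keypair; the seller only ever sees ciphertexts.
buyer_pub, buyer_priv = paillier.generate_paillier_keypair(n_length=1024)

host = [118, 120, 119, 121]            # toy host-signal samples (plaintext)
wm = [1, 0, 1, 1]                      # buyer-side watermark bits
enc_wm = [buyer_pub.encrypt(b) for b in wm]

# Additive homomorphism: E(w) + x decrypts to w + x, so the seller
# embeds the watermark without ever learning it.
enc_marked = [ew + x for ew, x in zip(enc_wm, host)]

marked = [buyer_priv.decrypt(c) for c in enc_marked]
print(marked)                          # [119, 120, 120, 122]
```

The composite signal representation discussed in the paper packs many samples into one ciphertext, which is what cuts the per-sample computation and bandwidth cost of this naive one-ciphertext-per-sample approach.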

Journal ArticleDOI
TL;DR: The experimental results show that the watermarked image looks visually identical to the original and the watermark can be effectively extracted upon image processing attacks.
Abstract: This paper proposes a wavelet-tree-based watermarking method using the distance vector of a binary cluster for copyright protection. In the proposed method, wavelet trees are classified into two clusters, using the distance vector to denote binary watermark bits. The two smallest wavelet coefficients in a wavelet tree are used to reduce distortion of the watermarked image. The distance vector, which is obtained from the two smallest coefficients of a wavelet tree, is quantized to decrease image distortion. The trees are classified into two clusters so that they exhibit a sufficiently large statistical difference based on the distance vector, which is then used for subsequent watermark extraction. We compare the statistical difference and the distance vector of a wavelet tree to decide which watermark bit is embedded in the embedding process. The experimental results show that the watermarked image looks visually identical to the original and that the watermark can be effectively extracted under image processing attacks.

Proceedings ArticleDOI
16 Dec 2009
TL;DR: This paper presents a review of some of the recent research on watermarking techniques for plain text documents, and discusses the main contributions, advantages, and drawbacks of the different methods used for text watermarking in the past.
Abstract: Copyright protection of plain text traveling over the internet is crucial. Digital watermarking provides a complete copyright protection solution for this problem. Text, being the most dominant medium traveling over the internet, needs absolute protection. Text watermarking techniques have been developed in the past to protect text from illegal copying and redistribution and to prevent copyright violations. This paper presents a review of some of the recent research on watermarking techniques for plain text documents. The reviewed approaches are classified into three categories: the image-based approach, the syntactic approach, and the semantic approach. The paper discusses the main contributions, advantages, and drawbacks of the different methods used for text watermarking in the past.

Journal ArticleDOI
TL;DR: This paper proposes a novel data hiding scheme with distortion tolerance that not only prevents the quality of the processed image from being seriously degraded, but also achieves tolerance to distortion.