Watermarking digital image and video data. A state-of-the-art overview
TL;DR: The authors begin by discussing the need for watermarking and the requirements and go on to discuss digital watermarking techniques based on correlation and techniques that are not based on correlation.
Abstract: The authors begin by discussing the need for watermarking and the requirements. They go on to discuss digital watermarking techniques based on correlation and techniques that are not based on correlation.
Citations
TL;DR: In this paper, a generalization of the well-known least significant bit (LSB) modification is proposed as the data-embedding method, which introduces additional operating points on the capacity-distortion curve.
Abstract: We present a novel lossless (reversible) data-embedding technique, which enables the exact recovery of the original host signal upon extraction of the embedded information. A generalization of the well-known least significant bit (LSB) modification is proposed as the data-embedding method, which introduces additional operating points on the capacity-distortion curve. Lossless recovery of the original is achieved by compressing portions of the signal that are susceptible to embedding distortion and transmitting these compressed descriptions as a part of the embedded payload. A prediction-based conditional entropy coder which utilizes unaltered portions of the host signal as side-information improves the compression efficiency and, thus, the lossless data-embedding capacity.
1,058 citations
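As a rough illustration of the embedding step described in the abstract above, the following sketch overwrites the lowest L quantization levels of each pixel with payload symbols. It is a minimal, hypothetical rendering of generalized-LSB embedding in Python/NumPy; the compression of the original residuals that makes the scheme lossless is omitted, and the function names are invented for this sketch.

import numpy as np

def glsb_embed(pixels, payload, L=4):
    # Overwrite the lowest L levels of each pixel with a symbol in {0, ..., L-1}.
    # True reversibility would also require compressing the original residuals
    # (pixels % L) and carrying them inside the payload, which is omitted here.
    pixels = pixels.astype(np.int32)
    residuals = pixels % L
    watermarked = (pixels // L) * L + payload
    return watermarked.astype(np.uint8), residuals

def glsb_extract(watermarked, L=4):
    # Read the embedded symbols back out of the lowest L levels.
    return watermarked.astype(np.int32) % L

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
payload = rng.integers(0, 4, size=img.shape)
marked, residuals = glsb_embed(img, payload, L=4)
assert np.array_equal(glsb_extract(marked, L=4), payload)

With L=4 (two bits per pixel) this sits between plain LSB replacement (L=2) and coarser, higher-capacity quantizations, which is the additional-operating-points idea the TL;DR refers to.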
TL;DR: Circular interpretation of bijective transformations is proposed to implement a method that fulfills all quality and functionality requirements of lossless watermarking.
Abstract: The need for reversible or lossless watermarking methods has been highlighted in the literature to associate subliminal management information with losslessly processed media and to enable their authentication. The paper first analyzes the specificity and the application scope of lossless watermarking methods. It explains why early attempts to achieve reversibility are not satisfactory: they are restricted to well-chosen images, require a strictly lossless context, and/or suffer from annoying visual artifacts. Circular interpretation of bijective transformations is proposed to implement a method that fulfills all quality and functionality requirements of lossless watermarking. Results of several bench tests demonstrate the validity of the approach.
438 citations
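The "circular interpretation" above essentially treats 8-bit gray values as points on a circle, so additive embedding wraps around instead of clipping and therefore stays exactly invertible. Below is a minimal, hypothetical sketch of that core idea in Python/NumPy, not the authors' full patch-based scheme.

import numpy as np

def circular_embed(img, shift):
    # Add a small shift to the pixel values modulo 256. Because the gray scale
    # is interpreted circularly, no clipping occurs and the step is invertible.
    return ((img.astype(np.int32) + shift) % 256).astype(np.uint8)

def circular_remove(marked, shift):
    # Undo the embedding by subtracting the same shift modulo 256.
    return ((marked.astype(np.int32) - shift) % 256).astype(np.uint8)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
marked = circular_embed(img, shift=3)
assert np.array_equal(circular_remove(marked, shift=3), img)  # exact recovery

The price of the circular interpretation is that a few pixels may wrap from near-white to near-black (or vice versa), which is the kind of visual artifact such methods must keep under control.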
TL;DR: By exploiting the intrinsic generalization and memorization capabilities of deep neural networks, the models learn specially crafted watermarks at training time and produce pre-specified predictions when observing the watermark patterns at inference; in this way, the paper generalizes the "digital watermarking" concept from multimedia ownership verification to deep neural network (DNN) models.
Abstract: Deep learning technologies, which are key components of state-of-the-art Artificial Intelligence (AI) services, have shown great success in providing human-level capabilities for a variety of tasks such as visual analysis, speech recognition, and natural language processing. Building a production-level deep learning model is a non-trivial task that requires a large amount of training data, powerful computing resources, and human expertise. Illegitimate reproduction, distribution, and derivation of proprietary deep learning models can therefore lead to copyright infringement and economic harm to model creators, so it is essential to devise techniques that protect the intellectual property of deep learning models and enable external verification of model ownership. In this paper, we generalize the "digital watermarking" concept from multimedia ownership verification to deep neural network (DNN) models. We investigate three DNN-applicable watermark generation algorithms, propose a watermark implanting approach to infuse watermarks into deep learning models, and design a remote verification mechanism to determine model ownership. By exploiting the intrinsic generalization and memorization capabilities of deep neural networks, we enable the models to learn specially crafted watermarks at training time and to produce pre-specified predictions when observing the watermark patterns at inference. We evaluate our approach with two image recognition benchmark datasets. Our framework accurately (100%) and quickly verifies the ownership of all the remotely deployed deep learning models without affecting model accuracy for normal input data. In addition, the embedded watermarks in DNN models are robust and resilient to different counter-watermark mechanisms, such as fine-tuning, parameter pruning, and model inversion attacks.
405 citations
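The mechanism described above is essentially a backdoor-style trigger: the owner trains the model so that inputs stamped with a secret pattern map to a pre-chosen label, and ownership is later verified by querying the deployed model with those stamped inputs. The sketch below illustrates only the key generation and remote verification logic in Python/NumPy; stamp_trigger and verify_ownership are hypothetical helper names, and the actual training or fine-tuning of the DNN on the stamped examples is omitted.

import numpy as np

def stamp_trigger(images, trigger, corner=(0, 0)):
    # Overlay a small trigger pattern (the watermark key) onto a batch of images.
    # The owner trains the model to map stamped images to a pre-chosen target label.
    stamped = images.copy()
    y, x = corner
    h, w = trigger.shape
    stamped[:, y:y + h, x:x + w] = trigger
    return stamped

def verify_ownership(model_predict, key_images, target_label, threshold=0.9):
    # Remote verification: query the suspect model with the key images and check
    # whether it returns the pre-specified label often enough.
    predictions = model_predict(key_images)
    return np.mean(predictions == target_label) >= threshold

# Toy usage with a stand-in "model" that always answers the target label.
rng = np.random.default_rng(2)
images = rng.random((5, 28, 28))
keys = stamp_trigger(images, trigger=np.ones((4, 4)))
print(verify_ownership(lambda x: np.full(len(x), 7), keys, target_label=7))  # True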
TL;DR: In this paper, the authors propose a hierarchical watermarking scheme that divides the image into blocks in a multilevel hierarchy and calculates block signatures in this hierarchy, thwarting VQ counterfeiting attacks while sustaining the superior localization properties of blockwise independent watermarking methods.
Abstract: Several fragile watermarking schemes presented in the literature are either vulnerable to vector quantization (VQ) counterfeiting attacks or sacrifice localization accuracy to improve security. Using a hierarchical structure, we propose a method that thwarts the VQ attack while sustaining the superior localization properties of blockwise independent watermarking methods. In particular, we propose dividing the image into blocks in a multilevel hierarchy and calculating block signatures in this hierarchy. While signatures of small blocks on the lowest level of the hierarchy ensure superior accuracy of tamper localization, higher level block signatures provide increasing resistance to VQ attacks. At the top level, a signature calculated using the whole image completely thwarts the counterfeiting attack. Moreover, "sliding window" searches through the hierarchy enable the verification of untampered regions after an image has been cropped. We provide experimental results to demonstrate the effectiveness of our method.
390 citations
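A minimal sketch of the hierarchical-signature idea follows, assuming a simple two-level hierarchy and SHA-256 digests as stand-ins for the paper's block signatures; the step of actually embedding the signatures into the blocks' least significant bits is omitted, and the function names are invented for this sketch.

import hashlib
import numpy as np

def block_signature(block):
    # Digest of a block's pixel values (stand-in for the paper's block signature).
    return hashlib.sha256(block.tobytes()).hexdigest()

def hierarchical_signatures(img, block=8):
    # Lowest level: signatures of small blocks give fine-grained tamper localization.
    # Top level: a whole-image signature defeats block-swapping (VQ) forgeries.
    h, w = img.shape
    level1 = {(y, x): block_signature(img[y:y + block, x:x + block])
              for y in range(0, h, block)
              for x in range(0, w, block)}
    return level1, block_signature(img)

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (32, 32), dtype=np.uint8)
sigs, top = hierarchical_signatures(img)

tampered = img.copy()
tampered[2, 2] ^= 1                                          # flip one bit
new_sigs, new_top = hierarchical_signatures(tampered)
print([pos for pos in sigs if sigs[pos] != new_sigs[pos]])   # [(0, 0)]: tampering localized
print(top != new_top)                                        # True: top level also fails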
TL;DR: This paper proposes a wavelet-tree-based blind watermarking scheme for copyright protection that embeds each watermark bit in perceptually important frequency bands, which renders the mark more resistant to frequency based attacks.
Abstract: This paper proposes a wavelet-tree-based blind watermarking scheme for copyright protection. The wavelet coefficients of the host image are grouped into so-called super trees. The watermark is embedded by quantizing super trees. The trees are so quantized that they exhibit a large enough statistical difference, which will later be used for watermark extraction. Each watermark bit is embedded in perceptually important frequency bands, which renders the mark more resistant to frequency based attacks. Also, the watermark is spread throughout large spatial regions. This yields more robustness against time domain geometric attacks. Examples of various attacks will be given to demonstrate the robustness of the proposed technique.
329 citations
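The sketch below illustrates the quantization-based embedding principle in a deliberately simplified form: a one-level Haar transform (instead of the paper's wavelet trees) and one bit embedded per group of detail coefficients by forcing the group mean onto an even or odd multiple of a quantization step. It is a hypothetical simplification of the super-tree quantization, written in Python/NumPy.

import numpy as np

def haar_dwt2(img):
    # One-level 2-D Haar transform: approximation (LL) plus three detail subbands.
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    ll, lh = (lo[0::2] + lo[1::2]) / 2, (lo[0::2] - lo[1::2]) / 2
    hl, hh = (hi[0::2] + hi[1::2]) / 2, (hi[0::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

def embed_bit(group, bit, step=8.0):
    # Shift the whole group so its mean lands on an even (bit 0) or odd (bit 1)
    # multiple of the step; the resulting statistical difference is what the
    # detector later reads out.
    q = np.round(group.mean() / step)
    if int(q) % 2 != bit:
        q += 1
    return group + (q * step - group.mean())

def extract_bit(group, step=8.0):
    return int(np.round(group.mean() / step)) % 2

rng = np.random.default_rng(4)
img = rng.integers(0, 256, (16, 16))
_, lh, _, _ = haar_dwt2(img)
marked_group = embed_bit(lh[:4, :4].ravel(), bit=1)
print(extract_bit(marked_group))  # 1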
References
TL;DR: It is argued that insertion of a watermark under this regime makes the watermark robust to signal processing operations and common geometric transformations provided that the original image is available and that it can be successfully registered against the transformed watermarked image.
Abstract: This paper presents a secure (tamper-resistant) algorithm for watermarking images, and a methodology for digital watermarking that may be generalized to audio, video, and multimedia data. We advocate that a watermark should be constructed as an independent and identically distributed (i.i.d.) Gaussian random vector that is imperceptibly inserted in a spread-spectrum-like fashion into the perceptually most significant spectral components of the data. We argue that insertion of a watermark under this regime makes the watermark robust to signal processing operations (such as lossy compression, filtering, digital-analog and analog-digital conversion, requantization, etc.) and common geometric transformations (such as cropping, scaling, translation, and rotation) provided that the original image is available and that it can be successfully registered against the transformed watermarked image. In these cases, the watermark detector unambiguously identifies the owner. Further, the use of Gaussian noise ensures strong resilience to multiple-document, or collusional, attacks. Experimental results are provided to support these claims, along with an exposition of pending open problems.
6,194 citations
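A minimal sketch of the spread-spectrum idea described above, assuming a global 2-D DCT, the multiplicative insertion rule v_i' = v_i(1 + alpha*w_i) on the largest-magnitude non-DC coefficients, and non-blind detection that uses the original image to recover the inserted pattern and correlate it with the owner's watermark. It uses scipy.fft.dctn/idctn and is a toy rendering, not the authors' full perceptual model.

import numpy as np
from scipy.fft import dctn, idctn

def embed(img, watermark, alpha=0.1, n=1000):
    # Insert an i.i.d. Gaussian watermark into the n largest-magnitude
    # (perceptually most significant) non-DC DCT coefficients.
    coeffs = dctn(img.astype(float), norm='ortho')
    flat = coeffs.ravel()                        # view into coeffs
    order = np.argsort(np.abs(flat))[::-1]
    idx = order[order != 0][:n]                  # skip the DC coefficient
    flat[idx] *= 1.0 + alpha * watermark
    return idctn(coeffs, norm='ortho'), idx

def detect(suspect, original, idx, watermark, alpha=0.1):
    # Non-blind detection: recover the inserted pattern with the help of the
    # original image and report its normalized correlation with the watermark.
    v = dctn(original.astype(float), norm='ortho').ravel()[idx]
    v_star = dctn(suspect.astype(float), norm='ortho').ravel()[idx]
    w_star = (v_star - v) / (alpha * v)
    return float(w_star @ watermark) / (np.linalg.norm(w_star) * np.linalg.norm(watermark))

rng = np.random.default_rng(5)
img = rng.integers(0, 256, (64, 64)).astype(float)
wm = rng.standard_normal(1000)
marked, idx = embed(img, wm)
print(detect(marked, img, idx, wm))  # close to 1.0 for the marked image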
TL;DR: This work explores both traditional and novel techniques for addressing the data-hiding process and evaluates these techniques in light of three applications: copyright protection, tamper-proofing, and augmentation data embedding.
Abstract: Data hiding, a form of steganography, embeds data into digital media for the purpose of identification, annotation, and copyright. Several constraints affect this process: the quantity of data to be hidden, the need for invariance of these data under conditions where a "host" signal is subject to distortions, e.g., lossy compression, and the degree to which the data must be immune to interception, modification, or removal by a third party. We explore both traditional and novel techniques for addressing the data-hiding process and evaluate these techniques in light of three applications: copyright protection, tamper-proofing, and augmentation data embedding.
3,037 citations
TL;DR: The paper discusses the feasibility of coding an "undetectable" digital watermark on a standard 512×512 intensity image with an 8 bit gray scale, capable of carrying such information as authentication or authorisation codes, or a legend essential for image interpretation.
Abstract: The paper discusses the feasibility of coding an "undetectable" digital watermark on a standard 512×512 intensity image with an 8 bit gray scale. The watermark is capable of carrying such information as authentication or authorisation codes, or a legend essential for image interpretation. This capability is envisaged to find application in image tagging, copyright enforcement, counterfeit protection, and controlled access. Two methods of implementation are discussed. The first is based on bit plane manipulation of the LSB, which offers easy and rapid decoding. The second method utilises linear addition of the watermark to the image data, and is more difficult to decode, offering inherent security. This linearity property also allows some image processing, such as averaging, to take place on the image, without corrupting the watermark beyond recovery. Either method is potentially compatible with JPEG and MPEG processing.
1,407 citations
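The second method mentioned in the abstract (linear addition of the watermark) can be illustrated with the toy sketch below: a bipolar pseudo-random sequence of amplitude 1 is added to the pixel values and detected by correlation. The sequence here is a random stand-in rather than a true m-sequence, and names such as embed_additive are hypothetical.

import numpy as np

def embed_additive(img, sequence):
    # Linearly add a low-amplitude bipolar (+/-1) sequence to the pixel values.
    marked = img.astype(np.int32).ravel() + sequence
    return np.clip(marked, 0, 255).reshape(img.shape).astype(np.uint8)

def detect_additive(img, sequence):
    # Correlate the zero-mean image with the candidate sequence; a value well
    # above the noise floor indicates the watermark is present.
    r = img.astype(float).ravel()
    r -= r.mean()
    return float(r @ sequence) / sequence.size

rng = np.random.default_rng(6)
img = rng.integers(0, 256, (512, 512), dtype=np.uint8)
seq = rng.choice([-1, 1], size=img.size)   # stand-in for an m-sequence
marked = embed_additive(img, seq)
print(detect_additive(marked, seq))        # about 1.0: watermark energy detected
print(detect_additive(img, seq))           # near 0.0: unmarked image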
TL;DR: The author proposes an independent and novel approach to image coding, based on a fractal theory of iterated transformations, that relies on the assumption that image redundancy can be efficiently exploited through self-transformability on a block-wise basis and approximates an original image by a fractal image.
Abstract: The author proposes an independent and novel approach to image coding, based on a fractal theory of iterated transformations. The main characteristics of this approach are that (i) it relies on the assumption that image redundancy can be efficiently exploited through self-transformability on a block-wise basis, and (ii) it approximates an original image by a fractal image. The author refers to the approach as fractal block coding. The coding-decoding system is based on the construction, for an original image to encode, of a specific image transformation-a fractal code-which, when iterated on any initial image, produces a sequence of images that converges to a fractal approximation of the original. It is shown how to design such a system for the coding of monochrome digital images at rates in the range of 0.5-1.0 b/pixel. The fractal block coder has performance comparable to state-of-the-art vector quantizers.
1,386 citations
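As a rough illustration of the fractal block coding idea summarized above: each non-overlapping range block is approximated by a contracted (shrunken) domain block under an affine intensity map r ≈ s·d + o, and decoding iterates this transformation from an arbitrary start image. The following is a hypothetical toy encoder/decoder in Python/NumPy with exhaustive search and no block isometries, far simpler than a practical coder.

import numpy as np

def shrink(block):
    # Average 2x2 neighbourhoods so a domain block matches the range-block size.
    return block.reshape(block.shape[0] // 2, 2, block.shape[1] // 2, 2).mean(axis=(1, 3))

def encode(img, rs=8):
    # For every non-overlapping range block, find the domain block (twice the
    # size) whose shrunken version best predicts it under r ~ s*d + o.
    h, w = img.shape
    code = []
    for ry in range(0, h, rs):
        for rx in range(0, w, rs):
            r = img[ry:ry + rs, rx:rx + rs].astype(float).ravel()
            best = None
            for dy in range(0, h - 2 * rs + 1, rs):
                for dx in range(0, w - 2 * rs + 1, rs):
                    d = shrink(img[dy:dy + 2 * rs, dx:dx + 2 * rs].astype(float)).ravel()
                    A = np.vstack([d, np.ones_like(d)]).T
                    (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
                    s = float(np.clip(s, -0.9, 0.9))       # keep the map contractive
                    o = float(r.mean() - s * d.mean())
                    err = np.sum((s * d + o - r) ** 2)
                    if best is None or err < best[0]:
                        best = (err, dy, dx, s, o)
            code.append((ry, rx) + best[1:])
    return code

def decode(code, shape, rs=8, iterations=8):
    # Iterate the fractal code from an arbitrary start image; the sequence of
    # images converges to a fractal approximation of the original.
    img = np.full(shape, 128.0)
    for _ in range(iterations):
        out = np.empty(shape)
        for ry, rx, dy, dx, s, o in code:
            out[ry:ry + rs, rx:rx + rs] = s * shrink(img[dy:dy + 2 * rs, dx:dx + 2 * rs]) + o
        img = out
    return img

rng = np.random.default_rng(7)
img = rng.integers(0, 256, (32, 32)).astype(float)
approx = decode(encode(img), img.shape)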
TL;DR: This work explores both traditional and novel techniques for addressing the data hiding process and evaluates these techniques in light of three applications: copyright protection, tamper-proofing, and augmentation data embedding.
Abstract: Data hiding is the process of embedding data into image and audio signals. The process is constrained by the quantity of data, the need for invariance of the data under conditions where the `host' signal is subject to distortions, e.g., compression, and the degree to which the data must be immune to interception, modification, or removal. We explore both traditional and novel techniques for addressing the data hiding process and evaluate these techniques in light of three applications: copyright protection, tamper-proofing, and augmentation data embedding.
1,343 citations