Journal ArticleDOI

Fragile image watermarking with pixel-wise recovery based on overlapping embedding strategy

TL;DR: A new fragile watermarking scheme with high-quality recovery capability based on an overlapping embedding strategy, which achieves better recovered-image quality than some state-of-the-art schemes.
About: This article was published in Signal Processing on 2017-09-01 and has received 156 citations to date. It focuses on the topics: Block (data storage) & Digital watermarking.
Citations
Journal ArticleDOI
TL;DR: This work proposes an RDH scheme based on the image's texture to reduce invalid shifting of pixels in histogram shifting, and demonstrates that the proposed method achieves higher capacity and better stego-image quality than some existing RDH schemes.

126 citations
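
The scheme summarized above builds on histogram-shifting RDH. As a point of reference, a minimal sketch of plain peak/zero-bin histogram shifting is given below; it omits the paper's texture-based pixel selection (which is what reduces invalid shifts) and ignores overflow/underflow handling, so it is a baseline illustration, not the cited method.

```python
import numpy as np

def hs_embed(img, bits):
    """Plain histogram-shifting embedding (peak/zero-bin style).
    NOT the texture-adaptive method of the cited paper; boundary
    (overflow/underflow) handling is omitted for brevity."""
    img = img.astype(np.int32)
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(np.argmax(hist))   # most populated bin: gives the capacity
    zero = int(np.argmin(hist))   # (near-)empty bin that absorbs the shift
    out = img.copy()
    step = 1 if peak < zero else -1
    # shift every bin strictly between peak and zero by one step,
    # freeing the bin next to the peak for embedding
    lo, hi = sorted((peak, zero))
    mask = (img > lo) & (img < hi)
    out[mask] += step
    # embed: a '1' moves a peak pixel into the freed neighbouring bin
    flat = out.ravel()
    idx = np.flatnonzero(img.ravel() == peak)[:len(bits)]
    flat[idx] += step * np.asarray(bits[:len(idx)], dtype=np.int32)
    return flat.reshape(img.shape).clip(0, 255).astype(np.uint8), peak, zero
```

Extraction reverses this process using the transmitted (peak, zero) pair; texture-adaptive variants such as the one above restrict the shift to regions where it actually carries payload.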


Cites methods from "Fragile image watermarking with pix..."

  • ...Data hiding methods mainly include steganography [2–5], digital watermarking [6–9], construction based data hiding [10–12] and so on....

Journal ArticleDOI
TL;DR: Since only a portion of the blocks are modified during embedding, the quality of the directly decrypted image is satisfactory; moreover, more bits can be embedded into the blocks belonging to the smooth set, so the embedding rate is acceptable.

117 citations

Journal ArticleDOI
TL;DR: Experimental results show that the proposed method is robust to linear and nonlinear attacks and that the transparency of the watermarked images is preserved.
Abstract: In this paper, a novel robust color image watermarking method based on the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) is proposed. In this method, the RGB cover image is divided into its red, green and blue components, and DCT and DWT are applied to each color component. The grayscale watermark image is scrambled using the Arnold transform, and DCT is performed on the scrambled watermark image. The transformed watermark image is then divided into equal smaller parts, and the DCT coefficients of each watermark part are embedded into the four DWT bands of the color components of the cover image. The robustness of the proposed color image watermarking has been demonstrated by applying various image processing operations, such as rotation, resizing, filtering, JPEG compression, and noise addition, to the watermarked images. Experimental results show that the proposed method is robust to linear and nonlinear attacks and that the transparency of the watermarked images is preserved.

98 citations
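
The abstract above outlines a concrete pipeline: channel split, DWT, Arnold scrambling of the watermark, DCT of the watermark, and embedding into the four sub-bands. A rough sketch of that pipeline follows; the Haar wavelet, the additive rule with strength alpha, and the Arnold iteration count are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
import pywt
from scipy.fft import dctn

def arnold(img, iterations=1):
    """Arnold cat map scrambling of a square N x N image."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def embed_channel(channel, wm_part, alpha=0.05):
    """Embed the DCT coefficients of one (already scrambled) watermark
    part additively into the four DWT sub-bands of a single colour
    channel. The additive rule and alpha are illustrative; assumes the
    watermark part is no larger than a sub-band."""
    ll, (lh, hl, hh) = pywt.dwt2(channel.astype(float), 'haar')
    coeffs = dctn(wm_part.astype(float), norm='ortho')
    quarters = np.array_split(coeffs.ravel(), 4)
    for band, q in zip((ll, lh, hl, hh), quarters):
        flat = band.ravel()               # view: modifies the band in place
        flat[:q.size] += alpha * q
    return pywt.idwt2((ll, (lh, hl, hh)), 'haar')

# usage sketch: wm_scrambled = arnold(watermark, iterations=5)  # iteration count arbitrary
#               marked_red   = embed_channel(red_channel, wm_scrambled_part_1)
```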

Journal ArticleDOI
TL;DR: A new image self-embedding scheme based on optimal iterative block truncation coding and non-uniform watermark sharing is proposed, which achieves better tampering-recovery performance than some state-of-the-art schemes.
Abstract: Self-embedding watermarking can be used for image tampering recovery. In this work, the authors propose a new image self-embedding scheme based on optimal iterative block truncation coding and non-uniform watermark sharing. Experimental results demonstrate that the proposed scheme achieves better tampering-recovery performance than some state-of-the-art schemes.

80 citations
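
The scheme above is built on block truncation coding (BTC). For orientation, here is a sketch of standard (AMBTC-style) encoding and decoding of a single block; the iterative optimisation of the reconstruction levels and the non-uniform watermark sharing proposed in the paper are not reproduced here.

```python
import numpy as np

def btc_encode(block):
    """Standard block truncation coding of one image block: a 1-bit
    bitmap plus two reconstruction levels (the cited paper optimises
    these levels iteratively; that step is omitted here)."""
    block = block.astype(float)
    mean = block.mean()
    bitmap = block >= mean
    # reconstruction levels = means of the two groups (AMBTC style)
    high = block[bitmap].mean() if bitmap.any() else mean
    low = block[~bitmap].mean() if (~bitmap).any() else mean
    return bitmap, int(round(high)), int(round(low))

def btc_decode(bitmap, high, low):
    """Rebuild the block from its bitmap and two levels."""
    return np.where(bitmap, high, low).astype(np.uint8)
```

Per block, the bitmap plus the two levels form the compressed representation that a self-embedding scheme hides in the image as recovery data.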


Cites background from "Fragile image watermarking with pix..."

  • ...Probability for this scenario is 1 − P^(2)....

  • ...Probability for this scenario is P^(1) · P^(2)....

  • ...If a is not too large, P^(1) and P^(2) must both approximate 1....

  • ...Therefore, the two probabilities P^(1) and P^(2) of successful restoration for erroneous bits of binary patterns in f1 subsets and reconstruction levels in f2 subsets are P^(1) = (P_s^(1))^λ1 (22) and P^(2) = (P_s^(2))^λ2 = (P_s^(2))^(12N/(m^2·K^2)) (23)....

  • ...Probability for this scenario is (1 − P^(1)) · P^(2)....

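The excerpts above combine two independent restoration probabilities into mutually exclusive scenarios. A minimal numerical sketch of that bookkeeping follows; the per-bit survival probabilities and bit counts are made-up illustrative values, not figures from the cited paper.

```python
# Illustrative numbers only: assume each hidden bit is independently
# recoverable with probability p_s, and a data unit consists of n such
# bits, so the whole unit survives with probability p_s ** n.
p_s1, n1 = 0.999, 64   # bits of a binary pattern (assumed values)
p_s2, n2 = 0.999, 16   # bits of the reconstruction levels (assumed values)

P1 = p_s1 ** n1        # Eq. (22)-style: all pattern bits recoverable
P2 = p_s2 ** n2        # Eq. (23)-style: all level bits recoverable

scenarios = {
    "patterns and levels both restored": P1 * P2,
    "only levels restored":              (1 - P1) * P2,
    "only patterns restored":            P1 * (1 - P2),
    "neither restored":                  (1 - P1) * (1 - P2),
}
# the four cases partition the probability space
assert abs(sum(scenarios.values()) - 1.0) < 1e-12
for name, p in scenarios.items():
    print(f"{name}: {p:.4f}")
```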

Journal ArticleDOI
TL;DR: The adaptive pixel pairing (APP) and the adaptive mapping selection for the enhancement of pairwise PEE are proposed and shown to increase the similarity between pixels in a pair, by excluding the rough pixels from pairing and only putting the smooth pixels into pairs.
Abstract: Pairwise prediction-error expansion (pairwise PEE) is a recent technique for high-dimensional reversible data hiding. However, in the absence of adaptive embedding, its potential has not been fully exploited. In this paper, we propose adaptive pixel pairing (APP) and adaptive mapping selection for the enhancement of pairwise PEE. Our motivation is twofold: building a sharper 2D histogram and designing an effective 2D mapping for it. In APP, we aim to increase the similarity between pixels in a pair by excluding the rough pixels from pairing and putting only the smooth pixels into pairs. In this way, the pixels in a pair have a larger probability of being equal, and thus the resulting 2D prediction-error histogram (PEH) has lower entropy. Next, the adaptive mapping selection mechanism is introduced to properly determine the optimal modification, based on whether it fits the resulting PEH rather than on heuristic experience. The experimental results show that the proposed method achieves a significant improvement over pairwise PEE.

76 citations
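
The abstract describes pairing only smooth pixels so that the 2-D prediction-error histogram (PEH) sharpens. A sketch of that idea is shown below; the predictor, the complexity measure, and the threshold are illustrative placeholders, not the paper's adaptive definitions.

```python
import numpy as np

def smooth_pair_peh(img, threshold=6, bins=np.arange(-8.5, 9.5)):
    """Build a 2-D prediction-error histogram from pairs of smooth
    pixels only. Predictor, complexity measure and threshold are
    stand-ins for the adaptive choices made in the cited paper."""
    img = img.astype(np.int32)
    h, w = img.shape
    pairs = []
    for i in range(1, h - 1):
        row_errors = []
        for j in range(1, w - 1):
            pred = (img[i, j + 1] + img[i + 1, j]) // 2   # toy predictor
            complexity = abs(img[i, j + 1] - img[i + 1, j])
            if complexity <= threshold:                   # keep smooth pixels only
                row_errors.append(img[i, j] - pred)
        # pair consecutive smooth prediction errors in scan order
        pairs.extend(zip(row_errors[0::2], row_errors[1::2]))
    if not pairs:
        return np.zeros((len(bins) - 1, len(bins) - 1))
    e1, e2 = np.asarray(pairs).T
    peh, _, _ = np.histogram2d(e1, e2, bins=[bins, bins])
    return peh
```

The lower entropy of this histogram is what the adaptive mapping selection described in the abstract then exploits.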

References
Journal ArticleDOI
TL;DR: A structural similarity (SSIM) index is proposed for image quality assessment based on the degradation of structural information, and is validated against subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.

40,609 citations
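
Beyond the MATLAB implementation mentioned in the abstract, SSIM is available in common toolkits. A short usage sketch with scikit-image follows; the test image and noise level are arbitrary choices for illustration.

```python
import numpy as np
from skimage import data
from skimage.metrics import structural_similarity as ssim

ref = data.camera().astype(np.float64)                              # reference image
noisy = ref + np.random.default_rng(0).normal(0, 10, ref.shape)     # distorted copy

# data_range must be given explicitly for float-valued inputs
score, ssim_map = ssim(ref, noisy, data_range=255, full=True)
print(f"SSIM = {score:.4f}")   # 1.0 would mean identical images
```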

Proceedings ArticleDOI
TL;DR: A new dataset, UCID (pronounced "use it"), an Uncompressed Colour Image Dataset, is introduced to bridge the gap between standardised image databases and the objective evaluation of image retrieval algorithms that operate in the compressed domain.
Abstract: Standardised image databases, or rather the lack of them, are one of the main weaknesses in the field of content-based image retrieval (CBIR). Authors often use their own images or do not specify the source of their datasets, which naturally makes comparison of results somewhat difficult. While a first approach towards a common colour image set has been taken by the MPEG-7 committee, their database does not cater for all strands of research in the CBIR community. In particular, as the MPEG-7 images only exist in compressed form, it does not allow for an objective evaluation of image retrieval algorithms that operate in the compressed domain, or for judging the influence image compression has on the performance of CBIR algorithms. In this paper we introduce a new dataset, UCID (pronounced "use it"), an Uncompressed Colour Image Dataset which tries to bridge this gap. The UCID dataset currently consists of 1338 uncompressed images together with a ground truth of a series of query images with corresponding models that an ideal CBIR algorithm would retrieve. While its initial intention was to provide a dataset for the evaluation of compressed-domain algorithms, the UCID database also represents a good benchmark set for the evaluation of any kind of CBIR method, as well as an image set that can be used to evaluate image compression and colour quantisation algorithms.

1,117 citations

Journal ArticleDOI
TL;DR: The main difference to the traditional methods is that the proposed scheme first segments the test image into semantically independent patches prior to keypoint extraction, and the copy-move regions can be detected by matching between these patches.
Abstract: In this paper, we propose a scheme to detect copy-move forgery in an image, mainly by extracting keypoints for comparison. The main difference from traditional methods is that the proposed scheme first segments the test image into semantically independent patches prior to keypoint extraction. As a result, the copy-move regions can be detected by matching between these patches. The matching process consists of two stages. In the first stage, we find the suspicious pairs of patches that may contain copy-move forgery regions, and we roughly estimate an affine transform matrix. In the second stage, an Expectation-Maximization-based algorithm is designed to refine the estimated matrix and to confirm the existence of copy-move forgery. Experimental results demonstrate the good performance of the proposed scheme in comparison with state-of-the-art schemes on public databases.

780 citations
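
The patch segmentation and the EM refinement stage are specific to the cited scheme, but the keypoint-matching core it builds on can be sketched with off-the-shelf OpenCV primitives (SIFT, brute-force matching, RANSAC affine estimation). Parameters below are illustrative, and segmentation/EM are omitted.

```python
import cv2
import numpy as np

def copy_move_candidates(gray, ratio=0.75, min_dist=10):
    """Find keypoint pairs inside one image that look like copy-move
    duplicates. This is a generic sketch, not the cited paper's
    patch-based matching or its EM-based matrix refinement."""
    sift = cv2.SIFT_create()
    kps, desc = sift.detectAndCompute(gray, None)
    if desc is None or len(kps) < 4:
        return [], None
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # match descriptors against themselves; the best hit is the keypoint
    # itself, so use the 2nd and 3rd neighbours for the ratio test
    knn = matcher.knnMatch(desc, desc, k=3)
    src, dst = [], []
    for m in knn:
        if len(m) < 3:
            continue
        _, a, b = m                      # m[0] is the self-match
        if a.distance < ratio * b.distance:
            p, q = kps[a.queryIdx].pt, kps[a.trainIdx].pt
            if np.hypot(p[0] - q[0], p[1] - q[1]) > min_dist:
                src.append(p)
                dst.append(q)
    if len(src) < 3:
        return list(zip(src, dst)), None
    # rough affine transform between the suspected source and copy regions
    M, _ = cv2.estimateAffine2D(np.float32(src), np.float32(dst),
                                method=cv2.RANSAC)
    return list(zip(src, dst)), M
```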

Journal ArticleDOI
TL;DR: This method is efficient, as it uses only simple operations such as parity checks and comparisons between average intensities, and effective, because the detection is based on a hierarchical structure that ensures accurate tamper localization.

278 citations
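
The TL;DR above names two cheap checks: a parity check and a comparison of average intensities. A simplified sketch of that kind of block authentication follows; the block size, bit layout, and the hierarchical voting of the cited scheme are not reproduced, and the 2-LSB storage convention here is an assumption for illustration.

```python
import numpy as np

def auth_bits(block, parent_avg):
    """Two authentication bits for an image block: a parity bit over the
    block's six MSB planes and a bit recording whether the block average
    exceeds its parent block's average (illustrative layout only)."""
    msb = (block >> 2).astype(np.uint8)          # exclude the 2 LSBs used for embedding
    parity = int(np.unpackbits(msb).sum() & 1)
    relation = int(block.mean() >= parent_avg)
    return parity, relation

def block_is_authentic(block, parent_avg):
    """Compare the 2 bits assumed to be stored in the LSBs of the block's
    first pixel with freshly recomputed ones; a mismatch flags tampering."""
    stored = int(block.flat[0]) & 0b11
    parity, relation = auth_bits(block, parent_avg)
    return stored == ((parity << 1) | relation)
```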

Journal ArticleDOI
TL;DR: By using the proposed algorithm, a 90% tampered image can be recovered to a dim yet still recognizable condition (PSNR ~20dB).

274 citations