Analysis of non-aligned double JPEG artifacts for the localization of image forgeries
References
Image forgery detection
Statistical tools for digital forensics
Exposing Digital Forgeries From JPEG Ghosts
Estimation of Primary Quantization Matrix in Double Compressed JPEG Images
Fast, automatic and fine-grained tampered JPEG image detection via DCT coefficient analysis
Frequently Asked Questions (14)
Q2. How can the authors solve the minimization problem over a discrete parameter?
Since Q1 is a discrete parameter with a limited set of possible values, the minimization in (15) can be solved iteratively by trying every possible Q1 and using the corresponding αopt.
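The exhaustive search over the discrete parameter can be sketched as follows; the objective function below is a hypothetical placeholder, not the paper's functional in (15), which additionally yields the optimal α for each candidate Q1.

```python
# Sketch of minimizing over a discrete parameter Q1 by exhaustive search.
# `cost_and_alpha` stands in for the paper's objective; for each Q1 it is
# assumed to return the cost together with the corresponding optimal alpha.

def minimize_over_discrete(candidates, cost_and_alpha):
    """Try every candidate Q1 and keep the (Q1, alpha) pair with minimal cost."""
    best = None
    for q1 in candidates:
        cost, alpha_opt = cost_and_alpha(q1)
        if best is None or cost < best[0]:
            best = (cost, q1, alpha_opt)
    return best[1], best[2]

# Toy objective, minimized at Q1 = 8 (illustrative only).
q1, alpha = minimize_over_discrete(
    range(1, 17),
    lambda q: ((q - 8) ** 2, 1.0 / q),
)
```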
Q3. What is the reason for the forensic scheme?
Assuming that for each analyzed image the corresponding binary mask is available, whose 32 × 32 central portion indicates the forged blocks, comparing the algorithm's output detection map with the known tampering mask allows estimating the error rates of the forensic scheme, measured as the false alarm probability Pfa and the missed detection probability Pmd.
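A minimal sketch of estimating Pfa and Pmd from a binary detection map and a known tampering mask (1 = forged block, 0 = original block); the maps below are toy inputs, not from the paper's dataset.

```python
# Estimate false-alarm (Pfa) and missed-detection (Pmd) probabilities by
# comparing a binary detection map against the ground-truth tampering mask.

def error_rates(detection_map, tampering_mask):
    fa = md = n_orig = n_forged = 0
    for det_row, mask_row in zip(detection_map, tampering_mask):
        for det, forged in zip(det_row, mask_row):
            if forged:
                n_forged += 1
                if not det:
                    md += 1  # forged block missed
            else:
                n_orig += 1
                if det:
                    fa += 1  # original block flagged
    pfa = fa / n_orig if n_orig else 0.0
    pmd = md / n_forged if n_forged else 0.0
    return pfa, pmd

det  = [[1, 0], [1, 1]]
mask = [[1, 0], [0, 1]]
pfa, pmd = error_rates(det, mask)  # one false alarm out of two original blocks
```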
Q4. What is the way to assess the effects of different DCT coefficients?
As to the effects of cumulating different DCT coefficients, the best results are obtained by considering the first 6 coefficients with the simplified map: when a higher number of coefficients is considered, the AUC values decrease, suggesting that NA-DJPG artifacts cannot be reliably detected at the higher frequencies.
Q5. What is the ROC curve for the nMNF detector?
Since the ROC curve is a two-dimensional plot of Pd versus Pfa as the decision threshold of the detector is varied, the authors adopt the area under the ROC curve (AUC) in order to summarize the performance of the detector with a unique scalar value.
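The AUC computation can be sketched by sweeping the decision threshold over the detector scores and integrating Pd versus Pfa with the trapezoidal rule; the scores and labels below are illustrative, not the paper's.

```python
# Compute the area under the ROC curve by varying the decision threshold.

def roc_auc(scores, labels):
    thresholds = sorted(set(scores), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]  # (Pfa, Pd) pairs as the threshold decreases
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and not y)
        points.append((fp / neg, tp / pos))
    points.append((1.0, 1.0))
    # Trapezoidal integration of Pd over Pfa.
    auc = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        auc += (x1 - x0) * (y0 + y1) / 2.0
    return auc

auc = roc_auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])  # perfectly separated
```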
Q6. what is the probability distribution of a DCT coefficient?
If multiple DCT coefficients within the same 8 × 8 block are considered, by assuming that they are independently distributed the authors can express the likelihood ratio corresponding to the block at position (i, j) as

L(i, j) = ∏_k L(x_k(i, j))   (12)

where x_k(i, j) denotes the kth DCT coefficient within the block at position (i, j).
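Equation (12) can be sketched directly: the block likelihood is the product of the per-coefficient likelihood ratios, which in practice is often accumulated in the log domain for numerical stability. The per-coefficient ratios below are illustrative placeholders.

```python
import math

# Sketch of equation (12): the block likelihood is the product of the
# per-coefficient likelihood ratios L(x_k(i, j)), computed here in the
# log domain to avoid underflow when many coefficients are cumulated.

def block_likelihood(coeff_ratios):
    """coeff_ratios: per-coefficient likelihood ratios L(x_k(i, j))."""
    return math.exp(sum(math.log(r) for r in coeff_ratios))

L = block_likelihood([2.0, 0.5, 4.0])  # product = 4.0
```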
Q7. What is the likelihood map of a JPEG image?
The likelihood map obtained using such simplifications can be expressed as

L(i, j) ≈ ∏_k nQ(x_k(i, j))^b   (13)

where b = −1 (SCF) or b = 1 (DCF), and which depends only on the compression parameters, i.e., Q1 and Q2, having removed any dependency on the image content.
Q8. What is the common scenario for a forger to use?
The authors can assume that the forger disrupts the JPEG compression statistics in the tampered area: examples could be a cut and paste from either a non compressed image or a resized image, or the insertion of computer generated content.
Q9. How can the authors simplify the DCT coefficients of a NA-DJPG?
If the authors can assume that the histogram of the original DCT coefficients is locally uniform, that is, p0(u) is smooth, they can simplify

p1(x) ≈ Q1 · p0(x) for x = kQ1, and 0 elsewhere.   (8)

Hence, if the authors assume that the JPEG approximation error due to the last compression is smaller than Q1, and thanks to (7), (4) can be simplified to

pQ(x; Q1) ≈ nQ(x) · pNQ(x),  x ≠ 0,   (9)

where nQ(x) = nQ,0(x) ∗ gQ(x) and

nQ,0(x) = Q1 for x = kQ1, and 0 elsewhere.   (10)

In Fig. 1 the models proposed in (4), (9), and (7) are compared with the histograms of unquantized DCT coefficients of a NA-DJPG compressed image and of a singly compressed image: in both cases there is good agreement between the proposed models and the real distributions.
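The comb structure behind (8) and (10) can be illustrated numerically: after quantization with step Q1 and dequantization, DCT coefficients lie only on multiples of Q1, which is why the histogram of primarily quantized coefficients is non-zero only at x = kQ1. This is a generic illustration of JPEG quantization, not the paper's estimator.

```python
# Illustrate the comb model: quantizing with step Q1 and dequantizing
# forces every coefficient onto a multiple of Q1 (i.e., x = k * Q1).

def quantize_dequantize(values, q1):
    return [round(v / q1) * q1 for v in values]

Q1 = 8
vals = [3.2, 7.9, 12.4, -5.1, 20.0]
comb = quantize_dequantize(vals, Q1)
assert all(v % Q1 == 0 for v in comb)  # histogram support is the comb kQ1
```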
Q10. What is the likelihood function for the DCT coefficient x?
The authors should determine the shift (r, c) between the two JPEG compression grids. (With a slight abuse of notation, the authors use the same symbol L(x) even if for different k they have different likelihood functions.)
Q11. What is the likelihood distribution of a DCT coefficient x?
Given p(x|H1) and p(x|H0), a DCT coefficient x can be classified as belonging to one of the two models according to the value of the likelihood ratio L(x) = p(x|H1) / p(x|H0).
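A minimal sketch of this decision rule, thresholding the likelihood ratio; the density values and the threshold of 1 are toy choices, not the paper's calibrated detector.

```python
# Classify a DCT coefficient by thresholding L(x) = p(x|H1) / p(x|H0).
# Returns 1 for H1 (e.g., doubly compressed) and 0 for H0.

def classify(p_h1, p_h0, threshold=1.0):
    return 1 if p_h1 / p_h0 > threshold else 0

assert classify(0.6, 0.2) == 1  # ratio 3.0 -> H1
assert classify(0.1, 0.4) == 0  # ratio 0.25 -> H0
```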
Q12. How do the authors compute the AUC for the different types of DCT coefficients?
In all cases, likelihood maps are obtained by cumulating different numbers of DCT coefficients for each block, starting from the DC coefficient and scanning the coefficients in zig-zag order.
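The zig-zag scan order can be sketched as below, so that "the first 6 coefficients" means the first 6 entries of this ordering, starting from the DC term and moving toward higher frequencies. This is the standard JPEG zig-zag, assumed to match the paper's scanning.

```python
# Generate the zig-zag visiting order for an n x n block of DCT
# coefficients: anti-diagonals are traversed in alternating directions.

def zigzag_indices(n=8):
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:
            diag.reverse()  # even anti-diagonals run bottom-left to top-right
        order.extend(diag)
    return order

first6 = zigzag_indices()[:6]
# DC first, then the low-frequency neighbours:
# [(0,0), (0,1), (1,0), (2,0), (1,1), (0,2)]
```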
Q13. What is the purpose of the proposed work?
For the experimental validation of the proposed work, the authors have built an image dataset composed of 100 non-compressed TIFF images with heterogeneous content, coming from three different digital cameras (namely Nikon D90, Canon EOS 450D, Canon EOS 5D), each acquired at its highest resolution; each test has been performed by cropping a central portion of size 1031 × 1031: this choice allows a 1024 × 1024 image to remain after randomly cropping a number of rows and columns between 0 and 7.
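The cropping protocol above can be sketched as follows; the row/column representation of the image is a simplification for illustration.

```python
import random

# Sketch of the test protocol: from a 1031 x 1031 central portion, randomly
# drop between 0 and 7 leading rows and columns so that a 1024 x 1024 image
# always remains (simulating an unknown grid shift (r, c)).

def random_crop_1024(image_1031):
    r = random.randint(0, 7)
    c = random.randint(0, 7)
    return [row[c:c + 1024] for row in image_1031[r:r + 1024]]

img = [[0] * 1031 for _ in range(1031)]
cropped = random_crop_1024(img)
assert len(cropped) == 1024 and len(cropped[0]) == 1024
```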
Q14. What is the method for estimating the distribution of the unquantized DCT coefficients?
2) Estimation of p0(u): Following the observations in [11], the authors propose to approximate the distribution of the unquantized DCT coefficients using the histogram of the DCT coefficients of the decompressed image computed after the DCT grid is suitably shifted with respect to the upper left corner.