Proceedings ArticleDOI

Video authentication in digital forensic

01 Feb 2015, pp. 659–663
TL;DR: An algorithm in two parts: the first detects repeated frames by processing image pixels to produce a frame-by-frame motion-energy time series; the second identifies the tampering attack and its location with the help of a Support Vector Machine, helping to predict whether a given video has been tampered with.
Abstract: A large amount of video content is transmitted over the Internet and other channels. With existing multimedia editing tools, one can easily change the content of the data, which undermines the authenticity of the information. It therefore becomes necessary to develop methods by which the authenticity of videos can be confirmed. In the past, researchers have proposed several methods for the authentication of videos. This paper presents an algorithm divided into two parts: detecting repeated frames by processing image pixels to produce a frame-by-frame motion-energy time series, and identifying the tampering attack and its location with the help of a Support Vector Machine. This helps predict whether a given video has been tampered with.
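As a rough illustration of the first stage (not the authors' implementation; the function names and the threshold below are assumptions), the motion-energy signal can be sketched as the mean absolute pixel difference between consecutive frames, where near-zero values flag a repeated frame:

```python
import numpy as np

def motion_energy(frames):
    """Mean absolute pixel difference between consecutive frames.

    frames: array of shape (T, H, W), grayscale.
    Returns an array of length T-1; near-zero entries suggest a
    repeated (or frozen) frame pair.
    """
    frames = np.asarray(frames, dtype=np.float64)
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

def flag_repeats(energy, threshold=1e-3):
    """Indices i where frame i+1 looks like a repeat of frame i."""
    return [i for i, e in enumerate(energy) if e < threshold]

# Tiny synthetic example: the third frame duplicates the second.
rng = np.random.default_rng(0)
f0 = rng.uniform(0, 255, size=(8, 8))
f1 = rng.uniform(0, 255, size=(8, 8))
video = np.stack([f0, f1, f1.copy(), f0])

energy = motion_energy(video)
print(flag_repeats(energy))  # [1]: the duplicated pair is at index 1
```

In the paper's pipeline this signal would then be fed, with other features, to the SVM stage; here it only demonstrates the repeated-frame cue.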
Citations
Journal ArticleDOI
TL;DR: A survey of passive video tampering detection methods is presented; the preliminaries of video files required for understanding video tampering forgery are presented; and some open issues are identified that point to new research areas in passive video tampering detection.

91 citations


Cites background or methods from "Video authentication in digital for..."

  • ...…in dynamic background videos Frame repetition and deletion Motion energy at spatial region of interest (SROI), average object area and entropy (Gupta et al., 2015) Works well on videos with high motion content (Su et al., 2009) used MCEA for frame deletion detection, which is a side effect of…...


  • ...(Gupta et al., 2015) proposed methods for detecting frame repetition and deletion....


  • ...Frame repetition and deletion Motion energy at spatial region of interest (SROI), average object area and entropy (Gupta et al., 2015) Works well on videos with high motion content...


Journal ArticleDOI
TL;DR: This paper presents a comprehensive and scrutinizing bibliography addressing the published literature in the field of passive-blind video content authentication, with primary focus on forgery/tamper detection, video re-capture and phylogeny detection, and video anti-forensics and counter anti-forensics.
Abstract: In this digital day and age, we are becoming increasingly dependent on multimedia content, especially digital images and videos, to provide a reliable proof of occurrence of events. However, the availability of several sophisticated yet easy-to-use content editing software has led to great concern regarding the trustworthiness of such content. Consequently, over the past few years, visual media forensics has emerged as an indispensable research field, which basically deals with development of tools and techniques that help determine whether or not the digital content under consideration is authentic, i.e., an actual, unaltered representation of reality. Over the last two decades, this research field has demonstrated tremendous growth and innovation. This paper presents a comprehensive and scrutinizing bibliography addressing the published literature in the field of passive-blind video content authentication, with primary focus on forgery/tamper detection, video re-capture and phylogeny detection, and video anti-forensics and counter anti-forensics. Moreover, the paper intimately analyzes the research gaps found in the literature, provides worthy insight into the areas, where the contemporary research is lacking, and suggests certain courses of action that could assist developers and future researchers explore new avenues in the domain of video forensics. Our objective is to provide an overview suitable for both the researchers and practitioners already working in the field of digital video forensics, and for those researchers and general enthusiasts who are new to this field and are not yet completely equipped to assimilate the detailed and complicated technical aspects of video forensics.

81 citations


Cites methods or result from "Video authentication in digital for..."

  • ...In [70], a frame-duplication and deletion detection technique was proposed....


  • ...Those techniques that did not rely too heavily on the visual contents or characteristics of the test data generated comparable results [70, 77, 79, 89]....


Journal ArticleDOI
TL;DR: A two-step algorithm is proposed in which suspicious frames are identified, and their features are extracted and compared with the other frames of the test video, in order to detect a frame-duplication attack in MPEG-4 video.
Abstract: This paper presents passive blind forgery detection to identify a frame-duplication attack in Moving Picture Experts Group-4 (MPEG-4) video. In this attack, one or more frames are copied and pasted at another location in the same video to hide or highlight a particular activity. Since the tampered frames come from the same video, their statistical properties are uniform, which makes it challenging to identify duplicate frames. In this paper, a two-step algorithm is proposed in which suspicious frames are identified and their features are extracted and compared with other frames of the test video to reach a decision. Scale Invariant Feature Transform (SIFT) key-points are used as the features for comparison. Finally, the Random Sample Consensus (RANSAC) algorithm is used to locate duplicate frames. The proposed method is tested on compressed and uncompressed videos with variable compression rates. The simulation results show that the proposed scheme detects the tampered frames with 99.8% average accuracy. Comparative analysis of the proposed method against existing methods is made with respect to Precision Rate (PR), Recall Rate (RR), and Detection Accuracy (DA). The average values of PR, RR, and DA for the proposed method are 99.9%, 99.7%, and 99.8% respectively, which are better than the other methods. The proposed method needs 33 seconds of simulation time on average, which is less than the other methods.
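The duplicate-frame search above can be approximated, for illustration only, with a much simpler stand-in: instead of SIFT key-point matching and RANSAC, the sketch below flags candidate duplicates by the Pearson correlation between frame pairs. All names and the threshold are assumptions, not the paper's implementation.

```python
import numpy as np

def find_duplicate_frames(frames, threshold=0.999, min_gap=1):
    """Return (i, j) pairs of non-adjacent frames that are near-identical.

    A crude stand-in for SIFT + RANSAC matching: two frames are flagged
    when the Pearson correlation of their pixels exceeds `threshold`.
    min_gap skips adjacent frames, which are naturally similar.
    """
    frames = np.asarray(frames, dtype=np.float64)
    flat = frames.reshape(len(frames), -1)
    pairs = []
    for i in range(len(flat)):
        for j in range(i + min_gap + 1, len(flat)):
            r = np.corrcoef(flat[i], flat[j])[0, 1]
            if r > threshold:
                pairs.append((i, j))
    return pairs

# Frame 3 is a pasted copy of frame 0.
rng = np.random.default_rng(1)
video = rng.uniform(0, 255, size=(4, 16, 16))
video[3] = video[0]
print(find_duplicate_frames(video))  # [(0, 3)]
```

Pixel correlation breaks down under recompression, which is exactly why the paper uses compression-robust SIFT features instead of raw pixels.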

17 citations


Cites methods from "Video authentication in digital for..."

  • ...In this technique, the original video is not used to detect the tampering, hence it is called passive blind forgery detection [1, 7, 17, 19]....


Proceedings ArticleDOI
06 Dec 2016
TL;DR: The results show that the proposed algorithm has the potential to be a reliable intelligent technique for digital video authentication without the need for an SVM classifier, which makes it faster and less computationally expensive compared with other intelligent techniques.
Abstract: With the growth of video editing tools, the trustworthiness of video information has become a highly sensitive issue. Today many devices can capture digital video, such as CCTV systems, digital cameras, and mobile phones, and these videos may be transmitted over the Internet or other non-secure channels. As digital video can be used as supporting evidence, it has to be protected against manipulation or tampering. Most video authentication techniques are based on watermarking and digital signatures; these are effective for copyright purposes but difficult to apply in other cases, such as video surveillance or videos captured by consumer cameras. In this paper we propose an intelligent technique for video authentication that uses the video's local information, which makes it useful for real-world applications. The proposed algorithm relies on the video's statistical local information and was applied to a dataset of videos captured by a range of consumer video cameras. The results show that the proposed algorithm has the potential to be a reliable intelligent technique for digital video authentication without the need for an SVM classifier, which makes it faster and less computationally expensive compared with other intelligent techniques.
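The abstract does not specify which local statistics are used; per-frame entropy is one such feature mentioned elsewhere on this page (Gupta et al., 2015). As a hedged sketch with assumed names, the Shannon entropy of a frame's grayscale histogram can be computed like this:

```python
import numpy as np

def frame_entropy(frame, bins=256):
    """Shannon entropy (bits) of a frame's grayscale histogram.

    One plausible 'statistical local information' feature: a constant
    frame has zero entropy, while a noisy frame approaches log2(bins).
    """
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

flat = np.full((32, 32), 128.0)       # constant frame
noisy = np.random.default_rng(2).uniform(0, 256, (32, 32))

print(frame_entropy(flat))            # 0.0
print(frame_entropy(noisy))           # close to 8 bits
```

A sudden drop of such a statistic across frames is the kind of cue an authentication technique like the one above could use.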

6 citations


Cites methods from "Video authentication in digital for..."

  • ...Ankita Gupta et al. (2015) explained two algorithms: the first measures the frame-by-frame motion-energy time for the video, which is used to detect repeated video frames, while the second computes local information through the average object area and entropy of the video…...


Journal ArticleDOI
TL;DR: In crime cases, video is usually manipulated to remove the evidence it contains; forensic analysis is therefore required to detect whether the video is authentic.
Abstract: Video is a form of digital evidence, one source of which is the handycam. In crime cases, video is usually manipulated to remove the evidence it contains, so forensic analysis is needed to detect the video's authenticity. In this study, videos were manipulated with cropping, zooming, rotation, and grayscale attacks in order to compare the original video recording with the tampered recording. The recordings were analyzed using the tampering-localization method, a detection method that indicates which part of the video has been manipulated, by analyzing frames, histogram calculations, and histogram graphs. With tampering localization, the location of the frames and the duration of the tampered portion of the video can be determined.
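A minimal sketch of histogram-comparison tampering localization in that spirit (an illustration under assumed names and an assumed threshold, not the study's code): compare each frame's grayscale histogram between the original and suspect recordings and report the frames where they diverge.

```python
import numpy as np

def localize_tampering(original, suspect, bins=64, tol=1e-6):
    """Return indices of frames whose histograms differ between videos.

    For each frame pair, normalized grayscale histograms are compared
    by L1 distance; frames above `tol` are flagged as manipulated.
    """
    hits = []
    for i, (a, b) in enumerate(zip(original, suspect)):
        ha, _ = np.histogram(a, bins=bins, range=(0, 256), density=True)
        hb, _ = np.histogram(b, bins=bins, range=(0, 256), density=True)
        if np.abs(ha - hb).sum() > tol:
            hits.append(i)
    return hits

rng = np.random.default_rng(3)
orig = rng.uniform(0, 256, size=(5, 16, 16))
tampered = orig.copy()
tampered[2] = 0.0                    # simulated manipulation: frame 2 blanked
print(localize_tampering(orig, tampered))  # [2]
```

Histogram comparison catches global attacks such as the grayscale conversion studied above, but is blind to manipulations that preserve the intensity distribution, which is why the study also inspects the frames themselves.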

4 citations


Cites background from "Video authentication in digital for..."

  • ...Researchers [5] also stated that there are two approaches used to examine a video: the first detects video authentication and the second detects video tampering....



References
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and it is validated through comparison with both subjective ratings and objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
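The SSIM index combines luminance, contrast, and structure comparisons. The sketch below applies the paper's standard formula globally to whole images (a real implementation averages it over a sliding window; the constants K1 = 0.01 and K2 = 0.03 follow the paper):

```python
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM: the standard formula on whole images.

    SSIM = (2*mx*my + C1)(2*cov + C2) /
           ((mx^2 + my^2 + C1)(vx + vy + C2)),
    with C1 = (k1*L)^2, C2 = (k2*L)^2 for dynamic range L.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = np.random.default_rng(4).uniform(0, 255, (32, 32))
noisy = img + np.random.default_rng(5).normal(0, 25, img.shape)

print(ssim_global(img, img))          # 1.0 for identical images
print(ssim_global(img, noisy) < 1.0)  # True: added noise lowers SSIM
```

For production use, library implementations with windowing (e.g., scikit-image's `structural_similarity`) are preferable to this global sketch.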

40,609 citations

Proceedings ArticleDOI
09 Nov 2003
TL;DR: This paper proposes a multiscale structural similarity method, which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions, and develops an image synthesis method to calibrate the parameters that define the relative importance of different scales.
Abstract: The structural similarity image quality paradigm is based on the assumption that the human visual system is highly adapted for extracting structural information from the scene, and therefore a measure of structural similarity can provide a good approximation to perceived image quality. This paper proposes a multiscale structural similarity method, which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions. We develop an image synthesis method to calibrate the parameters that define the relative importance of different scales. Experimental comparisons demonstrate the effectiveness of the proposed method.

4,333 citations

Journal ArticleDOI
TL;DR: An image information measure is proposed that quantifies the information present in the reference image and how much of this reference information can be extracted from the distorted image; combined, these two quantities form a visual information fidelity measure for image QA.
Abstract: Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Image QA algorithms generally interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. Such "full-reference" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by signal fidelity measures. In this paper, we approach the image QA problem as an information fidelity problem. Specifically, we propose to quantify the loss of image information to the distortion process and explore the relationship between image information and visual quality. QA systems are invariably involved with judging the visual quality of "natural" images and videos that are meant for "human consumption." Researchers have developed sophisticated models to capture the statistics of such natural signals. Using these models, we previously presented an information fidelity criterion for image QA that related image quality with the amount of information shared between a reference and a distorted image. In this paper, we propose an image information measure that quantifies the information that is present in the reference image and how much of this reference information can be extracted from the distorted image. Combining these two quantities, we propose a visual information fidelity measure for image QA. We validate the performance of our algorithm with an extensive subjective study involving 779 images and show that our method outperforms recent state-of-the-art image QA algorithms by a sizeable margin in our simulations. The code and the data from the subjective study are available at the LIVE website.

3,146 citations

Journal ArticleDOI
TL;DR: This paper presents results of an extensive subjective quality assessment study in which a total of 779 distorted images were evaluated by about two dozen human subjects and is the largest subjective image quality study in the literature in terms of number of images, distortion types, and number of human judgments per image.
Abstract: Measurement of visual quality is of fundamental importance for numerous image and video processing applications, where the goal of quality assessment (QA) algorithms is to automatically assess the quality of images or videos in agreement with human quality judgments. Over the years, many researchers have taken different approaches to the problem and have contributed significant research in this area and claim to have made progress in their respective domains. It is important to evaluate the performance of these algorithms in a comparative setting and analyze the strengths and weaknesses of these methods. In this paper, we present results of an extensive subjective quality assessment study in which a total of 779 distorted images were evaluated by about two dozen human subjects. The "ground truth" image quality data obtained from about 25 000 individual human quality judgments is used to evaluate the performance of several prominent full-reference image quality assessment algorithms. To the best of our knowledge, apart from video quality studies conducted by the Video Quality Experts Group, the study presented in this paper is the largest subjective image quality study in the literature in terms of number of images, distortion types, and number of human judgments per image. Moreover, we have made the data from the study freely available to the research community. This would allow other researchers to easily report comparative results in the future.

2,598 citations

Journal ArticleDOI
TL;DR: DIIVINE is capable of assessing the quality of a distorted image across multiple distortion categories, as against most NR IQA algorithms that are distortion-specific in nature, and is statistically superior to the often used measure of peak signal-to-noise ratio (PSNR) and statistically equivalent to the popular structural similarity index (SSIM).
Abstract: Our approach to blind image quality assessment (IQA) is based on the hypothesis that natural scenes possess certain statistical properties which are altered in the presence of distortion, rendering them un-natural; and that by characterizing this un-naturalness using scene statistics, one can identify the distortion afflicting the image and perform no-reference (NR) IQA. Based on this theory, we propose an (NR)/blind algorithm-the Distortion Identification-based Image Verity and INtegrity Evaluation (DIIVINE) index-that assesses the quality of a distorted image without need for a reference image. DIIVINE is based on a 2-stage framework involving distortion identification followed by distortion-specific quality assessment. DIIVINE is capable of assessing the quality of a distorted image across multiple distortion categories, as against most NR IQA algorithms that are distortion-specific in nature. DIIVINE is based on natural scene statistics which govern the behavior of natural images. In this paper, we detail the principles underlying DIIVINE, the statistical features extracted and their relevance to perception and thoroughly evaluate the algorithm on the popular LIVE IQA database. Further, we compare the performance of DIIVINE against leading full-reference (FR) IQA algorithms and demonstrate that DIIVINE is statistically superior to the often used measure of peak signal-to-noise ratio (PSNR) and statistically equivalent to the popular structural similarity index (SSIM). A software release of DIIVINE has been made available online: http://live.ece.utexas.edu/research/quality/DIIVINE_release.zip for public use and evaluation.

1,501 citations