Journal ArticleDOI

Decision-theoretic model to identify printed sources

TLDR
The proposed decision-theoretic model can be implemented very efficiently for real-world digital forensic applications and is superior to existing approaches.
Abstract
When trying to identify a forged printed document, examining digital evidence can be a challenge. Over the past several years, digital forensics for printed-document source identification has become increasingly important, as it bears on the investigation and prosecution of many types of crime. Unlike invasive forensic approaches, which require a fraction of the printed document as a specimen for verification, noninvasive forensic techniques use an optical mechanism to explore the relationship between scanned images and the source printer. To explore this relationship between source printers and scanner-acquired images, the proposed decision-theoretic approach applies image processing techniques and data exploration methods to compute a range of statistical features, including Local Binary Pattern (LBP), Gray Level Co-occurrence Matrix (GLCM), Discrete Wavelet Transform (DWT), spatial filters, the Wiener filter, the Gabor filter, Haralick, and SFTA features. The proposed aggregation method then combines the extracted features with a decision-fusion model over feature selections for classification. In addition, the impact of different paper textures and paper colors on printed-source identification is investigated. For comparison, a deep learning system based on Convolutional Neural Networks (CNNs), which can learn features automatically, is developed to solve the complex image classification problem. Both systems are compared, and the experimental results indicate that the proposed system achieves the best overall prediction accuracy for both image and text input and is superior to existing approaches. In brief, the proposed decision-theoretic model can be implemented very efficiently for real-world digital forensic applications.
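As a rough sketch of the kind of pipeline the abstract describes (not the authors' implementation), the example below extracts two of the listed texture features, an LBP histogram and GLCM/Haralick-style statistics, from scanned grayscale patches and fuses the decisions of two per-feature classifiers by majority vote. The library choices (scikit-image, scikit-learn), the classifier types, and all parameter values are illustrative assumptions.

```python
# Hypothetical texture-feature + decision-fusion sketch for printer source
# identification; assumptions for illustration, NOT the authors' implementation.
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

def lbp_features(gray, P=8, R=1.0):
    """Histogram of uniform LBP codes over a 2-D uint8 grayscale patch."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def glcm_features(gray):
    """A few Haralick-style statistics from a gray-level co-occurrence matrix."""
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def predict_with_fusion(train_patches, train_labels, test_patches):
    """Train one classifier per feature family and fuse decisions by majority vote.

    Labels are assumed to be integer printer-class indices.
    """
    X_lbp = np.array([lbp_features(p) for p in train_patches])
    X_glcm = np.array([glcm_features(p) for p in train_patches])
    clf_lbp = SVC().fit(X_lbp, train_labels)
    clf_glcm = RandomForestClassifier(n_estimators=200).fit(X_glcm, train_labels)

    votes = np.stack([
        clf_lbp.predict(np.array([lbp_features(p) for p in test_patches])),
        clf_glcm.predict(np.array([glcm_features(p) for p in test_patches])),
    ])
    # Majority vote across per-feature decisions; ties resolve to the smaller class index.
    fused = [np.bincount(col).argmax() for col in votes.T]
    return np.array(fused)
```

A real system in the spirit of the abstract would add the remaining feature families (DWT, Gabor, Wiener, SFTA) as further voters in the same fusion step.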


Citations
Journal ArticleDOI

Deep learning for printed document source identification

TL;DR: A deep learning system based on Convolutional Neural Networks (CNNs), which can learn features automatically, is developed to solve the complex image classification problem and should be continually evaluated and compared in the interest of universal utilization.
Journal ArticleDOI

Passive classification of source printer using text-line-level geometric distortion signatures from scanned images of printed documents

TL;DR: A set of features characterizing text-line-level geometric distortions is proposed, along with a novel system that uses them to identify the origin of a printed document and gives much higher accuracy under a small-training-size constraint.
Journal ArticleDOI

A computational approach for printed document forensics using SURF and ORB features

TL;DR: A classifier-based model is proposed to identify the source printer and assign a questioned document to one of the printer classes; it can efficiently classify questioned documents to their respective printer class.
Journal ArticleDOI

Source Printer Classification Using Printer Specific Local Texture Descriptor

TL;DR: In this paper, a printer-specific local texture descriptor is introduced to overcome the limitation that the fonts of letters in test documents of unknown origin must also be present in the documents used to train the classifier.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; this framework won 1st place on the ILSVRC 2015 classification task.
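For illustration, here is a minimal PyTorch-style sketch of the residual idea summarized above: the block learns a residual mapping and adds the input back through an identity shortcut. The channel count and layer sizes are assumptions, not the ILSVRC-2015 winning configuration.

```python
# Minimal residual block with an identity shortcut (illustrative sizes only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        # The block learns a residual F(x); the input is added back before the
        # final activation, which is what eases optimization of very deep nets.
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)
```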
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art classification performance, as discussed by the authors.
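The architecture summarized above can be sketched roughly as follows; this is a hedged, illustrative PyTorch layout (filter counts follow the commonly cited description), not the exact published configuration.

```python
# Illustrative AlexNet-like layout: five conv layers, some followed by
# max-pooling, then three fully-connected layers ending in a 1000-way classifier.
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 1000),  # logits for the 1000-way softmax (227x227 RGB input assumed)
)
```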
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for document recognition; gradient-based learning is used to synthesize a complex decision surface that can classify high-dimensional patterns such as handwritten characters.
Proceedings ArticleDOI

Going deeper with convolutions

TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).