
Showing papers on "Digital forensics published in 2014"


Journal ArticleDOI
TL;DR: Artefacts likely to remain after the use of cloud storage were identified, in the context of the experiments, on a computer hard drive and an Apple iPhone 3G, along with the potential access point(s) for digital forensics examiners to secure evidence.

230 citations


Journal ArticleDOI
TL;DR: It is concluded that there remains a need for further research with a focus on real world applicability of a method or methods to address the digital forensic data volume challenge.

216 citations


Book
22 Jul 2014
TL;DR: The Art of Memory Forensics explains the latest technological innovations in digital forensics to help bridge the gap left when volatile memory is overlooked during incident response, and covers the most popular and recently released versions of Windows, Linux, and Mac.
Abstract: Memory forensics provides cutting-edge technology to help investigate digital attacks. Memory forensics is the art of analyzing computer memory (RAM) to solve digital crimes. As a follow-up to the best seller Malware Analyst's Cookbook, experts in the fields of malware, security, and digital forensics bring you a step-by-step guide to memory forensics, now the most sought-after skill in the digital forensics and incident response fields. Beginning with introductory concepts and moving toward the advanced, The Art of Memory Forensics: Detecting Malware and Threats in Windows, Linux, and Mac Memory is based on a five-day training course that the authors have presented to hundreds of students. It is the only book on the market that focuses exclusively on memory forensics and how to deploy such techniques properly. Discover memory forensics techniques: how volatile memory analysis improves digital investigations; proper investigative steps for detecting stealth malware and advanced threats; how to use free, open source tools for conducting thorough memory forensics; and ways to acquire memory from suspect systems in a forensically sound manner. The next era of malware and security breaches is more sophisticated and targeted, and the volatile memory of a computer is often overlooked or destroyed as part of the incident response process. The Art of Memory Forensics explains the latest technological innovations in digital forensics to help bridge this gap. It covers the most popular and recently released versions of Windows, Linux, and Mac, including both the 32- and 64-bit editions. Bonus materials include more than 20 real-world exercises, sample memory and code files, and even a formal presentation, syllabus, and test bank.

161 citations


Journal ArticleDOI
TL;DR: This paper summarizes the key aspects of cloud computing and analyses how established digital forensic procedures will be invalidated in this new environment and several new research challenges addressing this changing context are identified.
Abstract: Cloud computing is a rapidly evolving information technology (IT) phenomenon. Rather than procure, deploy and manage a physical IT infrastructure to host their software applications, organizations are increasingly deploying their infrastructure into remote, virtualized environments, often hosted and managed by third parties. This development has significant implications for digital forensic investigators, equipment vendors, law enforcement, as well as corporate compliance and audit departments (among others). Much of digital forensic practice assumes careful control and management of IT assets (particularly data storage) during the conduct of an investigation. This paper summarises the key aspects of cloud computing and analyses how established digital forensic procedures will be invalidated in this new environment. Several new research challenges addressing this changing context are also identified and discussed.

139 citations


Proceedings ArticleDOI
01 Oct 2014
TL;DR: An image forgery localization technique is proposed that fuses the outputs of three complementary tools, based on sensor noise, machine learning and block-matching, respectively; the technique ranked first in phase 2 of the first Image Forensics Challenge organized in 2013.
Abstract: We propose an image forgery localization technique which fuses the outputs of three complementary tools, based on sensor noise, machine learning and block-matching, respectively. To apply the sensor noise tool, a preliminary camera identification phase was required, followed by estimation of the camera fingerprint, and then forgery detection and localization. The machine-learning tool is based on a suitable local descriptor, while the block-matching tool relies on the PatchMatch algorithm. A decision fusion strategy is then implemented, based on suitable reliability indexes associated with the binary masks. The proposed technique ranked first in phase 2 of the first Image Forensics Challenge organized in 2013 by the IEEE Information Forensics and Security Technical Committee (IFS-TC).
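The decision-fusion step can be sketched as a weighted vote over per-tool binary masks. The masks and reliability weights below are invented for illustration; the paper's actual reliability indexes and fusion strategy are more elaborate.

```python
# A minimal sketch of fusing per-tool binary forgery masks with
# reliability weights (illustrative, not the paper's algorithm).

def fuse_masks(masks, reliabilities, threshold=0.5):
    """masks: list of equally-sized 2D lists of 0/1; reliabilities: weights."""
    total = sum(reliabilities)
    rows, cols = len(masks[0]), len(masks[0][0])
    fused = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Weighted vote across tools: a pixel is marked forged
            # when the reliability-weighted agreement passes the threshold.
            vote = sum(w * m[r][c] for m, w in zip(masks, reliabilities))
            fused[r][c] = 1 if vote / total >= threshold else 0
    return fused

noise = [[1, 0], [0, 0]]   # sensor-noise tool output (toy 2x2 mask)
ml    = [[1, 1], [0, 0]]   # machine-learning tool output
patch = [[1, 0], [0, 1]]   # block-matching (PatchMatch-style) output
print(fuse_masks([noise, ml, patch], [0.5, 0.3, 0.2]))
# → [[1, 0], [0, 0]]
```

Only the top-left pixel is flagged by all three tools, so only it survives the weighted vote.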

102 citations


ReportDOI
15 May 2014
TL;DR: This guide attempts to bridge the gap by providing an in-depth look into mobile devices and explaining the technologies involved and their relationship to forensic procedures.

Abstract: Mobile device forensics is the science of recovering digital evidence from a mobile device under forensically sound conditions using accepted methods. Mobile device forensics is an evolving specialty in the field of digital forensics. This guide attempts to bridge the gap by providing an in-depth look into mobile devices and explaining the technologies involved and their relationship to forensic procedures. This document covers mobile devices with features beyond simple voice communication and text messaging capabilities. This guide also discusses procedures for the validation, preservation, acquisition, examination, analysis, and reporting of digital information.

101 citations


Journal ArticleDOI
TL;DR: This paper conducts an in-depth forensic experiment on XtreemFS, a Contrail EU-funded project, as a case study for distributed filesystem forensics, and proposes a process for the collection of evidential data from distributed filesystems.

84 citations


Journal ArticleDOI
TL;DR: This paper explores the current implementation of the digital forensic process, analyses factors that impact the efficiency of this process, and explains how, in the Netherlands, a Digital Forensics as a Service implementation reduced case backlogs and freed up digital investigators to help detectives better understand the digital material.

84 citations


Journal Article
TL;DR: Developmental trends in computer forensics and security are examined in various aspects to determine the trend of solutions in each and to evaluate their effectiveness in resolving privacy issues.
Abstract: Privacy issues have always been a major concern in computer forensics and security, and privacy issues appear in any investigation, whether or not it pertains to computers. Protecting privacy in the physical world requires appropriate legislation; in the digital world, however, with technology growing rapidly and digital devices generating ever larger amounts of private data, it is impossible to provide a fully protected space while data are transferred, stored and collected. Since the field's introduction, forensic investigators and developers have faced challenges in finding the balance between retrieving key evidence and infringing user privacy. This paper looks into developmental trends in computer forensics and security in various aspects of achieving such a balance. In addition, the paper analyses each scenario to determine the trend of solutions in these aspects and evaluates their effectiveness in resolving the aforementioned issues.

63 citations


Journal ArticleDOI
TL;DR: The proposed approach is based on a model which integrates knowledge of experts from the fields of digital forensics and software development, allowing a semantically rich representation of events related to an incident and enabling their analysis in an automatic and efficient way.

61 citations


Posted Content
TL;DR: A Digital Forensic Data Reduction and Data Mining Framework is proposed to provide a rapid triage, collection, intelligence analysis, review and storage methodology to support the various stages of digital forensic examinations.
Abstract: The volume of digital forensic evidence is rapidly increasing, leading to large backlogs. In this paper, a Digital Forensic Data Reduction and Data Mining Framework is proposed. Initial research with sample data from South Australia Police Electronic Crime Section and Digital Corpora Forensic Images using the proposed framework resulted in significant reduction in the storage requirements — the reduced subset is only 0.196 percent and 0.75 percent respectively of the original data volume. The framework outlined is not suggested to replace full analysis, but serves to provide a rapid triage, collection, intelligence analysis, review and storage methodology to support the various stages of digital forensic examinations. Agencies that can undertake rapid assessment of seized data can more effectively target specific criminal matters. The framework may also provide a greater potential intelligence gain from analysis of current and historical data in a timely manner, and the ability to undertake research of trends over time.
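The reduction idea can be illustrated with a toy calculation. The extension list and file sizes below are assumed for illustration only; they are not the framework's actual selection criteria or the paper's data.

```python
# Illustrative sketch of the data-reduction idea (not the authors' tool):
# retain only a small, likely-relevant subset of seized files and report
# the size of the subset relative to the original data volume.

HIGH_VALUE_EXTENSIONS = {".doc", ".xls", ".jpg", ".eml", ".db"}  # assumed list

def reduce_corpus(files):
    """files: iterable of (name, size_bytes). Keep likely-relevant types."""
    return [(name, size) for name, size in files
            if any(name.lower().endswith(ext) for ext in HIGH_VALUE_EXTENSIONS)]

def subset_percent(files, kept):
    """Size of the retained subset as a percentage of the original volume."""
    total = sum(size for _, size in files)
    subset = sum(size for _, size in kept)
    return 100.0 * subset / total

seized = [("report.doc", 120_000), ("movie.iso", 4_000_000_000),
          ("mail.eml", 45_000), ("pagefile.sys", 2_000_000_000)]
kept = reduce_corpus(seized)
print(f"subset is {subset_percent(seized, kept):.4f}% of original volume")
```

Large low-value artifacts (disk images, pagefiles) dominate the seized volume, so keeping only document-like files yields a subset well under one percent, which is the effect the framework reports at scale.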

Proceedings ArticleDOI
23 Jul 2014
TL;DR: Essential foundations are established and a detailed definition of "privacy-respecting digital investigation" as a new cross-disciplinary field of research is provided, together with a detailed discussion of potential privacy issues in different phases of the digital forensics life cycle based on EU, US, and APEC privacy regulations.
Abstract: Forensic investigation requirements are in direct conflict with the privacy rights of those whose actions are being investigated. At the same time, once private data is exposed it is impossible to 'undo' the effects of its exposure should the suspect be found innocent. Moreover, it is not uncommon that during the investigation of a suspect, private information of other, innocent parties becomes apparent to the forensic investigator. All of this raises the need for platforms that enforce privacy boundaries even on authorized forensic investigators. To the best of the authors' knowledge, there is no practical model for privacy-respecting digital investigation that is capable of considering the requirements of different jurisdictions and protecting subjects' data privacy in line with investigation warrant permissions and data-origin privacy requirements. Privacy-respecting digital forensics, an emerging cross-disciplinary research area, is moving toward addressing the above issues. In this paper, we first establish the needed foundations and describe in detail "privacy-respecting digital investigation" as a cross-disciplinary field of research. Afterwards, we review the main research efforts in the research disciplines relevant to the field and elaborate on existing research problems. We finalize the paper by looking at potential privacy issues during digital investigation in the light of EU, US, and APEC privacy regulations.
The main contributions of this paper are: first, establishing essential foundations and providing a detailed definition of "privacy-respecting digital investigation" as a new cross-disciplinary field of research; second, a review of the current state of the art in the disciplines relevant to this field; third, an elaboration of existing issues and a discussion of the most promising solutions in these disciplines; and fourth, a detailed discussion of potential privacy issues in different phases of the digital forensics life cycle based on EU, US, and APEC privacy regulations. We hope this paper opens up a new and fruitful avenue in the study, design, and development of privacy-respecting forensic investigation as an interdisciplinary field of research.

Book ChapterDOI
16 Jun 2014
TL;DR: This paper addresses the cloud forensics challenges identified from a review conducted in this area and moves to a new model that assigns the aforementioned challenges to investigation stages.
Abstract: One of the most important areas in the developing field of cloud computing is the way investigators conduct investigations in order to reveal how a digital crime took place over the cloud. This area is known as cloud forensics. While a great deal of research on digital forensics has been carried out, the current digital forensic models and frameworks used to conduct a digital investigation do not meet the requirements and standards demanded in cloud forensics, due to the nature and characteristics of cloud computing. In parallel, the issues and challenges faced in traditional forensics differ from those of cloud forensics. This paper addresses the cloud forensics challenges identified from a review conducted in this area and moves to a new model that assigns the aforementioned challenges to investigation stages.

Journal ArticleDOI
TL;DR: Current research in the field of cloud forensics is reviewed, with a focus on "forensics in the cloud", that is, cloud computing as an evidence source for forensic investigations.
Abstract: As cloud computing becomes more prevalent, there is a growing need for forensic investigations involving cloud technologies. The field of cloud forensics seeks to address the challenges to digital forensics presented by cloud technologies. This article reviews current research in the field of cloud forensics, with a focus on "forensics in the cloud"--that is, cloud computing as an evidence source for forensic investigations.

01 Jan 2014
TL;DR: The authors propose a model that allows digital forensic readiness to be achieved by implementing a Botnet as a Service (BaaS) in a cloud environment.

Abstract: Cloud forensics has become an inexorable and transformative discipline in the modern world. The need to share a pool of resources, and to extract digital evidence from those same distributed resources to be presented in a court of law, has become a subject of focus. Forensic readiness is a pro-active process that entails the digital preparedness an organisation uses to gather, store and handle incident-responsive data, with the aim of reducing post-event response by digital forensic investigators. Forensic readiness in the cloud can be achieved by implementing a botnet with non-malicious code as opposed to malicious code. The botnet still infects instances of virtual computers within the cloud, however with good intentions as opposed to bad intentions. The botnet is, effectively, implemented as a service that harvests digital information that can be preserved as admissible potential digital evidence. The problem the authors address in this paper is that no techniques exist for gathering information in the cloud for digital forensic readiness purposes as described in the international standard for digital forensic investigations (ISO/IEC 27043). The authors propose a model that allows digital forensic readiness to be achieved by implementing a Botnet as a Service (BaaS) in a cloud environment.

ReportDOI
02 Jul 2014
TL;DR: The purpose of this document is to provide a definition and terminology to describe approximate matching in order to promote discussion, research, tool development and tool acquisition.
Abstract: This document provides a definition of and terminology for approximate matching. Approximate matching is a promising technology designed to identify similarities between two digital artifacts. It is used to find objects that resemble each other or to find objects that are contained in another object. This can be very useful for filtering data for security monitoring, digital forensics, or other applications. The purpose of this document is to provide a definition and terminology to describe approximate matching in order to promote discussion, research, tool development and tool acquisition.
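The two query types the definition names, resemblance (do two objects look alike?) and containment (is one object embedded in another?), can be roughly illustrated with Python's difflib as a stand-in similarity measure. Real approximate-matching tools such as ssdeep or sdhash use specialized digest algorithms; this sketch only shows the shape of the two queries.

```python
# Rough sketch of approximate-matching query types using difflib as a
# stand-in similarity measure (not a real similarity-digest algorithm).
from difflib import SequenceMatcher

def similarity(a: bytes, b: bytes) -> float:
    """Resemblance score in [0, 1] between two byte artifacts."""
    return SequenceMatcher(None, a, b).ratio()

def resembles(a: bytes, b: bytes, threshold: float = 0.7) -> bool:
    # Resemblance query: do the two objects look alike overall?
    return similarity(a, b) >= threshold

def contains(container: bytes, fragment: bytes, threshold: float = 0.9) -> bool:
    # Containment query: is (most of) the fragment embedded in the container?
    m = SequenceMatcher(None, container, fragment).find_longest_match(
        0, len(container), 0, len(fragment))
    return m.size >= threshold * len(fragment)

original = b"The quick brown fox jumps over the lazy dog " * 4
tampered = original.replace(b"lazy", b"idle")
print(resembles(original, tampered))           # near-duplicates score high
print(contains(original, b"brown fox jumps"))  # fragment is embedded
```

A tampered copy still resembles the original, and a short fragment is detected as contained, which is exactly the filtering behavior the document motivates.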

Journal Article
TL;DR: In this article, the authors proposed a data reduction and data mining framework that incorporates a process of reducing data volume by focusing on a subset of information, which can provide a rapid triage, collection, intelligence analysis, review and storage methodology to support the various stages of digital forensic examinations.
Abstract: With the volume of digital forensic evidence rapidly increasing, this paper proposes a data reduction and data mining framework that incorporates a process of reducing data volume by focusing on a subset of information.
Foreword: The volume of digital forensic evidence is rapidly increasing, leading to large backlogs. In this paper, a Digital Forensic Data Reduction and Data Mining Framework is proposed. Initial research with sample data from South Australia Police Electronic Crime Section and Digital Corpora Forensic Images using the proposed framework resulted in significant reduction in the storage requirements: the reduced subset is only 0.196 percent and 0.75 percent respectively of the original data volume. The framework outlined is not suggested to replace full analysis, but serves to provide a rapid triage, collection, intelligence analysis, review and storage methodology to support the various stages of digital forensic examinations. Agencies that can undertake rapid assessment of seized data can more effectively target specific criminal matters. The framework may also provide a greater potential intelligence gain from analysis of current and historical data in a timely manner, and the ability to undertake research of trends over time.

Journal ArticleDOI
TL;DR: It is argued that more intelligent techniques are necessary and should be used proactively; by applying new techniques to digital investigations, there is an opportunity to address the challenges of the larger and more complex domains in which cybercrimes are taking place.
Abstract: In this paper we posit that current investigative techniques, particularly as deployed by law enforcement, are becoming unsuitable for most types of crime investigation. The growth in cybercrime and the complexity of its various types, coupled with limitations in time and resources, both computational and human, put an increasing strain on the ability of digital investigators to apply the processes of digital forensics and digital investigations to obtain timely results. In order to combat these problems, there is a need to enhance the use of the resources available and move beyond the capabilities and constraints of the forensic tools in current use. We argue that more intelligent techniques are necessary and should be used proactively. The paper makes the case for the need for such tools and techniques, investigates and discusses the opportunities afforded by applying principles and procedures of artificial intelligence to digital forensics intelligence and to intelligent forensics, and suggests that by applying new techniques to digital investigations there is an opportunity to address the challenges of the larger and more complex domains in which cybercrimes are taking place.

Proceedings ArticleDOI
27 Mar 2014
TL;DR: The results indicate that the relation between the time taken for image acquisition and different storage volumes is not linear, owing to several factors affecting remote acquisition, especially over the Internet.
Abstract: The essentially infinite storage space offered by cloud computing is quickly becoming a problem for forensic investigators with regard to evidence acquisition, forensic imaging and extended time for data analysis. It is apparent that the amount of stored data will at some point become impossible to image in practice, preventing forensic investigators from completing a full investigation. In this paper, we address these issues by determining the relationship between acquisition times and different storage capacities, using remote acquisition to obtain data from virtual machines in the cloud. A hypothetical case study is used to investigate the importance of using partial and full approaches to acquiring data from the cloud, and to determine how each approach affects the duration and accuracy of the forensic investigation and its outcome. Our results indicate that the relation between the time taken for image acquisition and different storage volumes is not linear, owing to several factors affecting remote acquisition, especially over the Internet. Performing the acquisition using cloud resources showed a considerable reduction in time when compared to the conventional imaging method. For a 30 GB storage volume, the least time was recorded for the snapshot functionality of the cloud and the dd command; the time using this method is reduced by almost 77 percent. FTK Remote Agent proved to be most efficient, showing an almost 12 percent reduction in time over other methods of acquisition. Furthermore, the timelines produced with the help of the case study showed that the hybrid approach should be preferred over the complete approach for performing acquisition from the cloud, especially in time-critical scenarios.
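The reported savings are simple percent reductions relative to conventional imaging. The timings below are invented purely to illustrate the arithmetic and roughly match the stated 77 and 12 percent figures; they are not the paper's measurements.

```python
# Percent time reduction of an acquisition method relative to a baseline.
def percent_reduction(baseline_s: float, method_s: float) -> float:
    """Percentage of time saved relative to the baseline method."""
    return 100.0 * (baseline_s - method_s) / baseline_s

# Hypothetical timings (seconds) for acquiring a 30 GB volume; these
# figures are made up to illustrate the calculation, not the paper's data.
timings = {"conventional": 5200.0, "snapshot+dd": 1200.0, "ftk_remote": 4570.0}

for method in ("snapshot+dd", "ftk_remote"):
    saved = percent_reduction(timings["conventional"], timings[method])
    print(f"{method}: {saved:.1f}% faster than conventional imaging")
```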

Journal ArticleDOI
TL;DR: A conceptual framework for organizational forensic readiness is developed and future work towards the empirical validation and refinement of the framework is defined.
Abstract: Although digital forensics has traditionally been associated with law enforcement, the impact of new regulations, industry standards and cyber-attacks, combined with a heavy reliance on digital assets, has resulted in a more prominent role for digital forensics in organizations. Modern organizations, therefore, need to be forensically ready in order to maximize their potential to respond to forensic events and demonstrate compliance with laws and regulations. However, little research exists on the assessment of organizational digital forensic readiness. This paper describes a comprehensive approach to identifying the factors that contribute to digital forensic readiness and how these factors work together to achieve forensic readiness in an organization. We develop a conceptual framework for organizational forensic readiness and define future work towards the empirical validation and refinement of the framework.

Journal ArticleDOI
TL;DR: The client application and its detected network traffic are outlined, and artefacts that may be of value as evidence in future digital investigations are identified.

Proceedings ArticleDOI
03 Jun 2014
TL;DR: This paper combines a novel skin detection approach with machine learning techniques to alleviate manual image screening, upgrading previous approaches by introducing several novel methods to enhance detection rates.
Abstract: Digital forensics experts are increasingly confronted with investigating large amounts of data and judging if it contains digital contraband. In this paper, we present an adaptable solution for detecting nudity or pornography in color images. We combine a novel skin detection approach with machine learning techniques to alleviate manual image screening. We upgrade previous approaches by leveraging machine learning and introducing several novel methods to enhance detection rates. Our nudity assessment uses skin detection and positioning of skin areas within a picture. Sizes, shapes and placements of detected skin regions as well as the total amount of skin in an image are used as features for a support vector machine that finally classifies the image as non-pornographic or pornographic. With a recall of 65.7% and 6.4% false positive rate, our approach outperforms the best reported detection approaches.
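The reported figures are standard confusion-matrix metrics. The counts below are invented so that they reproduce the stated 65.7% recall and 6.4% false positive rate; they are not the paper's evaluation data.

```python
# Recall and false positive rate from confusion-matrix counts
# (illustrative numbers chosen to match the reported metrics).

def recall(tp: int, fn: int) -> float:
    """Fraction of actual positives that were correctly flagged."""
    return tp / (tp + fn)

def false_positive_rate(fp: int, tn: int) -> float:
    """Fraction of actual negatives that were incorrectly flagged."""
    return fp / (fp + tn)

tp, fn = 657, 343   # pornographic images correctly / incorrectly missed
fp, tn = 64, 936    # benign images incorrectly flagged / correctly passed
print(f"recall: {recall(tp, fn):.1%}, FPR: {false_positive_rate(fp, tn):.1%}")
# → recall: 65.7%, FPR: 6.4%
```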

Journal ArticleDOI
01 May 2014
TL;DR: Case-Based Reasoning Forensic Triager (CBR-FT) is a method for collecting and reusing past digital forensic investigation information in order to highlight likely evidential areas on a suspect operating system, thereby helping an investigator to decide where to search for evidence.
Abstract: The role of triage in digital forensics is disputed, with some practitioners questioning its reliability for identifying evidential data. Although successfully implemented in the field of medicine, triage has not established itself to the same degree in digital forensics. This article presents a novel approach to triage for digital forensics. Case-Based Reasoning Forensic Triager (CBR-FT) is a method for collecting and reusing past digital forensic investigation information in order to highlight likely evidential areas on a suspect operating system, thereby helping an investigator to decide where to search for evidence. The CBR-FT framework is discussed and the results of twenty test triage examinations are presented. CBR-FT has been shown to be a more effective method of triage when compared to a practitioner using a leading commercial application.
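The case-based idea can be sketched as ranking filesystem locations by how often they yielded evidence in past cases, so a triage examiner searches the most historically productive areas first. The locations and cases below are hypothetical, and CBR-FT's actual case retrieval and similarity machinery is considerably richer.

```python
# Toy sketch of case-based triage ranking (details assumed, not CBR-FT's
# actual algorithm): score each location by the fraction of past cases
# in which it held evidence, then search high-scoring locations first.
from collections import Counter

# Hypothetical past cases, each listing locations where evidence was found.
past_cases = [
    ["Users/*/Documents", "Users/*/AppData/Roaming", "$Recycle.Bin"],
    ["Users/*/Documents", "Windows/Prefetch"],
    ["Users/*/AppData/Roaming", "Users/*/Documents"],
]

def triage_ranking(cases):
    hits = Counter(loc for case in cases for loc in case)
    n = len(cases)
    # score = fraction of past cases in which the location yielded evidence
    return sorted(((loc, count / n) for loc, count in hits.items()),
                  key=lambda item: item[1], reverse=True)

for loc, score in triage_ranking(past_cases):
    print(f"{score:.2f}  {loc}")
```

In this toy history, user document folders scored in every past case, so a triage examination would start there.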

01 Jan 2014
TL;DR: The research in this thesis forms an extensive case study in the application of MDSE in the domain of automated digital forensics, using the Rascal metaprogramming language, and provides concrete evidence for the successful application.
Abstract: Digital forensics concerns the acquisition, recovery and analysis of information on digital devices to answer legal questions. Exponential increases in available storage, as well as growing device adoption by the public, have made manual inspection of all information infeasible. A solution is automated digital forensics, which is the use of software to perform tasks in digital forensics automatically, reducing the time required. Software engineering techniques exist to construct high performance solutions. However, one requirement complicates the application of standard techniques: handling the high variability in how investigated information is stored. The number of different devices and applications is huge and constantly changing. This leads to a constant stream of required changes to digital forensics software in order to recover as much information as possible. Factoring out commonality so that the changing aspects of a solution can evolve separately is a supposed strength of model-driven software engineering (MDSE). This separation of concerns is achieved through the use of a domain-specific language (DSL). Changes expressed in this DSL are then automatically applied through the use of transformation tools, which handle fixed requirements such as high performance. The research in this thesis forms an extensive case study in the application of MDSE in the domain of automated digital forensics, using the Rascal metaprogramming language. It provides concrete evidence for the successful application of MDSE in automated digital forensics, and contributes to knowledge about the application of MDSE in general. The implementations illustrate the usefulness of Rascal in DSL engineering.
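The separation of concerns the thesis describes can be illustrated in miniature (this sketch is Python, not Rascal, and the record formats are invented): the varying part, how an artifact lays out its records, is a small declarative description, while a generic interpreter handles the fixed concern of decoding bytes. Supporting a new application version then means adding a description, not rewriting the decoder.

```python
# Toy illustration of the MDSE idea: declarative format descriptions
# (the "DSL") interpreted by one generic decoder. Formats are invented.
import struct

# Each format is just a list of (field_name, struct_code) pairs.
FORMATS = {
    "chat_record_v1": [("msg_id", "I"), ("timestamp", "I"), ("flags", "H")],
    "chat_record_v2": [("msg_id", "Q"), ("timestamp", "I"), ("flags", "H")],
}

def decode(fmt_name: str, data: bytes) -> dict:
    """Generic interpreter: decode one record per the named description."""
    spec = FORMATS[fmt_name]
    layout = "<" + "".join(code for _, code in spec)  # little-endian
    values = struct.unpack(layout, data[:struct.calcsize(layout)])
    return dict(zip((name for name, _ in spec), values))

blob = struct.pack("<IIH", 42, 1_400_000_000, 3)
print(decode("chat_record_v1", blob))
# → {'msg_id': 42, 'timestamp': 1400000000, 'flags': 3}
```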

Journal ArticleDOI
TL;DR: The evolution of digital forensics is studied, covering its origins, its current position and its future directions, followed by an assessment and analysis of the current state of the art in both industrial and academic digital forensics research.
Abstract: Digital forensics has evolved from addressing minor computer crimes to the investigation of complex international cases with massive effects on the world. This paper studies the evolution of digital forensics: its origins, its current position and its future directions. The paper sets the scene by exploring past literature on digital forensic approaches, followed by an assessment and analysis of the current state of the art in both industrial and academic digital forensics research. The obtained results are compared and analyzed to provide a comprehensive view of the current digital forensics landscape. Furthermore, this paper highlights critical digital forensic issues that are being overlooked and not addressed as they deserve. The paper finally concludes by offering future research directions in this area.

Journal Article
TL;DR: The Daubert standard provides judges with an objective set of guidelines for accepting scientific evidence; in cases where information is hidden, erased, or otherwise altered, data trails and other metadata markers are often the key to establishing a timeline and correlating important events.
Abstract: ¶1 With the widespread permeation of continually advancing technologies into our daily lives, it is inevitable that the product of those technologies, i.e. digital information, makes its way into the courtroom. This has largely occurred in the form of electronic discovery, or "e-discovery," where each party involved in an action provides the relevant information it possesses electronically. However, in cases where information is hidden, erased, or otherwise altered, digital forensic analysis is necessary to draw further conclusions about the available evidence.1 Digital forensic analysis is analogous to more traditional forensic analysis. For example, in criminal cases where a firearm was used in the commission of the crime but the gun is not readily admissible,2 forensic science is necessary to trace the origin of the weapon, perform fingerprint analysis on it, and compare fired bullet casings to ensure the weapon used and the weapon analyzed are one and the same.3

¶2 In sum, digital forensics is the preservation and analysis of electronic data.4 These data include the primary substantive data (the gun) and the secondary data attached to the primary data, such as data trails and time/date stamps (the fingerprints).5 These data trails and other metadata markers are often the key to establishing a timeline and correlating important events.6

I. A BRIEF HISTORY OF DIGITAL FORENSICS

¶3 A forensic report, whether for digital evidence or physical evidence, must have conclusions that are reproducible by independent third parties.7 So, facts discovered and opinions formed need to be documented and referenced to their sources. Why? Ones and zeros do not lie. Therefore, forensic reports that contain opinions based upon properly documented digital sources are much more likely to withstand judicial scrutiny than are opinions based on less reliable sources.8

¶4 The reigning case in scientific evidence admission is Daubert v. Merrell Dow Pharmaceuticals, Inc.9 The decision in Daubert set forth a five-pronged standard for judges to determine whether scientific evidence is admissible in federal court. The Daubert standard applies to any scientific procedure used to prepare or uncover evidence and comprises the following factors:

(1) Testing: Has the scientific procedure been independently tested?
(2) Peer Review: Has the scientific procedure been published and subjected to peer review?
(3) Error Rate: Is there a known error rate, or the potential to know the error rate, associated with the use of the scientific procedure?
(4) Standards: Are there standards and protocols for the execution of the methodology of the scientific procedure?10
(5) Acceptance: Is the scientific procedure generally accepted by the relevant scientific community?

¶5 The Daubert standard provides judges with an objective set of guidelines for accepting scientific evidence. Following Daubert, the decision in Kumho Tire v. Carmichael11 extended the Daubert standard to the qualification of expert witnesses through its interpretation of Federal Rule of Evidence ("FRE") 702. FRE 702 provides guidelines for qualifying expert witnesses, stating that the expert can have "scientific, technical, or other specialized knowledge." The Kumho Tire court extended the Daubert standard to apply to experts with technical or specialized knowledge, and not simply those called to testify regarding their scientific knowledge.

¶6 The majority of jurisdictions in the country favor the Daubert standard over the "generally accepted practices" standard set forth in Frye v. United States, 293 F. 1013 (1923).12 For jurisdictions in which Daubert is followed, there are a number of practical points that both attorneys and judges will benefit from knowing in order to understand and effectuate the guidelines set forth in the Daubert standard. This article's goal is to elucidate those practical, high-level points, thereby allowing counsel or judge to review technical expert reports and spot potential weaknesses. …
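The reproducibility requirement in ¶3 has a concrete technical counterpart in practice: examiners typically document digital evidence with cryptographic hashes so that any independent third party can verify the bytes analyzed are the bytes preserved. A minimal illustrative sketch (the function name and chunk size are my own, not drawn from the article):

```python
import hashlib
import os
import tempfile

def evidence_digest(path: str, algorithm: str = "sha256") -> str:
    """Compute a cryptographic digest of an evidence file, reading in
    fixed-size chunks so arbitrarily large disk images fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demonstration with a throwaway file standing in for an evidence image.
with tempfile.NamedTemporaryFile(delete=False, suffix=".img") as tmp:
    tmp.write(b"example disk image contents")
    path = tmp.name

digest = evidence_digest(path)
print(digest)  # identical on every independent run over the same bytes
os.remove(path)
```

Recording such a digest alongside each source cited in a forensic report is one way opinions become verifiable rather than merely asserted.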

Journal ArticleDOI
TL;DR: The curriculum was designed with the express intent of distributing it as a self-contained curriculum package with everything needed to teach the course, and the revisions made based on this experience and feedback from students are described.

Journal ArticleDOI
TL;DR: This state-of-the-art review includes the most recent research efforts that used "cloud forensics" as a keyword and classifies the literature into three dimensions: (1) survey-based, (2) technology-based and (3) forensics-procedural-based.
Abstract: Cloud computing and digital forensics are emerging fields of technology. Unlike traditional digital forensics, where the target environment can be almost completely isolated, acquired and kept under the investigator's control, in cloud environments the distribution of computation and storage poses unique and complex challenges to investigators. Recently, the term "cloud forensics" has had an increasing presence in the field of digital forensics. In this state-of-the-art review, we include the most recent research efforts that used "cloud forensics" as a keyword and classify the literature into three dimensions: (1) survey-based, (2) technology-based and (3) forensics-procedural-based. We discuss widely accepted standards bodies and their efforts to address the current trend of cloud forensics. Our aim is not only to reference related work based on the discussed dimensions, but also to analyse it and generate a mind map that will help in identifying research gaps. Finally, we summarize existing digital forensics tools and the available simulation environments that can be used for evidence acquisition, examination and cloud forensics test purposes.

Book ChapterDOI
01 Jan 2014
TL;DR: This chapter performs an experimental study on a forensics data task for multi-class classification, including several types of methods: decision trees, Bayes classifiers, rule-based classifiers, artificial neural networks and nearest-neighbor methods.
Abstract: Digital forensics research includes several stages. Once the data have been collected, the final goal is to obtain a model in order to predict the output on unseen data. We focus on supervised machine learning techniques. This chapter performs an experimental study on a forensics data task for multi-class classification, including several types of methods: decision trees, Bayes classifiers, rule-based classifiers, artificial neural networks and nearest-neighbor methods. The classifiers have been evaluated with two performance measures: accuracy and Cohen's kappa. The experimental design followed a 4-fold cross-validation with thirty repetitions for non-deterministic algorithms in order to obtain reliable results, averaging the results from 120 runs. A statistical analysis has been conducted in order to compare each pair of algorithms by means of t-tests using both the accuracy and Cohen's kappa metrics.
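The evaluation protocol this abstract describes (repeated stratified k-fold scoring with accuracy and Cohen's kappa, then pairwise t-tests over the per-fold scores) can be sketched with scikit-learn and SciPy. The dataset and the two classifiers below are illustrative stand-ins, not the chapter's actual forensic data or algorithm pool:

```python
# 4-fold cross-validation repeated 30 times (= 120 scores per classifier),
# scoring both accuracy and Cohen's kappa, then a paired t-test per pair.
import numpy as np
from scipy import stats
from sklearn.datasets import load_digits
from sklearn.metrics import cohen_kappa_score, make_scorer
from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)  # stand-in multi-class task
cv = RepeatedStratifiedKFold(n_splits=4, n_repeats=30, random_state=0)
scoring = {"acc": "accuracy", "kappa": make_scorer(cohen_kappa_score)}

results = {}
for name, clf in [("tree", DecisionTreeClassifier(random_state=0)),
                  ("nb", GaussianNB())]:
    scores = cross_validate(clf, X, y, cv=cv, scoring=scoring)
    results[name] = scores
    print(name, scores["test_acc"].mean(), scores["test_kappa"].mean())

# Paired t-test over the 120 matched fold scores of a classifier pair.
t, p = stats.ttest_rel(results["tree"]["test_acc"], results["nb"]["test_acc"])
print("t =", t, "p =", p)
```

Pairing the t-test on matched folds, as above, is what makes the comparison between two algorithms statistically meaningful rather than a comparison of two unrelated averages.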

Journal ArticleDOI
01 Dec 2014
TL;DR: The optimum feature subset is explored using feature selection techniques, and a support vector machine (SVM) is used to identify the source model of the documents; the superior testing performance demonstrates that the proposed method is very useful for source laser printer identification.
Abstract: Recently, digital forensics, which involves collecting and analyzing data to identify the originating digital device, has become an important issue. Digital content can play a crucial role in identifying the source device, for example by serving as evidence in court. To achieve this goal, we use different texture feature extraction methods, such as the gray-level co-occurrence matrix (GLCM) and the discrete wavelet transform (DWT), to analyze Chinese printed sources in order to find the impact of different output devices. Furthermore, we also explore the optimum feature subset by using feature selection techniques and use a support vector machine (SVM) to identify the source model of the documents. The average experimental results attain a 98.64% identification rate, which is superior to the existing known GLCM method by 1.27%. The superior testing performance demonstrates that the proposed identification method is very useful for source laser printer identification.
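The GLCM-plus-SVM pipeline described above can be sketched end to end. The sketch below computes a small co-occurrence matrix by hand (one horizontal pixel offset, a few Haralick-style statistics) and trains an SVM on synthetic striped patches standing in for scans from two different printers; the offset, gray-level count and feature choices are assumptions for illustration, not the paper's configuration:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def glcm_horizontal(img, levels=16):
    """Normalized, symmetric gray-level co-occurrence matrix for a (0, 1) offset."""
    q = (img.astype(int) * levels) // 256           # quantize to `levels` gray bins
    pairs = levels * q[:, :-1] + q[:, 1:]           # encode horizontal neighbor pairs
    P = np.bincount(pairs.ravel(), minlength=levels * levels).astype(float)
    P = P.reshape(levels, levels)
    P = P + P.T                                     # make symmetric
    return P / P.sum()

def haralick_features(img):
    """Contrast, homogeneity and energy of the co-occurrence matrix."""
    P = glcm_horizontal(img)
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    homogeneity = (P / (1.0 + np.abs(i - j))).sum()
    energy = (P ** 2).sum()
    return np.array([contrast, homogeneity, energy])

def synth_texture(period):
    """Noisy striped patch standing in for one printer's halftone pattern."""
    base = (np.indices((64, 64)).sum(axis=0) // period % 2) * 180
    return np.clip(base + rng.integers(0, 60, size=base.shape), 0, 255).astype(np.uint8)

# Twenty patches per "printer" class, the classes differing in stripe period.
X = np.array([haralick_features(synth_texture(p)) for p in (2, 4) for _ in range(20)])
y = np.repeat([0, 1], 20)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print(clf.score(X, y))
```

Standardizing the features before the RBF-kernel SVM matters here because GLCM statistics live on very different scales (contrast can be orders of magnitude larger than energy).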