
Showing papers on "Digital forensics published in 2018"


Journal ArticleDOI
TL;DR: This paper first introduces the existing major security and forensics challenges within the IoT domain and then briefly discusses the papers published in this special issue that target the identified challenges.

442 citations


Proceedings ArticleDOI
01 Feb 2018
TL;DR: Research to identify methods for performing IoT-based digital forensic analysis is essential and the long-term goal is the development of digital forensic standards that can be used as part of overall IoT and IoA security and aid IoT- based investigations.
Abstract: Challenges for IoT-based forensic investigations include the increasing number of objects of forensic interest, the relevance of identified and collected devices, blurry network boundaries, and edgeless networks. As we look ahead to a world of expanding ubiquitous computing, the challenge grows for forensic processes such as data acquisition (logical and physical) and the extraction and analysis of data. Containing an IoT breach is increasingly challenging - evidence is no longer restricted to a PC or mobile device, but can be found in vehicles, RFID cards, and smart devices. By combining cloud-native forensics with client-side forensics (forensics for companion devices), we can study and develop the connections needed to support practical digital investigations and tackle emerging challenges in digital forensics. The investigative complexity that the IoT brings is amplified further in the Internet of Anything (IoA) era. IoA brings anything and everything "online" in a connectedness that generates an explosion of connected devices, from fridges, cars and drones, to smart swarms, smart grids and intelligent buildings. Research to identify methods for performing IoT-based digital forensic analysis is essential. The long-term goal is the development of digital forensic standards that can be used as part of overall IoT and IoA security and aid IoT-based investigations.

84 citations


Journal ArticleDOI
01 Jun 2018
TL;DR: This literature review discusses and presents various steganalysis techniques, from earlier ones to the state of the art, used for the detection of hidden data embedded in digital images by various steganography techniques.
Abstract: Steganalysis and steganography are two sides of the same coin. Steganography tries to hide messages in plain sight while steganalysis tries to detect their existence, or even to retrieve the embedded data. Both steganography and steganalysis have received a great deal of attention, especially from law enforcement. While cryptography in many countries is being outlawed or limited, cyber criminals and even terrorists are extensively using steganography to avoid being arrested with encrypted incriminating material in their possession. Therefore, understanding the ways that messages can be embedded in a digital medium (in most cases digital images), and knowledge of state-of-the-art methods to detect hidden information, is essential in exposing criminal activity. Digital image steganography is growing in use and application. Many powerful and robust methods of steganography and steganalysis have been presented in the literature over the last few years. In this literature review, we discuss and present various steganalysis techniques, from earlier ones to the state of the art, used for the detection of hidden data embedded in digital images by various steganography techniques.
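As a hedged illustration of the kind of baseline technique such surveys cover (not code from the reviewed paper), the sketch below implements the classic chi-square attack on LSB embedding; the function name and the interpretation threshold are illustrative assumptions.

```python
import numpy as np

def chi_square_lsb_statistic(pixels: np.ndarray) -> float:
    """Chi-square statistic over the value pairs (2k, 2k+1) of an 8-bit image.

    Full LSB embedding of random bits tends to equalise the counts inside
    each pair, so an unusually small statistic hints at embedding. This is
    a classic baseline only; modern steganalysis relies on rich feature
    sets and machine learning.
    """
    hist = np.bincount(pixels.ravel(), minlength=256).astype(float)
    stat = 0.0
    for k in range(128):
        observed = hist[2 * k]
        expected = (hist[2 * k] + hist[2 * k + 1]) / 2.0
        if expected > 0:
            stat += (observed - expected) ** 2 / expected
    return stat

# Usage sketch: compare the statistic for a suspect image against values
# obtained from known-clean images from the same source camera.
```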

83 citations


Journal ArticleDOI
TL;DR: An overview of the current state-of-the-art (SOA) in digital audio forensics is provided and some open research problems and future challenges in this active area of research are highlighted.
Abstract: Digital audio forensics is used for a variety of applications, ranging from authenticating audio files to linking an audio recording to its acquisition device (e.g., microphone), linking it to the acoustic environment in which the recording was made, and identifying traces of coding or transcoding. This survey paper provides an overview of the current state-of-the-art (SOA) in digital audio forensics and highlights some open research problems and future challenges in this active area of research. The paper categorizes audio file analysis into container-based and content-based analysis in order to assess the authenticity of the file. The existing SOA in audio forensics is discussed on the basis of both container- and content-based analysis. The importance of this research topic has encouraged many researchers to contribute to this area; yet further scope remains for researchers and readers to expand the body of knowledge. The ultimate goal of this paper is to bring together the available information on audio forensics and to encourage researchers to address the unanswered questions. This survey contributes to a critical research area that has supported many serious cases in the past and can help solve many more in the future through advanced techniques with more accurate results.
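To make the container-based side of such analysis concrete, here is a minimal, hedged sketch (not from the surveyed paper) that reads the declared parameters of a WAV file and compares the declared audio payload with the actual file size; the function and field names are illustrative.

```python
import os
import wave

def wav_container_summary(path: str) -> dict:
    """Summarise WAV container metadata for a quick consistency check."""
    with wave.open(path, "rb") as w:
        channels = w.getnchannels()
        sample_width = w.getsampwidth()   # bytes per sample
        frame_rate = w.getframerate()
        frames = w.getnframes()

    declared_payload = frames * channels * sample_width
    file_size = os.path.getsize(path)
    return {
        "channels": channels,
        "sample_width_bytes": sample_width,
        "frame_rate_hz": frame_rate,
        "declared_duration_s": frames / frame_rate,
        # The gap normally covers only header/chunk overhead (tens of bytes);
        # a large gap may indicate extra chunks, edits, or appended data that
        # warrant content-based analysis.
        "file_size_minus_declared_payload": file_size - declared_payload,
    }
```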

63 citations


Posted Content
TL;DR: IoT devices and sensors, utilized cooperatively to enable smart environments, generate data containing valuable forensic information about events and actions occurring inside the smart environment which, if analyzed, may help hold those violating security policies accountable.
Abstract: IoT devices and sensors have been utilized in a cooperative manner to enable the concept of a smart environment. In these smart settings, abundant data is generated as a result of the interactions between devices and users' day-to-day activities. Such data contain valuable forensic information about events and actions occurring inside the smart environment and, if analyzed, may help hold those violating security policies accountable. In this paper, we introduce IoTDots, a novel digital forensic framework for smart environments such as smart homes and smart offices. IoTDots has two main components: IoTDots-Modifier and IoTDots-Analyzer. At compile time, the IoTDots-Modifier performs source code analysis of smart apps, detects forensically relevant information, and automatically inserts tracing logs. Then, at runtime, the logs are stored in an IoTDots database. Later, in the event of a forensic investigation, the IoTDots-Analyzer applies data processing and machine learning techniques to extract valuable and usable forensic information from the devices' activity. To evaluate its performance, we tested IoTDots in a realistic smart office environment with a total of 22 devices and sensors. The evaluation results show that IoTDots can achieve, on average, over 98% accuracy in detecting user activities and over 96% accuracy in detecting the behavior of users, devices, and apps in a smart environment. Finally, IoTDots imposes no overhead on the smart devices and only minimal overhead on the cloud server.
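The abstract does not specify the features or models used by IoTDots-Analyzer, so the following is only a generic sketch of classifying user activity from logged smart-app events; the event fields, labels, and model choice are illustrative assumptions, not the authors' pipeline.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer

# Illustrative logged events (in IoTDots these would come from the tracing
# logs that the instrumented smart apps write to the database at runtime).
events = [
    {"device": "motion_sensor", "room": "office", "hour": 9,  "state": "active"},
    {"device": "smart_lock",    "room": "entry",  "hour": 9,  "state": "unlocked"},
    {"device": "motion_sensor", "room": "office", "hour": 22, "state": "active"},
    {"device": "smart_lock",    "room": "entry",  "hour": 22, "state": "unlocked"},
]
labels = ["work_arrival", "work_arrival", "after_hours_entry", "after_hours_entry"]

vectorizer = DictVectorizer(sparse=False)   # one-hot encodes string fields
X = vectorizer.fit_transform(events)

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, labels)

new_event = {"device": "smart_lock", "room": "entry", "hour": 23, "state": "unlocked"}
print(model.predict(vectorizer.transform([new_event])))
```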

54 citations


Journal ArticleDOI
TL;DR: A framework for entity identification and open source information cohesion to add value to data holdings from digital forensic data subsets is presented, and the results demonstrate the benefits of applying the process to achieve a greater understanding of digital forensic data in a timely manner.

54 citations


Journal ArticleDOI
01 May 2018
TL;DR: This paper examines the technical-legal interaction and how it might inform privacy and security with respect to the IoT and the Smart City.

52 citations


Journal ArticleDOI
TL;DR: The design and implementation of a feasible technique for performing Digital Forensic Readiness (DFR) in cloud computing environments is presented, and its effectiveness assessed, as a straightforward way of conducting DFR in the cloud as stipulated in the ISO/IEC 27043:2015 international standard.
Abstract: This paper examines the design and implementation of a feasible technique for performing Digital Forensic Readiness (DFR) in cloud computing environments. The approach employs a modified obfuscated...

49 citations


Journal ArticleDOI
TL;DR: The development of best practices and reliable tools and the formulation of formal testing methods for digital forensic techniques are highlighted, which could be extremely useful and of immense value in improving the trustworthiness of electronic evidence in legal proceedings.
Abstract: Digital forensics is a vital part of almost every criminal investigation, given the amount of information available and the opportunities offered by electronic data to investigate and evidence a crime. However, in criminal justice proceedings, these electronic pieces of evidence are often considered with the utmost suspicion and uncertainty, although on occasion justifiably so. Presently, the use of scientifically unproven forensic techniques is highly criticized in legal proceedings. Nevertheless, the exceedingly distinct and dynamic characteristics of electronic data, in addition to current legislation and privacy laws, remain challenging aspects for systematically attesting evidence in a court of law. This article presents a comprehensive study of the issues that must be discussed and resolved for the proper acceptance of evidence on scientific grounds. Moreover, the article examines the state of forensics in emerging sub-fields of digital technology, such as cloud computing, social media, and the Internet of Things (IoT), and reviews the challenges that may complicate the systematic validation of electronic evidence. The study further explores various solutions previously proposed by researchers and academics, regarding their appropriateness based on experimental evaluation. Additionally, this article suggests open research areas, highlighting many of the issues and problems associated with the empirical evaluation of these solutions, for immediate attention by researchers and practitioners. Notably, academics must react to these challenges with appropriate emphasis on methodical verification. For this purpose, the issues in the experimental validation of currently available practices are reviewed in this study. The review also discusses the difficulty of demonstrating the reliability and validity of these approaches with contemporary evaluation methods. Furthermore, the development of best practices, reliable tools, and formal testing methods for digital forensic techniques is highlighted, which could be extremely useful and of immense value in improving the trustworthiness of electronic evidence in legal proceedings.

49 citations


Proceedings ArticleDOI
02 Jul 2018
TL;DR: FIF-IoT presents a framework that ensures integrity, confidentiality, anonymity, and non-repudiation of the evidence stored in the public digital ledger, and provides a mechanism to acquire evidence from the ledger and to verify the integrity of the obtained evidence.
Abstract: The increased deployment of Internet of Things (IoT) devices will make them targets for attacks. IoT devices can also be used as tools for committing crimes. In this regard, we propose FIF-IoT, a forensic investigation framework using a public digital ledger to find facts in criminal incidents in IoT-based systems. FIF-IoT collects interactions that take place among various IoT entities (clouds, users, and IoT devices) as evidence and stores them securely as transactions in a public, distributed, and decentralized blockchain network similar to the Bitcoin network. Hence, FIF-IoT eliminates any single entity's control over the evidence storage, avoids a single point of failure on the storage media, and ensures high availability of evidence. FIF-IoT presents a framework that ensures integrity, confidentiality, anonymity, and non-repudiation of the evidence stored in the public digital ledger. Furthermore, FIF-IoT provides a mechanism to acquire evidence from the ledger and to verify the integrity of the obtained evidence. We present a case study of a forensic investigation to demonstrate that FIF-IoT is secure against evidence tampering. We also implement a prototype to evaluate the performance of FIF-IoT.

47 citations


Journal ArticleDOI
TL;DR: A process of data reduction by selective imaging and quick analysis, coupled with automated data extraction, gives potential to undertake the analysis of the growing volume of data in a timely manner.
Abstract: The growth in the prevalence of the plethora of digital devices has resulted in growing volumes of disparate data, with potential relevance to criminal and civil investigations. With the increase in data volume, there is an opportunity to build greater case-related knowledge and discover evidence, with implications at all stages of the digital forensic analysis process. The growth in digital devices will potentially further contribute to the growth in big digital forensic data, with a need for practitioners to consider a wider range of data and devices that may be relevant to an investigation. A process of data reduction by selective imaging and quick analysis, coupled with automated data extraction, gives potential to undertake the analysis of the growing volume of data in a timely manner. In this paper, we outline a process of bulk digital forensic data analysis including disparate device data. We research the process with a research data corpus and apply our process to real-world data. The challenges of the growing volume of devices and data will require forensic practitioners to expand their ability to undertake research into newly developed data structures, and be able to explain this to the court, judge, jury, and investigators.
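As a hedged sketch of the data-reduction idea described above (not the authors' tooling), the snippet below performs a naive "quick" logical collection: it walks a mounted source, keeps only files with extensions of likely evidential interest, and records their SHA-256 hashes for later verification. The extension list and output layout are illustrative, and in practice the source would be a write-blocked image rather than a live file system.

```python
import hashlib
import os
import shutil

# Illustrative selection criteria; a real selective-imaging policy would be
# case-driven (extensions, paths, keywords, known-file hash sets, etc.).
INTERESTING = {".jpg", ".png", ".docx", ".pdf", ".sqlite", ".db", ".plist"}

def quick_selective_collection(source_root: str, output_dir: str) -> list:
    """Copy files of likely interest and log their hashes for verification."""
    os.makedirs(output_dir, exist_ok=True)
    manifest = []
    for dirpath, _dirnames, filenames in os.walk(source_root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() not in INTERESTING:
                continue
            src = os.path.join(dirpath, name)
            with open(src, "rb") as fh:
                digest = hashlib.sha256(fh.read()).hexdigest()
            dst = os.path.join(output_dir, digest + "_" + name)
            shutil.copy2(src, dst)   # copy2 preserves timestamps where possible
            manifest.append({"source": src, "sha256": digest, "copy": dst})
    return manifest
```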

Journal ArticleDOI
TL;DR: This research addresses the challenges faced when an Agent-Based Solution (ABS) is used in the cloud to extract Potential Digital Evidence (PDE) for DFR purposes by assessing the possible solutions from a general, technical and operational point of view.
Abstract: The need to perform digital investigations has, over the years, led to the exponential growth of the field of Digital Forensics (DF). However, quite a number of challenges face the act of proving – for purposes of Digital Forensic Readiness (DFR) – that an electronic event has occurred in cyberspace. The problem that this research addresses involves the challenges faced when an Agent-Based Solution (ABS) is used in the cloud to extract Potential Digital Evidence (PDE) for DFR purposes. Throughout the paper the authors have modified the functionality of an initially malicious botnet to act as a distributed forensic agent to conduct this process. The paper focuses on the general, technical and operational challenges that are encountered when trying to achieve DFR in the cloud environment. The authors finally propose a contribution by assessing the possible solutions from a general, technical and operational point of view.

Journal ArticleDOI
TL;DR: The goal of this paper is to determine which copy-move forgery detection methods are best for different image attributes such as JPEG compression, scaling, rotation.
Abstract: Authenticating digital images is increasingly important because digital images carry important information and are used in different areas, such as courts of law, as essential pieces of evidence. Nowadays, authenticating digital images is difficult because manipulating them has become easy as a result of powerful image processing software and human knowledge. The importance and relevance of digital image forensics has attracted various researchers to establish different techniques for detection in image forensics. The core category of image forensics is passive image forgery detection. One of the most important passive forgeries that affects the originality of an image is copy-move digital image forgery, which involves copying one part of the image onto another area of the same image. Various methods that use different types of transformations have been proposed to detect copy-move forgery. The goal of this paper is to determine which copy-move forgery detection methods are best for different image attributes such as JPEG compression, scaling, and rotation. The advantages and drawbacks of each method are also highlighted. Thus, the current state-of-the-art image forgery detection techniques are discussed along with their advantages and drawbacks.
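To make the block-matching idea behind many copy-move detectors concrete, here is a deliberately naive sketch (not from the surveyed paper): overlapping blocks are reduced to coarse quantised thumbnails so that copied regions collide on the same key. Real detectors use robust block features (DCT coefficients, Zernike moments, keypoints) to survive JPEG compression, scaling, and rotation; the block size, step, and quantisation step below are arbitrary choices.

```python
import numpy as np

def naive_copy_move_candidates(gray: np.ndarray, block: int = 16,
                               step: int = 4, quant: int = 8) -> list:
    """Flag pairs of (near-)identical blocks as copy-move candidates.

    `gray` is a 2-D grayscale array; `block` must be a multiple of 4.
    Flat backgrounds will produce spurious matches; a real detector adds
    robust features and post-filtering of the matched offsets.
    """
    h, w = gray.shape
    seen = {}
    matches = []
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            patch = gray[y:y + block, x:x + block].astype(float)
            # Reduce each block to a quantised 4x4 thumbnail as its key.
            thumb = patch.reshape(4, block // 4, 4, block // 4).mean(axis=(1, 3))
            key = tuple((thumb // quant).astype(int).ravel())
            if key in seen and abs(seen[key][0] - y) + abs(seen[key][1] - x) > block:
                matches.append((seen[key], (y, x)))
            else:
                seen.setdefault(key, (y, x))
    return matches
```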

Journal ArticleDOI
TL;DR: This article presents the research conducted within the Mobile Forensics ecosystem during the last 7 years, identifies the gaps, highlights the differences from past research directions, and addresses challenges and open issues in the field.
Abstract: Contemporary mobile devices are the result of an evolution process, during which computational and networking capabilities have been continuously pushed to keep pace with the constantly growing workload requirements. This has allowed devices such as smartphones, tablets, and personal digital assistants to perform increasingly complex tasks, up to the point of efficiently replacing traditional options such as desktop computers and notebooks. However, due to their portability and size, these devices are more prone to theft, to become compromised, or to be exploited for attacks and other malicious activity. The need for investigation of the aforementioned incidents resulted in the creation of the Mobile Forensics (MF) discipline. MF, a sub-domain of digital forensics, is specialized in extracting and processing evidence from mobile devices in such a way that attacking entities and actions are identified and traced. Beyond its primary research interest on evidence acquisition from mobile devices, MF has recently expanded its scope to encompass the organized and advanced evidence representation and analysis of future malicious entity behavior. Nonetheless, data acquisition still remains its main focus. While the field is under continuous research activity, new concepts such as the involvement of cloud computing in the MF ecosystem and the evolution of enterprise mobile solutions—particularly mobile device management and bring your own device—bring new opportunities and issues to the discipline. The current article presents the research conducted within the MF ecosystem during the last 7 years, identifies the gaps, and highlights the differences from past research directions, and addresses challenges and open issues in the field.

Proceedings ArticleDOI
01 Sep 2018
TL;DR: This paper introduces a fog-based IoT forensic framework (FoBI) that attempts to address the key challenges associated with digital IoT forensics and uses the FoBI framework to provide insights on improving the digital forensics processes involving IoT systems.
Abstract: The increasing number of IoT devices is prompting the need to investigate digital forensic techniques that can be efficiently applied to solve computer-related crimes involving IoT devices. In digital forensics, it is common for forensic investigators to consider computing hardware and operating systems for forensic data acquisition. However, applying current forensic data acquisition techniques for further digital evidence analysis may not be applicable to some IoT devices. It is becoming increasingly challenging to determine what type of data should be collected from IoT devices and how traces from such devices can be leveraged by forensic investigators. In this paper, we introduce a fog-based IoT forensic framework (FoBI) that attempts to address the key challenges associated with digital IoT forensics. Throughout this paper, we discuss the overall architecture, use cases and implementation details of FoBI. We further use our FoBI framework to provide insights on improving the digital forensics processes involving IoT systems.

Journal ArticleDOI
TL;DR: To take advantage of the volume and variety of data captured by and stored in ubiquitous IoT services, forensic investigators need to draw upon evidence-acquisition methods and techniques from all areas of digital forensics and possibly create new IoT-specific investigation processes.
Abstract: The Internet of Things (IoT) brings a set of unique and complex challenges to the field of digital forensics. To take advantage of the volume and variety of data captured by and stored in ubiquitous IoT services, forensic investigators need to draw upon evidence-acquisition methods and techniques from all areas of digital forensics and possibly create new IoT-specific investigation processes. Although a number of conceptual process models have been developed to address the unique characteristics of the IoT, many challenges remain unresolved.

Proceedings ArticleDOI
15 Oct 2018
TL;DR: It is argued that 3D printers possess unique fingerprints, which arise from hardware imperfections during the manufacturing process, causing discrepancies in the line formation of printed physical objects, which result in unique textures that can serve as a viable fingerprint on associated 3D printed products.
Abstract: As 3D printing technology begins to outpace traditional manufacturing, malicious users increasingly seek to leverage this widely accessible platform to produce unlawful tools for criminal activities. Therefore, it is of paramount importance to identify the origin of unlawful 3D printed products using digital forensics. Traditional countermeasures, including information embedding or watermarking, rely on a supervised manufacturing process and are impractical for identifying the origin of 3D printed tools in criminal applications. We argue that 3D printers possess unique fingerprints, which arise from hardware imperfections during the manufacturing process, causing discrepancies in the line formation of printed physical objects. These variations appear repeatedly and result in unique textures that can serve as a viable fingerprint on associated 3D printed products. To address the challenge of traditional forensics in identifying unlawful 3D printed products, we present PrinTracker, a 3D printer identification system which can precisely trace a physical object to its source 3D printer based on its fingerprint. Results indicate that PrinTracker provides high accuracy across 14 different 3D printers. Under unfavorable conditions (e.g. restricted sample area, location and process), PrinTracker can still achieve an acceptable accuracy of 92%. Furthermore, we examine the effectiveness, robustness, reliability and vulnerabilities of PrinTracker in multiple real-world scenarios.

Journal ArticleDOI
Giannis Tziakouris
01 Jul 2018
TL;DR: The anonymous and decentralized nature of cryptocurrencies has turned them into a powerful weapon in the cyberarsenal of national and international criminal groups by facilitating their illicit activities while evading prosecution.
Abstract: The anonymous and decentralized nature of cryptocurrencies has turned them into a powerful weapon in the cyberarsenal of national and international criminal groups by facilitating their illicit activities while evading prosecution. However, despite the numerous challenges that the international law enforcement community faces when investigating cryptocurrencies, a number of investigation opportunities do exist.

Journal ArticleDOI
TL;DR: In this article, the types and locations of the artefacts relating to the installation and uninstallation of CloudMe client application, logging in and out, and file synchronization events from the computer desktop and mobile clients are described.
Abstract: The significant increase in the volume, variety, and velocity of data complicates cloud forensic efforts, and such (big) evidential data will, at some point, become too (computationally) expensive to be fully identified, collected, and analysed in a timely manner. Thus, it is important for digital forensic practitioners to have an up-to-date knowledge of relevant data artefacts that could be forensically recovered from the cloud product under investigation. In this paper, CloudMe, a popular cloud storage service, is studied. The types and locations of the artefacts relating to the installation and uninstallation of the CloudMe client application, logging in and out, and file synchronization events from the computer desktop and mobile clients are described. Findings from this research will also help inform future development of tools and techniques (e.g., data mining techniques) for cloud-enabled big data endpoint forensics investigation.
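A hedged helper in the spirit of such artefact surveys: it scans candidate client-data directories for SQLite databases and lists their tables, a common first step when locating sync and login artefacts. The candidate paths below are illustrative placeholders, not the artefact locations reported in the paper.

```python
import os
import sqlite3
from contextlib import closing

# Placeholder locations to search; substitute the paths identified for the
# client under investigation (these are NOT the paper's reported paths).
CANDIDATE_DIRS = [
    os.path.expanduser("~/AppData/Local/CloudMe"),
    os.path.expanduser("~/Library/Application Support/CloudMe"),
]

def list_sqlite_artifacts(directories):
    """Yield (db_path, table_names) for every readable SQLite file found."""
    for root in directories:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    # Open read-only so the evidence copy is not modified.
                    with closing(sqlite3.connect(f"file:{path}?mode=ro", uri=True)) as conn:
                        rows = conn.execute(
                            "SELECT name FROM sqlite_master WHERE type='table'"
                        ).fetchall()
                except sqlite3.DatabaseError:
                    continue  # not an SQLite file
                yield path, [r[0] for r in rows]
```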

Journal ArticleDOI
TL;DR: The Framework for Reliable Experimental Design (FRED) is proposed, designed to be a resource for those operating within the digital forensic field, both in industry and academia, to support and develop research best practice within the discipline.

BookDOI
24 Apr 2018
TL;DR: This book provides readers with up-to-date research on emerging cyber threats and defensive mechanisms, which is timely and essential, and presents different attempts at utilizing machine learning and data mining techniques to create threat feeds for a range of consumers.
Abstract: This book provides readers with up-to-date research on emerging cyber threats and defensive mechanisms, which is timely and essential. It covers cyber threat intelligence concepts against a range of threat actors and threat tools (e.g., ransomware) in cutting-edge technologies such as the Internet of Things (IoT), cloud computing, and mobile devices. The book also provides the technical information on cyber-threat detection methods required by researchers and digital forensics experts in order to build intelligent automated systems to fight advanced cybercrimes. The ever-increasing number of cyber-attacks requires cyber security and forensic specialists to detect, analyze, and defend against cyber threats in almost real time, which, with such a large number of attacks, is not possible without deeply analysing the attack features and taking corresponding intelligent defensive actions; this in essence defines the notion of cyber threat intelligence. However, such intelligence would not be possible without the aid of artificial intelligence, machine learning, and advanced data mining techniques to collect, analyze, and interpret cyber-attack campaigns, which is covered in this book. The book focuses on cutting-edge research from both academia and industry, with a particular emphasis on providing wider knowledge of the field, novelty of approaches, and combinations of tools to perceive, reason, learn, and act on a wide range of data collected from different cyber security and forensics solutions. It introduces the notion of cyber threat intelligence and analytics and presents different attempts at utilizing machine learning and data mining techniques to create threat feeds for a range of consumers. Moreover, the book sheds light on existing and emerging trends in the field which could pave the way for future work. The interdisciplinary nature of this book makes it suitable for a wide range of audiences with backgrounds in artificial intelligence, cyber security, forensics, big data and data mining, distributed systems, and computer networks, including industry professionals, advanced-level students, and researchers who work within these related fields.

Proceedings ArticleDOI
01 Feb 2018
TL;DR: New insights into drone forensics are presented in terms of accessing the digital containers of an intercepted drone and retrieving all the information that can help digital forensic investigators establish ownership, recover flight data and acquire content of media files.
Abstract: Powerful information acquisition and processing capabilities, coupled with intelligent surveillance and reconnaissance features, have contributed to increased popularity of Unmanned Aerial Vehicles (UAVs), also known as drones. In addition to the numerous beneficial uses, UAVs have been misused to launch illegal and sometimes criminal activities that pose direct threats to individuals, organizations, public safety and national security. Despite its increased importance, "drone forensics" remains a relatively unexplored research topic. This paper presents important results of a forensic investigation analysis performed on a test Parrot AR drone 2.0. We present new insights into drone forensics in terms of accessing the digital containers of an intercepted drone and retrieving all the information that can help digital forensic investigators establish ownership, recover flight data and acquire content of media files.

Proceedings ArticleDOI
27 May 2018
TL;DR: The meaning of forensic readiness of software is considered, forensic readiness requirements are defined, and some of the open software engineering challenges in the face of forensic ready systems are highlighted.
Abstract: As software becomes more ubiquitous, and the risk of cyber-crimes increases, ensuring that software systems are forensic-ready (i.e., capable of supporting potential digital investigations) is critical. However, little or no attention has been given to how well-suited existing software engineering methodologies and practices are for the systematic development of such systems. In this paper, we consider the meaning of forensic readiness of software, define forensic readiness requirements, and highlight some of the open software engineering challenges in the face of forensic readiness. We use a real software system developed to investigate online sharing of child abuse media to illustrate the presented concepts.

Journal ArticleDOI
TL;DR: There still exist no IoT architectures with a DFR capability able to attain incident preparedness across IoT environments as a mechanism for preparing for the post-event response process; an architecture for incorporating DFR into the IoT domain, for proper planning and preparation in the case of security incidents, is therefore proposed.
Abstract: The unique identities of remote sensing, monitoring, self-actuating, self-adapting and self-configuring “things” in the Internet of Things (IoT) have emerged as fundamental building blocks for the development of “smart environments”. This experience has begun to be felt across different IoT-based domains such as healthcare, surveillance, energy systems, home appliances, industrial machines, smart grids and smart cities. These developments have, however, brought about a more complex and heterogeneous environment which is slowly becoming a home to cyber attackers. Digital Forensic Readiness (DFR) can, though, be employed as a mechanism for maximizing the potential use of digital evidence while minimizing the cost of conducting a digital forensic investigation process in IoT environments in case of an incident. The problem addressed in this paper, therefore, is that, at the time of writing, there still exist no IoT architectures with a DFR capability able to attain incident preparedness across IoT environments as a mechanism for preparing for the post-event response process. It is on this premise that the authors propose an architecture for incorporating DFR into the IoT domain for proper planning and preparation in the case of security incidents. It is paramount to note that the DFR mechanism in IoT discussed in this paper complies with the ISO/IEC 27043:2015, 27030:2012 and 27017:2015 international standards. It is the authors' opinion that the architecture is holistic and very significant for IoT forensics.

Proceedings ArticleDOI
22 Mar 2018
TL;DR: This paper analyzes widely used encrypted Instant Messaging applications, namely WeChat, Telegram, Viber and WhatsApp, shows how these applications store data in the Android file system, and discusses the forensic implications of IM applications that utilize encryption.
Abstract: The smartphone market is growing day by day and, according to Statista, as of 2017, 68.4% of the U.S. population uses smartphones. Similarly, the amount of information stored on these mobile devices is tremendous, ranging from personal details, contacts, and application data to exchanged texts and media. This information can become significant evidence during a digital forensics investigation and thereafter in court. As Android is one of the leading smartphone operating systems worldwide, it is important to have knowledge of Android forensics. Moreover, chat messaging is becoming the most prominent communication medium, particularly among the youth. The exponential increase in the interception of chat messages on mobile devices led to the implementation of end-to-end encryption, mainly due to concerns raised about the privacy and security of user data on smartphones. In this paper we analyze widely used encrypted Instant Messaging (IM) applications, namely WeChat, Telegram, Viber and WhatsApp. We also show how these applications store data in the Android file system. In addition, we discuss the forensic implications of IM applications that utilize encryption. Analysis of artifacts collected from these applications is performed using the Android Debug Bridge (ADB) tool and some other open source tools. Moreover, we also present the challenges faced during the collection of the forensically important artifacts.
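As a hedged how-to sketch of the ADB-based collection step described above (the package names are common defaults given for illustration, not the paper's procedure), the snippet drives the Android Debug Bridge from Python; on production devices the private app data directories generally require a rooted device or a debuggable build, so permission errors are to be expected on locked-down handsets.

```python
import subprocess

# Common package identifiers for the analysed IM apps; verify on the handset
# with `adb shell pm list packages`.
IM_PACKAGES = {
    "WhatsApp": "com.whatsapp",
    "Telegram": "org.telegram.messenger",
    "Viber": "com.viber.voip",
    "WeChat": "com.tencent.mm",
}

def adb(*args: str) -> str:
    """Run an adb command and return its textual output."""
    return subprocess.run(
        ["adb", *args], capture_output=True, text=True, check=False
    ).stdout

def pull_app_data(package: str, dest: str) -> str:
    """Attempt to list and pull an app's private data directory.

    Works only with root access; otherwise adb reports permission errors,
    which is itself a useful observation to record in the case notes.
    """
    listing = adb("shell", f"su -c 'ls -la /data/data/{package}'")
    adb("pull", f"/data/data/{package}", dest)
    return listing

# Usage sketch:
# for name, package in IM_PACKAGES.items():
#     print(name, pull_app_data(package, f"./acquired/{name}"))
```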

Journal ArticleDOI
TL;DR: This paper proposes an approach based on Granular Computing to build multiple time-related views in order to interpret the extracted knowledge concerning the periodic occurrences of events, and adopts Formal Concept Analysis as an algorithm to realize granulations of data.
Abstract: Studying aspects related to the occurrences and co-occurrences of events enables many interesting applications in several domains, such as Public Safety and Security. In particular, in Digital Forensics it is useful to construct the timeline of a suspect, reconstructed by analysing social networking applications like Facebook and Twitter. One of the main limitations of the existing data analysis techniques addressing the above issues is that they work only on a single view of the data and thus may miss the elicitation of interesting knowledge. This limitation can be overcome by considering more views and applying methods to assess such views, allowing human operators to move from one view to a more suitable one. This paper focuses on temporal aspects of data and proposes an approach based on Granular Computing to build multiple time-related views in order to interpret the extracted knowledge concerning the periodic occurrences of events. The proposed approach adopts Formal Concept Analysis (with time-related attributes) as an algorithm to realize granulations of data and defines a set of Granular Computing measures to interpret the formal concepts, whose extensional parts are formed by co-occurring events, in the lattices constructed by that algorithm. The applicability of the approach is demonstrated through a case study concerning a public dataset on forest fires that occurred in the Montesinho natural park in Portugal.
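Since the approach leans on Formal Concept Analysis, a tiny brute-force illustration of computing formal concepts from a context (object to attribute-set mapping) may help; it is exponential in the number of objects and stands in for, rather than reproduces, the time-related FCA algorithm and Granular Computing measures used in the paper. The toy events and attributes are invented for illustration.

```python
from itertools import combinations

def formal_concepts(context: dict) -> list:
    """Return all (extent, intent) pairs of a small formal context.

    `context` maps each object to its set of attributes. Every concept
    intent is an intersection of object intents (plus the full attribute
    set, which covers the empty extent), so we enumerate those
    intersections directly. Brute force: fine for toy contexts only.
    """
    objects = list(context)
    all_attrs = set().union(*context.values()) if context else set()
    intents = {frozenset(all_attrs)}
    for r in range(1, len(objects) + 1):
        for combo in combinations(objects, r):
            intents.add(frozenset(set.intersection(*(set(context[o]) for o in combo))))
    concepts = []
    for intent in intents:
        extent = {o for o in objects if intent <= set(context[o])}
        concepts.append((extent, set(intent)))
    return concepts

# Toy example: events (objects) described by time-related attributes.
events = {
    "fire_1": {"august", "weekend", "afternoon"},
    "fire_2": {"august", "weekend"},
    "fire_3": {"september", "weekend", "afternoon"},
}
for extent, intent in formal_concepts(events):
    print(sorted(extent), sorted(intent))
```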

Posted ContentDOI
TL;DR: A Blockchain-based Chain of Custody (B-CoC) is proposed to dematerialize the CoC process, guaranteeing auditable integrity of the collected evidence and traceability of owners; a prototype of B-CoC based on Ethereum is developed and evaluated.
Abstract: One of the main issues in digital forensics is the management of evidence. From the time of evidence collection until the time of its exploitation in a legal court, evidence may be accessed by multiple parties involved in the investigation who take temporary ownership of it. This process, called Chain of Custody (CoC), must ensure that evidence is not altered during the investigation, despite multiple entities having owned it, in order to be admissible in a legal court. Currently, the digital evidence CoC is managed entirely manually, with the entities involved in the chain required to fill in documents accompanying the evidence. In this paper, we propose a Blockchain-based Chain of Custody (B-CoC) to dematerialize the CoC process, guaranteeing auditable integrity of the collected evidence and traceability of owners. We developed a prototype of B-CoC based on Ethereum and evaluated its performance.
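The abstract gives no implementation detail beyond "based on Ethereum", so the sketch below only illustrates the underlying integrity idea with a local hash chain of custody entries in plain Python; it is not the authors' smart contract and omits signatures, consensus, and the distributed ledger that a real B-CoC deployment would rely on.

```python
import hashlib
import json
import time

def _entry_hash(entry: dict) -> str:
    payload = json.dumps(entry, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_custody_event(chain: list, evidence_id: str, owner: str, action: str) -> dict:
    """Append a custody event linked to the previous one by its hash."""
    entry = {
        "evidence_id": evidence_id,
        "owner": owner,
        "action": action,          # e.g. "collected", "transferred", "analysed"
        "timestamp": time.time(),
        "prev_hash": chain[-1]["hash"] if chain else None,
    }
    entry["hash"] = _entry_hash({k: v for k, v in entry.items() if k != "hash"})
    chain.append(entry)
    return entry

def verify_chain(chain: list) -> bool:
    """Recompute hashes and links; any tampering breaks the chain."""
    prev = None
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev_hash"] != prev or _entry_hash(body) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```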

Proceedings ArticleDOI
01 Aug 2018
TL;DR: An integrated framework with acceptable digital forensic techniques that are able to analyse Potential Digital Evidence (PDE) from the IoT-based ecosystem that may be used to prove a fact is proposed.
Abstract: Internet of Things (IoT) is a relatively new wave of technology that is increasingly becoming popular in many different organisations globally In essence, IoT is synonymous to networking technology, which allows individuals to connect to different devices in order to facilitate easy sharing of resources as well as communication However, the one major difference between computer networking technology and IoT is the heterogeneity of data involved and distributed nature of IoT that involves self-actuating devices Furthermore, the heterogeneity and distributed nature of IoT further brings complexity of the IoT ecosystem For this reason, IoT ecosystem presents a big challenge to both Digital Forensic (DF) investigators and Law Enforcement Agencies (LEAs) when trying to implement DF techniques What makes digital forensic investigations even harder in IoT ecosystem is its vastness and the rapidity with which it is expanding globally This paper, thus, proposes an Integrated Digital Forensic Investigation Framework (IDFIF-IoT) for an IoT ecosystem which is an extension of an initially proposed generic Digital Forensic Investigation Framework for Internet of Things (DFIF-IoT) Note that, the main emphasis in this paper is on proposing an integrated framework with acceptable digital forensic techniques that are able to analyse Potential Digital Evidence (PDE) from the IoT-based ecosystem that may be used to prove a fact

Journal ArticleDOI
01 Jul 2018
TL;DR: An examination of the forensic procedures required to identify and reconstruct cached video stream data using both YouTube and Facebook Live as example case studies is provided.
Abstract: With the increased popularity of online video streaming comes the risk of this technology's subsequent abuse. With a number of cases noted in 2017 where individuals have engaged with illegal or policy breaching video content, digital forensics practitioners are often tasked with investigating the subsequent ‘fingerprint’ of such acts. This is often to determine both the content of a stream in question, and, how it has been interacted with, typically from an analysis of data residing on a suspect's local device. This article provides an examination of the forensic procedures required to identify and reconstruct cached video stream data using both YouTube and Facebook Live as example case studies. Stream reconstruction methodologies are offered where results show that where a YouTube and Facebook Live video have been played, buffered video stream data can be reassembled to produce a viewable video clip of content.
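As a hedged sketch of the reassembly step (the segment naming and ordering are assumptions for illustration; the YouTube- and Facebook-specific cache formats examined in the paper are not reproduced here), the snippet below orders recovered cache segments by a numeric index in their file names and concatenates them. Real streams are usually fragmented MP4/WebM and may need container-aware remuxing (for example with ffmpeg) before the result is playable.

```python
import os
import re

def reassemble_segments(cache_dir: str, output_path: str) -> int:
    """Concatenate cached stream segments in numeric order.

    Assumes file names carry a sequence number (e.g. 'seg-0001.part');
    adjust the pattern to the cache layout actually observed on the device.
    Returns the number of bytes written.
    """
    pattern = re.compile(r"(\d+)")
    segments = []
    for name in os.listdir(cache_dir):
        match = pattern.search(name)
        if match:
            segments.append((int(match.group(1)), os.path.join(cache_dir, name)))
    segments.sort()

    written = 0
    with open(output_path, "wb") as out:
        for _index, path in segments:
            with open(path, "rb") as part:
                written += out.write(part.read())
    return written
```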

Proceedings ArticleDOI
08 Jul 2018
TL;DR: The approach firstly applies the conventional text feature extraction approach in identifying the most significant words in the data set, followed by a fuzzy-rough feature selection approach in reducing the high dimensions of the selected words for fast processing, which addresses the uncertainty of class boundaries.
Abstract: Online child grooming detection has recently attracted intensive research interest from both the machine learning community and the digital forensics community due to its great social impact. Existing data-driven approaches usually face the challenges of a lack of training data and uncertainty of classes in terms of the classification or decision boundary. This paper proposes a grooming detection approach in an effort to address such uncertainty, based on a data set derived from a publicly available profiling data set. In particular, the approach first applies a conventional text feature extraction approach to identify the most significant words in the data set. This is followed by the application of a fuzzy-rough feature selection approach to reduce the high dimensionality of the selected words for fast processing, which at the same time addresses the uncertainty of class boundaries. The experimental results demonstrate the efficiency and efficacy of the proposed approach in detecting child grooming.
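To ground the shape of this pipeline, here is a compact sketch with invented toy data; note that it substitutes a plain chi-squared feature selector for the paper's fuzzy-rough feature selection, so it illustrates the general text-classification pipeline rather than the proposed method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-in for the profiling-derived chat data (illustrative only).
texts = [
    "hey what school do you go to, are your parents home",
    "don't tell anyone about our chats, it's our secret",
    "did you finish the maths homework for tomorrow",
    "let's meet at football practice after class",
]
labels = [1, 1, 0, 0]   # 1 = grooming-like, 0 = benign

pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),  # text feature extraction
    SelectKBest(chi2, k=10),                        # stand-in for fuzzy-rough selection
    LinearSVC(),                                    # any linear classifier works here
)
pipeline.fit(texts, labels)
print(pipeline.predict(["are your parents home tonight"]))
```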