
Showing papers by "Jana Dittmann published in 2021"


Proceedings ArticleDOI
17 Aug 2021
TL;DR: In this paper, the authors present a generalized taxonomy of hiding patterns that can be applied to multiple domains of steganography instead of being limited to the network scenario.
Abstract: Steganography embraces several hiding techniques which span multiple domains. However, the related terminology is not unified among the different domains, such as digital media steganography, text steganography, cyber-physical systems steganography, network steganography (network covert channels), local covert channels, and out-of-band covert channels. To cope with this, a first attempt was made in 2015 with the introduction of so-called hiding patterns, which allow hiding techniques to be described in a more abstract manner. Despite significant enhancements, the main limitation of this taxonomy is that it only considers the case of network steganography. Therefore, this paper reviews both the terminology and the taxonomy of hiding patterns so as to make them more general. Specifically, hiding patterns are split into those that describe the embedding and those that describe the representation of hidden data within the cover object. As a first research action, we focus on embedding hiding patterns and show how they can be applied to multiple domains of steganography instead of being limited to the network scenario. Additionally, we exemplify representation patterns using network steganography. Our pattern collection is available at https://patterns.ztt.hs-worms.de.
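As an illustration of why an abstract pattern view helps: one and the same embedding pattern, here a least-significant-bit replacement, can be instantiated in different cover domains. The sketch below is a generic example and does not reproduce the paper's pattern taxonomy; the values and domain choices are illustrative.

```python
def embed_lsb(values, bits):
    """Embed one hidden bit per cover value in its least significant bit."""
    out = list(values)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_lsb(values, n_bits):
    """Read the hidden bits back out of the first n_bits cover values."""
    return [v & 1 for v in values[:n_bits]]

secret = [1, 0, 1, 1]

# Domain 1: digital media steganography (grayscale pixel values)
pixels = [120, 37, 255, 14, 90]
stego_pixels = embed_lsb(pixels, secret)

# Domain 2: network steganography (an 8-bit header field, one per packet)
header_fields = [64, 64, 63, 64, 65]
stego_headers = embed_lsb(header_fields, secret)
```

The embedding logic is identical in both domains; only the cover object differs, which is exactly what a domain-independent pattern description captures.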

19 citations


Journal ArticleDOI
TL;DR: In this paper, the authors discuss three sets of hand-crafted features and three different fusion strategies for DeepFake detection, and show that their approach exhibits similar, if not better, generalization behavior than neural-network-based methods in tests performed with different training and test sets.
Abstract: DeepFake detection is a novel task for media forensics and is currently receiving a lot of research attention due to the threat these targeted video manipulations pose to the trust placed in video footage. The current trend in DeepFake detection is the application of neural networks to learn feature spaces in which manipulated videos can be distinguished from unmanipulated ones. In this paper, we discuss an alternative to this trend: features hand-crafted by domain experts. The main advantage that hand-crafted features have over learned features is their interpretability and the consequences this might have for the plausibility validation of decisions made. Here, we discuss three sets of hand-crafted features and three different fusion strategies to implement DeepFake detection. Our tests on three pre-existing reference databases show detection performances that are, under comparable test conditions, on par (peak AUC > 0.95) with those of state-of-the-art methods using learned features. Furthermore, our approach shows similar, if not better, generalization behavior than neural-network-based methods in tests performed with different training and test sets. In addition to these pattern recognition considerations, first steps of a projection onto a data-centric examination approach for forensic process modeling are taken to increase the maturity of the present investigation.
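To make the fusion idea concrete, the sketch below shows a simple score-level (weighted-sum) fusion of several detector outputs. The detector scores, the unweighted averaging, and the decision threshold are illustrative stand-ins, not the paper's actual feature sets or fusion strategies.

```python
def fuse_scores(scores, weights=None):
    """Weighted-sum fusion of per-detector 'fake' probabilities in [0, 1]."""
    if weights is None:
        weights = [1.0 / len(scores)] * len(scores)  # unweighted average
    return sum(w * s for w, s in zip(weights, scores))

# Stand-in scores for one video from three hypothetical hand-crafted
# feature detectors (e.g., eye region, texture, color consistency):
detector_scores = [0.91, 0.78, 0.85]
fused = fuse_scores(detector_scores)
decision = "fake" if fused > 0.5 else "genuine"
```

Because each input score stems from an interpretable hand-crafted feature, the fused decision can still be traced back to the individual contributing cues.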

9 citations


Proceedings ArticleDOI
17 Jun 2021
TL;DR: In this paper, the authors propose general requirements on synthetic biometric samples (that are also applicable to fingerprint images used in forensic application scenarios) together with formal metrics to validate whether the requirements are fulfilled.
Abstract: The generation of synthetic biometric samples such as fingerprint images is gaining more and more importance, especially in view of recent cross-border regulations on the security of private data. The reason is that biometric data is designated in recent regulations such as the EU GDPR as a special category of private data, making the sharing of datasets of biometric samples hardly possible, even for research purposes. The usage of fingerprint images in forensic research faces the same challenge. The replacement of real datasets by synthetic datasets is the most advantageous straightforward solution, which bears, however, the risk of generating "unrealistic" samples or "unrealistic distributions" of samples that may nevertheless appear visually realistic. Despite numerous efforts to generate high-quality fingerprints, there is still no common agreement on how to define "high quality" and how to validate that generated samples are realistic enough. Here, we propose general requirements on synthetic biometric samples (that are also applicable to fingerprint images used in forensic application scenarios) together with formal metrics to validate whether the requirements are fulfilled. Validating our proposed requirements enables establishing the quality of a generative model (informed evaluation) or even the quality of a dataset of generated samples (blind evaluation). Moreover, we demonstrate with an example how our proposed evaluation concept can be applied to a comparison of real and synthetic datasets, aiming to reveal whether the synthetic samples exhibit significantly different properties compared to real ones.
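One way such a formal metric for a blind evaluation could look: compare the distribution of a scalar quality feature between a real and a synthetic dataset, e.g., via the two-sample Kolmogorov-Smirnov statistic. The feature (a stand-in for something like a minutiae count), the sample values, and the acceptance threshold below are assumptions for illustration, not the paper's metrics.

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of samples a and b."""
    def ecdf(sample, x):
        return sum(1 for v in sample if v <= x) / len(sample)
    points = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

# Stand-in scalar feature (e.g., minutiae count) per fingerprint image:
real = [38, 41, 40, 39, 42, 40, 41, 38]
synthetic = [39, 40, 41, 40, 38, 42, 39, 41]

d = ks_statistic(real, synthetic)
realistic_enough = d < 0.3  # illustrative acceptance threshold
```

Because this comparison only needs the generated samples themselves (not the generative model), it fits the blind-evaluation setting described in the abstract.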

7 citations



Proceedings ArticleDOI
17 Aug 2021
TL;DR: In this article, a systematic in-depth analysis of covert channels by modification for the Network Time Protocol (NTP) is presented; applying a covert channel pattern-based taxonomy results in the identification of 49 covert channels.
Abstract: Covert channels in network protocols are a technique aiming to hide the very existence of secret communication in computer networks. In this work we present a systematic in-depth analysis of covert channels by modification for the Network Time Protocol (NTP). By applying a covert channel pattern-based taxonomy, our analysis results in the identification of 49 covert channels. A summary and comparison based on nine selected key attributes show that NTP is a plausible carrier for covert channels. The analysis results are evaluated with regard to the common behavior of NTP implementations in six major operating systems. Two channels are selected, implemented, and evaluated in network test-beds. By hiding encrypted high-entropy data in a high-entropy field of NTP, we show in a first assessment that practically undetectable channels can be implemented in NTP, motivating further research. In our evaluation, we analyze 40,000 NTP server responses from public NTP server providers. We discuss the research community's general view that detection of covert channels is a more promising countermeasure than active suppression of covert channels; accordingly, normalization approaches and a secure network environment are introduced.
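To illustrate the general idea of hiding high-entropy payload data in a high-entropy protocol field, the sketch below replaces the low-order bits of the 32-bit fraction part of an NTP timestamp. The field choice and the per-packet bit budget are illustrative assumptions; the paper itself catalogs 49 distinct channels.

```python
HIDE_BITS = 8  # low-order bits of the fraction field replaced per packet

def embed_in_fraction(fraction, payload_byte):
    """Overwrite the HIDE_BITS least significant bits of a 32-bit
    NTP timestamp fraction with covert payload bits."""
    mask = (1 << HIDE_BITS) - 1
    return (fraction & ~mask) | (payload_byte & mask)

def extract_from_fraction(fraction):
    """Read the covert payload back out of a received fraction field."""
    return fraction & ((1 << HIDE_BITS) - 1)

fraction = 0xA1B2C3D4  # stand-in transmit-timestamp fraction value
stego = embed_in_fraction(fraction, 0x5E)
```

Since the low-order fraction bits of real timestamps already look random, replacing them with encrypted (hence also random-looking) payload is hard to spot statistically, which matches the abstract's undetectability claim.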

6 citations


Proceedings ArticleDOI
17 Jun 2021
TL;DR: In this article, the authors present an information hiding approach for exfiltrating sensitive information from industrial control systems (ICS) by leveraging the long-term storage of process data in historian databases.
Abstract: In this paper, we present an information hiding approach that would be suitable for exfiltrating sensitive information from industrial control systems (ICS) by leveraging the long-term storage of process data in historian databases. We show how hidden messages can be embedded in sensor measurements as well as retrieved asynchronously by accessing the historian. We evaluate this approach using the example of water-flow and water-level sensors of the Secure Water Treatment (SWaT) dataset from iTrust. To generalize from specific cover channels (sensors and their transmitted data), we reflect upon the general challenges that arise in such information hiding scenarios creating network covert channels, and we discuss aspects of cover channel selection and sender-receiver synchronisation as well as temporal aspects such as the potential persistence of hidden messages in cyber-physical systems (CPS). For an empirical evaluation, we design and implement a covert channel that uses different embedding strategies in an adaptive approach with regard to the noise in sensor measurements, resulting in dynamic capacity and bandwidth selection to reduce the detection probability. The results of this evaluation show that, using such methods, the exfiltration of sensitive information in long-term, scaled attacks would indeed be possible. Additionally, we present two detection approaches for the introduced hidden channel and carry out an extensive evaluation of our detectors with multiple test datasets and different parameters. We determine a detection accuracy of up to 87.8% on test data at a false positive rate (FPR) of 0%.
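A minimal sketch of a noise-adaptive embedding strategy along the lines described: a bit is hidden in a reading's least significant displayed digit only when the local measurement noise is high enough to mask the change. The window size, noise threshold, and 0.01 scaling are illustrative assumptions, not the paper's parameters.

```python
import statistics

NOISE_THRESHOLD = 0.05  # assumed minimum local std-dev that permits embedding

def embed_adaptive(readings, bits, window=4):
    """Hide bits in the last displayed digit (0.01 scale) of sensor
    readings, but only where recent measurement noise masks the change."""
    out, bit_iter = [], iter(bits)
    for i, reading in enumerate(readings):
        recent = readings[max(0, i - window):i]
        noisy = len(recent) >= 2 and statistics.stdev(recent) > NOISE_THRESHOLD
        bit = next(bit_iter, None) if noisy else None
        if bit is None:
            out.append(reading)  # too quiet (or payload done): leave untouched
        else:
            scaled = round(reading * 100)
            out.append(((scaled & ~1) | bit) / 100)  # even/odd encodes the bit
    return out

readings = [2.50, 2.50, 2.50, 2.61, 2.47, 2.55]
stego = embed_adaptive(readings, bits=[1, 0])
```

Skipping quiet periods is what makes the channel's capacity dynamic: the sender trades bandwidth for a lower detection probability, and the receiver can recover the payload later from the historian's stored measurements.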

6 citations


Proceedings ArticleDOI
17 Jun 2021
TL;DR: In this paper, the authors present a systematic approach to investigate whether and how events can be identified and extracted during the use of video conferencing software, using a forensic process model and the fission of network data streams before applying methods to the specific individual data types.
Abstract: Our paper presents a systematic approach to investigate whether and how events can be identified and extracted during the use of video conferencing software. Our approach is based on the encrypted meta and multimedia data exchanged during video conference sessions. It relies on the network data stream, which contains data interpretable without decryption (plain data) and encrypted data (encrypted content), some of which is decrypted using our approach (decrypted content). This systematic approach uses a forensic process model and the fission of network data streams before applying methods to the specific individual data types. Our approach is applied, as an example, to the Zoom Videoconferencing Service with Client Version 5.4.57862.0110 [4], the mobile Android App Client Version 5.5.2 (1328) [4], the web-based client, and the servers (accessed between Jan 21st and Feb 4th). The investigation includes over 50 different configurations. For the heuristic speaker identification, two series of nine sets for eight different speakers are collected. The results show that various user data can be derived from characteristics of encrypted media streams, even if end-to-end encryption is used. The findings suggest user privacy risks. Our approach offers the identification of various events that enable activity tracking (e.g., camera on/off, increased activity in front of the camera) by evaluating heuristic features of the network streams. Further research into user identification within the encrypted audio stream, based on pattern recognition using heuristic features of the corresponding network data stream, is conducted and suggests the possibility of identifying users within a specific set.
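The kind of heuristic event identification described can be sketched as a simple bitrate threshold on the encrypted media stream: camera on/off events show up as large, persistent changes in bytes per second, with no decryption needed. The traffic trace and the threshold below are fabricated for illustration and are not measurements from the paper.

```python
VIDEO_ON_THRESHOLD = 50_000  # bytes/s; assumed cutoff for an active camera

def detect_camera_events(bytes_per_second):
    """Return (second, event) pairs whenever the encrypted stream's
    bitrate crosses the camera-activity threshold."""
    events, camera_on = [], False
    for t, volume in enumerate(bytes_per_second):
        now_on = volume > VIDEO_ON_THRESHOLD
        if now_on != camera_on:
            events.append((t, "camera_on" if now_on else "camera_off"))
            camera_on = now_on
    return events

# Stand-in per-second traffic volumes: audio-only, then video, then audio-only
trace = [4_000, 5_000, 120_000, 130_000, 125_000, 4_500]
events = detect_camera_events(trace)
```

That such side-channel features survive end-to-end encryption is precisely why the abstract flags them as a user privacy risk.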

3 citations


Proceedings ArticleDOI
17 Aug 2021
TL;DR: In this article, a semi-automated traffic analysis approach is proposed to examine data flows and data exchanged among parties in online payments, and to identify potential impacts of these data flows on customers' security and privacy.
Abstract: The paper discusses means to identify potential impacts of data flows on customers' security and privacy during online payments. The main objectives of our research are to look into the evolution of cybercrime in new trends of online payments and their detection, more precisely the usage of mobile phones, and to describe methodologies for digital trace identification in data flows for potential online payment fraud. The paper aims to identify potential actions for identity theft conducted during the Reconnaissance step of the kill chain, and to document a forensic methodology providing guidance and supporting further data collection for law enforcement bodies. Moreover, a secondary objective of the paper is to identify, from a user's perspective, transparency issues of data sharing among the parties involved in online payments. We thus declare the transparency analysis as the incident triggering a forensic examination. Hence, we devise a semi-automated traffic analysis approach, based on previous work, to examine data flows and the data exchanged among parties in online payments. The main steps are segmenting the traffic generated by the payment process and other sources and, subsequently, identifying the data streams in the process. We conduct three tests covering three different payment gateways: PayPal, Klarna-sofort, and Amazon Pay. The experiment setup requires circumventing TLS encryption for the correct identification of forensic data types in TCP/IP traffic and potential data leaks; however, it requires no extensive expertise in mobile security for its installation. In the results, we identified important security vulnerabilities in some payment APIs that pose financial and privacy risks to the marketplace's customers.
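The traffic-segmentation step can be sketched as grouping captured packets into flows by their 5-tuple and then separating the flows that belong to a payment gateway from the remaining traffic. The packet records and hostnames below are illustrative stand-ins, not captures from the paper's experiments.

```python
from collections import defaultdict

PAYMENT_HOSTS = {"api.paypal.com"}  # assumed gateway hostname (via SNI/DNS)

def segment_flows(packets):
    """Group packets into flows by 5-tuple, then split off the flows
    that belong to a known payment gateway."""
    flows = defaultdict(list)
    for pkt in packets:
        key = (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"], pkt["proto"])
        flows[key].append(pkt)
    payment = {k: v for k, v in flows.items() if v[0]["host"] in PAYMENT_HOSTS}
    other = {k: v for k, v in flows.items() if k not in payment}
    return payment, other

packets = [  # illustrative captured-packet records, not real traffic
    {"src": "10.0.0.2", "sport": 40001, "dst": "64.4.250.33", "dport": 443,
     "proto": "tcp", "host": "api.paypal.com", "len": 517},
    {"src": "10.0.0.2", "sport": 40002, "dst": "93.184.216.34", "dport": 443,
     "proto": "tcp", "host": "example.com", "len": 220},
]
payment_flows, other_flows = segment_flows(packets)
```

Once flows are separated per party, the payload of each flow (after the TLS circumvention described in the abstract) can be inspected for forensic data types and potential leaks.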

2 citations


Journal ArticleDOI
TL;DR: In this article, the authors explain the reasons for the reluctance to accept such a potentially very beneficial technique and illustrate the practical issues arising when applying fusion: a potentially negative impact on classification accuracy if wrongly used or parameterized, as well as increased complexity and inherently higher costs for plausibility validation, which conflict with the fundamental requirements of forensics.
Abstract: Information fusion, i.e., the combination of expert systems, has a huge potential to improve the accuracy of pattern recognition systems. During the last decades, various application fields have started to use different fusion concepts extensively. The forensic sciences are still hesitant when it comes to blindly applying information fusion. Here, a potentially negative impact on the classification accuracy, if fusion is wrongly used or parameterized, as well as its increased complexity (and the inherently higher costs for plausibility validation), stands in conflict with the fundamental requirements of forensics. The goals of this paper are to explain the reasons for this reluctance to accept such a potentially very beneficial technique and to illustrate the practical issues arising when applying fusion. For these practical discussions, the exemplary application scenario of morphing attack detection (MAD) is selected with the goal of facilitating understanding between the media forensics community and forensic practitioners. As a general contribution, it is illustrated why the naive assumption that fusion would make detection more reliable can fail in practice, i.e., why fusion sometimes behaves differently in a field application than in the lab. As a result, the constraints and limitations of the application of fusion are discussed and its impact on (media) forensics is reflected upon. As technical contributions, the current state of the art of MAD is expanded by:
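Why the naive assumption can fail is easy to demonstrate on toy data: an unweighted score fusion can perform worse than the best single expert when one contributing expert is poorly calibrated. The scores and labels below are fabricated solely to illustrate this failure mode; they are not results from the paper.

```python
def accuracy(scores, labels, thr=0.5):
    """Fraction of samples where thresholding the score matches the label."""
    return sum((s > thr) == bool(y) for s, y in zip(scores, labels)) / len(labels)

labels = [1, 1, 1, 0, 0, 0]              # 1 = morphing attack, 0 = bona fide
good = [0.9, 0.8, 0.7, 0.2, 0.3, 0.1]    # well-calibrated expert
weak = [0.6, 0.9, 0.2, 0.9, 0.7, 0.9]    # miscalibrated expert
fused = [(g + w) / 2 for g, w in zip(good, weak)]  # naive unweighted fusion

acc_good = accuracy(good, labels)
acc_fused = accuracy(fused, labels)  # worse than the best single expert
```

In a lab setting the weak expert's miscalibration might go unnoticed; in field application it drags the fused decision below the best single detector, which is exactly the parameterization risk the abstract describes.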

2 citations


Posted ContentDOI
TL;DR: In this article, the authors present a taxonomy of hiding patterns that can be applied to multiple domains of steganography instead of being limited to the network scenario, and exemplify representation patterns using network steganography.
Abstract: Steganography embraces several hiding techniques which span multiple domains. However, the related terminology is not unified among the different domains, such as digital media steganography, text steganography, cyber-physical systems steganography, network steganography (network covert channels), local covert channels, and out-of-band covert channels. To cope with this, a first attempt was made in 2015 with the introduction of so-called hiding patterns, which allow hiding techniques to be described in a more abstract manner. Despite significant enhancements, the main limitation of this taxonomy is that it only considers the case of network steganography. Therefore, this paper reviews both the terminology and the taxonomy of hiding patterns so as to make them more general. Specifically, hiding patterns are split into those that describe the embedding and those that describe the representation of hidden data within the cover object. As a first research action, we focus on embedding hiding patterns and show how they can be applied to multiple domains of steganography instead of being limited to the network scenario. Additionally, we exemplify representation patterns using network steganography. Our pattern collection is available at this https URL.