scispace - formally typeset
Author

Christian Kraetzer

Bio: Christian Kraetzer is an academic researcher from Otto-von-Guericke University Magdeburg. The author has contributed to research in the topics of steganalysis and digital watermarking, has an h-index of 14, and has co-authored 51 publications receiving 845 citations.


Papers
Proceedings ArticleDOI
20 Sep 2007
TL;DR: The results show that, given the test set, the classification techniques used, and the selected steganalysis features, microphones can be classified more reliably than environments.
Abstract: In this paper, a first approach to digital media forensics is presented that determines the microphones used and the environments of recorded digital audio samples by means of known audio steganalysis features. Our first evaluation is based on a limited exemplary test set of 10 different audio reference signals recorded as mono audio data by four microphones in 10 different rooms at 44.1 kHz sampling rate and 16 bit quantisation. Note that, of course, a generalisation of the results cannot be achieved with such a set. Motivated by the syntactic and semantic analysis of information, and in particular by known audio steganalysis approaches, a first set of specific features is selected for classification to evaluate whether this feature set can support correct classifications. The idea was mainly driven by the existing steganalysis features and the question of their applicability within a first and limited test set. In the tests presented in this paper, an inter-device analysis with different device characteristics is performed, while intra-device evaluations (identical microphone models of the same manufacturer) are not considered. For classification, the data mining tool WEKA is applied with K-means as a clustering technique and Naive Bayes as a classification technique, with the goal of evaluating their classification accuracy on the known audio steganalysis features. Our results show that, for our test set, the classification techniques used, and the selected steganalysis features, microphones can be better classified than environments. These first tests show promising results but are, of course, based on a limited test and training set as well as a specific test set generation strategy. Therefore, additional and enhanced features combined with different test set generation strategies are necessary to generalise the findings.
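The classification setup described above (K-means clustering and Naive Bayes classification over steganalysis-style feature vectors) can be sketched as follows. This is a minimal illustration using scikit-learn in place of the paper's WEKA tooling; the feature values and the two "microphones" are synthetic stand-ins, not data from the paper.

```python
# Hypothetical sketch: the paper used WEKA; scikit-learn stands in here.
# Feature values and the two "microphones" are synthetic illustrations.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Simulated steganalysis-style feature vectors (e.g. mean, variance,
# LSB ratio) for recordings from two hypothetical microphones.
mic_a = rng.normal(loc=[0.0, 1.0, 0.3], scale=0.05, size=(50, 3))
mic_b = rng.normal(loc=[0.5, 1.5, 0.6], scale=0.05, size=(50, 3))
X = np.vstack([mic_a, mic_b])
y = np.array([0] * 50 + [1] * 50)

# Supervised classification with Naive Bayes, as in the paper's setup.
clf = GaussianNB().fit(X, y)
accuracy = clf.score(X, y)

# Unsupervised K-means clustering of the same features, for comparison.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

On cleanly separated synthetic features both techniques recover the two device groups; the paper's point is how well this holds for real recordings.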

154 citations

Book
01 Jan 2008
TL;DR: This paper classifies existing revocation strategies, implements one variant of each, and presents a detailed analysis and pragmatic evaluation of the strategies.
Abstract: In an increasingly information-driven society, preserving privacy is essential. Anonymous credentials promise a solution to protect the user’s privacy. However, to ensure accountability, efficient revocation mechanisms are essential. Having classified existing revocation strategies, we implemented one variant of each. In this paper we describe our classification and compare our implementations. Finally, we present a detailed analysis and pragmatic evaluation of the strategies.

126 citations

Book ChapterDOI
03 Sep 2009
TL;DR: This work determines the microphone model used to record a given audio sample by using a Fourier coefficient histogram of near-silence segments of the recording as the feature vector and machine learning techniques for the classification.
Abstract: Media forensics tries to determine the originating device of a signal. We apply this paradigm to microphone forensics, determining the microphone model used to record a given audio sample. Our approach is to extract a Fourier coefficient histogram of near-silence segments of the recording as the feature vector and to use machine learning techniques for the classification. Our test goals are to determine whether attempting microphone forensics is indeed a sensible approach and which of the six classification techniques tested is the most suitable for that task. The experimental results, achieved using two different FFT window sizes (256 and 2048 frequency coefficients) and nine different thresholds for near-silence detection, show a high accuracy of up to 93.5% correct classifications for the case of 2048 frequency coefficients on a test set of seven microphones classified with linear logistic regression models. This positive tendency motivates further experiments with larger test sets and further studies on microphone identification.
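The feature extraction described in this abstract (a histogram of Fourier coefficients taken over near-silence segments) can be sketched as follows. The frame length, silence threshold, and bin count are illustrative assumptions, and `near_silence_fft_histogram` is a hypothetical helper, not code from the paper.

```python
# Hypothetical sketch of FFT-histogram features over near-silent frames.
# Frame length, threshold, and bin count are illustrative assumptions.
import numpy as np

def near_silence_fft_histogram(signal, frame_len=2048,
                               silence_thresh=0.01, bins=32):
    """Histogram of FFT magnitudes over frames whose peak amplitude
    stays below silence_thresh (a crude near-silence detector)."""
    spectra = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        if np.max(np.abs(frame)) < silence_thresh:
            spectra.append(np.abs(np.fft.rfft(frame)))
    if not spectra:
        return np.zeros(bins)
    mags = np.concatenate(spectra)
    hist, _ = np.histogram(mags, bins=bins)
    return hist / hist.sum()  # normalised feature vector

# Usage: a quiet noise floor plus one loud frame; only quiet frames count.
rng = np.random.default_rng(1)
sig = rng.normal(0, 0.001, 8192)                  # near-silent recording
sig[2048:4096] += 0.5 * np.sin(np.arange(2048))   # loud frame, skipped
features = near_silence_fft_histogram(sig)
```

The resulting fixed-length vector can then be fed to any of the classifiers compared in the paper.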

83 citations

Proceedings ArticleDOI
01 Mar 2007
TL;DR: In this article, a Mel-cepstrum-based analysis known from speaker and speech recognition is used to detect embedded hidden messages in VoIP applications; the approach can detect information hiding both for hidden communication and for DRM applications.
Abstract: Steganography and steganalysis in VoIP applications are important research topics, as speech data is an appropriate cover for hiding messages or comprehensive documents. In our paper, we introduce a Mel-cepstrum-based analysis known from speaker and speech recognition to perform a detection of embedded hidden messages. In particular, we combine known and established audio steganalysis features with the features derived from Mel-cepstrum-based analysis to investigate the improvement of the detection performance. Our main focus is the application environment of VoIP steganography scenarios. The evaluation of the enhanced feature space is performed for classical steganographic as well as for watermarking algorithms. With this strategy we show how general forensic approaches can detect information hiding techniques in the field of hidden communication as well as for DRM applications. For the latter, the detection of the presence of a potential watermark in a specific feature space can lead to new attacks or to a better design of the watermarking pattern. Following that, the usefulness of Mel-cepstrum-domain features for detection is discussed in detail.
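A minimal sketch of the kind of Mel-cepstral feature extraction referenced above, assuming standard MFCC-style processing (triangular mel filterbank, log filterbank energies, DCT). The filter and coefficient counts are illustrative assumptions; real steganalysis pipelines typically use a full speech-recognition front end.

```python
# Minimal Mel-cepstral feature sketch; parameters are assumptions,
# not those of the paper.
import numpy as np
from scipy.fft import rfft, dct

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    def hz_to_mel(f): return 2595 * np.log10(1 + f / 700)
    def mel_to_hz(m): return 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fb

def mel_cepstrum(frame, sr=44100, n_filters=20, n_coeffs=13):
    power = np.abs(rfft(frame)) ** 2           # power spectrum
    fb = mel_filterbank(n_filters, len(frame), sr)
    log_energies = np.log(fb @ power + 1e-10)  # log mel energies
    return dct(log_energies, norm='ortho')[:n_coeffs]

rng = np.random.default_rng(2)
coeffs = mel_cepstrum(rng.normal(0, 0.1, 2048))  # one 2048-sample frame
```

Per-frame coefficient vectors like this would be aggregated into the enhanced feature space the abstract describes.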

73 citations

Journal ArticleDOI
TL;DR: It is shown that the impact of StirTrace post-processing operations on the biometric quality of morphed face images is negligible, that the impact on the forensic quality depends on the type of post-processing, and that the new FMF realisation outperforms the previously considered ones.

Abstract: Since its introduction in 2014, the face morphing forgery (FMF) attack has received significant attention from the biometric and media forensic research communities. The attack aims at creating artificially weakened templates which can be successfully matched against multiple persons. If successful, the attack has an immense impact on many biometric authentication scenarios, including the use of electronic machine-readable travel documents (eMRTDs) at automated border control gates. We extend the StirTrace framework for benchmarking FMF attacks with five additions: a novel three-fold definition for the quality of morphed images, a novel FMF realisation (combined morphing), a post-processing operation to simulate the digital image format used in eMRTDs (passport scaling 15 kB), an automated face recognition system (VGG face descriptor) as an additional means for biometric quality assessment, and two feature spaces for FMF detection (keypoint features and a fusion of keypoint and Benford features) as additional means for forensic quality assessment. We show that the impact of StirTrace post-processing operations on the biometric quality of morphed face images is negligible except for two noise operators and passport scaling 15 kB, that the impact on the forensic quality depends on the type of post-processing, and that the new FMF realisation outperforms the previously considered ones.

59 citations


Cited by
Journal ArticleDOI
TL;DR: This survey provides a thorough review of techniques for manipulating face images including DeepFake methods, and methods to detect such manipulations, with special attention to the latest generation of DeepFakes.

502 citations

Journal ArticleDOI
TL;DR: The goal of this paper is to provide a comprehensive overview of the work that has been carried out over the last decade in the emerging field of antispoofing, with special attention to the mature and largely deployed face modality.
Abstract: In recent decades, we have witnessed the evolution of biometric technology from the first pioneering works in face and voice recognition to the current state of development wherein a wide spectrum of highly accurate systems may be found, ranging from largely deployed modalities, such as fingerprint, face, or iris, to more marginal ones, such as signature or hand. This path of technological evolution has naturally led to a critical issue that has only started to be addressed recently: the resistance of this rapidly emerging technology to external attacks and, in particular, to spoofing. Spoofing, referred to by the term presentation attack in current standards, is a purely biometric vulnerability that is not shared with other IT security solutions. It refers to the ability to fool a biometric system into recognizing an illegitimate user as a genuine one by means of presenting a synthetic forged version of the original biometric trait to the sensor. The entire biometric community, including researchers, developers, standardizing bodies, and vendors, has thrown itself into the challenging task of proposing and developing efficient protection methods against this threat. The goal of this paper is to provide a comprehensive overview of the work that has been carried out over the last decade in the emerging field of antispoofing, with special attention to the mature and largely deployed face modality. The work covers theories, methodologies, state-of-the-art techniques, and evaluation databases and also aims at providing an outlook into the future of this very active field of research.

366 citations

Journal ArticleDOI
TL;DR: A conceptual categorization and metrics for an evaluation of such methods are presented, followed by a comprehensive survey of relevant publications, and technical considerations and tradeoffs of the surveyed methods are discussed.
Abstract: Recently, researchers found that the intended generalizability of (deep) face recognition systems increases their vulnerability against attacks. In particular, the attacks based on morphed face images pose a severe security risk to face recognition systems. In the last few years, the topic of (face) image morphing and automated morphing attack detection has sparked the interest of several research laboratories working in the field of biometrics and many different approaches have been published. In this paper, a conceptual categorization and metrics for an evaluation of such methods are presented, followed by a comprehensive survey of relevant publications. In addition, technical considerations and tradeoffs of the surveyed methods are discussed along with open issues and challenges in the field.

191 citations

Journal ArticleDOI
TL;DR: A potentially more accurate and reliable methodology for determining fingerprint age based on quantitative kinetic changes to the composition of a fingerprint over time is proposed.

182 citations