
Showing papers by "Jana Dittmann published in 2010"


Book ChapterDOI
31 May 2010
TL;DR: Malicious samples often have special characteristics, so existing malware scanners can effectively be supported, and the custom, hypothesis-based approach performs better on the chosen setup than the general-purpose statistical algorithms.
Abstract: While conventional malware detection approaches increasingly fail, modern heuristic strategies often perform dynamically, which is not possible in many applications due to the related effort and the quantity of files. Based on existing work from [1] and [2] we analyse an approach towards statistical malware detection of PE executables. One benefit is its simplicity (evaluating 23 static features with moderate resource constraints), so it might support application to large numbers of files, e.g. for network operators or a posteriori analyses in archival systems. After identifying promising features and their typical values, a custom hypothesis-based classification model and a statistical classification approach using the WEKA machine learning tool [3] are generated and evaluated. The results of large-scale classifications are compared, showing that the custom, hypothesis-based approach performs better on the chosen setup than the general-purpose statistical algorithms. In conclusion, malicious samples often have special characteristics, so existing malware scanners can be effectively supported.
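A minimal sketch of the static, hypothesis-based idea is given below. The feature set and thresholds are illustrative assumptions (the paper's 23 features are not reproduced here), and the third-party pefile library stands in for the authors' feature extraction.

```python
# Minimal sketch of hypothesis-based static PE classification: extract a few
# header features and flag the file if it violates hypothesised "normal"
# ranges. Features and thresholds are illustrative assumptions only.
import sys
import pefile  # third-party: pip install pefile

def extract_features(path):
    pe = pefile.PE(path)
    return {
        "num_sections": len(pe.sections),
        "max_section_entropy": max(s.get_entropy() for s in pe.sections),
        "size_of_image": pe.OPTIONAL_HEADER.SizeOfImage,
    }

def looks_malicious(features):
    # Hypothesis: packed or obfuscated samples tend to show very high section
    # entropy or unusual section counts (thresholds are illustrative only).
    return features["max_section_entropy"] > 7.2 or features["num_sections"] > 12

if __name__ == "__main__":
    feats = extract_features(sys.argv[1])
    print(feats, "-> suspicious" if looks_malicious(feats) else "-> unsuspicious")
```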

24 citations


Book ChapterDOI
31 May 2010
TL;DR: A new reconstruction approach is introduced that exploits a vulnerability of a specific Biometric Hash algorithm for handwriting to generate artificial raw data from a reference BioHash, using an attack corresponding to a ciphertext-only attack with the BioHash system parameters as side information.
Abstract: Biometric Hash algorithms, also called BioHash, are mainly designed to ensure template protection of the biometric raw data. To assure reproducibility, BioHash algorithms provide a certain level of robustness against input variability, compensating for intra-class variation of the biometric raw data to ensure high reproduction rates. This concept can be a potential vulnerability. In this paper, we reflect on such a vulnerability of a specific Biometric Hash algorithm for handwriting, which was introduced in [1], and consider and discuss possible attempts to exploit these flaws. We introduce a new reconstruction approach that exploits this vulnerability to generate artificial raw data out of a reference BioHash. It is motivated by the work of Cappelli et al. for the fingerprint modality in [6], further studied in [3], where such artificially generated raw data produces false positive recognitions although it may not necessarily be visually similar to the original. Our new approach for handwriting is based on genetic algorithms combined with user interaction, exploiting a design vulnerability of the BioHash with an attack corresponding to a ciphertext-only attack with the BioHash system parameters as side information. To show the general validity of our concept, in first experiments we evaluate 60 raw data sets (5 individuals overall) consisting of two different handwritten semantics (an arbitrary Symbol and a fixed PIN). Experimental results demonstrate that reconstructed raw data produces an EER_reconstr in the range from 30% to 75%, compared to a non-attacked inter-class EER_inter-class of 5% to 10%, and that the handwritten PIN semantic can be reconstructed better than the Symbol semantic using this new technique. The security flaws of the Biometric Hash algorithm are pointed out and possible countermeasures are proposed.
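The genetic search at the core of the attack can be sketched as follows; the biohash() placeholder, the fitness function and all population parameters are assumptions for illustration and do not reproduce the algorithm from [1] or the user-interaction step.

```python
# Hedged sketch of the genetic search: evolve candidate raw data until its
# hash approaches a reference BioHash. biohash() is a stand-in placeholder,
# not the algorithm from [1].
import numpy as np

rng = np.random.default_rng(0)

def biohash(raw):
    # Placeholder many-to-one mapping standing in for the real BioHash.
    return np.round(raw[::4] * 10)

def fitness(candidate, reference_hash):
    return -np.abs(biohash(candidate) - reference_hash).sum()

def reconstruct(reference_hash, dim=128, pop=50, generations=200):
    population = rng.normal(size=(pop, dim))
    for _ in range(generations):
        scores = np.array([fitness(c, reference_hash) for c in population])
        parents = population[np.argsort(scores)[-pop // 2:]]            # selection
        children = parents + rng.normal(scale=0.1, size=parents.shape)  # mutation
        population = np.vstack([parents, children])
    scores = [fitness(c, reference_hash) for c in population]
    return population[int(np.argmax(scores))]
```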

14 citations


Proceedings ArticleDOI
TL;DR: A prototypic optical sensing system based on hand movement segmentation in near-infrared image sequences, implemented in an Audi A6 Avant, recognizes the direction of forearm and hand motion and decides whether the driver or the front-seat passenger touches a control.
Abstract: Successful user discrimination in a vehicle environment may yield a reduction of the number of switches, thus significantly reducing costs while increasing user convenience. The personalization of individual controls permits conditional passenger-enable/driver-disable options and vice versa, which may yield safety improvements. The authors propose a prototypic optical sensing system based on hand movement segmentation in near-infrared image sequences, implemented in an Audi A6 Avant. Analyzing the number of movements in special regions, the system recognizes the direction of the forearm and hand motion and decides whether the driver or the front-seat passenger touches a control. The experimental evaluation is performed independently for uniformly and non-uniformly illuminated video data as well as for the complete video data set which includes both subsets. The general test results in error rates of up to 14.41% FPR / 16.82% FNR for the driver and 17.61% FPR / 14.77% FNR for the passenger. Finally, the authors discuss the causes of the most frequently occurring errors as well as the prospects and limitations of optical sensing for user discrimination in passenger compartments.
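The region-based decision can be illustrated with a short OpenCV sketch: count moving pixels in two hand-over regions per frame and attribute the reach to whichever side accumulates more motion. The ROIs, the motion threshold and the plain frame differencing are assumptions, not the segmentation used in the paper.

```python
# Illustrative sketch of the region-based driver/passenger decision on a
# near-infrared video; ROI coordinates and thresholds are assumed values.
import cv2

DRIVER_ROI = (slice(100, 300), slice(0, 320))       # (rows, cols), assumed layout
PASSENGER_ROI = (slice(100, 300), slice(320, 640))

def classify_reach(video_path, motion_threshold=25):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError("cannot read video: " + video_path)
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    driver_votes = passenger_votes = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, motion = cv2.threshold(cv2.absdiff(gray, prev),
                                  motion_threshold, 1, cv2.THRESH_BINARY)
        driver_votes += int(motion[DRIVER_ROI].sum())
        passenger_votes += int(motion[PASSENGER_ROI].sum())
        prev = gray
    cap.release()
    return "driver" if driver_votes > passenger_votes else "passenger"
```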

10 citations


Proceedings ArticleDOI
TL;DR: The results show that this fusion removes content dependability while achieving classification rates similar to single classifiers (especially for the considered global features) on the three exemplarily tested audio data hiding algorithms.
Abstract: In the paper we extend an existing information-fusion-based audio steganalysis approach by three different kinds of evaluations. The first evaluation addresses the so far neglected evaluation of sensor-level fusion. Our results show that this fusion removes content dependability while achieving classification rates similar to single classifiers (especially for the considered global features) on the three exemplarily tested audio data hiding algorithms. The second evaluation extends the observations on fusion from considering only segmental features to combinations of segmental and global features, with the result of a reduction of the required computational complexity for testing by about two orders of magnitude while maintaining the same degree of accuracy. The third evaluation tries to build a basis for estimating the plausibility of the introduced steganalysis approach by measuring the sensitivity of the models used in supervised classification of steganographic material against typical signal modification operations like de-noising or 128 kbit/s MP3 encoding. Our results show that for some of the tested classifiers the probability of false alarms rises dramatically after such modifications.
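The combination of segmental and global features can be sketched as a simple feature-level fusion followed by a supervised classifier; the toy features and the SVM below are assumptions and do not correspond to the feature set or classifiers actually evaluated in the paper.

```python
# Sketch of feature-level fusion of global and segmental audio features for a
# supervised steganalysis classifier; features and classifier are illustrative.
import numpy as np
from sklearn.svm import SVC

def global_features(signal):
    return np.array([signal.mean(), signal.std(), np.abs(signal).max()])

def segmental_features(signal, frame=1024):
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    energy = (frames ** 2).mean(axis=1)
    return np.array([energy.mean(), energy.std()])

def fused_vector(signal):
    # Fusion step: concatenate both feature groups into one vector.
    return np.concatenate([global_features(signal), segmental_features(signal)])

def train(signals, labels):
    # signals: list of 1-D audio arrays; labels: 1 = stego, 0 = cover.
    X = np.vstack([fused_vector(s) for s in signals])
    return SVC(kernel="rbf").fit(X, np.asarray(labels))
```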

8 citations


Proceedings ArticleDOI
09 Sep 2010
TL;DR: This paper introduces and illustrates a six-step procedure for the modelling and verification of watermark communication protocols based on application scenario descriptions, transferring the idea of machine-based verification of the security of communication protocols from cryptography to the domain of digital-watermarking-based media security protocols.
Abstract: In cryptography it is common to evaluate the security of cryptographic primitives and protocols in a computational model, with an attacker trying to break the primitive or protocol in question. To do so, formalisation languages like CASPER or CSP (Communicating Sequential Processes) and model checkers like FDR (Failures-Divergences Refinement) are used for automatic or semi-automatic machine-based security verification. Here we transfer the idea of machine-based verification of the security of communication protocols from cryptography to the domain of digital-watermarking-based media security protocols. To allow for such a mainly automatic verification approach, we introduce and illustrate in this paper a six-step procedure for the modelling and verification of watermark communication protocols based on application scenario descriptions. The six steps are: first, modelling of the used communication network and application scenario (as a task) in XML structures; second, a path search comparing the network and the task and identifying possible watermarking channels; third, a path selection choosing one watermarking channel from the identified alternatives for the protocol realisation; fourth, automatic CASPER protocol generation from the selected alternative, followed by manual adjustments (if necessary); fifth, CASPER compilation into CSP; and sixth, verification of the protocol security (confidentiality, integrity and authenticity) via the FDR model checker.
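Steps one to three can be illustrated with a toy model: a dictionary stands in for the XML network/task description, and a depth-first search enumerates candidate watermarking channels from which one is selected. Node names, the graph and the selection rule are assumptions; CASPER generation, CSP compilation and FDR checking are not reproduced here.

```python
# Toy illustration of steps one to three of the procedure (model, path search,
# path selection); all names and the selection rule are assumed examples.
NETWORK = {                       # directed media links between scenario nodes
    "producer": ["broker"],
    "broker": ["consumer", "archive"],
    "archive": ["consumer"],
}
TASK = {"sender": "producer", "receiver": "consumer"}

def watermark_channels(net, src, dst, path=()):
    path = path + (src,)
    if src == dst:
        yield path
        return
    for nxt in net.get(src, []):
        if nxt not in path:
            yield from watermark_channels(net, nxt, dst, path)

channels = list(watermark_channels(NETWORK, TASK["sender"], TASK["receiver"]))
selected = channels[0]            # step three: pick one alternative
print(channels, "->", selected)
```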

7 citations


Proceedings ArticleDOI
14 Jun 2010
TL;DR: In this article an existing formalisation methodology for malicious code is used to describe characteristics of the worm Conficker variant C to analyse oncoming threats to modern production engineering systems.
Abstract: In this article an existing formalisation methodology for malicious code [6] is used to describe characteristics of the worm Conficker variant C in order to analyse oncoming threats to modern production engineering systems. Based on the Conficker worm formalism and a component model of an exemplary production scenario (the automatic chamfering of great gears with an industrial robot), an exemplary methodology is demonstrated to analyse malware threats to component-related security aspects and to compare the criticality of different malware instances for a specific system. On the basis of this methodology, potential threats to the security of software and hardware components of the exemplary production scenario are simulated; the simulation could also be used to illustrate security threats and protection concepts with virtual techniques, helping software engineers to program secure software. The threats are illustrated by means of four exemplary scenarios.
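The comparison of malware criticality against a component model can be caricatured in a few lines of Python; the capability sets, components and scoring rule below are purely illustrative assumptions and not the formalisation from [6].

```python
# Toy sketch of the comparison idea: intersect a malware formalisation (its
# propagation and payload capabilities) with a component model of the
# production cell to rank criticality. All names are illustrative assumptions.
CONFICKER_C = {"smb_exploit", "autorun_spread", "disable_av", "p2p_update"}

COMPONENTS = {
    "robot_controller": {
        "exposes": {"smb_exploit"},
        "aspects": {"integrity", "availability"},
    },
    "engineering_workstation": {
        "exposes": {"smb_exploit", "autorun_spread", "disable_av"},
        "aspects": {"confidentiality", "integrity"},
    },
}

def criticality(malware_capabilities, component):
    """Crude score: exploitable entry points times affected security aspects."""
    usable = malware_capabilities & component["exposes"]
    return len(usable) * len(component["aspects"])

for name, comp in COMPONENTS.items():
    print(name, "criticality:", criticality(CONFICKER_C, comp))
```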

6 citations


Proceedings ArticleDOI
14 Jun 2010
TL;DR: The paper describes the potential of a security warning that can be presented ahead of a traditional safety warning, which would only indicate safety-relevant implications potentially arising later as a consequence of the preceding security incident.
Abstract: In this paper, we present an approach for designing security warnings in vehicles for software-based security incidents. With this we pursue the goal of reducing safety-relevant component failures, which can be caused by manipulated or malicious software. The basis of our work is a theoretical analysis of the correlation of manipulated software (including malware) in automotive systems with the safety-relevant failures of system components. We describe the potential of a security warning, which can be presented ahead of a traditional safety warning: the latter would only indicate safety-relevant implications that potentially arise later as a consequence of the preceding security incident. In this paper we suggest three exemplary icons for a combined security-safety warning. Combined warning means a warning issued not at the time of a safety-relevant failure but already at the detection of the security violation (e.g. manipulated software in the vehicle). An essential precondition is a recognition algorithm for such malicious software, which has been examined in previous research like [3]. Based on theoretical analyses, we introduce an exemplary design for the testing of these warnings in a virtual environment, more precisely in a driving simulator. Several factors play a central role in such evaluations, such as perception, driver reaction, interpretation of the warnings and security awareness. The results can be interpreted in the context of the fundamental aim, the reduction of accidents by security alerts, and thus serve as a recommended course of action for implementation in future vehicles.

5 citations


Proceedings ArticleDOI
TL;DR: This paper compares the detection performances of the three most commonly used and widely available face detection algorithms to identify whether one of these systems is suitable for use in a vehicle environment with variable and mostly non-uniform illumination conditions, and whether any one face detection system can be sufficient for seat occupancy detection.
Abstract: Vehicle seat occupancy detection systems are designed to prevent the deployment of airbags at unoccupied seats, thus avoiding the considerable cost imposed by the replacement of airbags. Occupancy detection can also improve passenger comfort, e.g. by activating air-conditioning systems. The most promising development perspectives are seen in optical sensing systems, which have become cheaper and smaller in recent years. The most plausible way to check seat occupancy by persons is the detection of the presence and location of heads, or more precisely, faces. This paper compares the detection performance of the three most commonly used and widely available face detection algorithms: Viola-Jones, Kienzle et al. and Nilsson et al. The main objective of this work is to identify whether one of these systems is suitable for use in a vehicle environment with variable and mostly non-uniform illumination conditions, and whether any one face detection system can be sufficient for seat occupancy detection. The evaluation of detection performance is based on a large database comprising 53,928 video frames containing proprietary data collected from 39 persons of both sexes, different ages and body heights as well as different objects such as bags and rearward/forward-facing child restraint systems.
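Of the three compared detectors, Viola-Jones is readily available in OpenCV; a minimal per-frame occupancy check could look like the sketch below. The cascade file, the parameters and the face-implies-occupied decision rule are assumptions, not the paper's evaluation setup.

```python
# Minimal per-frame occupancy check with OpenCV's Viola-Jones face detector;
# cascade choice and parameters are assumed, not taken from the paper.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def seat_occupied(frame_gray, min_neighbors=5):
    # A detected face in the seat camera's frame is taken as "occupied".
    faces = cascade.detectMultiScale(frame_gray, scaleFactor=1.1,
                                     minNeighbors=min_neighbors)
    return len(faces) > 0
```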

4 citations


Proceedings ArticleDOI
TL;DR: A scalable security model - i.e. one with no fixed limitations on usable operations, users and objects - is proposed for digital long-term preservation, mainly to preserve the integrity of objects but also to ensure their authenticity.
Abstract: A continuously growing amount of today's information not only exists in digital form but was actually born digital. This information needs to be preserved, as it is part of our cultural and scientific heritage or because of legal requirements. As much of this information is born digital, it has no analog original and cannot be preserved by traditional means without losing its original representation. Thus digital long-term preservation becomes increasingly important and is tackled by several international and national projects like the US National Digital Information Infrastructure and Preservation Program [1], the German NESTOR project [2] and the EU FP7 SHAMAN Integrated Project [3]. In digital long-term preservation the integrity and authenticity of the preserved information is of great importance, and ensuring it is a challenging task considering the requirement to enforce both security aspects over a long time, often assumed to be at least 100 years. Therefore, in a previous work [4] we showed the general feasibility of the Clark-Wilson security model [5] for digital long-term preservation in combination with a syntactic and semantic verification approach [6] to tackle these issues. In this work we carry out a more detailed investigation and show exemplarily the influence of the application of such a security model on the use cases and roles of a digital long-term preservation environment. Our goal is a scalable security model - i.e. one with no fixed limitations on usable operations, users and objects - mainly for preserving the integrity of objects but also for ensuring authenticity.
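At its core, Clark-Wilson authorises operations through certified triples of (role, transformation procedure, data item class); a toy enforcement check is sketched below. The roles, procedures and object classes are assumptions chosen to resemble a preservation workflow, not those defined in the paper.

```python
# Hedged sketch of Clark-Wilson style enforcement in a preservation archive:
# only certified (role, transformation procedure, object class) triples may
# operate on constrained data items. All names are assumed examples.
ALLOWED_TRIPLES = {
    ("archivist", "ingest", "submission_package"),
    ("preservation_planner", "migrate", "archival_package"),
    ("consumer", "access", "dissemination_package"),
}

def authorize(role, procedure, object_class):
    """Enforcement rule: reject any operation not covered by a certified triple."""
    return (role, procedure, object_class) in ALLOWED_TRIPLES

assert authorize("archivist", "ingest", "submission_package")
assert not authorize("consumer", "migrate", "archival_package")
```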

4 citations


Proceedings ArticleDOI
TL;DR: The basic idea of the watermark generation and embedding scheme is to combine traditional frequency-domain spread-spectrum watermarking with psychoacoustic modeling to guarantee transparency, and alphabet substitution to improve the robustness.
Abstract: In the paper we present a watermarking scheme developed to meet the specific requirements of audio annotation watermarking robust against DA/AD conversion (watermark detection after playback by loudspeaker and recording with a microphone). Additionally, the described approach tries to achieve a comparably low detection complexity, so it could be embedded in the near future in low-end devices (e.g. mobile phones or other portable devices). In the field of annotation watermarking we assume that there is no specific motivation for attackers to attack the developed scheme. The basic idea of the watermark generation and embedding scheme is to combine traditional frequency-domain spread-spectrum watermarking with psychoacoustic modeling to guarantee transparency, and alphabet substitution to improve the robustness. The synchronization and extraction scheme is designed to be much less computationally complex than the embedder. The performance of the scheme is evaluated in the aspects of transparency, robustness, complexity and capacity. The tests reveal that 44% of the 375 tested audio files pass the simulation test for robustness, while the most suitable category even shows 100% robustness. Additionally, the introduced prototype shows an average transparency of -1.69 SDG, while at the same time having a capacity satisfactory for the chosen application scenario.
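A stripped-down frequency-domain spread-spectrum embedder and correlation detector are sketched below to illustrate the basic idea; the psychoacoustic model, alphabet substitution and synchronization of the actual scheme are omitted, and the key handling and scaling factor are assumptions.

```python
# Simplified spread-spectrum embedding/detection, one bit per block; this is
# only a sketch of the general technique, not the paper's scheme.
import numpy as np

def spreading_sequence(key, block_index, length):
    # Key-dependent pseudo-random chips, reproducible at the detector.
    return np.random.default_rng((key, block_index)).standard_normal(length)

def embed_bit(block, bit, key=42, block_index=0, alpha=0.01):
    spectrum = np.fft.rfft(block)
    chip = spreading_sequence(key, block_index, len(spectrum))
    spectrum += alpha * np.abs(spectrum) * chip * (1.0 if bit else -1.0)
    return np.fft.irfft(spectrum, n=len(block))

def detect_bit(block, key=42, block_index=0):
    spectrum = np.fft.rfft(block)
    chip = spreading_sequence(key, block_index, len(spectrum))
    return float(np.real(spectrum) @ chip) > 0.0   # sign of the correlation
```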

4 citations


01 Jan 2010
TL;DR: A prototype designed to produce forensically sound network data recordings using inexpensive hardware and software, the Linux Forensic Transparent Bridge (LFTB), is introduced, and its usability is shown in a support case and a malicious-activity scenario.
Abstract: In this paper we introduce a prototype that is designed to produce forensically sound network data recordings using inexpensive hardware and software, the Linux Forensic Transparent Bridge (LFTB). It supports the investigation of network communication parameters and of the payload of network data. The basis for the LFTB is a self-developed model of the forensic process which also addresses forensically relevant data types and considerations for the design of forensic software using software engineering techniques. The LFTB gathers forensic evidence to support cases such as malfunctioning hardware and software and the investigation of malicious activity. In the latter application the stealthy design of the proposed device is beneficial. Experiments as part of a first evaluation show its usability in a support case and a malicious-activity scenario. Effects on latency and throughput were tested and limitations of packet recording were analysed. A live monitoring scheme warning about potential packet loss endangering evidence has been implemented.
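The recording idea can be sketched with scapy: capture traffic on the bridge interface and seal the capture with a hash so later tampering becomes detectable. The interface name, packet count and SHA-256 sealing are assumptions; the actual LFTB is a Linux bridge with purpose-built recording software, not this script.

```python
# Sketch of forensically motivated packet recording: capture, store, and seal
# the capture with a digest. Interface and parameters are assumed examples.
import hashlib
from scapy.all import sniff, wrpcap   # third-party: pip install scapy

def record(interface="br0", count=1000, outfile="evidence.pcap"):
    packets = sniff(iface=interface, count=count)
    wrpcap(outfile, packets)
    digest = hashlib.sha256(open(outfile, "rb").read()).hexdigest()
    with open(outfile + ".sha256", "w") as f:
        f.write(digest + "\n")         # stored alongside the recording
    return digest
```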

Proceedings ArticleDOI
TL;DR: This paper presents a framework to ensure the integrity and authenticity of digital objects, especially images, from their submission to a digital long-term preservation system up to their later access and even beyond, and describes how to detect whether a digital object has retained both security aspects while still allowing changes made to it by migration.
Abstract: Digital long-term preservation has become an important topic, not only in the preservation domain but also through several national and international projects like the US National Digital Information Infrastructure and Preservation Program [1], the German NESTOR project [2] and the EU FP7 SHAMAN Integrated Project [3]. The reason for this is that a large part of the documents and other goods produced nowadays are digital in nature, and some - called "born-digital" - have no analog master. Thus a great part of our cultural and scientific heritage for the coming generations is digital and needs to be preserved as reliably as is the case for physical objects, even surviving hundreds of years. However, the continuous succession of new hardware and software generations, arriving at very short intervals compared to the mentioned time spans, renders digital objects from just a few generations ago inaccessible. Thus they need to be migrated onto new hardware and into newer formats. At the same time the integrity and authenticity of the preserved information is of great importance and needs to be ensured. However, this becomes a challenging task considering the long time spans and the necessary migrations which alter the digital object. Therefore, in a previous work [4] we introduced a syntactic and semantic verification approach in combination with the Clark-Wilson security model [5]. In this paper we present a framework to ensure the integrity and authenticity of digital objects, especially images, from the time of their submission to a digital long-term preservation system (ingest) up to their later access and even beyond. The framework especially describes how to detect whether the digital object has retained both of its security aspects while at the same time allowing changes made to it by migration.
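The distinction between bit-level and content-level preservation can be illustrated with a small sketch: a cryptographic hash pins the exact bitstream (and is necessarily broken by any format migration), while a coarse perceptual fingerprint is intended to survive the migration of an image. The average-hash fingerprint below is an assumption standing in for the paper's syntactic and semantic verification.

```python
# Sketch of two-level verification across a migration: bitstream hash versus
# a coarse content fingerprint. The fingerprint is an illustrative stand-in.
import hashlib
from PIL import Image   # third-party: pip install pillow

def syntactic_hash(path):
    return hashlib.sha256(open(path, "rb").read()).hexdigest()

def semantic_fingerprint(path, size=8):
    # Average-hash style fingerprint of the decoded image content.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    return "".join("1" if p > avg else "0" for p in pixels)

def verify_after_migration(original, migrated):
    same_bits = syntactic_hash(original) == syntactic_hash(migrated)
    same_content = semantic_fingerprint(original) == semantic_fingerprint(migrated)
    return same_bits, same_content
```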


Proceedings ArticleDOI
09 Sep 2010
TL;DR: This work proposes a novel approach for matching and preservation of face images based on an Adaptive Resonance Theory MAPping (ARTMAP) network and shows that, compared to the nearest neighbor rule, the presented classification approach has better verification performance and a more compact template representation.
Abstract: We propose a novel approach for matching and preservation of face images based on an Adaptive Resonance Theory MAPping (ARTMAP) network. ART networks possess an incrementally growing structure and provide stable on-line learning, which ensures that all patterns presented to the network will be learned and compactly stored. Moreover, the network's weights are adapted after each classification. These characteristics are important for the successful recognition of an object whose patterns change considerably over time. In our implementation, called FaceART, the network is trained on raw images as well as on eigenface decomposition coefficients. In order to compare the error rates of the implemented system to existing academic face recognition systems, the XM2VTS database with the Lausanne protocol is employed. We show that, compared to the nearest neighbor rule, the presented classification approach has better verification performance and a more compact template representation.
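The eigenface front-end mentioned in the abstract, together with the nearest-neighbour baseline the paper compares against, can be sketched as follows; the PCA dimensionality and the Euclidean matching rule are assumptions, and the ARTMAP classifier itself is not reproduced.

```python
# Sketch of the eigenface (PCA) representation plus a nearest-neighbour
# baseline; FaceART's ARTMAP network is not shown here.
import numpy as np
from sklearn.decomposition import PCA

def train_eigenfaces(train_images, n_components=50):
    """train_images: list of equally sized grayscale face images (2-D arrays)."""
    X = np.vstack([img.ravel() for img in train_images]).astype(float)
    pca = PCA(n_components=n_components).fit(X)
    return pca, pca.transform(X)            # model and gallery coefficients

def nearest_neighbour(pca, gallery_coeffs, gallery_labels, probe_image):
    probe = pca.transform(probe_image.ravel().astype(float)[None, :])
    distances = np.linalg.norm(gallery_coeffs - probe, axis=1)
    return gallery_labels[int(np.argmin(distances))]
```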