
Showing papers by "Jana Dittmann published in 2014"


Journal ArticleDOI
TL;DR: A context-based separation approach with an enhanced separation algorithm and optimised parameters is suggested for high-resolution samples of overlapped latent fingerprints, acquired contactlessly with a chromatic white light sensor.
Abstract: Overlapped latent fingerprints occurring at crime scenes challenge forensic investigations, as they cannot be properly processed unless separated. Addressing this, Chen et al. proposed a relaxation-labelling-based approach on simulated samples, improved by Feng et al. for conventionally developed latent ones. As the development of advanced contactless nanometre-range sensing technology keeps broadening the vision of forensics, the authors use a chromatic white light sensor for contactless, non-invasive acquisition. This preserves the fingerprints for further investigations and enhances existing separation techniques. Motivated by the trend in dactyloscopy that investigations now aim not only at identification but also at retrieving further context of the fingerprints (e.g. chemical composition, age), a context-based separation approach is suggested for high-resolution samples of overlapped latent fingerprints. The authors' conception of context-aware data processing is introduced to analyse the context in this forensic scenario, yielding an enhanced separation algorithm with optimised parameters. Two test sets are generated for evaluation, one consisting of 60 authentic overlapped fingerprints on three substrates and the other of 100 conventionally developed latent samples from the work of Feng et al. An equal error rate of 5.7% is achieved on the first test set, which shows improvement over their previous work, and 17.9% on the second.
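The equal error rate reported above can be restated conceptually as the operating point at which the false non-match and false match rates coincide. The following minimal Python sketch is not taken from the paper; the threshold sweep and the synthetic score arrays are illustrative assumptions showing one common way to compute an EER from genuine and impostor separation scores.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Approximate the EER by sweeping a decision threshold over all observed scores.

    genuine_scores:  scores for correctly separated/matched pairs
    impostor_scores: scores for wrong pairs
    (Higher score = more likely genuine; an assumption, not from the paper.)
    """
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        fnmr = np.mean(genuine_scores < t)    # genuine samples rejected
        fmr = np.mean(impostor_scores >= t)   # impostor samples accepted
        if abs(fnmr - fmr) < best_gap:
            best_gap, eer = abs(fnmr - fmr), (fnmr + fmr) / 2.0
    return eer

# Illustrative usage with random scores (no relation to the reported 5.7% / 17.9%):
rng = np.random.default_rng(0)
print(equal_error_rate(rng.normal(0.7, 0.1, 60), rng.normal(0.4, 0.1, 60)))
```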

11 citations


Proceedings ArticleDOI
27 Mar 2014
TL;DR: This first systematic study includes a four-step attack chain, 14 main attack context properties (to describe the attack) and 15 correspondingly derived detection and anomaly properties for use by forensic experts.
Abstract: In our paper we investigate attacks based on artificial sweat printed fingerprint forgeries at crime scenes by studying the attack chain in general and deriving potential context properties from the attack, which help to describe the attack more precisely. Based on the attack chain context properties, potential detection and context anomaly properties are derived and suggested as an enabler for proper detection of such forgeries by forensic experts during forensic investigation and interpretation of traces. It is a first study intended to open the discussion with the community and to motivate further work addressing open issues in this domain of crime scene forgery detection. Potential means known from biometric analysis as well as from media forensics are included to enhance forensic trace interpretation. In summary, our first systematic study includes a four-step attack chain, 14 main attack context properties (to describe the attack) and 15 correspondingly derived detection and anomaly properties for use by forensic experts. A first simulation of the application for two exemplary attacks (A and B) shows how our defined properties can be used from two viewpoints: which of the properties a particular attack exhibits, and which properties might help to identify the trace as an artificial sweat printed fingerprint forgery during forensic interpretation (without prior knowledge of the attack's existence).
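The paper's 14 attack context properties and 15 derived detection properties are not listed in this abstract. Purely as a hypothetical illustration of how such a mapping could be organised, the sketch below (all property names are invented placeholders, as are the attacks "A" and "B" shown here) represents attacks as sets of context properties and looks up which detection or anomaly properties might apply to a questioned trace.

```python
# Hypothetical sketch: mapping attack context properties to detection properties.
# Property names are invented placeholders, not the paper's actual 14/15 properties.
DETECTION_FOR_CONTEXT = {
    "printed_with_inkjet": ["dot_pattern_visible", "uniform_droplet_spacing"],
    "artificial_sweat_used": ["atypical_chemical_composition"],
    "printed_on_foil": ["missing_substrate_interaction"],
}

ATTACKS = {
    "A": {"printed_with_inkjet", "artificial_sweat_used"},
    "B": {"printed_with_inkjet", "printed_on_foil"},
}

def detection_properties(attack_name):
    """Collect the detection/anomaly properties suggested by an attack's context."""
    props = set()
    for ctx in ATTACKS[attack_name]:
        props.update(DETECTION_FOR_CONTEXT.get(ctx, []))
    return sorted(props)

for name in ATTACKS:
    print(name, detection_properties(name))
```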

8 citations


Proceedings ArticleDOI
11 Jun 2014
TL;DR: The evaluation based on 6000 samples indicates that StirTrace is suitable for simulating influence factors, resulting in 195,000 simulated samples overall, and that the new feature space enhancement is capable of handling banding, rotation, removal of lines and columns, and shearing artifacts.
Abstract: Artificial sweat printed fingerprints need to be detected during crime scene investigations of latent fingerprints. Several detection approaches have been suggested on rather small test sets. In this paper we use the findings from StirMark applied to exemplar fingerprints to build a new StirTrace tool for simulating different printer effects and enhancing test sets for benchmarking detection approaches. We show how different influence factors during the printing process and the acquisition of the scanned sample can be simulated. Furthermore, two new feature classes are suggested to improve detection performance for banding and rotation effects during printing. The results are compared with the original, existing detection feature space. Our evaluation based on 6000 samples indicates that StirTrace is suitable for simulating influence factors, resulting in 195,000 simulated samples overall. Furthermore, the original and our extended feature sets show resistance towards image manipulations, with the exception of scaling (to 50 and 200%) and cropping to 25%. The new feature space enhancement is capable of handling banding, rotation, removal of lines and columns, and shearing artifacts, while the original feature space performs better for additive noise, median cut and stretching in the X-direction.
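StirTrace itself is not described at implementation level in this abstract. As a rough, assumption-laden sketch of the kind of printer artefact simulation mentioned (banding and rotation only), the following Python/NumPy/SciPy snippet darkens periodic rows of a grayscale scan and applies a slight rotation; the parameter values and the use of SciPy are illustrative choices, not the tool's actual behaviour.

```python
import numpy as np
from scipy.ndimage import rotate

def simulate_banding(image, period=20, strength=0.15):
    """Darken every `period`-th row to mimic printer banding (illustrative only)."""
    out = image.astype(float).copy()
    out[::period, :] *= (1.0 - strength)
    return np.clip(out, 0, 255).astype(np.uint8)

def simulate_rotation(image, angle_deg=2.0):
    """Slightly rotate the scan, mimicking misalignment during printing/scanning."""
    return rotate(image, angle_deg, reshape=False, mode="nearest").astype(np.uint8)

# Usage on a synthetic 'scan' (random noise stands in for a fingerprint scan):
scan = (np.random.default_rng(1).random((200, 200)) * 255).astype(np.uint8)
degraded = simulate_rotation(simulate_banding(scan), angle_deg=1.5)
```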

5 citations


Proceedings ArticleDOI
01 Oct 2014
TL;DR: A degree of persistence measure and a protocol for its computation are introduced, allowing for a flexible extraction of time domain information based on different features and approximation techniques, and an increased separation performance is achieved when using the time domain signal instead of spatial segmentation.
Abstract: In forensic applications, traces are often hard to detect and segment from challenging substrates at crime scenes. In this paper, we propose to use the temporal domain of forensic signals as a novel feature space to provide additional information about a trace. In particular, we introduce a degree of persistence measure and a protocol for its computation, allowing for a flexible extraction of time domain information based on different features and approximation techniques. Using the example of latent fingerprints on semi-/porous surfaces and a CWL sensor, we show the potential of such an approach to achieve increased performance for the challenge of separating prints from background. Based on 36 earlier introduced spectral texture features, we achieve an increased separation performance (0.01 ≤ Δκ ≤ 0.13, corresponding to 0.6% to 6.7%) when using the time domain signal instead of spatial segmentation. The test set consists of 60 different prints on photographic, catalogue and copy paper, acquired in a sequence of ten consecutive scans. We observe a dependency on the substrate used as well as on the number of consecutive images, and identify the accuracy and reproducibility of the capturing device as the main limitation, proposing additional steps for even higher performance in future work.
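The exact definition of the degree of persistence measure is not given in the abstract. As a hedged illustration of the general idea, i.e. summarising how a signal at each location evolves over a sequence of consecutive scans, the sketch below fits a linear trend to every pixel's time series and uses the fitted slope as a simple persistence-style descriptor; the per-pixel granularity, the linear fit and the slope-as-measure choice are all assumptions, not the paper's protocol.

```python
import numpy as np

def persistence_map(scan_sequence):
    """scan_sequence: array of shape (T, H, W), T consecutive scans of the same area.

    Returns per-pixel slopes of a least-squares linear fit over time; a slope near
    zero indicates a persistent signal, a strongly positive or negative slope one
    that changes across scans. Purely illustrative, not the paper's measure.
    """
    seq = np.asarray(scan_sequence, dtype=float)
    t = np.arange(seq.shape[0])
    t_centered = t - t.mean()
    values_centered = seq - seq.mean(axis=0)
    # slope = sum((t - mean_t) * (x - mean_x)) / sum((t - mean_t)^2), per pixel
    slope = np.tensordot(t_centered, values_centered, axes=(0, 0)) / (t_centered ** 2).sum()
    return slope

# Usage: ten simulated scans of a 50x50 region
seq = np.random.default_rng(2).random((10, 50, 50))
print(persistence_map(seq).shape)   # (50, 50)
```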

4 citations


Proceedings ArticleDOI
TL;DR: An extension of the feature set with semantic features derived from known Gabor-filter-based exemplar fingerprint enhancement techniques, using an epsilon-neighborhood of each block to achieve improved accuracy, is proposed; first preliminary results support further research into VQIs as a contrast enhancement quality indicator for a given feature space.
Abstract: In crime scene forensics, latent fingerprints are found on various substrates. Nowadays, primarily physical or chemical preprocessing techniques are applied to enhance the visibility of the fingerprint trace. In order to avoid altering the trace, it has been shown that contactless sensors offer a non-destructive acquisition approach. Here, the exploitation of fingerprint or substrate properties and the utilization of signal processing techniques are essential requirements to enhance the fingerprint visibility. However, the optimal sensor choice in particular is often substrate-dependent. An enhanced generic pattern-recognition-based contrast enhancement approach for scans from a chromatic white light sensor is introduced in Hildebrandt et al. [1], using statistical, structural and Benford's law [2] features for blocks of 50 microns. This approach achieves very good results for latent fingerprints on cooperative, non-textured, smooth substrates. However, on textured and structured substrates the error rates are very high, making the approach unsuitable for forensic use cases. We propose extending the feature set with semantic features derived from known Gabor-filter-based exemplar fingerprint enhancement techniques, using an epsilon-neighborhood of each block in order to achieve improved accuracy (called fingerprint ridge orientation semantics). Furthermore, we use rotation-invariant Hu moments as an extension of the structural features and two additional preprocessing methods (separate X- and Y-Sobel operators). This results in a 408-dimensional feature space. In our experiments we investigate and report the recognition accuracy for eight substrates, each with ten latent fingerprints: white furniture surface, veneered plywood, brushed stainless steel, aluminum foil, "Golden-Oak" veneer, non-metallic matte car body finish, metallic car body finish and blued metal. In comparison to Hildebrandt et al. [1], our evaluation shows a significant reduction of the error rates by 15.8 percentage points on brushed stainless steel using the same classifier. This also allows for a successful biometric matching of 3 of the 8 latent fingerprint samples with the corresponding exemplar fingerprint on this particular substrate. For contrast enhancement analysis of classification results, we suggest using known Visual Quality Indexes (VQIs) [3] as a contrast enhancement quality indicator and discuss our first preliminary results using the exemplarily chosen VQI Edge Similarity Score (ESS) [4], showing a tendency that higher image differences between a substrate containing a fingerprint and a blank substrate surface correlate with a higher recognition accuracy between a latent fingerprint and an exemplar fingerprint. These first preliminary results support further research into VQIs as a contrast enhancement quality indicator for a given feature space.
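One ingredient of the extended feature set, the rotation-invariant Hu moments computed per block, is simple enough to sketch in isolation. The snippet below is only that single ingredient under assumptions (block size, log-scaling and the use of OpenCV are illustrative choices), not the authors' full 408-dimensional feature space.

```python
import numpy as np
import cv2  # OpenCV, assumed available

def blockwise_hu_moments(image, block=32):
    """Return the 7 rotation-invariant Hu moments for each block of a grayscale image.

    Block size and log-scaling are illustrative choices, not taken from the paper.
    """
    h, w = image.shape
    features = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = image[y:y + block, x:x + block]
            hu = cv2.HuMoments(cv2.moments(patch)).flatten()
            # log transform to compress the large dynamic range of Hu moments
            hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
            features[(y, x)] = hu
    return features

# Usage on a synthetic grayscale scan
img = (np.random.default_rng(3).random((128, 128)) * 255).astype(np.uint8)
feats = blockwise_hu_moments(img)
print(len(feats), next(iter(feats.values())).shape)  # 16 blocks, each a 7-value vector
```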

3 citations


Proceedings ArticleDOI
12 May 2014
TL;DR: An approach to project different types of communications onto a comparable template is presented and this hierarchical approach for the classification of electronic communications is both exhaustive and expandable.
Abstract: With this paper we aim to support network traffic management and incident management processes. Hence, this paper introduces a model to classify different types of internet-based communication and to establish homogeneous representations for various forms of internet-based communication. To achieve these aims, an approach to project different types of communication onto a comparable template is presented. This hierarchical approach for the classification of electronic communications is both exhaustive (in the sense of considered types of internet-based communication) and expandable (in terms of the level of granularity of the performed communication behaviour modelling as well as the corresponding data modelling).
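The abstract does not spell out the template's fields. Purely as a hypothetical illustration of what projecting heterogeneous communication types onto one comparable representation could look like, the sketch below defines a minimal common template (all field names are invented) and maps two example communication forms onto it.

```python
from dataclasses import dataclass

@dataclass
class CommunicationTemplate:
    """Hypothetical common representation; fields are invented, not the paper's model."""
    channel: str          # e.g. "email", "instant_message"
    initiator: str
    responder: str
    synchronous: bool
    payload_kind: str     # e.g. "text", "binary"

def from_email(sender, recipient):
    return CommunicationTemplate("email", sender, recipient, synchronous=False, payload_kind="text")

def from_chat(user_a, user_b):
    return CommunicationTemplate("instant_message", user_a, user_b, synchronous=True, payload_kind="text")

print(from_email("alice@example.org", "bob@example.org"))
print(from_chat("alice", "bob"))
```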

1 citation


Proceedings ArticleDOI
27 Mar 2014
TL;DR: As a first result for further discussion, the Luminance Similarity Score (LSS) performs best in intra-sensor, intra-trace scenarios for fingerprint traces, while Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR) and PSNR in the YUV Color Space (PSNRY) perform equally best for synthetic fiber traces.
Abstract: Testability and error rates are fundamental requirements for forensic investigation methods. For digitized forensics (e.g. fingerprint or fiber traces), signal processing and pattern recognition concepts are used, but the quality of the acquisition process is sparsely researched. In our article we outline a first approach to address this and to open the discussion about the general challenge. We propose using Visual Quality Indices (VQIs) as an objective measure to compare scanned (digitized) traces from 3 different 3D contactless sensors (intensity + topography data) and 1 sensor with spectrometer-based wavelength output. We propose a fitness matrix containing VQI results for trace data acquired with each sensor and suggest using the number of outliers (non-occupied elements of the main diagonal) as a fitness measure. We compare 182 data sets using 4 contactless sensors for 2 consecutive scans of 13 traces (3 fingerprints + 10 fibers, with 78 intensity and 78 topography data sets for the three 3D sensors and 26 for the spectrometer-based wavelength output). We compare the 2 consecutive scans against each other by computing VQIs for all 7 outputs (intensity, topography and wavelength output) per trace, yielding 196 (14×14) single comparisons per trace and VQI (356,720 comparisons in total). We investigate which VQIs perform best at selecting intra-sensor, intra-trace data from the combined intra- and inter-sensor data, giving a starting point for further inter-sensor investigations. Our research shows that the Luminance Similarity Score (LSS) performs best in intra-sensor, intra-trace scenarios for fingerprint traces, while Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR) and PSNR in the YUV Color Space (PSNRY) perform equally best for synthetic fiber traces, as a first result for further discussion.
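Among the visual quality indices named above, MSE and PSNR are standard and easy to restate. The sketch below computes them for two scans of the same trace; 8-bit grayscale input is assumed, and this is generic textbook code rather than the paper's VQI tooling (LSS and PSNRY are not reimplemented here).

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two equally sized grayscale images."""
    a = a.astype(float)
    b = b.astype(float)
    return np.mean((a - b) ** 2)

def psnr(a, b, max_value=255.0):
    """Peak signal-to-noise ratio in dB (returns inf for identical images)."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(max_value ** 2 / m)

# Usage: compare two consecutive scans (synthetic stand-ins here)
rng = np.random.default_rng(4)
scan1 = (rng.random((100, 100)) * 255).astype(np.uint8)
scan2 = np.clip(scan1 + rng.normal(0, 2, scan1.shape), 0, 255).astype(np.uint8)
print(mse(scan1, scan2), psnr(scan1, scan2))
```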

1 citation


Proceedings Article
16 Nov 2014
TL;DR: This paper proposes an extension of a similarity verification system with the help of the Paillier cryptosystem for signal processing in the encrypted domain for privacy-preserving biometric authentication and focuses on performance issues with respect to database response time.
Abstract: Nowadays, biometric data are used more and more within authentication processes. These data are often stored in databases; however, they are subject to inherent privacy concerns, so special attention should be paid to their handling. We propose an extension of a similarity verification system with the help of the Paillier cryptosystem. In this paper, we use this system for signal processing in the encrypted domain for privacy-preserving biometric authentication, adapting a biometric authentication system to enhance privacy. We focus on performance issues with respect to database response time for our authentication process. Although encryption implies computational effort, we show that only a small computational overhead is required, and we evaluate our implementation with respect to performance. The concept of verifying encrypted biometric data comes at the cost of increased computational effort compared with already available biometric systems; however, currently available systems lack privacy-enhancing technologies. Our findings emphasize that a focus on privacy in the context of user authentication is feasible. This solution leads to user-centric applications regarding authentication. As an additional benefit, it becomes more difficult to obtain results via data mining in the domain of user tracking.
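The abstract gives no implementation details of the Paillier-based verification. As a minimal, deliberately insecure textbook sketch of the additive homomorphism that makes signal processing in the encrypted domain possible (tiny hardcoded primes, no relation to the authors' system or parameters), consider the following.

```python
import math
import random

# Toy Paillier keypair from two small primes (insecure, illustration only).
p, q = 10007, 10009
n = p * q
n_sq = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)   # valid because L(g^lam mod n^2) = lam mod n when g = n + 1

def encrypt(m):
    r = random.randrange(2, n)   # should be coprime to n; overwhelmingly likely here
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    L = (pow(c, lam, n_sq) - 1) // n
    return (L * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
a, b = 1234, 5678
assert decrypt((encrypt(a) * encrypt(b)) % n_sq) == a + b
print("E(a)*E(b) decrypts to", decrypt((encrypt(a) * encrypt(b)) % n_sq))
```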

1 citation


Proceedings ArticleDOI
01 Nov 2014
TL;DR: A new approach of determining a more reliable “fitness level” that extends the basic mileage-based estimation using multimodal, complementary sensor information already present in modern cars is proposed.
Abstract: Besides its overall optical impression, the assessment of a car's value and/or condition today is widely based on its (mile)age as a primary indicator. This is a poor and unreliable concept because mileage alone is not a representative indicator of a car's condition (which depends on many more factors) and constitutes a focal point for (frequently successful) attacks. In this paper we propose a new approach for determining a more reliable "fitness level" that extends the basic mileage-based estimation. We illustrate the advantages a fitness level estimation would yield for different use cases, considering security issues such as attack resistance. We realize this approach using multimodal, complementary sensor information already present in modern cars. To open the discussion with the community, we discuss the potential of a first set of 18 proposed properties, comprising 9 physical, 4 digital and 5 behavior-based ones. Further, this paper proposes a first concept to evaluate the significance of these properties by discussing their explanatory power, freshness, security and available options to verify their plausibility. To a basic extent, this concept could be applied to existing cars, which is illustrated by a practical analysis of a laboratory setup of a 2008 SUV vehicle and a real 2006 limousine car.
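The 18 proposed properties and the significance criteria are only named, not quantified, in this abstract. The sketch below is therefore a purely hypothetical weighted-normalisation illustration of turning a handful of invented physical, digital and behavior-based readings into a single "fitness level"; neither the property names nor the scaling reflect the paper's concept.

```python
# Hypothetical fitness-level aggregation; property names, weights and normalisation
# are invented for illustration and are not the paper's 18 properties.
PROPERTY_WEIGHTS = {
    "mileage_km": (-1.0, 300000.0),           # (direction, normalisation scale), physical
    "brake_pad_wear_pct": (-1.0, 100.0),      # physical
    "ecu_error_count": (-1.0, 50.0),          # digital
    "harsh_braking_per_100km": (-1.0, 20.0),  # behavior-based
}

def fitness_level(readings):
    """Map sensor readings to a score in [0, 1]; higher means better condition."""
    score = 0.0
    for name, (direction, scale) in PROPERTY_WEIGHTS.items():
        value = min(readings.get(name, 0.0) / scale, 1.0)
        score += (1.0 - value) if direction < 0 else value
    return score / len(PROPERTY_WEIGHTS)

print(fitness_level({"mileage_km": 120000, "brake_pad_wear_pct": 40,
                     "ecu_error_count": 3, "harsh_braking_per_100km": 5}))
```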

01 Jan 2014
TL;DR: This paper presents a concept and prototype of a software-based security guide intended to make primary school children aged 6 to 10 aware of potential security threats on the internet and to teach them competencies for handling security mechanisms.
Abstract: Primary school children already use the internet regularly. In doing so, they are frequently exposed to security threats that they cannot handle. This paper presents a concept and prototype of a software-based security guide intended to make primary school children aged 6 to 10 aware of potential security threats on the internet and to teach them competencies for handling security mechanisms. The prototype was evaluated in a user study at a primary school using a methodology of our own. The presumed learning effects, however, still have to be confirmed by further tests in the future. 1 Introduction and Motivation The internet occupies an ever more important place in our society. Internet users can be found in all age groups. Various studies show that the proportion of users of primary school age in particular is steadily increasing. According to [BR12], around 40% of 6- to 10-year-olds surf the internet at least once a week. At the same time, primary school children in particular are poorly sensitized to online threats, as various studies [BR12][LHGO11][KHFD12] unanimously show. Causes for young schoolchildren taking security risks on the internet include, among others, a lack of awareness of online threats and an insufficient ability to cope with or avert critical situations [OEC11]. While surfing the internet, children can be exposed to various security threats. The most frequent security-critical online activities of children are: publishing personal information, accessing untrustworthy websites or content, downloading content from untrustworthy websites, and chatting with strangers. Potential security-critical consequences can be, for example, the misuse of personal information by strangers, cyberbullying or the installation of malware [FSRD13]. For children's later development in dealing with the internet, it is therefore beneficial to convey to them, already at an early age, an awareness of the dangers of the internet. Furthermore, they should be given assistance in handling security mechanisms. This work cannot be accomplished by parents or teachers alone, who often, with regard to media-educational safety on the internet, do not