Author
Jan Thomanek
Other affiliations: IAV
Bio: Jan Thomanek is an academic researcher at Chemnitz University of Technology. His research focuses on pedestrian detection and pixel-level image fusion. He has an h-index of 4 and has co-authored 6 publications receiving 32 citations. Previous affiliations of Jan Thomanek include IAV.
Papers
06 Dec 2011
TL;DR: Three different fusion techniques are proposed to combine the advantages of two vision sensors, a far-infrared (FIR) and a visible light camera, and the results of the pedestrian classification are compared.
Abstract: Pedestrian detection is an important field in computer vision with applications in surveillance, robotics and driver assistance systems. The quality of such systems can be improved by the simultaneous use of different sensors. This paper proposes three different fusion techniques to combine the advantages of two vision sensors -- a far-infrared (FIR) and a visible light camera. Different fusion methods taken from various levels of information representation are briefly described and finally compared regarding the results of the pedestrian classification.
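As a rough illustration of fusion at the decision level, one of the levels of information representation this paper compares, the following minimal Python sketch combines per-window classifier scores from an FIR and a visible-light detector by a weighted average. The function name, weights, and threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fuse_detection_scores(scores_fir, scores_vis, w_fir=0.5, threshold=0.5):
    """Decision-level (late) fusion: combine per-window classifier scores
    from the FIR and visible-light detectors by a weighted average and
    apply a single acceptance threshold afterwards."""
    scores_fir = np.asarray(scores_fir, dtype=np.float64)
    scores_vis = np.asarray(scores_vis, dtype=np.float64)
    fused = w_fir * scores_fir + (1.0 - w_fir) * scores_vis
    return fused, fused >= threshold

# Example: three candidate windows scored by both detectors.
fused_scores, accepted = fuse_detection_scores([0.9, 0.4, 0.2], [0.7, 0.6, 0.1])
print(fused_scores, accepted)
```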
15 citations
01 Jan 2013
TL;DR: The prototyping road side unit (RSU) PROTECT1 from Chemnitz University of Technology is presented as a generic intelligent transportation system (ITS) with a variety of sensors, computing units, and different human–machine interfaces (HMIs) to allow the fast implementation and evaluation of new ADASs and safety applications for vulnerable road users (VRUs).
Abstract: The steadily increasing complexity of cooperative Advanced Driver Assistance Systems (ADASs) requires efficient development and prototyping strategies for new ADAS applications. In this paper the prototyping road side unit (RSU) PROTECT1 from Chemnitz University of Technology is presented as a generic intelligent transportation system (ITS) with a variety of sensors, computing units, and different human–machine interfaces (HMIs) in order to allow the fast implementation and evaluation of new ADASs and safety applications for vulnerable road users (VRUs). It is an integral part of the rapid prototyping framework for ADAS applications of the Professorship for Communications Engineering, which furthermore consists of two multisensor-equipped vehicles CARAI 1/2 and the modular software prototyping framework BASELABS Suite.
6 citations
17 Nov 2009
TL;DR: Experimental results show that a detector based on a fused image sequence outperforms a detector based on just a single sensor.
Abstract: This contribution presents an approach to improving the classifier performance of an existing pedestrian detection system by using pixel-based data fusion of FIR and NIR sensors. The advantage of the proposed method is that the fused images are more suitable for the subsequent feature extraction. Both the algorithm of the pedestrian detection system and the pixel-based fusion techniques used are presented. Experimental results show that a detector based on the fused image sequence outperforms a detector based on just a single sensor.
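As a loose illustration of fusion at the pixel level, the sketch below blends two registered single-channel frames with a fixed weight after normalization. OpenCV is assumed, and the simple weighted average is an illustrative stand-in for the fusion techniques actually used in the paper.

```python
import cv2
import numpy as np

def fuse_pixel_level(fir_img, nir_img, alpha=0.5):
    """Simple pixel-level fusion of two registered single-channel images:
    normalize both to [0, 1] and blend them with a fixed weight."""
    fir = cv2.normalize(fir_img.astype(np.float32), None, 0.0, 1.0, cv2.NORM_MINMAX)
    nir = cv2.normalize(nir_img.astype(np.float32), None, 0.0, 1.0, cv2.NORM_MINMAX)
    return cv2.addWeighted(fir, alpha, nir, 1.0 - alpha, 0.0)

# Example with synthetic data; real use assumes geometrically registered frames.
fir = np.random.randint(0, 255, (240, 320), dtype=np.uint8)
nir = np.random.randint(0, 255, (240, 320), dtype=np.uint8)
fused = fuse_pixel_level(fir, nir)
```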
6 citations
07 Oct 2014
TL;DR: An early fusion technique to combine the advantages of two or more vision sensors creates a single composite image that will be more comprehensive for further computer vision tasks, e.g. object detection.
Abstract: Modern cooperative Advanced Driver Assistance Systems (ADASs) require efficient algorithms and methods for the real-time processing of all sensor data. In particular, systems for the combination of different sensors are gaining increasing importance. This paper proposes an early fusion technique to combine the advantages of two or more vision sensors. It creates a single composite image that will be more comprehensive for further computer vision tasks, e.g. object detection. After image registration, the presented pixel-based fusion framework transforms the registered sensor images into a common representational format by a multiscale decomposition. Denoising and multiscale edge detection are applied to the transformed data. Only data with a high activity level are considered for the fusion process, which is based on a probabilistic approach. Finally, the fused image can be the input for a subsequent feature extraction task. The proposed fusion technique is examined on a pedestrian detection system based on an infrared and a visible light camera.
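The abstract outlines a pipeline of multiscale decomposition, activity-level selection, and reconstruction. The following hedged sketch illustrates that general idea with a Laplacian pyramid and a simple maximum-magnitude activity rule; it is not the paper's transform or its probabilistic fusion rule, and all parameters are assumptions.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=3):
    """Build a Laplacian pyramid (a simple multiscale decomposition)."""
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    pyr = []
    for i in range(levels):
        h, w = gauss[i].shape[:2]
        up = cv2.pyrUp(gauss[i + 1], dstsize=(w, h))
        pyr.append(gauss[i] - up)        # band-pass detail levels
    pyr.append(gauss[-1])                # coarsest approximation
    return pyr

def fuse_pyramids(pyr_a, pyr_b):
    """Activity-level rule: per pixel keep the detail coefficient with the
    larger magnitude; average the coarsest approximation level."""
    fused = []
    for a, b in zip(pyr_a[:-1], pyr_b[:-1]):
        fused.append(np.where(np.abs(a) >= np.abs(b), a, b))
    fused.append(0.5 * (pyr_a[-1] + pyr_b[-1]))
    return fused

def reconstruct(pyr):
    """Collapse the fused pyramid back into a single image."""
    img = pyr[-1]
    for detail in reversed(pyr[:-1]):
        h, w = detail.shape[:2]
        img = cv2.pyrUp(img, dstsize=(w, h)) + detail
    return img

# Example with synthetic, registered frames of equal size.
ir = np.random.randint(0, 255, (256, 320), dtype=np.uint8)
vis = np.random.randint(0, 255, (256, 320), dtype=np.uint8)
fused = reconstruct(fuse_pyramids(laplacian_pyramid(ir), laplacian_pyramid(vis)))
```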
4 citations
06 Dec 2010
TL;DR: The presented pixel-based fusion technique is examined on the images of two sensors, a far-infrared (FIR) camera and a visible light camera, which are built into a vehicle.
Abstract: The proposed technique addresses a fusion method of two imaging sensors on pixel level. The fused image provides a scene representation which is robust against illumination changes and different weather conditions. Thus, the combination of the advantages of each camera will extend the capabilities for many computer vision applications, such as video surveillance and automatic object recognition. The presented pixel-based fusion technique is examined on the images of two sensors, a far-infrared (FIR) camera and a visible light camera, which are built into a vehicle. The sensor images are first decomposed using the Dyadic Wavelet Transform. The transformed data are combined in the wavelet domain controlled by a “goal-oriented” fusion rule. Finally, the fused wavelet representation image is processed by a pedestrian detection system.
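As a rough sketch of wavelet-domain fusion in the spirit of this abstract, the code below decomposes two registered images with PyWavelets, averages the approximation band, keeps the detail coefficient with the larger magnitude, and reconstructs. The standard decimated DWT and the max-magnitude rule are simplifying assumptions standing in for the Dyadic Wavelet Transform and the paper's “goal-oriented” rule.

```python
import numpy as np
import pywt

def wavelet_fuse(fir, vis, wavelet="db2", level=2):
    """Decompose both registered, equally sized images, combine coefficients
    in the wavelet domain (average the approximation band, keep the stronger
    detail coefficient), then reconstruct the fused image."""
    ca = pywt.wavedec2(fir.astype(np.float32), wavelet, level=level)
    cb = pywt.wavedec2(vis.astype(np.float32), wavelet, level=level)

    fused = [0.5 * (ca[0] + cb[0])]                      # approximation band
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)

# Example with synthetic data; real use assumes registered FIR/visible frames.
fir = np.random.randint(0, 255, (256, 256), dtype=np.uint8)
vis = np.random.randint(0, 255, (256, 256), dtype=np.uint8)
fused = wavelet_fuse(fir, vis)
```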
3 citations
Cited by
TL;DR: Different issues related to pedestrians as road users are discussed and a comprehensive survey and classification of the different solutions of pedestrian protection are provided.
Abstract: The increased urbanization and the drive for realizing smart cities have motivated extensive research and development in the realm of Intelligent Transportation Systems (ITS) in order to deal with the increased traffic intensity. Yet, the scope of ITS is not limited to vehicles, as pedestrians are special road users that play an important role in affecting traffic, road infrastructure, and vehicle design. Pedestrians are deemed to be the most vulnerable road users and are the major sufferers of road-incident fatalities and injuries each year. Therefore, quite a few studies have focused on pedestrian’s support and safety. On the other hand, due to imperceptible behavior, a pedestrian may also negatively impact traffic efficiency and can thus be viewed as an obstacle for fully realizing the advantages of ITS. In addition, more issues related to pedestrians have been raised with the emergence of Autonomous Vehicles (AVs). In this paper, we discuss different issues related to pedestrians as road users. Furthermore, we provide a comprehensive survey and classification of the different solutions of pedestrian protection. Finally, we highlight technical gaps and point out possible future research directions.
34 citations
TL;DR: A learning-based method for visible and thermal image fusion that focuses on generating fused images with high visual similarity to regular truecolor (red-green-blue or RGB) images, while introducing new informative details in pedestrian regions is proposed.
Abstract: Reliable vision in challenging illumination conditions is one of the crucial requirements of future autonomous automotive systems. In the last decade, thermal cameras have become more easily accessible to a larger number of researchers. This has resulted in numerous studies which confirmed the benefits of the thermal cameras in limited visibility conditions. In this paper, we propose a learning-based method for visible and thermal image fusion that focuses on generating fused images with high visual similarity to regular truecolor (red-green-blue or RGB) images, while introducing new informative details in pedestrian regions. The goal is to create natural, intuitive images that would be more informative than a regular RGB camera to a human driver in challenging visibility conditions. The main novelty of this paper is the idea to rely on two types of objective functions for optimization: a similarity metric between the RGB input and the fused output to achieve natural image appearance; and an auxiliary pedestrian detection error to help defining relevant features of the human appearance and blending them into the output. We train a convolutional neural network using image samples from variable conditions (day and night) so that the network learns the appearance of humans in the different modalities and creates more robust results applicable in realistic situations. Our experiments show that the visibility of pedestrians is noticeably improved especially in dark regions and at night. Compared to existing methods we can better learn context and define fusion rules that focus on the pedestrian appearance, while that is not guaranteed with methods that focus on low-level image quality metrics.
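To make the two-objective idea concrete, here is a minimal PyTorch sketch of a composite loss: an L1 similarity term to the RGB input plus a weighted auxiliary pedestrian term. The tiny fusion network, the per-pixel pedestrian-mask supervision, and the loss weight are placeholders assumed for illustration, not the architecture or detection error used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFusionNet(nn.Module):
    """Placeholder fusion network: takes a stacked RGB + thermal input
    (4 channels) and predicts a 3-channel fused image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, rgb, thermal):
        return self.net(torch.cat([rgb, thermal], dim=1))

def fusion_loss(fused, rgb, ped_logits, ped_mask, det_weight=0.1):
    """Composite objective: similarity to the RGB input plus an auxiliary
    pedestrian term (a per-pixel mask loss as a stand-in for a full
    detection error)."""
    similarity = F.l1_loss(fused, rgb)
    detection = F.binary_cross_entropy_with_logits(ped_logits, ped_mask)
    return similarity + det_weight * detection

# Toy forward/backward pass with random tensors.
model = TinyFusionNet()
ped_head = nn.Conv2d(3, 1, 1)               # placeholder pedestrian branch
rgb = torch.rand(2, 3, 64, 64)
thermal = torch.rand(2, 1, 64, 64)
ped_mask = (torch.rand(2, 1, 64, 64) > 0.9).float()

fused = model(rgb, thermal)
loss = fusion_loss(fused, rgb, ped_head(fused), ped_mask)
loss.backward()
```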
34 citations
18 Jun 2014
TL;DR: An integrated probabilistic approach which preforms fault detection & exclusion, localization and multi-sensor data fusion within one unified Bayesian framework is proposed to provide a reliable vehicle positioning concept which can be used in urban areas without the aforementioned limitations.
Abstract: Nowadays, satellite-based localization is a well-established technical solution to support several navigation tasks in daily life. Besides the application inside of portable devices, satellite-based positioning is used for in-vehicle navigation systems as well. Moreover, due to its global coverage and the availability of inexpensive receiver hardware, it is an appealing technology for numerous applications in the area of Intelligent Transportation Systems (ITSs). However, it has to be admitted that most of the aforementioned examples either rely on modest accuracy requirements or are not sensitive to temporary integrity violations. Although technical concepts of Advanced Driver Assistance Systems (ADASs) based on Global Navigation Satellite Systems (GNSSs) have been successfully demonstrated under open sky conditions, practice reveals that such systems suffer from degraded satellite signal quality when put into urban areas. Thus, the main research objective of this thesis is to provide a reliable vehicle positioning concept which can be used in urban areas without the aforementioned limitations. Therefore, an integrated probabilistic approach which performs fault detection & exclusion, localization and multi-sensor data fusion within one unified Bayesian framework is proposed. From an algorithmic perspective, the presented concept is based on a probabilistic data association technique with explicit handling of outlier measurements as present in urban areas. By that approach, the accuracy, integrity and availability are improved at the same time, that is, a consistent positioning solution is provided. In addition, a comprehensive and in-depth analysis of typical errors in urban areas within the pseudorange domain is performed. Based on this analysis, probabilistic models are proposed and later on used to facilitate the positioning algorithm. Moreover, the presented concept clearly targets mass-market applications based on low-cost receivers and hence aims to replace costly sensors by smart algorithms. The benefits of these theoretical contributions are implemented and demonstrated on the example of a real-time vehicle positioning prototype as used inside the European research project GAlileo Interactive driviNg (GAIN). This work describes all necessary parts of this system including GNSS signal processing, fault detection and multi-sensor data fusion within one processing chain. Finally, the performance and benefits of the proposed concept are examined and validated both with simulated and comprehensive real-world sensor data from numerous test drives.
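As a much-simplified illustration of the "explicit handling of outlier measurements" mentioned above, the sketch below weights one-dimensional measurements by their posterior inlier probability under a Gaussian-plus-uniform mixture before fusing them. The scalar setting, the mixture parameters, and the example values are assumptions and stand in for the full Bayesian pseudorange-domain formulation of the thesis.

```python
import numpy as np

def robust_fuse(measurements, prior, sigma, p_inlier=0.9, outlier_range=100.0):
    """Simplified probabilistic data association in 1D: each measurement is
    either an inlier (Gaussian around the prior estimate) or an outlier
    (uniform over a wide range). The fused estimate is a weighted mean,
    weighting every measurement by its posterior inlier probability."""
    z = np.asarray(measurements, dtype=np.float64)
    lik_in = p_inlier * np.exp(-0.5 * ((z - prior) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    lik_out = (1.0 - p_inlier) / outlier_range
    w = lik_in / (lik_in + lik_out)              # posterior inlier probability
    return np.sum(w * z) / np.sum(w), w

# Example: four range-like measurements, one heavily biased
# (e.g. a multipath-affected satellite) and hence down-weighted.
estimate, weights = robust_fuse([100.2, 99.8, 100.1, 135.0], prior=100.0, sigma=0.5)
print(estimate, weights)
```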
12 citations
21 Sep 2016
TL;DR: A new robust Thermo-Visible moving object detection system is described for challenging scenarios such as camouflage, glass, snow, and similar object and background color or temperature.
Abstract: This paper describes a new robust Thermo-Visible moving object detection system for challenging scenarios such as camouflage, glass, snow, and similar object and background color or temperature. Background subtraction is performed separately in the thermal infrared and visible spectrum imaging modalities by forming a mean background frame and applying global thresholding. Moving objects are then detected using connected component analysis and a fusion rule, and are tracked as blobs even in adverse situations.
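A minimal OpenCV sketch of the described pipeline (mean background frame, global thresholding, mask fusion, connected-component filtering) is given below. The OR fusion rule, threshold, and minimum-area value are illustrative assumptions rather than the paper's exact settings.

```python
import cv2
import numpy as np

def mean_background(frames):
    """Mean background frame from a list of grayscale frames."""
    return np.mean(np.stack(frames).astype(np.float32), axis=0)

def foreground_mask(frame, background, thresh=30):
    """Global thresholding of the absolute difference to the background."""
    diff = cv2.absdiff(frame.astype(np.float32), background)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask.astype(np.uint8)

def fuse_and_filter(mask_thermal, mask_visible, min_area=50):
    """OR-fuse the two modality masks and keep connected components
    that are large enough to be plausible moving objects."""
    fused = cv2.bitwise_or(mask_thermal, mask_visible)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fused)
    keep = np.zeros_like(fused)
    for i in range(1, n):                       # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            keep[labels == i] = 255
    return keep
```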
12 citations