Open Access

Multi‐Level Sensorfusion and Computer‐Vision Algorithms within Driver Assistance for Avoiding Overtaking‐Accidents

TL;DR: This paper presents the sensory and data-fusion approach of a system that provides overtaking assistance and shows that the fusion of both sensor inputs is beneficial.
Abstract
On two-lane rural roads, a large number of overtaking accidents occur, causing many serious injuries and fatalities. In many cases, an inaccurate assessment of the traffic situation is identified as the major cause. Hence, a driver assistance concept for these scenarios promises a high safety benefit. This paper presents the sensory and data-fusion approach of a system that provides this assistance function. The level of information about the car's environment required for overtaking assistance depends on the phase of the overtaking maneuver. In early stages, i.e. just before the initial lane change, it is only necessary to obtain information about distant oncoming cars. In late stages, i.e. when the overtaking speed is too low, dangerous situations can arise because the gap in front of the car being overtaken can no longer be reached. In this case, an evasion path must be calculated, based on the perception of unoccupied space in front of the overtaking car. A fusion of different automotive sensors is proposed in order to cover all parts of the overtaking scenario in the system's perception: information about independently moving objects in front of the car is obtained from a radar device by exploiting the Doppler shift. In addition, a CMOS camera sensor is employed, and several algorithms are run on its video stream: a texture-based free-space detector as well as an object-detection algorithm; details of these algorithms are given in later sections of the paper. The proposed approach fuses raw radar object data with the output of the video-based object-detection algorithm. As a result of this mid-level fusion, a list of moving objects over the whole targeted field of view is obtained.
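The Doppler-based velocity measurement and the mid-level object fusion described above can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the 76.5 GHz carrier frequency, the 2 m association gate, and the dict-based object format are all assumptions.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def doppler_radial_velocity(f_doppler_hz, f_carrier_hz=76.5e9):
    """Relative radial velocity from the measured Doppler shift.
    For a monostatic radar: v = f_d * c / (2 * f_c)."""
    return f_doppler_hz * C / (2.0 * f_carrier_hz)

def fuse_object_lists(radar_objs, video_objs, gate_m=2.0):
    """Greedy nearest-neighbour association of radar and video detections.
    Each object is a dict with 'x', 'y' positions in vehicle coordinates;
    video objects may additionally carry a 'width' estimate."""
    fused, used = [], set()
    for r in radar_objs:
        best, best_d = None, gate_m
        for i, v in enumerate(video_objs):
            if i in used:
                continue
            d = math.hypot(r["x"] - v["x"], r["y"] - v["y"])
            if d < best_d:
                best, best_d = i, d
        obj = dict(r)
        if best is not None:
            used.add(best)
            obj["width"] = video_objs[best].get("width")  # video refines the width
            obj["confirmed"] = True   # both cues agree -> helps filter false positives
        else:
            obj["confirmed"] = False  # radar-only, e.g. a distant oncoming car
        fused.append(obj)
    return fused
```

A distant oncoming car would typically appear radar-only (`confirmed=False`), while close-range detections confirmed by both sensors inherit the width estimate from the video detector, mirroring the benefits claimed in the abstract.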
For the free-space part, a typical occupancy-grid representation of the environment in front of the car is employed for shorter distances in the field of view; this area is relevant for evasion maneuvers. The grid is filled by the camera's free-space detection and corrected with the known objects from the object list, yielding a high-level grid fusion. In particular, it is shown that the fusion of both sensor inputs is beneficial: oncoming vehicles can be detected at relatively long range with the radar device, whereas object detection from video frames becomes increasingly difficult for distant cars; at close range, both sensors benefit from the fusion of multiple cues. False-positive detections can be filtered out, and video object detections allow for an improved estimation of other vehicles' widths. Experimental results on real-world data recorded with a typical onboard system are given in the results section.
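The high-level grid fusion described above can be illustrated with a minimal log-odds sketch: camera free-space evidence lowers occupancy, and cells covered by known objects from the fused object list are corrected afterwards. The update probabilities and the cell-index object format are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np

def update_grid(grid, free_mask, object_cells, p_free=0.3, p_occ=0.9):
    """One fusion step on an occupancy grid of probabilities in (0, 1).
    free_mask:    boolean array, True where the camera reports free space.
    object_cells: (row, col) indices occupied by known objects."""
    # Log-odds update keeps probabilities bounded and lets evidence accumulate.
    lo = np.log(grid / (1.0 - grid))
    lo[free_mask] += np.log(p_free / (1.0 - p_free))
    grid = 1.0 / (1.0 + np.exp(-lo))
    for (i, j) in object_cells:  # correct with known objects from the object list
        grid[i, j] = p_occ
    return grid
```

Starting from an uninformed grid of 0.5, repeated free-space observations drive cells toward 0 while object cells are pinned high, so an evasion planner can search the low-occupancy region in front of the car.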


Citations
Journal Article

Compact Representation of Dynamic Driving Environments for ADAS by Parametric Free Space and Dynamic Object Maps

TL;DR: A novel parametric representation of general dynamic driving environments that is particularly suitable for near-future Advanced Driver Assistance Systems due to its compactness, inherent consistency between static and dynamic entities, suppression of irrelevant details, as well as its sensor-independent, real-time capable generation.
Journal Article

Collision-avoidance systems PRORETA: Situation analysis and intervention control

TL;DR: The PRORETA project is an industry–university research project with the goal of developing steps toward accident-free driving; however, it does not consider vehicles moving in opposite directions performing an overtaking maneuver on rural roads.
Journal Article

Bayesian Environment Representation, Prediction, and Criticality Assessment for Driver Assistance Systems

TL;DR: Parametric Free Space maps are introduced, which compactly represent the vehicle environment in form of relevant, drivable free space, and a maneuver-based, long-term trajectory prediction and criticality assessment system is introduced.
Proceedings Article

Towards a generic and efficient environment model for ADAS

TL;DR: The influence of data compression on accuracy and resource demand of a grid is analyzed and the transmission bandwidth and the required computational resources can be significantly reduced.
Proceedings Article

From grid maps to Parametric Free Space maps — A highly compact, generic environment representation for ADAS

TL;DR: A real-time capable algorithm is presented that obtains the representation by building upon an intermediate occupancy grid map-based environment representation generated from an automotive radar and a stereo camera.
References
Proceedings Article

Histograms of oriented gradients for human detection

TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
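The HOG descriptor summarized above builds magnitude-weighted orientation histograms over small cells. A minimal single-cell sketch is shown below; the 8×8 cell size, 9 unsigned bins over 0–180°, and the simplified normalisation follow common HOG defaults and are assumptions here, not details from this page.

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Orientation histogram for one HOG cell (unsigned gradients, 0-180 deg).
    'cell' is a small 2-D grayscale patch, e.g. 8x8 pixels."""
    gy, gx = np.gradient(cell.astype(float))      # image gradients
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())    # magnitude-weighted voting
    return hist / (np.linalg.norm(hist) + 1e-6)   # simplified L2 normalisation
```

A detector would tile the image with such cells, normalise them over overlapping blocks, and feed the concatenated histograms to a linear classifier.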
Proceedings Article

Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data

TL;DR: This work presents iterative parameter estimation algorithms for conditional random fields and compares the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.
Journal Article

Robust Real-Time Face Detection

TL;DR: A face detection framework capable of processing images extremely rapidly while achieving high detection rates is described; the detector runs at approximately 15 frames per second.
Proceedings Article

Robust real-time face detection

TL;DR: A new image representation called the “Integral Image” is introduced which allows the features used by the detector to be computed very quickly and a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions.
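The "Integral Image" mentioned in this summary can be sketched in a few lines: after a one-time cumulative sum, the sum over any rectangle needs only four array lookups, which is why the detector's rectangular features are so cheap to evaluate. This is a generic illustration of the idea, not the paper's code.

```python
import numpy as np

def integral_image(img):
    """ii[y, x] holds the sum of all pixels above and to the left of
    (y, x), inclusive; built with two cumulative sums."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] in O(1) via four lookups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total
```

A Haar-like feature is then just a signed combination of two or three such box sums, and the cascade evaluates cheap features first so that most background windows are rejected early.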