Author

B.D. Mysliwetz

Bio: B.D. Mysliwetz is an academic researcher who has contributed to research on the topic of curvature. The author has an h-index of 1 and has co-authored 1 publication receiving 632 citations.
Topics: Curvature

Papers
Journal Article
TL;DR: The general problem of recognizing both horizontal and vertical road curvature parameters while driving along the road is solved recursively, using a differential-geometry representation that decouples the two curvature components.
Abstract: The general problem of recognizing both horizontal and vertical road curvature parameters while driving along the road has been solved recursively. A differential-geometry representation that decouples the two curvature components has been selected. Based on the planar solution of E.D. Dickmanns and A. Zapp (1986) and its refinements, a simple spatio-temporal model of the driving process makes it possible to take both spatial and temporal constraints into account effectively. The estimation process determines nine road and vehicle state parameters recursively at 25 Hz (40 ms) using four Intel 80286 and one 80386 microprocessors. Results with the test vehicle VaMoRs, a 5-ton van, are given for a hilly country road.
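To make the recursive scheme concrete, here is a minimal sketch in Python of the underlying idea: a Kalman filter tracking the clothoid parameters of the horizontal curvature from lane-marking offsets at several lookahead ranges. This is a deliberate simplification, not the paper's nine-state estimator; the two-state model, noise levels, and ranges are all illustrative assumptions.

```python
import numpy as np

# Hypothetical simplification of the recursive curvature estimation:
# a linear Kalman filter over the horizontal clothoid parameters only.
# The actual system estimates nine road/vehicle states, including
# vertical curvature, at 25 Hz.

dt, v = 0.04, 20.0          # 40 ms cycle, vehicle speed in m/s
dl = v * dt                 # arc length travelled per cycle

# State: [c0, c1] = curvature and curvature rate of a clothoid c(l) = c0 + c1*l
F = np.array([[1.0, dl],
              [0.0, 1.0]])              # c0 advances along the clothoid
Q = np.diag([1e-8, 1e-10])              # process noise (tuning assumption)
R = 0.05**2                             # lateral-offset measurement noise (m^2)

x = np.zeros(2)                         # initial curvature estimate
P = np.diag([1e-4, 1e-6])               # initial covariance

def step(x, P, lookaheads, offsets):
    """One predict/update cycle given lane-marking offsets at lookahead ranges."""
    # Predict: shift the clothoid by the distance driven this cycle.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: lateral offset of the lane at range L is approx c0*L^2/2 + c1*L^3/6.
    for L, y in zip(lookaheads, offsets):
        H = np.array([L**2 / 2.0, L**3 / 6.0])
        S = H @ P @ H + R
        K = P @ H / S
        x = x + K * (y - H @ x)
        P = P - np.outer(K, H @ P)
    return x, P

# Example cycle with synthetic measurements from a road of curvature 1/500 m^-1.
true_c0 = 1.0 / 500.0
Ls = np.array([10.0, 20.0, 30.0])
ys = true_c0 * Ls**2 / 2.0
x, P = step(x, P, Ls, ys)
print(x)    # c0 converges toward 0.002 over repeated cycles
```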

648 citations


Cited by
Journal Article
TL;DR: The developments of the last 20 years in the area of vision for mobile robot navigation are surveyed, and the cases of navigation using optical flow, using methods from the appearance-based paradigm, and by recognition of specific objects in the environment are discussed.
Abstract: This paper surveys the developments of the last 20 years in the area of vision for mobile robot navigation. Two major components of the paper deal with indoor navigation and outdoor navigation. For each component, we have further subdivided our treatment of the subject on the basis of structured and unstructured environments. For indoor robots in structured environments, we have dealt separately with the cases of geometrical and topological models of space. For unstructured environments, we have discussed the cases of navigation using optical flow, using methods from the appearance-based paradigm, and by recognition of specific objects in the environment.
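Of the techniques the survey covers, optical-flow navigation is the easiest to illustrate. The sketch below uses OpenCV's Farnebäck dense flow to produce a corridor-centering steering cue by balancing flow magnitude between the left and right image halves; the camera source, parameters, and cue normalization are illustrative assumptions, not a method taken from the survey itself.

```python
import cv2
import numpy as np

# Minimal sketch of flow-balance steering, one of the optical-flow
# navigation strategies covered by such surveys: keep the robot centered
# by equalizing flow magnitude in the left and right halves of the image.

cap = cv2.VideoCapture(0)                       # assumed camera index
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense Farneback optical flow between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    h, w = mag.shape
    left, right = mag[:, : w // 2].mean(), mag[:, w // 2 :].mean()
    # Positive cue -> scene closer on the left, steer right (and vice versa).
    steer = (left - right) / (left + right + 1e-6)
    print(f"steering cue: {steer:+.3f}")
    prev_gray = gray

cap.release()
```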

1,386 citations

Journal Article
TL;DR: The generic obstacle and lane detection system (GOLD), a stereo vision-based hardware and software architecture for moving vehicles that increases road safety, detects both generic obstacles and the lane position in a structured environment at a rate of 10 Hz.
Abstract: This paper describes the generic obstacle and lane detection system (GOLD), a stereo vision-based hardware and software architecture to be used on moving vehicles to increase road safety. Based on full-custom massively parallel hardware, it detects both generic obstacles (without constraints on symmetry or shape) and the lane position in a structured environment (with painted lane markings) at a rate of 10 Hz. Thanks to a geometrical transform supported by a specific hardware module, the perspective effect is removed from both the left and right stereo images; the left image is used to detect lane markings with a series of morphological filters, while both remapped stereo images are used to detect free space in front of the vehicle. The output of the processing is displayed on both an on-board monitor and a control panel to give visual feedback to the driver. The system was tested on the mobile laboratory (MOB-LAB) experimental land vehicle, which was driven for more than 3000 km along extra-urban roads and freeways at speeds up to 80 km/h, and demonstrated its robustness with respect to shadows and changing illumination conditions, different road textures, and vehicle movement.
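The perspective-removal step at the heart of GOLD's pipeline can be sketched in a few lines with OpenCV. The homography below is built from four hand-picked road-plane points rather than the calibrated camera geometry the paper's hardware uses, so the source points, output size, morphological kernel, and the "road.png" placeholder are all illustrative assumptions.

```python
import cv2
import numpy as np

# Minimal sketch of inverse perspective mapping, the "geometrical transform"
# GOLD performs in a dedicated hardware module. A real system derives the
# mapping from the calibrated camera geometry; the points here are assumed.

def birds_eye(img):
    h, w = img.shape[:2]
    # Image corners of a trapezoid on the road plane (assumed calibration).
    src = np.float32([[w * 0.42, h * 0.60], [w * 0.58, h * 0.60],
                      [w * 0.95, h * 0.95], [w * 0.05, h * 0.95]])
    # Their destinations in the remapped (bird's-eye) view.
    dst = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(img, H, (400, 600))

# Lane markings become near-parallel vertical stripes in the remapped image,
# so simple morphological filtering can enhance them, as in the paper.
remapped = birds_eye(cv2.imread("road.png"))        # placeholder input image
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 9))
enhanced = cv2.morphologyEx(cv2.cvtColor(remapped, cv2.COLOR_BGR2GRAY),
                            cv2.MORPH_TOPHAT, kernel)
```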

1,088 citations

Journal Article
TL;DR: A comparison of a wide variety of lane-detection methods is presented, pointing out the similarities and differences between methods as well as when and where various methods are most useful.
Abstract: Driver-assistance systems that monitor driver intent, warn drivers of lane departures, or assist in vehicle guidance are all being actively considered. It is therefore important to take a critical look at key aspects of these systems, one of which is lane-position tracking. These driver-assistance objectives motivate the development of the novel "video-based lane estimation and tracking" (VioLET) system. The system is designed using steerable filters for robust and accurate lane-marking detection. Steerable filters provide an efficient method for detecting circular-reflector markings, solid-line markings, and segmented-line markings under varying lighting and road conditions. They help in providing robustness to complex shadowing, lighting changes from overpasses and tunnels, and road-surface variations. They are efficient for lane-marking extraction because, by computing only three separable convolutions, we can extract a wide variety of lane markings. Curvature detection is made more robust by incorporating both visual cues (lane markings and lane texture) and vehicle-state information. The experimental design and evaluation of the VioLET system are shown using multiple quantitative metrics over a wide variety of test conditions on a large test path using a unique instrumented vehicle. A justification for the choice of metrics, based on a previous study with human-factors applications, as well as extensive ground-truth testing from different times of day, road conditions, weather, and driving scenarios, is also presented. In order to design the VioLET system, an up-to-date and comprehensive analysis of the current state of the art in lane-detection research was first performed. In doing so, a comparison of a wide variety of methods, pointing out the similarities and differences between methods as well as when and where various methods are most useful, is presented.
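The claim that three separable convolutions suffice follows from the steerability of the Gaussian second derivative: the filter response at any orientation is a closed-form combination of the Gxx, Gxy, and Gyy basis responses. A minimal sketch, with sigma and the test orientation chosen arbitrarily:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Minimal sketch of the steerable-filter idea behind VioLET: three separable
# Gaussian second-derivative convolutions form a basis from which the
# response at any orientation can be synthesized.

def steerable_responses(img, sigma=2.0):
    img = img.astype(np.float64)
    # Each call reduces internally to a pair of separable 1-D convolutions.
    gxx = gaussian_filter(img, sigma, order=(0, 2))   # d^2/dx^2
    gyy = gaussian_filter(img, sigma, order=(2, 0))   # d^2/dy^2
    gxy = gaussian_filter(img, sigma, order=(1, 1))   # d^2/dxdy
    return gxx, gxy, gyy

def steer(gxx, gxy, gyy, theta):
    # Second directional derivative along direction theta:
    # cos^2 * Gxx + 2*sin*cos * Gxy + sin^2 * Gyy.
    c, s = np.cos(theta), np.sin(theta)
    return c * c * gxx + 2.0 * s * c * gxy + s * s * gyy

# Response tuned to a marking aligned with the image x-axis (theta = 0).
img = np.random.rand(480, 640)                 # stand-in for a road image
gxx, gxy, gyy = steerable_responses(img)
lane_response = steer(gxx, gxy, gyy, theta=0.0)
```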

1,056 citations

Journal Article
24 Oct 2014
TL;DR: This contribution reviews the fundamental goals, development, and future perspectives of driver assistance systems, and examines the progress driven by exteroceptive sensors such as radar, video, and lidar in automated driving in urban traffic and in cooperative driving.
Abstract: This contribution provides a review of fundamental goals, development, and future perspectives of driver assistance systems. Mobility is a fundamental desire of mankind. Virtually every society strives for safe and efficient mobility at low ecological and economic cost. Nevertheless, its technical implementation differs significantly among societies, depending on their culture and degree of industrialization. A potential evolutionary roadmap for driver assistance systems is discussed. Starting from systems based on proprioceptive sensors, such as ABS or ESC, we review the progress driven by the use of exteroceptive sensors such as radar, video, or lidar. While the ultimate goal of automated and cooperative traffic still remains a vision of the future, intermediate steps towards that aim can be realized through systems that mitigate or avoid collisions in selected driving situations. Research extends the state of the art in automated driving in urban traffic and in cooperative driving, the latter addressing communication and collaboration between different vehicles, as well as cooperative operation of a vehicle by its driver and its machine intelligence. These steps are considered important for the interim period, until reliable unsupervised automated driving for all conceivable traffic situations becomes available. The prospective evolution of driver assistance systems will be stimulated by several technological, societal, and market trends. The paper closes with a view of current research fields.

716 citations

Journal Article
TL;DR: In this article, the authors systematically summarize methodologies and discuss challenges for deep multi-modal object detection and semantic segmentation in autonomous driving and provide an overview of on-board sensors on test vehicles, open datasets, and background information for object detection.
Abstract: Recent advancements in perception for autonomous driving are driven by deep learning. In order to achieve robust and accurate scene understanding, autonomous vehicles are usually equipped with different sensors (e.g., cameras, LiDARs, radars), and multiple sensing modalities can be fused to exploit their complementary properties. In this context, many methods have been proposed for deep multi-modal perception problems. However, there is no general guideline for network architecture design, and questions of "what to fuse", "when to fuse", and "how to fuse" remain open. This review paper attempts to systematically summarize methodologies and discuss challenges for deep multi-modal object detection and semantic segmentation in autonomous driving. To this end, we first provide an overview of on-board sensors on test vehicles, open datasets, and background information for object detection and semantic segmentation in autonomous driving research. We then summarize the fusion methodologies and discuss challenges and open questions. In the appendix, we provide tables that summarize topics and methods. We also provide an interactive online platform to navigate each reference: https://boschresearch.github.io/multimodalperception/.
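As a concrete instance of the "what/when/how to fuse" design space the survey maps out, the sketch below implements middle (feature-level) fusion of a camera image and a LiDAR bird's-eye-view grid by channel concatenation in PyTorch. The channel widths, single-stage encoders, and assumed spatial alignment are illustrative placeholders, not an architecture from the survey.

```python
import torch
import torch.nn as nn

# Minimal sketch of middle fusion: encode each modality separately, then
# fuse at the feature level. "What to fuse" = camera + LiDAR BEV features;
# "when" = after one encoder stage each; "how" = channel concatenation.

class MiddleFusion(nn.Module):
    def __init__(self, cam_ch=64, lidar_ch=64, out_ch=128):
        super().__init__()
        self.cam_enc = nn.Sequential(nn.Conv2d(3, cam_ch, 3, padding=1), nn.ReLU())
        self.lidar_enc = nn.Sequential(nn.Conv2d(1, lidar_ch, 3, padding=1), nn.ReLU())
        # Mix the concatenated channels with a 1x1 convolution.
        self.fuse = nn.Conv2d(cam_ch + lidar_ch, out_ch, 1)

    def forward(self, camera, lidar_bev):
        f = torch.cat([self.cam_enc(camera), self.lidar_enc(lidar_bev)], dim=1)
        return self.fuse(f)

# Example: an RGB image and a single-channel LiDAR bird's-eye-view grid,
# assumed here to be aligned on the same 2-D lattice.
fused = MiddleFusion()(torch.rand(1, 3, 128, 128), torch.rand(1, 1, 128, 128))
```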

674 citations