Proceedings ArticleDOI

Road Conditions and Obstacles Indication and Autonomous Braking System

TL;DR: The Road Conditions and Obstacles Indication System (RCOIS), as discussed by the authors, can be added to a vehicle to indicate events that may cause sudden deceleration on the road, helping to avoid rear-end collisions and improve road safety while driving.
Abstract: The Road Conditions and Obstacles Indication System (RCOIS) is a system that can be added to a vehicle to indicate possible events that can cause sudden deceleration on the road, in order to avoid rear-end collisions and improve road safety while driving. Normally, every vehicle is equipped with a tail light to indicate braking, but this indication depends on the brakes actually being applied, so it is unhelpful in many situations involving sudden deceleration. Most rear-end collisions occur when the driver of the following vehicle is unable to predict how the speed of the preceding vehicle will change. The tail light is a manual answer to this problem, whereas the Road Conditions and Obstacles Indication System provides an automatic one. The system has a simple architecture consisting of camera sensors, a processing unit, and an indicating device. To prevent rear-end collisions, the indication system helps the following driver make judgments based on the road conditions and the type of obstacle in front of the preceding driver, along with its intensity. The system can be used to increase road-traffic safety by providing more relevant signals for indicating obstacles. According to the invention, the indicator signals obstacles such as potholes, road bumps, and objects appearing in front of the vehicle, together with their intensity, where intensity is taken to be the distance between the obstacle and the vehicle. A significant purpose of RCOIS is to indicate that the preceding driver may slow down or stop the vehicle.
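The abstract does not spell out the decision rule that maps a detected obstacle and its intensity to an indication, so the following is only a minimal sketch under assumed names and thresholds: ObstacleType, IndicationLevel, indication_level, and the 15 m / 50 m ranges are hypothetical illustrations, not values taken from the paper.

from dataclasses import dataclass
from enum import Enum


class ObstacleType(Enum):
    POTHOLE = "pothole"
    ROAD_BUMP = "road bump"
    OBJECT = "object"


class IndicationLevel(Enum):
    NONE = 0      # no obstacle within indication range
    CAUTION = 1   # obstacle detected but still far; mild deceleration expected
    WARNING = 2   # obstacle close; strong deceleration or a stop expected


@dataclass
class Detection:
    obstacle: ObstacleType
    distance_m: float  # the "intensity": distance between obstacle and vehicle


def indication_level(detection: Detection,
                     caution_range_m: float = 50.0,
                     warning_range_m: float = 15.0) -> IndicationLevel:
    # Map a detection to an indication level using hypothetical distance thresholds.
    if detection.distance_m <= warning_range_m:
        return IndicationLevel.WARNING
    if detection.distance_m <= caution_range_m:
        return IndicationLevel.CAUTION
    return IndicationLevel.NONE


if __name__ == "__main__":
    det = Detection(ObstacleType.POTHOLE, distance_m=12.0)
    print(det.obstacle.value, indication_level(det).name)  # pothole WARNING

In a real RCOIS the detection would come from the camera sensors and processing unit, and the resulting level would drive the indicating device visible to following traffic.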
References
Journal ArticleDOI
TL;DR: A simple and scalable detection algorithm that improves mean average precision (mAP) by more than 50 percent relative to the previous best result on VOC 2012, achieving a mAP of 62.4 percent.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC Challenge datasets, plateaued in the final years of the competition. The best-performing methods were complex ensemble systems that typically combined multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 50 percent relative to the previous best result on VOC 2012, achieving a mAP of 62.4 percent. Our approach combines two ideas: (1) one can apply high-capacity convolutional networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data are scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, boosts performance significantly. Since we combine region proposals with CNNs, we call the resulting model an R-CNN or Region-based Convolutional Network. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.

2,058 citations
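The region-proposal-plus-CNN idea in the reference above is easiest to try today through its later Faster R-CNN descendant, which ships with torchvision. The snippet below is a minimal sketch, assuming a recent torchvision install; it is not the paper's original R-CNN pipeline with external region proposals, and the dummy image is only a placeholder.

import torch
import torchvision

# Faster R-CNN (a successor of R-CNN) pretrained on COCO; weights download on first use.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Dummy RGB image tensor in [0, 1]; replace with a real image for meaningful output.
image = torch.rand(3, 480, 640)

with torch.no_grad():
    predictions = model([image])[0]

# Each prediction holds bounding boxes, class labels, and confidence scores.
for box, label, score in zip(predictions["boxes"], predictions["labels"], predictions["scores"]):
    if score > 0.5:
        print(label.item(), round(score.item(), 2), box.tolist())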

Journal ArticleDOI
TL;DR: This paper provides a review of the literature in on-road vision-based vehicle detection, tracking, and behavior understanding, and discusses the nascent branch of intelligent vehicles research concerned with utilizing spatiotemporal measurements, trajectories, and various features to characterize on-road behavior.
Abstract: This paper provides a review of the literature in on-road vision-based vehicle detection, tracking, and behavior understanding. Over the past decade, vision-based surround perception has progressed from its infancy into maturity. We provide a survey of recent works in the literature, placing vision-based vehicle detection in the context of sensor-based on-road surround analysis. We detail advances in vehicle detection, discussing monocular, stereo vision, and active sensor-vision fusion for on-road vehicle detection. We discuss vision-based vehicle tracking in the monocular and stereo-vision domains, analyzing filtering, estimation, and dynamical models. We discuss the nascent branch of intelligent vehicles research concerned with utilizing spatiotemporal measurements, trajectories, and various features to characterize on-road behavior. We provide a discussion on the state of the art, detail common performance metrics and benchmarks, and provide perspective on future research directions in the field.

862 citations

Journal ArticleDOI
TL;DR: A comprehensive and systematic survey of the state-of-the-art on-road vision-based vehicle detection and tracking systems for collision avoidance systems (CASs).
Abstract: Over the past decade, vision-based vehicle detection techniques for road safety improvement have gained an increasing amount of attention. Unfortunately, these techniques lack robustness due to huge variability in vehicle shape (particularly for motorcycles), cluttered environments, various illumination conditions, and driving behavior. In this paper, we provide a comprehensive and systematic survey of the state-of-the-art on-road vision-based vehicle detection and tracking systems for collision avoidance systems (CASs). The paper is structured around the vehicle detection process, starting from sensor selection and proceeding to vehicle detection and tracking, with the techniques in each step reviewed and analyzed individually. The two main contributions of this paper are a survey of motorcycle detection techniques and a comparison of sensors in terms of cost and range. Finally, the survey identifies an optimal choice for a low-cost and reliable CAS design for the vehicle industry.

354 citations

Journal IssueDOI
TL;DR: The proposed schemes for sensor combination, and more specifically the trainable fusion method, lead to enhanced detection performance while keeping false alarms at tolerable levels in comparison with single-sensor classifiers.
Abstract: A perception system for pedestrian detection in urban scenarios using information from a LIDAR and a single camera is presented. Two sensor fusion architectures are described, a centralized and a decentralized one. In the former, the fusion process occurs at the feature level, i.e., features from the LIDAR and vision spaces are combined in a single vector for posterior classification using a single classifier. In the latter, two classifiers are employed, one per sensor-feature space, selected offline based on information theory and fused by a trainable fusion method applied over the likelihoods provided by the component classifiers. The proposed schemes for sensor combination, and more specifically the trainable fusion method, lead to enhanced detection performance while keeping false alarms at tolerable levels in comparison with single-sensor classifiers. Experimental results highlight the performance and effectiveness of the proposed pedestrian detection system and the related sensor data combination strategies. © 2009 Wiley Periodicals, Inc.

162 citations
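The decentralized architecture in the reference above fuses per-sensor classifier likelihoods with a trainable combiner. The exact combiner and features are not given here, so the sketch below assumes a simple logistic-regression fusion over two likelihoods, trained on synthetic stand-in data purely for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic ground truth: 1 = pedestrian, 0 = background.
y = rng.integers(0, 2, size=1000)

# Stand-ins for the per-sensor classifier outputs: each classifier emits a
# likelihood that the sample is a pedestrian, with sensor-specific noise.
lidar_likelihood = np.clip(y + rng.normal(0.0, 0.35, size=y.shape), 0.0, 1.0)
vision_likelihood = np.clip(y + rng.normal(0.0, 0.25, size=y.shape), 0.0, 1.0)

# Trainable fusion: learn how to weight the two likelihoods.
X = np.column_stack([lidar_likelihood, vision_likelihood])
fusion = LogisticRegression().fit(X, y)

print("fusion weights:", fusion.coef_, "bias:", fusion.intercept_)
print("fused accuracy:", fusion.score(X, y))

The learned weights play the role of the trainable fusion rule: a sensor whose likelihoods are more informative ends up with a larger coefficient in the combined decision.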

Proceedings ArticleDOI
14 Jul 2017
TL;DR: A deep learning system using a region-based convolutional neural network trained with the PASCAL VOC image dataset is developed for the detection and classification of on-road obstacles such as vehicles, pedestrians, and animals.
Abstract: On-road obstacle detection and classification is one of the key tasks in the perception system of self-driving vehicles. Since vehicle tracking involves localization and association of vehicles between frames, detection and classification of vehicles is necessary. Vision-based approaches are popular for this task due to their cost-effectiveness and the usefulness of the appearance information associated with the vision data. In this paper, a deep learning system using a region-based convolutional neural network trained with the PASCAL VOC image dataset is developed for the detection and classification of on-road obstacles such as vehicles, pedestrians, and animals. The implementation of the system on a Titan X GPU achieves a processing frame rate of at least 10 fps for a VGA-resolution image frame. This sufficiently high frame rate on a powerful GPU demonstrates the suitability of the system for highway driving of autonomous cars. The detection and classification results on images from KITTI and iRoads, and also on Indian roads, show that the system's performance is invariant to object shape and view, and to different lighting and climatic conditions.

90 citations
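The throughput figure quoted above (at least 10 fps on VGA frames) is the kind of number obtained by timing the detector over repeated frames and keeping only road-obstacle classes. The loop below is a minimal sketch in which detect() is a hypothetical stand-in; replacing it with the actual region-based CNN would measure real throughput. The class set and dummy outputs are illustrative, not taken from the paper.

import time

ROAD_OBSTACLE_CLASSES = {"car", "bus", "truck", "person", "animal"}  # illustrative set


def detect(frame):
    # Hypothetical stand-in for the region-based CNN detector;
    # returns (class_name, confidence, bounding_box) tuples.
    return [("car", 0.92, (100, 150, 300, 400)), ("tree", 0.80, (0, 0, 50, 200))]


def filter_obstacles(detections, min_confidence=0.5):
    return [d for d in detections
            if d[0] in ROAD_OBSTACLE_CLASSES and d[1] >= min_confidence]


frames = [object()] * 100  # placeholder for 100 VGA frames
start = time.perf_counter()
for frame in frames:
    obstacles = filter_obstacles(detect(frame))
elapsed = time.perf_counter() - start
print(f"{len(frames) / elapsed:.1f} frames per second (with the stand-in detector)")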

Trending Questions (1)
How do drivers avoid road obstacles?

Drivers can avoid road obstacles by using a Road Conditions and Obstacles Indication System (RCOIS) that provides automatic signals and warnings about obstacles on the road.