Open Access Proceedings Article

VPGNet: Vanishing Point Guided Network for Lane and Road Marking Detection and Recognition

TLDR
This paper proposes a unified, end-to-end trainable multi-task network that jointly handles lane and road marking detection and recognition, guided by a vanishing point, under adverse weather conditions.
Abstract
In this paper, we propose a unified end-to-end trainable multi-task network that jointly handles lane and road marking detection and recognition, guided by a vanishing point, under adverse weather conditions. We tackle rainy and low-illumination conditions, which have not been extensively studied until now due to their clear challenges. For example, images taken on rainy days are subject to low illumination, while wet roads cause light reflection and distort the appearance of lane and road markings. At night, color distortion occurs under limited illumination. As a result, no benchmark dataset exists and only a few developed algorithms work under poor weather conditions. To address this shortcoming, we build up a lane and road marking benchmark which consists of about 20,000 images with 17 lane and road marking classes under four different scenarios: no rain, rain, heavy rain, and night. We train and evaluate several versions of the proposed multi-task network and validate the importance of each task. The resulting approach, VPGNet, can detect and classify lanes and road markings, and predict a vanishing point with a single forward pass. Experimental results show that our approach achieves high accuracy and robustness under various conditions in real time (20 fps). The benchmark and the VPGNet model will be publicly available.
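
To make the multi-task layout concrete, the sketch below shows a shared convolutional encoder feeding separate heads for lane/road marking detection, per-cell classification, and vanishing point prediction, all evaluated in a single forward pass. This is a minimal illustrative sketch in PyTorch; the backbone depth, channel counts, and head definitions are assumptions chosen for readability, not the authors' released VPGNet model.

```python
# Minimal PyTorch sketch of a multi-task head layout in the spirit of the
# abstract above. Backbone depth, channel counts, and head definitions are
# illustrative assumptions, not the authors' released VPGNet model.
import torch
import torch.nn as nn

class MultiTaskLaneNet(nn.Module):
    def __init__(self, num_classes: int = 17):
        super().__init__()
        # Shared convolutional encoder (stand-in for the paper's backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Task-specific heads share the encoder and run in one forward pass.
        self.detect_head = nn.Conv2d(256, 2, 1)               # marking vs. background per cell
        self.class_head = nn.Conv2d(256, num_classes + 1, 1)  # 17 classes + background
        self.vp_head = nn.Conv2d(256, 5, 1)                   # vanishing-point map (assumed 5 channels)

    def forward(self, x: torch.Tensor) -> dict:
        feat = self.encoder(x)
        return {
            "detection": self.detect_head(feat),
            "classes": self.class_head(feat),
            "vanishing_point": self.vp_head(feat),
        }

if __name__ == "__main__":
    net = MultiTaskLaneNet()
    outputs = net(torch.randn(1, 3, 480, 640))
    print({name: tuple(t.shape) for name, t in outputs.items()})
```

Running the snippet prints the output shapes of the three heads for a single 640x480 input; in the actual system the heads would be trained jointly with task-specific losses.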


Citations
Proceedings Article

Gen-LaneNet: A Generalized and Scalable Approach for 3D Lane Detection

TL;DR: Gen-LaneNet proposes a geometry-guided lane anchor representation in a new coordinate frame and applies a specific geometric transformation to directly calculate real 3D lane points from the network output.
Proceedings Article

Harmonious Semantic Line Detection via Maximal Weight Clique Selection

TL;DR: In this paper, Dong et al. develop two networks, a selection network (S-Net) and a harmonization network (H-Net), which together detect harmonious semantic lines effectively and efficiently.
Proceedings Article

Robust Lane Detection Using Multiple Features

TL;DR: This work presents a multi-feature lane detection algorithm: it designs a lane model using geometric constraints on lane shape, fits the model to the extracted visual cues, and improves robustness by tracking lane markers temporally.
Proceedings Article

Lane Information Perception Network for HD Maps

TL;DR: This article proposes a lane line perception network for detecting lane changes in HD maps; the network directly takes the returned image as input and outputs the number of lane lines as well as the color and type attributes of each lane.
Journal Article

Bridging the Gap of Lane Detection Performance Between Different Datasets: Unified Viewpoint Transformation

TL;DR: With the proposed algorithm, a lane detection model trained on one dataset can be effectively applied to datasets with different camera settings in vastly different localities, achieving better generalization than state-of-the-art methods.
References
Proceedings Article

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Proceedings Article

Histograms of oriented gradients for human detection

TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
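
The cited detector builds its feature set from grids of HOG descriptors over a detection window; a minimal way to compute such a descriptor today is scikit-image's hog function. The 64x128 window and the cell/block parameters below are the commonly used defaults for pedestrian detection, not values taken from this page.

```python
# Computing a single HOG descriptor with scikit-image. Window size and
# cell/block parameters are common defaults, not values from this page.
import numpy as np
from skimage.feature import hog

window = np.random.rand(128, 64)  # placeholder grayscale detection window (H x W)
descriptor = hog(
    window,
    orientations=9,
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
    block_norm="L2-Hys",
)
print(descriptor.shape)  # (3780,) -- flattened vector fed to a linear classifier such as an SVM
```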
Book Chapter

Microsoft COCO: Common Objects in Context

TL;DR: A new dataset aimed at advancing the state of the art in object recognition by placing object recognition in the context of the broader question of scene understanding; it gathers images of complex everyday scenes containing common objects in their natural context.
Proceedings Article

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
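
The "fully convolutional" property, that every layer is convolutional so any input resolution yields a correspondingly sized dense output, can be illustrated with a toy network. The sketch below is schematic, with arbitrary channel counts and upsampling factor; it is not the architecture from the cited paper.

```python
# Toy fully convolutional network: with only convolutional (and resampling)
# layers, any input resolution produces a correspondingly sized prediction map.
# Schematic only; not the architecture from the cited paper.
import torch
import torch.nn as nn

fcn = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 21, 1),  # per-pixel class scores (21 classes, an arbitrary choice)
    nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
)

for h, w in [(224, 224), (320, 480)]:
    out = fcn(torch.randn(1, 3, h, w))
    print((h, w), "->", tuple(out.shape[-2:]))  # output spatial size tracks the input
```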
Proceedings Article

You Only Look Once: Unified, Real-Time Object Detection

TL;DR: Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background, and outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.