Journal ArticleDOI

Domain Adaptation from Daytime to Nighttime: A Situation-sensitive Vehicle Detection and Traffic Flow Parameter Estimation Framework

TLDR
A new situation-sensitive method based on Faster R-CNN with Domain Adaptation is proposed to improve vehicle detection at nighttime, and a situation-sensitive traffic flow parameter estimation method is developed based on traffic flow theory.
Abstract
Vehicle detection in traffic surveillance images is an important approach to obtaining vehicle data and rich traffic flow parameters. Recently, deep learning based methods have been widely used in vehicle detection with high accuracy and efficiency. However, these methods require a large number of manually labeled ground truths (a bounding box for each vehicle in each image) to train the Convolutional Neural Networks (CNNs). In modern urban surveillance systems, many manually labeled ground truths already exist for daytime images, while few or none exist for nighttime images. In this paper, we focus on making maximum use of labeled daytime images (Source Domain) to help vehicle detection in unlabeled nighttime images (Target Domain). For this purpose, we propose a new situation-sensitive method based on Faster R-CNN with Domain Adaptation (DA) to improve vehicle detection at nighttime. Furthermore, a situation-sensitive traffic flow parameter estimation method is developed based on traffic flow theory. We collected a new dataset of 2,200 traffic images (1,200 daytime and 1,000 nighttime) containing 57,059 vehicles to evaluate the proposed method for vehicle detection. Another new dataset, with three 1,800-frame daytime videos and one 1,800-frame nighttime video of about 260 K vehicles, was collected to evaluate and show the estimated traffic flow parameters in different situations. The experimental results show the accuracy and effectiveness of the proposed method.
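The traffic flow parameter estimation mentioned in the abstract rests on the fundamental relation of traffic flow theory, q = k·v (flow equals density times space mean speed). The following is a minimal illustrative sketch of that relation only, not the paper's situation-sensitive method; the function name and its inputs (per-frame detection counts, per-vehicle speed estimates, and the observed section length) are hypothetical:

```python
def estimate_flow_parameters(counts_per_frame, speeds_kmh, section_km):
    """Estimate density (veh/km), space mean speed (km/h) and flow (veh/h).

    counts_per_frame: number of detected vehicles in each video frame
    speeds_kmh: estimated speed of each tracked vehicle, in km/h
    section_km: length of roadway covered by the camera, in km
    """
    # Density k: average number of vehicles present per km of observed roadway.
    mean_count = sum(counts_per_frame) / len(counts_per_frame)
    density = mean_count / section_km

    # Space mean speed v: harmonic mean of individual vehicle speeds.
    v = len(speeds_kmh) / sum(1.0 / s for s in speeds_kmh)

    # Fundamental relation of traffic flow theory: q = k * v.
    flow = density * v
    return density, v, flow
```

For example, an average of 5 detected vehicles over a 0.1 km section with vehicle speeds of 60 and 40 km/h gives a density of 50 veh/km, a space mean speed of 48 km/h, and a flow of 2,400 veh/h.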


Citations
Journal ArticleDOI

When Intelligent Transportation Systems Sensing Meets Edge Computing: Vision and Challenges

TL;DR: This review focuses on a critical part of ITS, i.e., sensing, surveys recent advances in ITS sensing and edge computing (EC) applications in this field, and discusses the key challenges in ITS sensing and future directions for its integration with edge computing.
Proceedings ArticleDOI

Bridging the Domain Gap for Multi-Agent Perception

TL;DR: This paper proposes the first lightweight framework to bridge domain gaps for multi-agent perception, which can be a plug-in module for most of the existing systems while maintaining confidentiality.
Journal ArticleDOI

Fast vehicle detection algorithm in traffic scene based on improved SSD

TL;DR: An improved SSD (Single Shot MultiBox Detector) algorithm is proposed for fast vehicle detection in traffic scenes; MobileNet v2 is selected as the backbone feature extraction network for SSD, which improves the real-time performance of the algorithm.
Journal ArticleDOI

Let There be Light: Improved Traffic Surveillance via Detail Preserving Night-to-Day Transfer

TL;DR: This paper proposes a framework that alleviates the accuracy decline of object detection under adverse conditions via image translation, and utilizes a Kernel Prediction Network (KPN) based method to refine the nighttime-to-daytime image translation.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A Deep Convolutional Neural Network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on ImageNet classification.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings ArticleDOI

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
Proceedings ArticleDOI

You Only Look Once: Unified, Real-Time Object Detection

TL;DR: Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background, and outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.