Book Chapter • DOI

Real-Time Vehicle Detection in Aerial Images Using Skip-Connected Convolution Network with Region Proposal Networks

TL;DR: This paper addresses real-time vehicle detection in aerial images and videos: hyper feature maps generated by a skip-connected convolutional network are passed through a region proposal network to generate accurate object-like proposals.
Abstract: Detection of objects in aerial images has gained significant attention in recent years due to its extensive use in civilian and military reconnaissance and surveillance applications. With the advent of Unmanned Aerial Vehicles (UAVs), the scope for performing such surveillance tasks has increased. The small size of the objects in aerial images makes them very difficult to detect. Two-stage region-based convolutional neural network frameworks for object detection have proved quite effective. The main problem with these frameworks is their low speed compared to single-stage object detectors, owing to the computational complexity of generating region proposals. Region-based methods also suffer from poor localization of objects, which leads to a significant number of false positives. This paper aims to provide a solution for real-time vehicle detection in aerial images and videos. The proposed approach uses hyper feature maps generated by a skip-connected convolutional network; the hyper feature maps are then passed through a region proposal network to generate object-like proposals accurately. The issue of detecting objects similar to the background is addressed by modifying the loss function of the proposal network. The performance of the proposed network has been evaluated on the publicly available VEDAI dataset.
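As a rough illustration of the pipeline the abstract describes, the sketch below (PyTorch) concatenates projections of several backbone stages into a single hyper feature map and attaches a standard RPN-style head that predicts objectness and box offsets at every position. Channel widths, the number of stages, and the fusion details are assumptions for illustration, not the paper's exact architecture.

```python
# A minimal sketch of a skip-connected hyper feature map feeding an RPN head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperFeatureRPN(nn.Module):
    def __init__(self, in_channels=(128, 256, 512), hyper_channels=256, num_anchors=9):
        super().__init__()
        # 1x1 convs project each skipped stage to a common width before fusion.
        self.laterals = nn.ModuleList(
            nn.Conv2d(c, hyper_channels, kernel_size=1) for c in in_channels)
        self.fuse = nn.Conv2d(hyper_channels * len(in_channels), hyper_channels,
                              kernel_size=3, padding=1)
        # RPN head: shared 3x3 conv, then per-anchor objectness and box deltas.
        self.rpn_conv = nn.Conv2d(hyper_channels, hyper_channels, 3, padding=1)
        self.cls_logits = nn.Conv2d(hyper_channels, num_anchors, 1)
        self.bbox_deltas = nn.Conv2d(hyper_channels, num_anchors * 4, 1)

    def forward(self, stage_feats):
        # Upsample every projected stage to the finest resolution and
        # concatenate: the result is the "hyper feature map".
        target = stage_feats[0].shape[-2:]
        maps = [F.interpolate(l(f), size=target, mode='bilinear', align_corners=False)
                for l, f in zip(self.laterals, stage_feats)]
        hyper = F.relu(self.fuse(torch.cat(maps, dim=1)))
        t = F.relu(self.rpn_conv(hyper))
        return self.cls_logits(t), self.bbox_deltas(t)

feats = [torch.rand(1, c, s, s) for c, s in zip((128, 256, 512), (64, 32, 16))]
cls, box = HyperFeatureRPN()(feats)  # cls: (1, 9, 64, 64), box: (1, 36, 64, 64)
```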
Citations
Journal Article • DOI
TL;DR: In this article, the authors provide a review of vehicle detection from UAV imagery using deep learning techniques, including convolutional neural networks, recurrent neural networks, autoencoders, and generative adversarial networks, and their contributions to improving the vehicle detection task.
Abstract: Vehicle detection from unmanned aerial vehicle (UAV) imagery is one of the most important tasks in a large number of computer vision-based applications. This crucial task must be performed with high accuracy and speed, yet it is very challenging due to many characteristics of aerial images and the hardware used, such as varying vehicle sizes, orientations, types, and densities, limited datasets, and inference speed. In recent years, many classical and deep-learning-based methods have been proposed in the literature to address these problems. Hand-engineered and shallow-learning-based techniques suffer from poor accuracy and poor generalization to complex cases. Deep-learning-based vehicle detection algorithms achieve better results owing to their powerful learning ability. In this article, we provide a review of vehicle detection from UAV imagery using deep learning techniques. We start by presenting the different types of deep learning architectures, such as convolutional neural networks, recurrent neural networks, autoencoders, and generative adversarial networks, and their contributions to improving the vehicle detection task. Then, we investigate the different vehicle detection methods and datasets, and the challenges encountered, along with the suggested solutions. Finally, we summarize and compare the techniques used to improve vehicle detection from UAV-based images, which could help researchers and developers select the most adequate method for their needs.

25 citations

Journal Article • DOI
04 Jul 2020
TL;DR: This work designs a process to extract new visual attention biases in UAV imagery, leading to the definition of a new dictionary of visual biases, and conducts a benchmark on two different datasets whose results confirm that the 20 defined biases are relevant as a low-complexity saliency prediction system.
Abstract: Unmanned Aerial Vehicle (UAV) imagery has been gaining momentum lately. Information gathered from a bird's-eye point of view is particularly relevant for numerous applications, from agriculture to surveillance services. We study visual saliency to verify whether there are tangible differences between this imagery and more conventional content. We first describe typical and UAV content based on their human saliency maps in a high-dimensional space, encompassing saliency map statistics, distribution characteristics, and other specifically designed features. Thanks to a large amount of eye-tracking data collected on UAV videos, we highlight the differences between typical and UAV videos, but more importantly within UAV sequences. We then design a process to extract new visual attention biases in UAV imagery, leading to the definition of a new dictionary of visual biases, and conduct a benchmark on two different datasets, whose results confirm that the 20 defined biases are relevant as a low-complexity saliency prediction system.

4 citations

Journal Article • DOI
TL;DR: In this paper, a detailed methodological framework is presented for collecting microscopic driver and vehicle behaviour data over a long road segment, with an application to the entire stretch of a freeway ramp segment using single and multiple unmanned aerial vehicles (UAVs).
Abstract: This paper presents a detailed methodological framework for collecting microscopic driver and vehicle behaviour data over a long road segment with an application to the entire stretch of a freeway ramp segment using single and multiple unmanned aerial vehicles (UAVs). The methodology allows users to collect reliable and complete trajectories of traffic movements in areas with challenging physical characteristics (long road segment, horizontal curvature, changing elevation, and presence of shadow), challenging traffic characteristics (high traffic volume, high speeds, and high-speed changes), and restrictive regulations (UAVs prohibited from hovering over the freeway or the right-of-way). Different UAV setups are recommended and can be used depending on the site conditions. Specific commercial software and procedures used to complete the data collection are explained. The methodology was applied at two ramps and verified with speed data acquired from differential GPS receivers using three different error metrics. The results showed good performance of the proposed methodology, including when aerial videos were taken from oblique angles.

3 citations

References
Proceedings Article • DOI
27 Jun 2016
TL;DR: Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background, and outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.
Abstract: We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.
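To make the "detection as regression" framing concrete, here is a minimal sketch of decoding YOLO's output tensor: an S × S grid where each cell regresses B boxes (x, y, w, h, confidence) plus C class probabilities, with S=7, B=2, C=20 as in the paper. The decoding code itself is illustrative, not the authors' released implementation.

```python
# Decode a raw (S, S, B*5 + C) YOLO prediction into boxes and class scores.
import torch

def decode_yolo_grid(pred, S=7, B=2, C=20):
    """pred: (S, S, B*5 + C) raw network output for one image."""
    boxes, scores = [], []
    class_probs = pred[..., B * 5:]                    # (S, S, C)
    for i in range(S):                                 # grid row
        for j in range(S):                             # grid column
            for b in range(B):
                x, y, w, h, conf = pred[i, j, b * 5: b * 5 + 5]
                # (x, y) are offsets within cell (i, j); (w, h) are
                # relative to the whole image.
                boxes.append(((j + x) / S, (i + y) / S, w, h))
                # Class-specific confidence = objectness * class probability.
                scores.append(conf * class_probs[i, j])
    return boxes, scores

boxes, scores = decode_yolo_grid(torch.rand(7, 7, 30))  # 7*7*2 = 98 boxes
```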

27,256 citations

Posted Content
TL;DR: Faster R-CNN introduces a Region Proposal Network (RPN) to generate high-quality region proposals, which are then used by Fast R-CNN for detection.
Abstract: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.
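A small sketch of the anchor mechanism behind "predicts object bounds and objectness scores at each position": every feature-map cell is assigned k reference boxes spanning several scales and aspect ratios, and the RPN regresses offsets relative to them. The three scales and three ratios follow the paper's default setting; the coordinate conventions in the code are an illustrative reconstruction.

```python
# Generate k = 3 scales x 3 ratios = 9 anchors per feature-map position.
import numpy as np

def make_anchors(feat_h, feat_w, stride=16,
                 scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            # Center of this feature-map cell in input-image coordinates.
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w, h = s * np.sqrt(r), s / np.sqrt(r)  # area ~ s^2, w/h = r
                    anchors.append([cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2])
    return np.array(anchors)  # (feat_h * feat_w * 9, 4) as (x1, y1, x2, y2)

print(make_anchors(2, 2).shape)  # (36, 4)
```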

23,183 citations

Book Chapter • DOI
08 Oct 2016
TL;DR: The approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location, which makes SSD easy to train and straightforward to integrate into systems that require a detection component.
Abstract: We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For 300 × 300 input, SSD achieves 74.3% mAP on VOC2007 test at 59 FPS on an Nvidia Titan X, and for 512 × 512 input, SSD achieves 76.9% mAP, outperforming a comparable state-of-the-art Faster R-CNN model. Compared to other single-stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.
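The default-box design can be summarized in a few lines: each of the m prediction feature maps is assigned a scale s_k interpolated between s_min and s_max (the paper uses s_min = 0.2, s_max = 0.9), and boxes at each location vary the aspect ratio around that scale. The sketch below follows that formula; the per-location box shapes are a simplified illustration, not the reference implementation.

```python
# SSD scales: s_k = s_min + (s_max - s_min) * (k - 1) / (m - 1), k = 1..m.
def ssd_scales(m=6, s_min=0.2, s_max=0.9):
    return [s_min + (s_max - s_min) * k / (m - 1) for k in range(m)]

def default_boxes_at(cx, cy, scale, ratios=(1.0, 2.0, 0.5)):
    # (cx, cy): normalized center of one feature-map cell; width and height
    # are scale * sqrt(ratio) and scale / sqrt(ratio), as in the paper.
    return [(cx, cy, scale * r ** 0.5, scale / r ** 0.5) for r in ratios]

for k, s in enumerate(ssd_scales(), start=1):
    print(f"feature map {k}: scale {s:.2f}, boxes {default_boxes_at(0.5, 0.5, s)}")
```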

19,543 citations

Proceedings Article
07 Dec 2015
TL;DR: Ren et al. propose a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, enabling nearly cost-free region proposals.
Abstract: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2% mAP) and 2012 (70.4% mAP) using 300 proposals per image. Code is available at https://github.com/ShaoqingRen/faster_rcnn.
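The alternating optimization mentioned above follows a four-step schedule. The pseudo-code-level sketch below captures only that schedule; the train_* helpers and the Model class are hypothetical stubs standing in for full training loops.

```python
# Four-step alternating training so RPN and Fast R-CNN share features.
class Model:
    def __init__(self, backbone):
        self.backbone = backbone          # convolutional features to be shared
    def propose(self):
        return []                         # stub: would return region proposals

def train_rpn(init, freeze_shared=False):
    return Model(init)                    # stub for a full RPN training loop

def train_fast_rcnn(init, proposals, freeze_shared=False):
    return Model(init)                    # stub for a full Fast R-CNN loop

def alternating_training(imagenet_weights="vgg16-imagenet"):
    rpn = train_rpn(init=imagenet_weights)                    # step 1
    det = train_fast_rcnn(imagenet_weights, rpn.propose())    # step 2
    rpn = train_rpn(init=det.backbone, freeze_shared=True)    # step 3: share convs
    det = train_fast_rcnn(det.backbone, rpn.propose(),
                          freeze_shared=True)                 # step 4
    return det  # unified network: shared features + RPN + detection head
```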

13,674 citations

Book Chapter • DOI
06 Sep 2014
TL;DR: A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large convolutional network models; used in a diagnostic role, these visualizations help find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
Abstract: Large convolutional network models have recently demonstrated impressive classification performance on the ImageNet benchmark (Krizhevsky et al. [18]). However, there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution of different model layers. We show that our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on the Caltech-101 and Caltech-256 datasets.
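The paper's own technique is a deconvnet that projects activations back to pixel space. As a much lighter stand-in for "insight into intermediate feature layers", the sketch below simply captures an intermediate activation with a PyTorch forward hook and reports the most responsive channel; it is not the authors' method, and the VGG-16 backbone and layer index are arbitrary illustrative choices.

```python
# Capture and summarize an intermediate activation via a forward hook.
import torch
import torchvision.models as models

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

net = models.vgg16(weights=None).eval()      # randomly initialized, shapes only
net.features[10].register_forward_hook(save_activation("conv_mid"))

with torch.no_grad():
    net(torch.rand(1, 3, 224, 224))

fmap = activations["conv_mid"][0]            # (channels, H, W)
strongest = fmap.mean(dim=(1, 2)).argmax()   # channel with highest mean response
print(fmap.shape, "strongest channel:", int(strongest))
```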

12,783 citations