Open Access · Posted Content

Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

TLDR
Faster R-CNN as discussed by the authors proposes a Region Proposal Network (RPN) to generate high-quality region proposals, which are then used by Fast R-CNN for detection.
Abstract
State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.
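The RPN described above predicts object bounds and objectness scores at every feature-map position by regressing against a fixed set of reference boxes ("anchors") at multiple scales and aspect ratios. The sketch below illustrates the anchor-generation step only, not the authors' released code; the defaults (stride 16, three scales, three aspect ratios, giving 9 anchors per position) follow the paper's VGG-16 setup, and the function names are hypothetical.

```python
import numpy as np

def generate_anchors(base_size=16, scales=(8, 16, 32), ratios=(0.5, 1.0, 2.0)):
    """Generate the 9 reference anchors (3 scales x 3 aspect ratios)
    centered at the origin, as (x1, y1, x2, y2) boxes."""
    anchors = []
    for scale in scales:
        for ratio in ratios:
            # Hold area roughly constant per scale; ratio = height / width.
            w = base_size * scale * np.sqrt(1.0 / ratio)
            h = base_size * scale * np.sqrt(ratio)
            anchors.append([-w / 2.0, -h / 2.0, w / 2.0, h / 2.0])
    return np.array(anchors)

def shift_anchors(anchors, feat_h, feat_w, stride=16):
    """Tile the reference anchors over every position of a
    feat_h x feat_w feature map (stride = feature-map subsampling)."""
    xs = (np.arange(feat_w) + 0.5) * stride
    ys = (np.arange(feat_h) + 0.5) * stride
    cx, cy = np.meshgrid(xs, ys)
    shifts = np.stack([cx.ravel(), cy.ravel(), cx.ravel(), cy.ravel()], axis=1)
    # Broadcast: (positions, 1, 4) + (1, anchors, 4) -> every anchor at every cell.
    return (shifts[:, None, :] + anchors[None, :, :]).reshape(-1, 4)

anchors = generate_anchors()
all_anchors = shift_anchors(anchors, feat_h=38, feat_w=50)
print(anchors.shape)      # (9, 4)
print(all_anchors.shape)  # (17100, 4)
```

At test time, the RPN scores each of these anchors for objectness and refines their coordinates; the top-scoring ~300 refined boxes become the proposals consumed by the Fast R-CNN head.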


Citations
Book ChapterDOI

Transductive Semi-Supervised Deep Learning using Min-Max Features

TL;DR: The TSSDL method applies the transductive learning principle to DCNN training, introduces confidence levels on unlabeled image samples to overcome unreliable label estimates on outliers and uncertain samples, and develops a Min-Max Feature regularization that encourages the DCNN to learn feature descriptors with better between-class separability and within-class compactness.
Proceedings ArticleDOI

BirdNet: A 3D Object Detection Framework from LiDAR Information

TL;DR: A LiDAR-based 3D object detection pipeline is presented in this article, in which LiDAR information is projected into a novel cell encoding for a bird's-eye-view projection, and both an object's location on the ground plane and its heading are estimated by a convolutional neural network originally designed for image processing.
Journal ArticleDOI

Real-time Detection of Steel Strip Surface Defects Based on Improved YOLO Detection Network

TL;DR: Wang et al. improved the You Only Look Once (YOLO) network and made it fully convolutional, consisting of 27 convolution layers and providing an end-to-end solution for surface-defect detection on steel strip.
Proceedings ArticleDOI

WoodScape: A Multi-Task, Multi-Camera Fisheye Dataset for Autonomous Driving

TL;DR: The first extensive fisheye automotive dataset, WoodScape, named after Robert Wood, is released; it comprises four surround-view cameras and nine tasks, including segmentation, depth estimation, 3D bounding box detection, and soiling detection.
Posted Content

12-in-1: Multi-Task Vision and Language Representation Learning

TL;DR: This work develops a large-scale multi-task model trained jointly on 12 datasets from four broad categories of tasks, including visual question answering, caption-based image retrieval, grounding referring expressions, and multimodal verification, and shows that fine-tuning task-specific models from this model can lead to further improvements, achieving performance at or above the state of the art.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won 1st place on the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art classification performance, as discussed by the authors.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of convolutional network depth on accuracy in the large-scale image recognition setting and showed that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings ArticleDOI

Going deeper with convolutions

TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Journal ArticleDOI

ImageNet Large Scale Visual Recognition Challenge

TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) as mentioned in this paper is a benchmark in object category classification and detection on hundreds of object categories and millions of images, which has been run annually from 2010 to present, attracting participation from more than fifty institutions.