Open Access Journal Article

Focal Loss for Dense Object Detection

TL;DR
Focal loss as discussed by the authors focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training, which improves the accuracy of one-stage detectors.
Abstract
The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: https://github.com/facebookresearch/Detectron.
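For concreteness, the focal loss reshapes the cross entropy as FL(p_t) = -α_t (1 − p_t)^γ log(p_t), where p_t is the probability the model assigns to the true class. Below is a minimal NumPy sketch of the binary case using the γ = 2, α = 0.25 settings reported in the paper; the toy inputs and the mean reduction are illustrative assumptions, not the RetinaNet training code.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss on predicted foreground probabilities.

    p: predicted probability of the foreground class, shape (N,)
    y: binary ground-truth labels (1 = foreground, 0 = background), shape (N,)
    gamma: focusing parameter; gamma = 0 recovers (alpha-balanced) cross entropy
    alpha: weight on the foreground class
    """
    p = np.clip(p, 1e-7, 1.0 - 1e-7)
    p_t = np.where(y == 1, p, 1.0 - p)              # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    # (1 - p_t)^gamma down-weights well-classified (easy) examples
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))

# Easy negatives (p near 0 with label 0) contribute almost nothing to the loss,
# so the rare hard examples dominate training.
p = np.array([0.02, 0.05, 0.90, 0.60])
y = np.array([0, 0, 1, 0])
print(focal_loss(p, y))
```

With γ = 0 the modulating factor vanishes and the expression reduces to the usual α-balanced cross entropy.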


Citations
Proceedings Article

Categorical Depth Distribution Network for Monocular 3D Object Detection

TL;DR: Categorical Depth Distribution Network (CaDDN) as mentioned in this paper uses a predicted categorical depth distribution for each pixel to project rich contextual feature information to the appropriate depth interval in 3D space.
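As a rough illustration of the lift step described above, the snippet below weights per-pixel image features by a per-pixel categorical depth distribution to obtain a frustum feature volume. It is a toy sketch with made-up tensor sizes; CaDDN's subsequent frustum-to-voxel sampling and bird's-eye-view detection head are omitted.

```python
import torch

B, C, D, H, W = 1, 8, 10, 4, 6              # illustrative batch/feature/depth-bin sizes
features = torch.randn(B, C, H, W)          # per-pixel image features
depth_logits = torch.randn(B, D, H, W)      # predicted depth-bin scores per pixel
depth_probs = depth_logits.softmax(dim=1)   # categorical depth distribution per pixel

# An outer product at every pixel spreads each feature vector across depth bins
# according to its predicted depth distribution.
frustum = torch.einsum('bchw,bdhw->bcdhw', features, depth_probs)
print(frustum.shape)                        # torch.Size([1, 8, 10, 4, 6])
```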
Posted Content

MediaPipe Hands: On-device Real-time Hand Tracking

TL;DR: A real-time, on-device hand tracking pipeline that predicts the hand skeleton from a single RGB camera for AR/VR applications, built with MediaPipe, a framework for building cross-platform ML solutions.
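If the released Python package is installed, the legacy `mp.solutions.hands` API exposes this pipeline directly. The sketch below is a minimal usage example under that assumption; the blank frame and parameter values are illustrative stand-ins, not the pipeline described in the paper itself.

```python
import numpy as np
import mediapipe as mp

# A blank RGB frame as a stand-in for a real camera image (H x W x 3, uint8).
frame_rgb = np.zeros((480, 640, 3), dtype=np.uint8)

# The legacy "solutions" API wraps the palm-detection + landmark pipeline.
with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=2,
                              min_detection_confidence=0.5) as hands:
    results = hands.process(frame_rgb)
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            print(len(hand.landmark))   # 21 skeleton keypoints per detected hand
```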
Posted Content

Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning

TL;DR: This work shows that naive pseudo-labeling overfits to incorrect pseudo-labels due to so-called confirmation bias, and demonstrates that mixup augmentation and setting a minimum number of labeled samples per mini-batch are effective regularization techniques for reducing it.
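Of the two regularizers mentioned, mixup is easy to show in isolation: each batch is convexly combined with a shuffled copy of itself, in both inputs and (soft) labels. The snippet below is a generic NumPy sketch of mixup, not the paper's full semi-supervised training loop; the α value and array shapes are illustrative.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.4, rng=None):
    """Generic mixup: convexly combine a batch with a shuffled copy of itself.

    x: inputs, shape (N, ...); y: one-hot or soft labels, shape (N, C)
    alpha: Beta(alpha, alpha) parameter controlling interpolation strength
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]

# Toy usage: 4 "images" and their one-hot labels over 3 classes.
x = np.random.rand(4, 3, 32, 32)
y = np.eye(3)[[0, 1, 2, 1]]
x_mix, y_mix = mixup_batch(x, y)
print(x_mix.shape, y_mix.shape)
```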
Proceedings ArticleDOI

Segmentation-Driven 6D Object Pose Estimation

TL;DR: This paper introduces a segmentation-driven 6D pose estimation framework in which each visible part of an object contributes a local pose prediction in the form of 2D keypoint locations, and a predicted confidence measure is used to combine these pose candidates into a robust set of 3D-to-2D correspondences.
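The final step of such a pipeline, turning aggregated 3D-to-2D keypoint correspondences into a pose, is typically a RANSAC PnP solve. The sketch below simulates "predicted" 2D keypoints by projecting 3D model points with a known pose and then recovers that pose with OpenCV; the keypoint network and confidence-weighted aggregation from the paper are not shown, and all values are made up.

```python
import numpy as np
import cv2

# 3D keypoints on the object model (non-coplanar) and camera intrinsics.
object_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                       [1, 1, 0], [1, 0, 1]], dtype=np.float32)
K = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], dtype=np.float32)

# Simulate "network-predicted" 2D keypoints by projecting with a known pose.
rvec_gt = np.array([[0.1], [0.2], [0.3]], dtype=np.float32)
tvec_gt = np.array([[0.5], [-0.2], [5.0]], dtype=np.float32)
image_pts, _ = cv2.projectPoints(object_pts, rvec_gt, tvec_gt, K, None)

# RANSAC PnP turns the 3D-to-2D correspondences into a 6D pose (R, t).
ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_pts, image_pts, K, None)
print(ok, rvec.ravel(), tvec.ravel())   # recovers rvec_gt / tvec_gt
```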
Journal Article

CropDeep: The Crop Vision Dataset for Deep-Learning-Based Classification and Detection in Precision Agriculture.

TL;DR: The CropDeep species classification and detection dataset, consisting of 31,147 images with over 49,000 annotated instances from 31 classes, is presented, and the results suggest that the YOLOv3 network has good potential for agricultural detection tasks.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; these residual networks won 1st place in the ILSVRC 2015 classification task.
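The core idea is that each block learns a residual function F(x) and outputs F(x) + x through an identity shortcut. Below is a minimal PyTorch sketch of such a block; the layer sizes and the single channel count are illustrative, not the exact configuration from the paper.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block: the layers learn a residual F(x), and the
    block outputs F(x) + x, which makes very deep stacks easier to optimize."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)   # identity shortcut

block = ResidualBlock(16)
print(block(torch.randn(1, 16, 32, 32)).shape)   # same shape as the input
```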
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A large, deep convolutional neural network, as discussed by the authors, achieved state-of-the-art ImageNet classification performance; it consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
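A network with that layout (five convolutional layers, interleaved max pooling, three fully connected layers, a 1000-way output) ships with torchvision; the snippet below simply instantiates it and runs a dummy 224x224 image through it to show the 1000-class output. Using the torchvision implementation and that input size are assumptions for illustration, not the original training setup.

```python
import torch
from torchvision import models

net = models.alexnet()                 # untrained; load pretrained weights if desired
x = torch.randn(1, 3, 224, 224)        # one dummy 224x224 RGB image
logits = net(x)
print(logits.shape)                    # torch.Size([1, 1000]) -- the 1000-way classifier
```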
Proceedings Article

Histograms of oriented gradients for human detection

TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
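For reference, a HOG descriptor for a single detection window can be computed with scikit-image; the snippet below uses the commonly cited settings (9 orientation bins, 8x8-pixel cells, 2x2-cell blocks) on a random 128x64 window as a stand-in for a pedestrian crop. The parameter values and the use of scikit-image are assumptions, not the original implementation.

```python
import numpy as np
from skimage.feature import hog

# A toy 128x64 grayscale "detection window"; real use would crop windows from an image.
window = np.random.rand(128, 64)

# Typical HOG settings: 9 orientation bins, 8x8-pixel cells, 2x2-cell blocks.
descriptor = hog(window, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2), block_norm='L2-Hys')
print(descriptor.shape)   # one flat feature vector per window, fed to a linear classifier
```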
Book Chapter

Microsoft COCO: Common Objects in Context

TL;DR: A new dataset with the goal of advancing the state of the art in object recognition by placing it in the context of the broader question of scene understanding, built by gathering images of complex everyday scenes containing common objects in their natural context.
Proceedings Article

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
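Because every layer is convolutional (or pooling), the same network accepts inputs of different spatial sizes and returns per-pixel class scores at the matching size. The toy PyTorch sketch below illustrates just that property; it is not the architecture from the paper, which adapts pretrained classification networks and adds skip connections. The layer sizes and 21-class output are illustrative, and even-sized inputs keep the down/upsampling exact.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Toy fully convolutional network: only conv/pool layers, so any input
    size is accepted and the per-pixel class scores match the input size."""

    def __init__(self, num_classes=21):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                               # downsample by 2
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(32, num_classes, 1)    # 1x1 conv "head"
        self.upsample = nn.ConvTranspose2d(num_classes, num_classes, 2, stride=2)

    def forward(self, x):
        return self.upsample(self.classifier(self.features(x)))

net = TinyFCN()
for size in [(64, 64), (96, 128)]:           # arbitrary input sizes
    out = net(torch.randn(1, 3, *size))
    print(out.shape)                         # (1, 21, H, W) matching each input
```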