Open Access Journal ArticleDOI

Focal Loss for Dense Object Detection

TLDR
Focal loss as discussed by the authors focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training, which improves the accuracy of one-stage detectors.
Abstract
The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: https://github.com/facebookresearch/Detectron .
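As a rough illustration of the loss described above, the sketch below shows a binary focal loss in PyTorch: the standard cross entropy term is scaled by (1 - p_t)^gamma so that well-classified (easy) examples contribute little to the total loss. The alpha and gamma defaults follow the values reported in the paper, but the function name, shapes, and reduction are illustrative; this is not the reference Detectron implementation.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss sketch: cross entropy scaled by (1 - p_t)^gamma.

    logits:  raw predictions, shape (N,)
    targets: 0./1. float labels, shape (N,)
    """
    # Standard per-example binary cross entropy (no reduction yet).
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    # p_t is the model's estimated probability of the true class.
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)
    # Down-weight easy examples by (1 - p_t)^gamma, with class balancing alpha_t.
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    loss = alpha_t * (1 - p_t) ** gamma * ce
    return loss.mean()
```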


Citations
Posted Content

Opportunities and Challenges of Deep Learning Methods for Electrocardiogram Data: A Systematic Review

TL;DR: A systematic review of deep learning methods for electrocardiogram (ECG) data from both modeling and application perspectives is presented in this paper, which highlights existing challenges and problems to identify potential future research directions.
Proceedings ArticleDOI

TubeTK: Adopting Tubes to Track Multi-Object in a One-Step Training Model

TL;DR: In this article, the authors propose TubeTK, a concise end-to-end model that requires only one-step training, introducing the "bounding-tube" to indicate the temporal-spatial locations of objects in a short video clip.
Journal ArticleDOI

Cross-Scale Feature Fusion for Object Detection in Optical Remote Sensing Images

TL;DR: This work proposes an end-to-end cross-scale feature fusion (CSFF) framework, implemented within the Faster region-based CNN framework, which effectively improves object detection accuracy in optical remote sensing images.
Journal ArticleDOI

Mapping Landslides on EO Data: Performance of Deep Learning Models vs. Traditional Machine Learning Models

TL;DR: A modified U-Net model is introduced for semantic segmentation of landslides at a regional scale from EO data using ResNet34 blocks for feature extraction and is compared with conventional pixel-based and object-based methods.
Posted Content

Long-tailed Recognition by Routing Diverse Distribution-Aware Experts

TL;DR: RoutIng Diverse Experts (RIDE) aims to reduce both the bias and the variance of a long-tailed classifier and significantly outperforms state-of-the-art methods by 5% to 7% on benchmarks including CIFAR100-LT, ImageNet-LT, and iNaturalist.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
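For context, a minimal PyTorch sketch of the residual idea summarized above: the block learns a residual F(x) that is added back to its input through an identity shortcut. Channel counts and layer choices here are illustrative, not the exact ResNet configuration.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = F(x) + x, so the layers learn a residual."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut eases optimization of deep nets
```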
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art classification performance, as discussed by the authors.
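The architecture described in this summary can be sketched roughly as follows in PyTorch, assuming 227x227 RGB inputs; the filter counts follow the commonly cited configuration, and the sketch omits details such as local response normalization and dropout.

```python
import torch.nn as nn

# Five convolutional layers, some followed by max pooling, then three
# fully connected layers ending in a 1000-way classifier (softmax applied
# by the loss at training time). Dimensions assume 3x227x227 inputs.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(), nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),  # 1000 ImageNet classes
)
```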
Proceedings ArticleDOI

Histograms of oriented gradients for human detection

TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
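A short usage sketch of HOG feature extraction with scikit-image, using the commonly cited 9-orientation, 8x8-pixel-cell, 2x2-cell-block configuration; this illustrates the descriptor only, not the paper's full detection pipeline.

```python
import numpy as np
from skimage.feature import hog

# Placeholder for a 128x64 grayscale detection window (e.g. a pedestrian crop).
image = np.random.rand(128, 64)

# One flat HOG descriptor per window: gradient-orientation histograms over
# 8x8 cells, normalized within overlapping 2x2-cell blocks.
features = hog(
    image,
    orientations=9,
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
    block_norm="L2-Hys",
    feature_vector=True,
)
print(features.shape)  # (3780,) for a 64x128 window with these settings
```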
Book ChapterDOI

Microsoft COCO: Common Objects in Context

TL;DR: A new dataset is presented with the goal of advancing the state-of-the-art in object recognition by placing object recognition in the context of the broader question of scene understanding, achieved by gathering images of complex everyday scenes containing common objects in their natural context.
Proceedings ArticleDOI

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
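A toy sketch of the fully convolutional idea: with no fully connected layers, the network accepts inputs of arbitrary spatial size and produces a correspondingly sized per-pixel score map (upsampled back to the input resolution here with bilinear interpolation). The layer sizes and class count are illustrative, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Toy fully convolutional segmenter: every layer is convolutional, so any
    input size yields a matching-size per-pixel score map."""
    def __init__(self, num_classes=21):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # downsample by 2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, num_classes, 1)  # 1x1 conv "head"

    def forward(self, x):
        h, w = x.shape[-2:]
        scores = self.classifier(self.features(x))
        # Upsample coarse scores back to the input resolution.
        return nn.functional.interpolate(scores, size=(h, w),
                                         mode="bilinear", align_corners=False)

out = TinyFCN()(torch.randn(1, 3, 200, 300))
print(out.shape)  # torch.Size([1, 21, 200, 300])
```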