Open Access · Posted Content

You Only Look Once: Unified, Real-Time Object Detection

TLDR
YOLO predicts bounding boxes and class probabilities directly from full images in one evaluation; because the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance and achieves state-of-the-art performance.
Abstract
We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is far less likely to predict false detections where nothing exists. Finally, YOLO learns very general representations of objects. It outperforms all other detection methods, including DPM and R-CNN, by a wide margin when generalizing from natural images to artwork on both the Picasso Dataset and the People-Art Dataset.
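To make the single-evaluation idea concrete, below is a minimal sketch (not the authors' released code) of how a YOLO-style detector frames detection as regression: one forward pass maps the whole image to an S x S x (B*5 + C) tensor, which is then decoded into boxes, classes, and confidences. The grid size S=7, B=2 boxes per cell, C=20 classes, and the `backbone` placeholder are illustrative assumptions, not values taken from this page.

```python
# Minimal sketch of YOLO-style single-pass prediction (illustrative only).
# Assumptions: S=7 grid, B=2 boxes per cell, C=20 classes; `backbone` is a
# stand-in for the convolutional network, not the authors' architecture.
import numpy as np

S, B, C = 7, 2, 20  # grid cells per side, boxes per cell, class count

def backbone(image: np.ndarray) -> np.ndarray:
    """Placeholder for the network: maps an input image to an
    S x S x (B*5 + C) prediction tensor in a single evaluation."""
    rng = np.random.default_rng(0)
    return rng.random((S, S, B * 5 + C))

def decode(pred: np.ndarray, conf_thresh: float = 0.5):
    """Turn the regression output into (box, class, score) detections.
    Each cell predicts B boxes (x, y, w, h, confidence) plus C class scores."""
    detections = []
    for i in range(S):
        for j in range(S):
            cell = pred[i, j]
            class_probs = cell[B * 5:]
            cls = int(np.argmax(class_probs))
            for b in range(B):
                x, y, w, h, conf = cell[b * 5:(b + 1) * 5]
                score = conf * class_probs[cls]  # class-specific confidence
                if score > conf_thresh:
                    detections.append(((x, y, w, h), cls, float(score)))
    return detections

image = np.zeros((448, 448, 3), dtype=np.float32)
print(len(decode(backbone(image))))  # number of boxes above the threshold
```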



Citations
Journal ArticleDOI

COP: customized correlation-based Filter level pruning method for deep CNN compression

TL;DR: Proposes COP, a customized correlation-based filter-level pruning method that addresses the high redundancy and sub-optimal pruning of existing filter-level pruning approaches for deep CNN compression.
Journal ArticleDOI

Image-based fluid data assimilation with deep neural network

TL;DR: Presents a data assimilation approach that uses an image-processing deep neural network (DNN) as a likelihood function; applied to vortex-induced vibration (VIV) images, it shows that the amplitude and frequency of the VIV can be approximated from experimental images.
Journal ArticleDOI

A Real-Time Method to Estimate Speed of Object Based on Object Detection and Optical Flow Calculation

TL;DR: The method estimates the speed of multiple objects in real time using only a standard camera, even while the camera itself is moving, with error low enough for applications such as autonomous driving and robot vision.
Proceedings ArticleDOI

DSCnet: Replicating Lidar Point Clouds With Deep Sensor Cloning

TL;DR: Introduces Deep Sensor Cloning (DSC), which uses CNNs in conjunction with inexpensive sensors to replicate the 3D point clouds produced by expensive lidar sensors.
References
Proceedings ArticleDOI

Histograms of oriented gradients for human detection

TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Journal ArticleDOI

ImageNet Large Scale Visual Recognition Challenge

TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a benchmark for object category classification and detection spanning hundreds of object categories and millions of images; it has run annually since 2010 and attracts participation from more than fifty institutions.
Posted Content

Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

TL;DR: Faster R-CNN introduces a Region Proposal Network (RPN) that generates high-quality region proposals, which are then used by Fast R-CNN for detection.
Proceedings ArticleDOI

Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation

TL;DR: R-CNN combines CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training on an auxiliary task followed by domain-specific fine-tuning yields a significant performance boost.
Proceedings ArticleDOI

Object recognition from local scale-invariant features

TL;DR: Experimental results show that robust object recognition can be achieved in cluttered, partially occluded images with a computation time of under 2 seconds.