Open Access Book Chapter (DOI)

Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition

TL;DR: This work equips the networks with another pooling strategy, “spatial pyramid pooling”, to eliminate the fixed-size input requirement, and develops a new network structure, called SPP-net, which can generate a fixed-length representation regardless of image size/scale.
Abstract
Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.
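Below is a minimal sketch of how such a spatial pyramid pooling layer can be implemented (PyTorch is used here; the pyramid levels and channel count are illustrative choices, not the paper's exact configuration). Each level divides the feature map into a fixed grid of bins and max-pools inside each bin, so the output length is independent of the input size.

```python
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Pool a (N, C, H, W) feature map into a fixed-length vector.

    Each pyramid level divides the map into level x level bins and
    max-pools within each bin, so the output length depends only on
    the number of channels and the pyramid levels, not on H or W.
    """
    pooled = []
    for level in levels:
        bins = F.adaptive_max_pool2d(feature_map, output_size=(level, level))
        pooled.append(torch.flatten(bins, start_dim=1))   # (N, C * level * level)
    return torch.cat(pooled, dim=1)

# Feature maps from images of different sizes yield vectors of the same length.
for h, w in [(13, 13), (20, 31)]:
    out = spatial_pyramid_pool(torch.randn(1, 256, h, w))
    print(out.shape)   # torch.Size([1, 5376]) in both cases: 256 * (1 + 4 + 16)
```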



Citations
Posted Content

Deep Feature Flow for Video Recognition

TL;DR: Deep feature flow, a fast and accurate framework for video recognition, runs the expensive convolutional sub-network only on sparse key frames and propagates their deep feature maps to other frames via a flow field; it achieves a significant speedup because flow computation is relatively fast.
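A rough sketch of the propagation step described above, assuming bilinear warping of key-frame features with a flow field (PyTorch; the flow field below is a zero placeholder, and the scale-field adjustment used in the actual method is omitted):

```python
import torch
import torch.nn.functional as F

def warp_features(key_feat, flow):
    """Warp key-frame features (N, C, H, W) to the current frame.

    flow: (N, 2, H, W) displacements in feature-map pixels, giving for each
    current-frame location where to sample in the key-frame feature map.
    """
    n, _, h, w = key_feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys)).float().unsqueeze(0)      # (1, 2, H, W)
    coords = base + flow                                   # absolute sample positions
    # grid_sample expects coordinates normalised to [-1, 1], ordered (x, y).
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                   # (N, H, W, 2)
    return F.grid_sample(key_feat, grid, align_corners=True)

key_feat = torch.randn(1, 256, 32, 32)     # computed once on the key frame
flow = torch.zeros(1, 2, 32, 32)           # placeholder flow: identity warp
curr_feat = warp_features(key_feat, flow)  # reused on the current frame
```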
Proceedings Article (DOI)

Taking a deeper look at pedestrians

TL;DR: This paper analyses small and big convnets, their architectural choices, parameters, and the influence of different training data, including pretraining on surrogate tasks, and presents the best convnet detectors on the Caltech and KITTI datasets.
Posted Content

FSSD: Feature Fusion Single Shot Multibox Detector.

TL;DR: This paper proposes FSSD (Feature Fusion Single Shot Multibox Detector), an enhanced SSD with a novel and lightweight feature fusion module that improves performance significantly over SSD with only a small drop in speed.
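A hedged sketch of one plausible form of such a feature fusion module, assuming features from several backbone stages are resized to a common resolution, concatenated, and compressed with a 1×1 convolution (all sizes and channel counts below are made up for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusion(nn.Module):
    """Fuse multi-scale feature maps into one map (illustrative only)."""

    def __init__(self, in_channels=(512, 1024, 512), out_channels=256, size=(38, 38)):
        super().__init__()
        self.size = size
        self.fuse = nn.Conv2d(sum(in_channels), out_channels, kernel_size=1)

    def forward(self, feats):
        resized = [F.interpolate(f, size=self.size, mode="bilinear", align_corners=False)
                   for f in feats]
        return self.fuse(torch.cat(resized, dim=1))

feats = [torch.randn(1, c, s, s) for c, s in [(512, 38), (1024, 19), (512, 10)]]
fused = FeatureFusion()(feats)     # -> (1, 256, 38, 38)
```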
Journal Article (DOI)

Stacked Convolutional Denoising Auto-Encoders for Feature Representation

TL;DR: An unsupervised deep network, called stacked convolutional denoising auto-encoders, is proposed that can map images to hierarchical representations without any label information and demonstrates classification performance superior to state-of-the-art unsupervised networks.
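A minimal single-layer sketch of the convolutional denoising auto-encoder idea, assuming Gaussian input corruption and a reconstruction loss; stacking would repeat this training layer by layer on the learned features (the architecture details below are assumptions):

```python
import torch
import torch.nn as nn

class ConvDenoisingAE(nn.Module):
    """One convolutional denoising auto-encoder layer (illustrative)."""

    def __init__(self, in_ch=1, hidden_ch=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(in_ch, hidden_ch, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(hidden_ch, in_ch, 3, padding=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvDenoisingAE()
clean = torch.rand(8, 1, 28, 28)
noisy = clean + 0.2 * torch.randn_like(clean)       # corrupt the input
loss = nn.functional.mse_loss(model(noisy), clean)  # reconstruct the clean input
loss.backward()
```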
Proceedings Article (DOI)

RON: Reverse Connection with Objectness Prior Networks for Object Detection

TL;DR: RON detects objects on multiple levels of CNN feature maps using reverse connections, and reduces the object search space by jointly optimizing the reverse connections, the objectness prior, and the object detector with a multi-task loss function.
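A hedged sketch of what a single reverse connection block could look like, assuming a deeper feature map is upsampled by a deconvolution and added to a projection of the preceding, higher-resolution map (channel sizes are illustrative):

```python
import torch
import torch.nn as nn

class ReverseConnection(nn.Module):
    """Combine a deep feature map with the preceding, higher-resolution one."""

    def __init__(self, deep_ch, shallow_ch, out_ch=512):
        super().__init__()
        self.upsample = nn.ConvTranspose2d(deep_ch, out_ch, kernel_size=2, stride=2)
        self.lateral = nn.Conv2d(shallow_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, deep, shallow):
        return torch.relu(self.upsample(deep) + self.lateral(shallow))

deep = torch.randn(1, 512, 10, 10)       # coarser, more semantic features
shallow = torch.randn(1, 512, 20, 20)    # finer features from an earlier layer
fused = ReverseConnection(512, 512)(deep, shallow)   # -> (1, 512, 20, 20)
```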
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieves state-of-the-art performance on ImageNet classification.
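A minimal sketch of the architecture as summarized above, with five convolutional layers, interleaved max-pooling, and three fully-connected layers ending in 1000 classes (the exact strides and paddings below are assumptions; torchvision.models.alexnet provides a reference implementation):

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),        # 1000 classes; softmax applied via the loss
)
logits = net(torch.randn(1, 3, 224, 224))   # -> (1, 1000)
```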
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
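For reference, 16- and 19-layer configurations of this kind are available in torchvision, which makes the depth comparison easy to reproduce (weights=None builds untrained networks):

```python
import torch
from torchvision.models import vgg16, vgg19

# Both networks use stacks of 3x3 convolutions; they differ only in depth.
shallow, deep = vgg16(weights=None), vgg19(weights=None)
x = torch.randn(1, 3, 224, 224)
print(shallow(x).shape, deep(x).shape)   # both -> torch.Size([1, 1000])
```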
Proceedings Article (DOI)

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal Article (DOI)

Distinctive Image Features from Scale-Invariant Keypoints

TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
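A short usage example of this kind of keypoint extraction and matching with OpenCV's SIFT implementation and Lowe's ratio test (the image paths are placeholders; depending on the OpenCV build, opencv-contrib-python may be required):

```python
import cv2

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio test (Lowe's criterion) keeps only distinctive matches.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} distinctive matches")
```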
Journal Article (DOI)

LIBSVM: A library for support vector machines

TL;DR: Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
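A brief example touching the listed issues (multiclass classification, probability estimates, parameter selection) through scikit-learn's SVC, which is built on LIBSVM:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Parameter selection over C and gamma; probability=True enables Platt scaling.
search = GridSearchCV(SVC(probability=True),
                      {"C": [0.1, 1, 10], "gamma": ["scale", 0.1, 1]})
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
print(search.predict_proba(X_test[:3]))     # per-class probability estimates
```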