Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
pp. 346–361
TLDR
This work equips the networks with another pooling strategy, "spatial pyramid pooling", to eliminate the above requirement, and develops a new network structure, called SPP-net, which can generate a fixed-length representation regardless of image size/scale.
Abstract
Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is "artificial" and may hurt the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with a more principled pooling strategy, "spatial pyramid pooling", to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.
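The core idea of the abstract is that pooling a feature map into a fixed grid of bins at several pyramid levels yields an output length that depends only on the channel count and the bin counts, not on the input resolution. The following is a minimal NumPy sketch of that idea, not the authors' implementation; the pyramid levels `(1, 2, 4)` and the use of max pooling with evenly spaced bin edges are illustrative assumptions:

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a C x H x W feature map into a fixed-length vector.

    For each pyramid level n, the map is split into an n x n grid of
    bins and each bin is max-pooled, so the output length is
    C * sum(n * n for n in levels) regardless of H and W.
    """
    c, h, w = feature_map.shape
    pooled = []
    for n in levels:
        # Bin edges chosen so the n bins cover the whole map even
        # when H or W is not divisible by n (an assumption here;
        # other binning schemes are possible).
        h_edges = np.linspace(0, h, n + 1).astype(int)
        w_edges = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                bin_ = feature_map[:, h_edges[i]:h_edges[i + 1],
                                      w_edges[j]:w_edges[j + 1]]
                pooled.append(bin_.max(axis=(1, 2)))
    return np.concatenate(pooled)

# Two inputs of different spatial sizes yield vectors of equal length:
v1 = spatial_pyramid_pool(np.random.rand(256, 13, 13))
v2 = spatial_pyramid_pool(np.random.rand(256, 10, 7))
assert v1.shape == v2.shape == (256 * (1 + 4 + 16),)
```

This is what lets a network with such a layer accept arbitrary image sizes: the fully-connected layers after it always see the same input dimension.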
Citations
Journal ArticleDOI
BING: Binarized normed gradients for objectness estimation at 300fps
TL;DR: To improve the localization quality of the proposals while maintaining efficiency, a novel fast segmentation method is proposed, and its effectiveness for improving BING's localization performance is demonstrated when used in multi-thresholding straddling expansion (MTSE) post-processing.
Posted Content
PIXOR: Real-time 3D Object Detection from Point Clouds
TL;DR: PIXOR as discussed by the authors is a proposal-free, single-stage detector that outputs oriented 3D object estimates decoded from pixel-wise neural network predictions, which is designed to balance high accuracy and real-time efficiency.
Proceedings ArticleDOI
Deep learning features at scale for visual place recognition
Zetao Chen, Adam Jacobson, Niko Sünderhauf, Ben Upcroft, Lingqiao Liu, Chunhua Shen, Ian Reid, Michael Milford
TL;DR: This paper trains, at large scale, two CNN architectures for the specific place recognition task and employs a multi-scale feature encoding method to generate condition- and viewpoint-invariant features.
Posted Content
Understanding image representations by measuring their equivariance and equivalence
Karel Lenc, Andrea Vedaldi
TL;DR: In this article, the authors investigate three key mathematical properties of representations: equivariance, invariance, and equivalence, and apply these properties to CNNs to reveal insightful aspects of their structure.
Posted Content
Cross Modal Distillation for Supervision Transfer
TL;DR: This work uses learned representations from a large labeled modality as supervisory signal for training representations for a new unlabeled paired modality and can be used as a pre-training procedure for new modalities with limited labeled data.
References
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: State-of-the-art ImageNet classification performance was achieved with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Proceedings ArticleDOI
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal ArticleDOI
Distinctive Image Features from Scale-Invariant Keypoints
TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
Journal ArticleDOI
LIBSVM: A library for support vector machines
Chih-Chung Chang, Chih-Jen Lin
TL;DR: Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.