Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
pp. 346–361
TLDR
This work equips the networks with another pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement, and develops a new network structure, called SPP-net, which can generate a fixed-length representation regardless of image size/scale.
Abstract
Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.
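The fixed-length property described in the abstract can be sketched in a few lines: for each pyramid level n, the feature map's spatial extent is divided into an n×n grid and each bin is max-pooled, so the output length depends only on the channel count and the pyramid levels, not on the input size. The pyramid levels (1, 2, 4) below are illustrative, not the exact configuration from the paper, and `spatial_pyramid_pool` is a hypothetical helper name:

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map into a fixed-length vector.

    For each level n, the H x W extent is split into an n x n grid and
    each bin is max-pooled, so the output has length
    C * sum(n * n for n in levels) regardless of H and W.
    """
    c, h, w = feature_map.shape
    parts = []
    for n in levels:
        for i in range(n):
            # Bin edges: floor for the start, ceiling for the end, so the
            # bins tile the map even when H or W is not divisible by n.
            y0 = (i * h) // n
            y1 = -(-(i + 1) * h // n)  # ceiling division
            for j in range(n):
                x0 = (j * w) // n
                x1 = -(-(j + 1) * w // n)
                parts.append(feature_map[:, y0:y1, x0:x1].max(axis=(1, 2)))
    return np.concatenate(parts)

# Feature maps of different spatial sizes yield the same output length:
# 256 * (1 + 4 + 16) = 5376 values in both cases.
fmap_a = np.random.rand(256, 13, 13)
fmap_b = np.random.rand(256, 10, 7)
va = spatial_pyramid_pool(fmap_a)
vb = spatial_pyramid_pool(fmap_b)
```

Because the pooled vector has a fixed length, it can feed the fully-connected layers of a classifier no matter what size image produced the feature map, which is exactly the constraint the abstract says SPP-net removes.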
Citations
Proceedings ArticleDOI
Malware Detection with Malware Images using Deep Learning Techniques
Ke He, Dong Seong Kim +1 more
TL;DR: A malware detection system is designed that transforms malware files into image representations and classifies them with a CNN; results show that a naive SPP implementation is impractical due to memory constraints and that greyscale imaging is effective against redundant API injection.
Journal ArticleDOI
Exploiting spatial relation for fine-grained image classification
Lei Qi, Xiaoqiang Lu, Xuelong Li +2 more
TL;DR: Experimental results show that the classification accuracy of the proposed method reaches 85.5% on CUB-200-2011 and 86.9% on FGVC-Aircraft, clearly exceeding the comparison methods.
Posted Content
Deep Recurrent Regression for Facial Landmark Detection
TL;DR: A novel end-to-end deep architecture for facial landmark detection is proposed, based on a deep convolutional and deconvolutional network followed by carefully designed recurrent network structures.
Journal ArticleDOI
Data-Driven Based Tiny-YOLOv3 Method for Front Vehicle Detection Inducing SPP-Net
TL;DR: A data-driven forward vehicle detection algorithm based on an improved Tiny-YOLOv3 is proposed: a spatial pyramid pooling module is added to increase the number of feature channels and strengthen feature extraction, and the optimal detection model is obtained by multi-scale training of the improved network.
Posted Content
Zero-Shot Object Detection by Hybrid Region Embedding
TL;DR: In this paper, a convex combination of embeddings are used in conjunction with a detection framework to solve the zero-shot object detection (ZSD) problem, where no visual training data is available for some of the target object classes.
References
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A large deep convolutional neural network, consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art performance on ImageNet classification.
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Proceedings ArticleDOI
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal ArticleDOI
Distinctive Image Features from Scale-Invariant Keypoints
TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
Journal ArticleDOI
LIBSVM: A library for support vector machines
Chih-Chung Chang, Chih-Jen Lin
TL;DR: Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.