Open Access Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
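The case for very small filters can be made with a back-of-the-envelope calculation in the spirit of the paper's own argument: a stack of three 3x3, stride-1 convolution layers has the same effective receptive field as a single 7x7 layer, but with fewer weights. A minimal sketch (channel width is a hypothetical choice, biases ignored):

```python
# Illustrative sketch, not code from the paper's released models: why stacks
# of 3x3 filters are attractive. Stacking n 3x3, stride-1 conv layers yields
# the receptive field of one (2n+1)x(2n+1) filter with fewer parameters,
# assuming C input and C output channels and no biases.

def receptive_field(num_3x3_layers: int) -> int:
    """Effective receptive field of a stack of 3x3, stride-1 convolutions."""
    return 2 * num_3x3_layers + 1

def conv_params(kernel: int, channels: int) -> int:
    """Weight count of one kernel-x-kernel convolution with C -> C channels."""
    return kernel * kernel * channels * channels

C = 256  # hypothetical channel width
stack = 3 * conv_params(3, C)                 # three 3x3 layers: 27 * C^2
single = conv_params(receptive_field(3), C)   # one 7x7 layer:   49 * C^2

print(receptive_field(3))  # 7
print(stack, single)       # 1769472 3211264
```

The stacked version also interleaves extra non-linearities between the layers, which the paper credits with making the decision function more discriminative.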
Citations
Proceedings Article (DOI)
The Visual Object Tracking VOT2017 Challenge Results
Matej Kristan, Ales Leonardis, Jiri Matas, Michael Felsberg, Roman Pflugfelder, Luka Čehovin Zajc, Tomas Vojir, Gustav Häger, Alan Lukezic, Abdelrahman Eldesokey, Gustavo Fernandez, Alvaro Garcia-Martin, Andrej Muhič, Alfredo Petrosino, Alireza Memarmoghadam, Andrea Vedaldi, Antoine Manzanera, Antoine Tran, A. Aydin Alatan, Bogdan Mocanu, Boyu Chen, Chang Huang, Changsheng Xu, Chong Sun, Dalong Du, David Zhang, Dawei Du, Deepak Mishra, Erhan Gundogdu, Erik Velasco-Salido, Fahad Shahbaz Khan, Francesco Battistone, Gorthi R. K. Sai Subrahmanyam, Goutam Bhat, Guan Huang, Guilherme Sousa Bastos, Guna Seetharaman, Hongliang Zhang, Houqiang Li, Huchuan Lu, Isabela Drummond, Jack Valmadre, Jae-chan Jeong, Jaeil Cho, Jae-Yeong Lee, Jana Noskova, Jianke Zhu, Jin Gao, Jingyu Liu, Ji-Wan Kim, João F. Henriques, José M. Martínez, Junfei Zhuang, Junliang Xing, Junyu Gao, Kai Chen, Kannappan Palaniappan, Karel Lebeda, Ke Gao, Kris M. Kitani, Lei Zhang, Lijun Wang, Lingxiao Yang, Longyin Wen, Luca Bertinetto, Mahdieh Poostchi, Martin Danelljan, Matthias Mueller, Mengdan Zhang, Ming-Hsuan Yang, Nianhao Xie, Ning Wang, Ondrej Miksik, Payman Moallem, Pallavi Venugopal M, Pedro Senna, Philip H. S. Torr, Qiang Wang, Qifeng Yu, Qingming Huang, Rafael Martin-Nieto, Richard Bowden, Risheng Liu, Ruxandra Tapu, Simon Hadfield, Siwei Lyu, Stuart Golodetz, Sunglok Choi, Tianzhu Zhang, Titus Zaharia, Vincenzo Santopietro, Wei Zou, Weiming Hu, Wenbing Tao, Wenbo Li, Wengang Zhou, Xianguo Yu, Xiao Bian, Yang Li, Yifan Xing, Yingruo Fan, Zheng Zhu, Zhipeng Zhang, Zhiqun He
TL;DR: The Visual Object Tracking challenge VOT2017 is the fifth annual tracker benchmarking activity organized by the VOT initiative; results of 51 trackers are presented, many of them state-of-the-art methods published at major computer vision conferences or in journals in recent years.
Proceedings Article (DOI)
Visual Translation Embedding Network for Visual Relation Detection
TL;DR: Zhang et al. propose a Visual Translation Embedding network (VTransE) for visual relation detection, which places objects in a low-dimensional relation space where a relation can be modeled as a simple vector translation, i.e., subject + predicate ≈ object.
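The translation-embedding idea can be illustrated with a toy sketch (the embeddings, object names, and scoring function below are invented for illustration, not taken from VTransE): a relation is plausible when the subject vector plus the predicate vector lands near the object vector.

```python
# Hypothetical toy sketch of translation embeddings for relations:
# subject + predicate ~ object, scored by squared Euclidean distance
# (lower distance means a more plausible relation). All vectors invented.

def translate_score(subject, predicate, obj):
    """Squared distance ||s + p - o||^2 between translated subject and object."""
    return sum((s + p - o) ** 2 for s, p, o in zip(subject, predicate, obj))

person = [1.0, 0.0]
ride = [0.0, 1.0]
bike = [1.0, 1.0]
dog = [0.5, 0.0]

print(translate_score(person, ride, bike))  # 0.0  -> "person rides bike" plausible
print(translate_score(person, ride, dog))   # 1.25 -> "person rides dog" less so
```

In the actual model the embeddings are learned from image features; the sketch only shows the geometry of the scoring rule.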
Journal Article
Visual Dialog
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Stefan Lee, Jose M. F. Moura, Devi Parikh, Dhruv Batra
TL;DR: The authors introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Given an image, a dialog history, and a question about the image, the agent has to ground the question in the image, infer context from the history, and answer the question accurately.
Posted Content
Neural Module Networks
TL;DR: The authors decompose questions into their linguistic substructures and use these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.) for visual question answering.
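The compositional idea can be sketched in miniature (module names, the parse format, and the scene representation below are all invented for illustration; the real modules are neural network fragments operating on image features): a parsed question selects reusable modules that are assembled into a small pipeline.

```python
# Hypothetical toy sketch of dynamically assembling reusable modules from a
# question's parse. Real neural module networks compose differentiable
# sub-networks; here each "module" is just a plain function over a toy scene.

MODULES = {
    "find_color": lambda scene, obj: scene.get(obj, "unknown"),
    "count": lambda scene, obj: sum(1 for o in scene if o == obj),
}

def assemble(parse):
    """Instantiate a pipeline from a parse like ('find_color', 'dog')."""
    op, arg = parse
    module = MODULES[op]
    return lambda scene: module(scene, arg)

scene = {"dog": "brown", "car": "red"}
ask = assemble(("find_color", "dog"))
print(ask(scene))  # brown
```

The point of the design is reuse: the same "recognize dogs" component can appear in many differently structured questions.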
Journal Article (DOI)
Deep Learning Localizes and Identifies Polyps in Real Time With 96% Accuracy in Screening Colonoscopy.
Gregor Urban, Priyam V. Tripathi, Talal Alkayali, Mohit Mittal, Farid Jalali, William E. Karnes, Pierre Baldi
TL;DR: The authors test the ability of computer-assisted image analysis using convolutional neural networks (CNNs, a deep learning model for image analysis) to improve polyp detection, a surrogate of the adenoma detection rate (ADR); the approach could increase the ADR and decrease interval colorectal cancers, but requires validation in large multicenter trials.
References
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A deep convolutional neural network, consisting of five convolutional layers (some followed by max-pooling layers) and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art performance on large-scale image classification.
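The final 1000-way softmax mentioned above converts the network's class logits into a probability distribution over the 1000 ImageNet categories. A minimal sketch (the logits here are random placeholders, not real network outputs):

```python
# Minimal sketch of a 1000-way softmax output layer. The logits are random
# stand-ins for the activations a network's last fully-connected layer
# would produce.
import math
import random

def softmax(logits):
    """Numerically stable softmax: subtract the max logit before exponentiating."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

random.seed(0)
logits = [random.gauss(0.0, 1.0) for _ in range(1000)]
probs = softmax(logits)

print(len(probs))                    # 1000
print(abs(sum(probs) - 1.0) < 1e-9)  # True: probabilities sum to one
```

Subtracting the maximum logit before exponentiating avoids overflow without changing the result, which is the standard way softmax is implemented.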
Proceedings Article (DOI)
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called "ImageNet" is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than existing image datasets.
Proceedings Article (DOI)
Going deeper with convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich
TL;DR: The Inception architecture is a deep convolutional neural network that achieved the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).