Vadim Kantorov
Researcher at the French Institute for Research in Computer Science and Automation (Inria)
Publications - 5
Citations - 683
Vadim Kantorov is an academic researcher at the French Institute for Research in Computer Science and Automation (Inria). His research focuses on object detection and feature extraction. He has an h-index of 4 and has co-authored 5 publications receiving 594 citations.
Papers
Proceedings Article
Efficient Feature Extraction, Encoding, and Classification for Action Recognition
Vadim Kantorov, Ivan Laptev
TL;DR: This work develops highly efficient video features using motion information already available in compressed video, speeding up video feature extraction, feature encoding, and action classification by two orders of magnitude at the cost of a minor reduction in recognition accuracy.
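A minimal sketch of the idea behind this speed-up, assuming motion vectors have already been decoded from the compressed stream (e.g. via ffmpeg's `+export_mvs` flag); the histogram encoding below is a hypothetical, simplified stand-in for the Fisher-vector encoding used in the paper:

```python
import numpy as np

def encode_motion_vectors(mvs, frame_w, frame_h, num_bins=8, grid=2):
    """Encode one frame's motion vectors (an N x 4 float array of
    [x, y, dx, dy]) into a grid x grid spatial layout of
    magnitude-weighted orientation histograms.

    The point this shares with the paper's method is that no optical
    flow is computed: the (dx, dy) displacements come for free from
    the video codec.
    """
    x, y = mvs[:, 0], mvs[:, 1]
    dx, dy = mvs[:, 2], mvs[:, 3]
    angles = np.arctan2(dy, dx)   # motion direction in [-pi, pi]
    mags = np.hypot(dx, dy)       # motion magnitude
    # Quantize direction into num_bins orientation bins.
    bins = ((angles + np.pi) / (2 * np.pi) * num_bins).astype(int) % num_bins
    # Assign each vector to a spatial cell of the grid x grid layout.
    cx = np.clip((x / frame_w * grid).astype(int), 0, grid - 1)
    cy = np.clip((y / frame_h * grid).astype(int), 0, grid - 1)
    cell = cy * grid + cx
    # Accumulate magnitude-weighted histograms per cell.
    desc = np.zeros((grid * grid, num_bins))
    np.add.at(desc, (cell, bins), mags)
    # L2-normalize the concatenated descriptor.
    desc = desc.ravel()
    return desc / (np.linalg.norm(desc) + 1e-8)

# Example: 500 synthetic motion vectors from a 1280x720 frame.
mvs = np.random.rand(500, 4) * [1280, 720, 16, 16]
print(encode_motion_vectors(mvs, 1280, 720).shape)  # (32,)
```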
Posted Content
ContextLocNet: Context-Aware Deep Network Models for Weakly Supervised Localization
TL;DR: This work introduces two types of context-aware guidance models, additive and contrastive, that leverage a region's surrounding context to improve the localization of objects in images using image-level supervision only.
Book Chapter
ContextLocNet: Context-Aware Deep Network Models for Weakly Supervised Localization
TL;DR: This paper introduces two types of context-aware guidance models, additive and contrastive, that leverage surrounding context regions to improve localization performance, significantly improving weakly supervised object localization and detection.
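A minimal sketch of the two guidance models described in the entries above, assuming features pooled separately from each candidate region and from its surrounding context; the layer sizes, names, and softmax placement here are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ContextAwareScorer(nn.Module):
    """Sketch of additive vs. contrastive context-aware scoring
    for weakly supervised localization (illustrative dimensions)."""
    def __init__(self, feat_dim=4096, num_classes=20, mode="contrastive"):
        super().__init__()
        self.mode = mode
        self.roi_score = nn.Linear(feat_dim, num_classes)
        self.ctx_score = nn.Linear(feat_dim, num_classes)

    def forward(self, f_roi, f_ctx):
        # f_roi: (R, D) features pooled from R candidate regions
        # f_ctx: (R, D) features pooled from their surrounding context
        if self.mode == "additive":
            # Context that supports the class raises the region score.
            s = self.roi_score(f_roi) + self.ctx_score(f_ctx)
        else:
            # Contrastive: a region scores high only if it explains the
            # class better than its surroundings, which discourages boxes
            # covering just a discriminative part of the object.
            s = self.roi_score(f_roi) - self.ctx_score(f_ctx)
        # Softmax over regions per class yields localization weights
        # trainable from image-level labels alone (weak supervision).
        return torch.softmax(s, dim=0)

scorer = ContextAwareScorer()
weights = scorer(torch.randn(100, 4096), torch.randn(100, 4096))
print(weights.shape)  # torch.Size([100, 20])
```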
Posted Content
DETReg: Unsupervised Pretraining with Region Priors for Object Detection
Amir Bar, Xin Wang, Vadim Kantorov, Colorado Reed, Roei Herzig, Gal Chechik, Anna Rohrbach, Trevor Darrell, Amir Globerson
TL;DR: This article presents DETReg, an unsupervised pretraining approach for object detection with Transformers that uses region priors: pseudo ground-truth bounding boxes are obtained from an off-the-shelf region proposal method, Selective Search.
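A minimal sketch of the pseudo-label step named in the summary above, using OpenCV's Selective Search implementation (requires `opencv-contrib-python`); how the detector consumes these boxes during pretraining is the paper's contribution and is not shown here:

```python
import cv2

def pseudo_gt_boxes(image_path, top_k=30):
    """Generate class-agnostic pseudo ground-truth boxes with
    Selective Search, the off-the-shelf proposal method named in
    the summary above. Returns up to top_k (x, y, w, h) boxes."""
    img = cv2.imread(image_path)
    ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
    ss.setBaseImage(img)
    ss.switchToSelectiveSearchFast()  # fast mode: fewer, coarser proposals
    rects = ss.process()              # ranked (x, y, w, h) proposals
    return rects[:top_k]

# Example: boxes to pretrain a detector on, with no human annotations.
# boxes = pseudo_gt_boxes("some_image.jpg")
```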