Proceedings ArticleDOI

Computationally efficient deep tracker: Guided MDNet

TLDR
The main objective of the paper is to propose an essential improvement to the existing Multi-Domain Convolutional Neural Network tracker (MDNet), which is used to track an unknown object in a video stream.
Abstract
The main objective of the paper is to propose an essential improvement to the existing Multi-Domain Convolutional Neural Network tracker (MDNet), which is used to track an unknown object in a video stream. MDNet handles the major tracking challenges, such as fast motion, background clutter, out-of-view targets, and scale variations, through offline training and online tracking. The Convolutional Neural Network (CNN) is pre-trained offline on many videos with ground truth to obtain a target representation in the network. During online tracking, MDNet evaluates a large number of randomly sampled windows around the previous target location to estimate the target in the current frame, which makes tracking computationally expensive at test time. The major contribution of the paper is to feed MDNet guided samples rather than random samples, so that the computation and time required by the CNN during tracking are greatly reduced. The proposed algorithm is evaluated on videos from the ALOV300++ and VOT datasets, and the results are compared with state-of-the-art trackers.
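
To make the distinction between the two sampling strategies concrete, here is a minimal, hypothetical sketch in NumPy. It is not the authors' implementation: the Gaussian jitter, the constant-velocity motion prediction used as "guidance", and all parameter values are illustrative assumptions. The point it shows is that better-placed candidate windows allow far fewer forward passes through the CNN per frame.

```python
import numpy as np

def random_candidates(prev_box, n=256, pos_sigma=0.10, scale_sigma=0.05):
    """Random sampling: draw many candidate windows (x, y, w, h)
    jittered around the previous target box."""
    w, h = prev_box[2], prev_box[3]
    cand = np.tile(np.asarray(prev_box, dtype=float), (n, 1))
    cand[:, 0] += np.random.normal(0.0, pos_sigma * w, n)               # jitter centre x
    cand[:, 1] += np.random.normal(0.0, pos_sigma * h, n)               # jitter centre y
    cand[:, 2:] *= np.exp(np.random.normal(0.0, scale_sigma, (n, 2)))   # jitter scale
    return cand

def guided_candidates(prev_box, prev_prev_box, n=32, pos_sigma=0.05, scale_sigma=0.03):
    """Guided sampling (illustrative only): predict the next centre with a
    simple constant-velocity model, then draw fewer, tighter samples around it."""
    guess = np.asarray(prev_box, dtype=float)
    guess[:2] += guess[:2] - np.asarray(prev_prev_box, dtype=float)[:2]  # constant velocity
    cand = np.tile(guess, (n, 1))
    cand[:, 0] += np.random.normal(0.0, pos_sigma * guess[2], n)
    cand[:, 1] += np.random.normal(0.0, pos_sigma * guess[3], n)
    cand[:, 2:] *= np.exp(np.random.normal(0.0, scale_sigma, (n, 2)))
    return cand

# Fewer, better-placed candidates mean fewer CNN forward passes per frame.
rand = random_candidates([120, 80, 40, 60])
guided = guided_candidates([120, 80, 40, 60], [115, 78, 40, 60])
print(rand.shape, guided.shape)   # (256, 4) vs (32, 4)
```

In this sketch the guided sampler scores 32 candidates instead of 256; any guidance signal that concentrates samples near the true target would yield a similar reduction in CNN evaluations.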


Citations
Journal ArticleDOI

Self-Correction Ship Tracking and Counting with Variable Time Window Based on YOLOv3

TL;DR: In this paper, a pretrained YOLOv3 model is used for ship detection, recognition, and counting in the context of intelligent maritime surveillance, timely ocean rescue, and computer-aided decision-making.
Book ChapterDOI

Design and Implementation of Cloud Service System Based on Face Recognition

TL;DR: A novel face recognition method for population search and criminal pursuit in smart cities is proposed, together with a cloud server architecture for face recognition in smart-city environments.

Incorporating Scene Depth in Discriminative Correlation Filters for Visual Tracking

TL;DR: Visual tracking is a computer vision problem where the task is to follow a target through a video sequence; this work incorporates scene depth into discriminative correlation filters.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: State-of-the-art image classification performance was achieved by a deep convolutional neural network, as discussed by the authors, consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
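
For readers who want to see the shape of the architecture this summary describes, a minimal PyTorch sketch is given below. It is an illustration only: the channel counts, kernel sizes, and input resolution are assumed values, not a faithful reimplementation of the original network.

```python
import torch
import torch.nn as nn

class AlexNetLike(nn.Module):
    """Five conv layers (some followed by max-pooling) plus three FC layers
    ending in a 1000-way classifier, as described in the summary above."""
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),   # softmax is applied at inference / in the loss
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                # expects 3 x 224 x 224 input
        x = torch.flatten(x, 1)
        return self.classifier(x)

logits = AlexNetLike()(torch.randn(1, 3, 224, 224))
probs = torch.softmax(logits, dim=1)        # final 1000-way softmax
```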
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Proceedings ArticleDOI

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
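
The property described here, arbitrary input size with correspondingly-sized output, is easy to demonstrate with a toy fully convolutional network. The sketch below is a hypothetical PyTorch example, not the segmentation model from the paper; layer sizes and the number of classes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    """Toy fully convolutional network: no fully-connected layers, so any
    H x W input yields an H x W per-pixel prediction map."""
    def __init__(self, num_classes: int = 21):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                 # H/2 x W/2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                 # H/4 x W/4
        )
        self.score = nn.Conv2d(64, num_classes, 1)   # 1x1 conv in place of an FC layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        feats = self.score(self.encoder(x))
        # upsample the coarse score map back to the input resolution
        return F.interpolate(feats, size=(h, w), mode="bilinear", align_corners=False)

net = TinyFCN()
for h, w in [(224, 224), (333, 481)]:
    print(net(torch.randn(1, 3, h, w)).shape)   # (1, 21, h, w) for each input size
```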
Proceedings ArticleDOI

Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation

TL;DR: RCNN, as discussed by the authors, combines CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training on an auxiliary task followed by domain-specific fine-tuning yields a significant performance boost.
Posted Content

Rich feature hierarchies for accurate object detection and semantic segmentation

TL;DR: This paper proposes a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%.