Book Chapter DOI: 10.1007/978-981-10-8354-9_37

Object Tracking Based on Position Vectors and Pattern Matching

01 Jan 2018, pp. 407-416
Abstract: Camera-based object tracking systems have become an essential requirement in today's society. Inexpensive, high-quality video cameras and the growing demand for automated video analysis have generated interest across numerous fields. Most conventional algorithms are based on background subtraction, frame differencing, and a static background; they fail to track under illumination variation, cluttered backgrounds, and occlusion, while image-segmentation-based tracking algorithms fail to run in real time. Feature extraction is an indispensable first step in object tracking applications. In this paper, a novel real-time object tracking method based on position and feature vectors is developed. The proposed algorithm involves two phases. The first phase extracts features of the region-of-interest object in the first frame and of nine candidate positions in the second frame of the video. The second phase estimates the similarity of the extracted features using Euclidean distance: the nearest match is the candidate whose feature vector has the minimum distance to the first frame's feature vector. The proposed algorithm is compared with other existing object tracking algorithms using different feature extraction techniques. It is simulated and evaluated with statistical features, the discrete wavelet transform, the Radon transform, the scale-invariant feature transform, and features from accelerated segment test. The performance evaluation shows that the proposed algorithm can be applied with any feature extraction technique, and that tracking accuracy depends on the chosen features.
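The two-phase scheme described above (feature extraction for the first-frame ROI, then Euclidean-distance matching against nine candidate positions in the next frame) can be sketched roughly as follows. The simple mean/std statistical features and the step size are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def stat_features(patch):
    # Simple statistical feature vector (mean, std) for a patch; the
    # paper also evaluates DWT, Radon, SIFT and FAST features.
    return np.array([patch.mean(), patch.std()])

def track_step(prev_frame, next_frame, top, left, h, w, step=4):
    # Phase 1: features of the ROI in frame t and of nine candidate
    # positions (centre + 8 neighbours) in frame t+1.
    # Phase 2: pick the candidate with minimum Euclidean distance.
    ref = stat_features(prev_frame[top:top + h, left:left + w])
    best, best_d = (top, left), np.inf
    for dy in (-step, 0, step):
        for dx in (-step, 0, step):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > next_frame.shape[0] or x + w > next_frame.shape[1]:
                continue  # candidate window falls outside the frame
            cand = stat_features(next_frame[y:y + h, x:x + w])
            d = np.linalg.norm(ref - cand)
            if d < best_d:
                best_d, best = d, (y, x)
    return best
```

Running this per frame pair moves the tracked window to whichever of the nine positions best matches the reference features.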

Topics: Video tracking (72%), Feature extraction (62%), Feature vector (62%)
Citations

Open access
30 Jul 2018
Abstract: In today's world of computer vision there are many techniques for object tracking, but there remains considerable scope for further research. A robust technique for object tracking is proposed in this paper. In this work, a fusion of global motion estimation and Kalman filter-based tracking is implemented, which detects and tracks all the moving objects in the video. The algorithm detects corners in a frame and tracks the moving ones through the subsequent frames of the input video. The movement of a moving object is traced by maintaining the motion trajectory of the corner points on that object. Video stabilization is also implemented so that shaky video can be corrected before the Kalman filter is applied. The proposed methodology achieved a precision of 94.73 percent, which compares well with other published techniques.
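A minimal constant-velocity Kalman filter for one tracked corner point, in the spirit of the Kalman-filter stage described above; the state layout [x, y, vx, vy] and the noise settings are generic assumptions, not the paper's exact parameterisation:

```python
import numpy as np

class PointKalman:
    # Constant-velocity Kalman filter for a single 2D corner point.
    def __init__(self, x, y, dt=1.0, q=1e-2, r=1.0):
        self.s = np.array([x, y, 0.0, 0.0])   # state: position and velocity
        self.P = np.eye(4)                    # state covariance
        self.F = np.array([[1, 0, dt, 0],     # constant-velocity dynamics
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0],      # we only measure position
                           [0, 1, 0, 0]], float)
        self.Q = q * np.eye(4)                # process noise
        self.R = r * np.eye(2)                # measurement noise

    def step(self, z):
        # Predict forward one frame, then correct with the measured
        # corner position z; returns the filtered position estimate.
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        y = np.asarray(z, float) - self.H @ self.s
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.s[:2]
```

Feeding one filter per corner with its per-frame detections yields the smoothed motion trajectories described in the abstract.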

Topics: Video tracking (72%), Motion estimation (60%), Kalman filter (56%)

1 Citation

References

Journal Article DOI: 10.1023/B:VISI.0000029664.99615.94
Distinctive Image Features from Scale-Invariant Keypoints
David G. Lowe
Abstract: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
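A simplified sketch of the nearest-neighbour matching step described above, using Lowe's distance-ratio test on descriptor arrays (brute-force search here, rather than the fast approximate nearest-neighbour search used in the paper; the 0.8 ratio is the commonly cited default):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    # For each descriptor in desc_a, find its two nearest neighbours
    # in desc_b and accept the match only if the closest neighbour is
    # sufficiently better than the second closest (ratio test).
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches
```

Ambiguous descriptors, whose best and second-best matches are nearly equidistant, are discarded; this is what makes individual matches reliable against a large database.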

Topics: 3D single-object recognition (64%), Haar-like features (63%), Feature (computer vision) (58%)

42,225 Citations


Open access Proceedings Article DOI: 10.1109/CVPR.2013.312
Online Object Tracking: A Benchmark
Yi Wu, Jongwoo Lim, Ming-Hsuan Yang
23 Jun 2013
Abstract: Object tracking is one of the most important components in numerous applications of computer vision. While much progress has been made in recent years with efforts on sharing code and datasets, it is of great importance to develop a library and benchmark to gauge the state of the art. After briefly reviewing recent advances of online object tracking, we carry out large scale experiments with various evaluation criteria to understand how these algorithms perform. The test image sequences are annotated with different attributes for performance evaluation and analysis. By analyzing quantitative results, we identify effective approaches for robust tracking and provide potential future research directions in this field.
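The benchmark's success plots are built on bounding-box overlap between tracker output and annotation; a minimal sketch of the intersection-over-union score and a per-sequence success rate (the 0.5 threshold here is one illustrative operating point, whereas the benchmark sweeps the threshold to produce a curve):

```python
def iou(a, b):
    # a, b: (x, y, w, h) bounding boxes; returns intersection-over-union.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))  # overlap width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))  # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def success_rate(pred, gt, threshold=0.5):
    # Fraction of frames whose overlap with ground truth exceeds
    # the threshold (one point on a success plot).
    scores = [iou(p, g) for p, g in zip(pred, gt)]
    return sum(s > threshold for s in scores) / len(scores)
```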

  • Table 1. Evaluated tracking algorithms (MU: model update, FPS: frames per second). For representation schemes, L: local, H: holistic, T: template, IH: intensity histogram, BP: binary pattern, PCA: principal component analysis, SPCA: sparse PCA, SR: sparse representation, DM: discriminative model, GM: generative model. For search mechanism, PF: particle filter, MCMC: Markov chain Monte Carlo, LOS: local optimum search, DS: dense sampling search. For the model update, N: no, Y: yes. In the Code column, M: Matlab, C: C/C++, MC: mixture of Matlab and C/C++, suffix E: executable binary code.
  • Table 2. List of the attributes annotated to test sequences. The threshold values used in this work are also shown.
  • Figure 2. (a) Attribute distribution of the entire test set, and (b) the distribution of the sequences with the occlusion (OCC) attribute.
  • Figure 3. Plots of OPE, SRE, and TRE. The performance score for each tracker is shown in the legend. For each figure, the top 10 trackers are presented for clarity; complete plots are in the supplementary material (best viewed on a high-resolution display).
  • Figure 1. Tracking sequences for evaluation. The first frame with the bounding box of the target object is shown for each sequence. The sequences are ordered based on our ranking results (see supplementary material): the ones on the top left are more difficult for tracking than the ones on the bottom right. Note that we annotated two targets for the jogging sequence.

Topics: Video tracking (58%), Tracking system (54%), Benchmark (computing) (53%)

3,290 Citations


Journal Article DOI: 10.1109/TPAMI.2011.66
Robust Visual Tracking and Vehicle Classification via Sparse Representation
Xue Mei, Haibin Ling
Abstract: In this paper, we propose a robust visual tracking method by casting tracking as a sparse approximation problem in a particle filter framework. In this framework, occlusion, noise, and other challenging issues are addressed seamlessly through a set of trivial templates. Specifically, to find the tracking target in a new frame, each target candidate is sparsely represented in the space spanned by target templates and trivial templates. The sparsity is achieved by solving an l1-regularized least-squares problem. Then, the candidate with the smallest projection error is taken as the tracking target. After that, tracking is continued using a Bayesian state inference framework. Two strategies are used to further improve the tracking performance. First, target templates are dynamically updated to capture appearance changes. Second, nonnegativity constraints are enforced to filter out clutter which negatively resembles tracking targets. We test the proposed approach on numerous sequences involving different types of challenges, including occlusion and variations in illumination, scale, and pose. The proposed approach demonstrates excellent performance in comparison with previously proposed trackers. We also extend the method for simultaneous tracking and recognition by introducing a static template set which stores target images from different classes. The recognition result at each frame is propagated to produce the final result for the whole video. The approach is validated on a vehicle tracking and classification task using outdoor infrared video sequences.
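The core sparse-approximation step above, solving an l1-regularized least-squares problem over target and trivial templates, can be sketched with iterative soft-thresholding (ISTA); the solver choice, the parameters, and the matrix/vector names are illustrative assumptions rather than the paper's exact optimiser:

```python
import numpy as np

def ista_lasso(A, y, lam=0.1, iters=100):
    # Solve min_x 0.5 * ||A x - y||^2 + lam * ||x||_1 by iterative
    # soft-thresholding. In an L1-tracker setting, columns of A would
    # hold the target and trivial templates and y a candidate patch;
    # the candidate with the smallest projection error onto the target
    # templates is then taken as the tracking result.
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)            # gradient of the smooth term
        z = x - g / L                    # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

The trivial (identity-like) templates absorb occluded or corrupted pixels, which is why occlusion is handled "seamlessly" in this formulation.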

Topics: Video tracking (66%), Vehicle tracking system (60%), Sparse approximation (54%)

886 Citations


Open access Proceedings Article DOI: 10.1109/CVPR.2012.6247908
Robust Visual Tracking via Multi-Task Sparse Learning
16 Jun 2012
Abstract: In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in MTT. By employing popular sparsity-inducing l(p,q) mixed norms (p ∈ {2, ∞} and q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. Compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and overall computational complexity. Interestingly, we show that the popular L1 tracker [15] is a special case of our MTT formulation (denoted the L11 tracker) when p = q = 1. The learning problem can be efficiently solved using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed-form updates. As such, MTT is computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that MTT methods consistently outperform state-of-the-art trackers.
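The joint sparsity in MTT comes from the mixed-norm regulariser. For p = 2, q = 1, each APG iteration applies the proximal operator of the l(2,1) norm, which shrinks rows of the particle-representation matrix jointly; a sketch (a single proximal step only, not the full APG solver):

```python
import numpy as np

def prox_l21(X, t):
    # Proximal operator of t * ||X||_{2,1}: each row of X (one
    # dictionary atom's coefficients across all particles) is scaled
    # down jointly, and rows with norm below t are zeroed entirely.
    # This shared zero pattern is what induces joint sparsity.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return X * scale
```

Because whole rows vanish together, all particles agree on which templates are irrelevant, which is the interdependency the abstract refers to.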


698 Citations


Proceedings Article DOI: 10.1109/CVPR.2011.5995667
Stable Multi-Target Tracking in Real-Time Surveillance Video
Ben Benfold, Ian Reid
20 Jun 2011
Abstract: The majority of existing pedestrian trackers concentrate on maintaining the identities of targets; however, systems for remote biometric analysis or activity recognition in surveillance video often require stable bounding boxes around pedestrians rather than approximate locations. We present a multi-target tracking system that is designed specifically for the provision of stable and accurate head location estimates. By performing data association over a sliding window of frames, we are able to correct many data association errors and fill in gaps where observations are missed. The approach is multi-threaded and combines asynchronous HOG detections with simultaneous KLT tracking and Markov-Chain Monte-Carlo Data Association (MCMCDA) to provide guaranteed real-time tracking in high-definition video. Where previous approaches have used ad hoc models for data association, we use a more principled approach based on a Minimum Description Length (MDL) objective which accurately models the affinity between observations. We demonstrate by qualitative and quantitative evaluation that the system is capable of providing precise location estimates for large crowds of pedestrians in real time. To facilitate future performance comparisons, we make a new dataset with hand-annotated ground-truth head locations publicly available.
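As a much simplified stand-in for the data-association step (greedy nearest-neighbour gating per frame, rather than the paper's MDL-based MCMCDA over a sliding window), the basic track-to-detection assignment can be sketched as:

```python
import numpy as np

def associate(tracks, detections, gate=30.0):
    # Greedy nearest-neighbour association: each track (x, y) claims
    # its closest still-unassigned detection within a distance gate.
    # Tracks left unmatched correspond to missed observations that a
    # sliding-window method could later fill in.
    pairs, used = [], set()
    for ti, t in enumerate(tracks):
        best, best_d = None, gate
        for di, d in enumerate(detections):
            if di in used:
                continue
            dist = float(np.hypot(t[0] - d[0], t[1] - d[1]))
            if dist < best_d:
                best, best_d = di, dist
        if best is not None:
            pairs.append((ti, best))
            used.add(best)
    return pairs
```

The gate value and point-based (rather than box-based) affinity are illustrative assumptions; the paper's affinity model is considerably richer.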

Topics: Video tracking (63%), Tracking system (53%), Object detection (51%)

642 Citations


Performance Metrics
No. of citations received by the paper in previous years:
  • 2018: 1 citation
Network Information
Related Papers (5)
  • 01 Nov 2014: Hamd Ait Abdelali, Fedwa Essannouni, +2 more
  • 26 Aug 2012: Yuanyuan Lu, Xiangyang Xu, +2 more
  • 20 May 2016: Swati Sharma, Ajitkumar Khachane, +1 more