Real-time compressive tracking
Citations
Occlusion-Aware Real-Time Object Tracking
Robust scale-adaptive mean-shift for tracking
Real-Time Object Tracking Via Online Discriminative Feature Selection
Video Tracking Using Learned Hierarchical Features
Hyperspectral Band Selection by Multitask Sparsity Pursuit
References
Rapid object detection using a boosted cascade of simple features
Compressed sensing
Robust Face Recognition via Sparse Representation
Decoding by linear programming
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
Frequently Asked Questions (13)
Q2. What is the tracker for the Sylvester sequence?
The generative subspace tracker (e.g., IVT [6]) has been shown to be effective in dealing with large illumination changes, while the discriminative tracking method with local features (e.g., MILTrack [8]) has been demonstrated to handle pose variation adequately.
Q3. What is the way to measure the distance between the original signals?
The authors expect R to provide a stable embedding that approximately preserves the distances between all pairs of original signals.
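This distance-preserving behavior of a sparse random matrix R can be checked numerically. The sketch below is illustrative only: it uses an Achlioptas-style very sparse random matrix as a stand-in for the paper's measurement matrix, projects two high-dimensional signals, and compares their distance before and after projection.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_random_matrix(n, m, s=3):
    """Achlioptas-style very sparse random projection: entries take values
    +sqrt(s), 0, -sqrt(s) with probabilities 1/(2s), 1-1/s, 1/(2s).
    Rows are scaled by 1/sqrt(n) so expected squared norms are preserved."""
    probs = [1.0 / (2 * s), 1.0 - 1.0 / s, 1.0 / (2 * s)]
    vals = rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)], size=(n, m), p=probs)
    return vals / np.sqrt(n)

m, n = 10000, 50                    # high-dim signal, low-dim measurement
R = sparse_random_matrix(n, m)
x1 = rng.standard_normal(m)
x2 = rng.standard_normal(m)

d_orig = np.linalg.norm(x1 - x2)    # distance in the original space
d_proj = np.linalg.norm(R @ x1 - R @ x2)  # distance after projection
print(d_proj / d_orig)              # typically close to 1
```

The ratio concentrates around 1 as the number of measurements n grows, which is exactly the "stable embedding" property the authors rely on.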
Q4. What is the main component of their appearance model?
Their appearance model is generative as the object can be well represented based on the features extracted in the compressive domain.
Q5. How can the authors compute the relative intensity difference in a linear way?
Because the coefficients in the measurement matrix can be positive or negative (via (2)), the compressive features compute the relative intensity difference in a way similar to the generalized Haar-like features [8].
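A rough sketch of how such features could be computed (hypothetical code, not the authors' implementation): each feature sums a few random rectangles with random ±1 weights, mirroring the nonzero ± entries of a sparse measurement matrix applied to rectangle sums.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_haar_features(num_feats, patch_w, patch_h, max_rects=4):
    """Each feature is a weighted sum of 2..max_rects random rectangles;
    the weights are +1 or -1, like the signed nonzero entries of a
    sparse measurement matrix (sizes here are illustrative)."""
    feats = []
    for _ in range(num_feats):
        rects = []
        for _ in range(rng.integers(2, max_rects + 1)):
            x = rng.integers(0, patch_w - 2)
            y = rng.integers(0, patch_h - 2)
            w = rng.integers(1, patch_w - x)
            h = rng.integers(1, patch_h - y)
            weight = rng.choice([-1.0, 1.0])
            rects.append((x, y, w, h, weight))
        feats.append(rects)
    return feats

def compute_features(patch, feats):
    """Evaluate each feature as a signed sum of rectangle sums."""
    out = np.empty(len(feats))
    for i, rects in enumerate(feats):
        v = 0.0
        for x, y, w, h, s in rects:
            v += s * patch[y:y + h, x:x + w].sum()
        out[i] = v
    return out

patch = rng.random((32, 32))          # a toy 32x32 image patch
feats = random_haar_features(50, 32, 32)
v = compute_features(patch, feats)
print(v.shape)                        # (50,)
```

In practice the rectangle sums would be computed from an integral image so each feature costs only a few additions, which is what makes this representation fast enough for real-time tracking.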
Q6. What is the implementation of the tracker?
Their tracker is implemented in MATLAB and runs at 35 frames per second (FPS) on a Pentium Dual-Core 2.80 GHz CPU with 4 GB RAM.
Q7. What is the purpose of the tracker?
Both their tracker and the MILTrack method are designed to handle object location ambiguity in tracking with classifiers and discriminative features.
Q8. Why does the TLD tracker suffer from the same problem?
Because the TLD tracker relies heavily on the visual information in the first frame to re-detect the object, it also suffers from the same problem.
Q9. How do the authors represent each filtered image as a column vector in R^{wh}?
The authors represent each filtered image as a column vector in R^{wh} and then concatenate these vectors into a very high-dimensional multi-scale image feature vector x = (x1, ..., xm)^T ∈ R^m.
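A minimal NumPy sketch of this multi-scale representation (the patch size and filter sizes are illustrative assumptions): each rectangle (box) filter response is flattened and the responses across scales are concatenated into one long feature vector.

```python
import numpy as np

def rect_filter(img, w, h):
    """Correlate img with a w x h all-ones rectangle filter, zero-padded so
    the output has the same shape as img; the value at (i, j) is the sum of
    the rectangle whose top-left corner is (i, j). Uses an integral image."""
    H, W = img.shape
    padded = np.zeros((H + h, W + w))
    padded[:H, :W] = img
    ii = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return (ii[h:H + h, w:W + w] - ii[h:H + h, 0:W]
            - ii[0:H, w:W + w] + ii[0:H, 0:W])

H, W = 24, 24                               # toy patch size
patch = np.random.default_rng(2).random((H, W))
scales = [(w, h) for w in (2, 4, 8) for h in (2, 4, 8)]

# Flatten each filtered image to a column of length wh, then concatenate
# across all scales into one very high-dimensional vector x.
x = np.concatenate([rect_filter(patch, w, h).ravel() for w, h in scales])
print(x.shape)                              # (5184,) = 9 scales * 24 * 24
```

The resulting x is far too high-dimensional to use directly, which is why the paper then compresses it with a sparse random measurement matrix.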
Q10. How many times do the authors run the trackers?
Since all of the trackers except for Frag involve randomness, the authors run them 10 times and report the average result for each video clip.
Q11. What is the classification of the generative tracking algorithm?
Tracking algorithms can be generally categorized as either generative [1, 2, 6, 10, 9] or discriminative [3–5, 7, 8] based on their appearance models.
Q12. What is the tracker for the Shaking sequence?
For the Shaking sequence shown in Figure 5(b), when the stage light changes drastically and the pose of the subject changes rapidly as he performs, all the other trackers fail to track the object reliably.
Q13. What is the problem with the appearance model?
Because the appearance model is updated with noisy and potentially misaligned examples, it often suffers from tracking drift.