On-line predictive appearance-based tracking
Citations
Hand gesture modelling and recognition involving changing shapes and trajectories, using a Predictive EigenTracker
System and method for object based parametric video coding
Real-Time FPGA-Based Object Tracker with Automatic Pan-Tilt Features for Smart Video Surveillance Systems
A Drifting-proof Framework for Tracking and Online Appearance Learning
Dynamic Hand Gesture Recognition Using Predictive Eigen Tracker
References
CONDENSATION — Conditional Density Propagation for Visual Tracking
Active Appearance Models
EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation
ICONDENSATION: Unifying Low-Level and High-Level Tracking in a Stochastic Framework
Finding skin in color images
Related Papers (5)
EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation
Feature Processing and Modeling for 6D Motion Gesture Recognition
Frequently Asked Questions (8)
Q2. What is the common way to estimate the state of a moving object?
The authors use six affine coefficients as elements of the state vector X. A commonly used model for state dynamics is a second-order AR process (t represents time): X_t = D_2 X_{t-2} + D_1 X_{t-1} + w_t, where w_t is a zero-mean, white Gaussian random vector.
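The AR(2) prediction step can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dynamics matrices D1 and D2 below are hypothetical placeholders (in practice they would be learned or tuned for the tracker).

```python
import numpy as np

rng = np.random.default_rng(0)

dim = 6                       # six affine coefficients in the state vector X
D1 = 1.9 * np.eye(dim)        # hypothetical dynamics matrices; a real tracker
D2 = -0.9 * np.eye(dim)       # would estimate these from training sequences

def predict(x_prev2, x_prev1, noise_std=0.0):
    """Second-order AR prediction: X_t = D2 X_{t-2} + D1 X_{t-1} + w_t."""
    w = rng.normal(0.0, noise_std, size=dim)  # zero-mean white Gaussian noise
    return D2 @ x_prev2 + D1 @ x_prev1 + w

# With no process noise the prediction is a deterministic extrapolation
# from the two previous states.
x0 = np.zeros(dim)
x1 = np.ones(dim)
x2 = predict(x0, x1)   # -> 1.9 in every component
```

The choice D1 = 1.9·I, D2 = -0.9·I gives roughly constant-velocity behaviour per coefficient; any stable pair of matrices fits the same template.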
Q3. What are the extensions of the EigenTracker framework?
Existing extensions of the EigenTracker framework include tracking flexible objects [2], and incorporating the notion of shape in an eigenspace – Active Appearance Models (AAMs) [3].
Q4. What is the way to get the seed values for different objects?
In general, one may use motion cues (dominant motion detection), but depending on the particular application other cues can be used to advantage; in their hand gesture tracker, for example, the authors augment motion cues with skin colour cues [6] to segment out the moving hand.
Q5. What is the main factor for the inefficiency of the EigenTracker?
The EigenTracker estimates the affine and reconstruction coefficients after every frame, requiring a good seed value for the nonlinear optimization.
Q6. What is the main idea of the paper?
Section 2 discusses their prediction scheme, eigenspace updates, tracker initialization issues, and the Importance Sampling mechanism.
Q7. What is the main reason why the authors have a predictive EigenTracker?
Their predictive EigenTracker framework is flexible – it can be used to symbiotically augment other trackers with appearance information.
Q8. What is the next step in the tracker?
For all subsequent frames, the next step is obtaining the measurements – optimizing the predicted values of affine coefficients a and reconstruction coefficients c.
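For a fixed warp (i.e. fixed affine coefficients a), the inner half of this measurement step has a closed form: with an orthonormal eigenbasis U and mean appearance mu, the reconstruction coefficients minimising ||img − mu − U·c||² are c = Uᵀ(img − mu). The sketch below shows only this projection, with a randomly generated basis standing in for the tracker's learned eigenspace; the outer nonlinear search over a is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in eigenspace: orthonormal basis U (columns) and mean appearance mu.
# In the tracker these come from the incrementally updated eigenspace.
d, k = 8, 3
U, _ = np.linalg.qr(rng.normal(size=(d, k)))
mu = rng.normal(size=d)

def reconstruction_coeffs(img_vec):
    """Optimal c for min_c ||img_vec - mu - U c||^2 (U has orthonormal columns)."""
    return U.T @ (img_vec - mu)

# Sanity check: an appearance lying exactly in the eigenspace is
# recovered with zero residual.
c_true = np.array([1.0, -2.0, 0.5])
img = mu + U @ c_true
c = reconstruction_coeffs(img)
residual = img - mu - U @ c
```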