Topologically-constrained latent variable models
Citations
Structural-RNN: Deep Learning on Spatio-Temporal Graphs
Recurrent Network Models for Human Dynamics
UniMiB SHAR: A Dataset for Human Activity Recognition Using Acceleration Data from Smartphones
References
Roweis, S. T., & Saul, L. K. (2000). Nonlinear dimensionality reduction by locally linear embedding. Science.
Tenenbaum, J. B., de Silva, V., & Langford, J. C. (2000). A global geometric framework for nonlinear dimensionality reduction. Science.
Kovar, L., Gleicher, M., & Pighin, F. (2002). Motion graphs. ACM SIGGRAPH.
Lawrence, N. D. (2005). Probabilistic non-linear principal component analysis with Gaussian process latent variable models. JMLR.
Wang, J. M., Fleet, D. J., & Hertzmann, A. (2008). Gaussian process dynamical models for human motion. IEEE TPAMI.
Frequently Asked Questions (15)
Q2. How does the LLE preserve topological constraints?
Locally linear embedding (LLE) (Roweis & Saul, 2000) preserves topological constraints by finding a low-dimensional representation in which each point is reconstructed from its neighbors with an optimized set of local weights.
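The local reconstruction weights can be illustrated with a minimal NumPy sketch (an illustration of standard LLE weight fitting, not the authors' generalized variant): for each point, the weights over its neighbors are solved from the local covariance and normalized to sum to one.

```python
import numpy as np

def lle_weights(Y, n_neighbors=5, reg=1e-3):
    """Solve for LLE reconstruction weights: each point is reconstructed
    from its nearest neighbors, with weights constrained to sum to one."""
    N = Y.shape[0]
    W = np.zeros((N, N))
    for i in range(N):
        # nearest neighbors by Euclidean distance (excluding the point itself)
        d = np.linalg.norm(Y - Y[i], axis=1)
        idx = np.argsort(d)[1:n_neighbors + 1]
        Z = Y[idx] - Y[i]                            # center neighbors on Y[i]
        C = Z @ Z.T                                  # local covariance
        C += reg * np.trace(C) * np.eye(len(idx))    # regularize (C can be singular)
        w = np.linalg.solve(C, np.ones(len(idx)))
        W[i, idx] = w / w.sum()                      # enforce sum-to-one constraint
    return W

Y = np.random.RandomState(0).randn(20, 3)
W = lle_weights(Y)
print(np.allclose(W.sum(axis=1), 1.0))  # prints True
```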
Q3. What is the way to force a cylindrical topology on the latent space?
To force a cylindrical topology on the latent space, the authors introduce phase-based similarity measures, specifying a different measure for each latent dimension.
Q4. How many gait cycles are in the larger training set?
The larger training set comprises approximately one gait cycle from each of 9 walking and 10 running motions performed by different subjects at different speeds (3 km/h for walking, 6–12 km/h for running).
Q5. How many gait cycles did the authors first consider?
The authors first considered a small training set comprised of 4 gait cycles (2 walks and 2 runs) performed by one subject at different speeds.
Q6. How does the paper demonstrate the effectiveness of the approach?
The authors demonstrate the effectiveness of their approach in a character animation application, where the user specifies a set of constraints (e.g., foot locations), and the remaining kinematic degrees of freedom are inferred.
Q7. What is the way to encourage transitions between different sequences?
The authors can encourage transition points of different sequences to be proximal with the following kernel matrix for the back-constraint mapping:

k_trans(t_i, t_j) = Σ_m Σ_l δ_ml k(t_i, t̂_m) k(t_j, t̂_l)   (9)

where k(t_i, t̂_l) is an RBF centered at t̂_l, and δ_ml = 1 if t̂_m and t̂_l are in the same set.
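A direct transcription of Eq. (9) can be sketched as follows (a minimal illustration; the RBF width and the toy transition points are assumptions, not values from the paper):

```python
import numpy as np

def rbf(a, b, width=1.0):
    """RBF kernel centered at b, evaluated at a."""
    return np.exp(-0.5 * (a - b) ** 2 / width ** 2)

def k_trans(ti, tj, centers, same_set, width=1.0):
    """Transition kernel of Eq. (9): a double sum of RBF products over the
    marked transition times; delta_ml selects centers in the same set."""
    total = 0.0
    for m, tm in enumerate(centers):
        for l, tl in enumerate(centers):
            if same_set[m][l]:
                total += rbf(ti, tm, width) * rbf(tj, tl, width)
    return total

# toy example: two transition points belonging to the same set
centers = [1.0, 5.0]
same_set = [[True, True], [True, True]]
val = k_trans(1.0, 5.0, centers, same_set)
print(val > 1.0)  # the (m=0, l=1) term alone contributes rbf(1,1)*rbf(5,5) = 1
```

Because the kernel is large whenever both times fall near marked transition points of the same set, back-constrained latent positions at those times are pulled close together.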
Q8. What is the way to generalize the model to unseen styles?
The model can also generalize to styles very different from the ones in the training set, by imposing constraints that can be satisfied only by motions very different from the training data.
Q9. What is the first step in the GP-LVM?
When one has time-series data, Y represents a sequence of observations, and it is natural to augment the GP-LVM with an explicit dynamical model.
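The dynamical augmentation can be sketched as a first-order GP regression from x_{t-1} to x_t whose posterior mean predicts the next latent state (an illustration of the idea only, not the authors' exact GPDM formulation; the kernel width and noise level are assumed values):

```python
import numpy as np

def gp_dynamics_mean(X, x_star, width=1.0, noise=1e-4):
    """Posterior mean of a GP mapping x_{t-1} -> x_t, trained on a latent
    trajectory X of shape (T, d); predicts the state following x_star."""
    Xin, Xout = X[:-1], X[1:]            # inputs x_1..x_{T-1}, targets x_2..x_T
    def K(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / width ** 2)
    Kin = K(Xin, Xin) + noise * np.eye(len(Xin))
    kstar = K(x_star[None, :], Xin)
    return (kstar @ np.linalg.solve(Kin, Xout))[0]

# latent trajectory on a circle (e.g., one cyclic gait)
th = np.linspace(0, 2 * np.pi, 40)
X = np.stack([np.cos(th), np.sin(th)], axis=1)
pred = gp_dynamics_mean(X, X[10])   # should land near X[11]
print(pred.shape)  # prints (2,)
```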
Q10. What is the difference between the two mappings?
These two mappings project onto two dimensions of the latent space, forcing them to have a periodic structure (which comes about through the sinusoidal dependence of the kernel on phase).
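The periodicity property can be checked with a small sketch: if each of the two latent coordinates is produced by a kernel regressor whose kernel depends on the phase only through cos and sin (an assumed kernel form for illustration, not necessarily the paper's exact one), the resulting coordinates are automatically 2π-periodic.

```python
import numpy as np

def k_cos(phi, centers, w=0.5):
    """Kernel depending on phase only through cos(phi)."""
    return np.exp(-0.5 * (np.cos(phi) - np.cos(centers)) ** 2 / w ** 2)

def k_sin(phi, centers, w=0.5):
    """Kernel depending on phase only through sin(phi)."""
    return np.exp(-0.5 * (np.sin(phi) - np.sin(centers)) ** 2 / w ** 2)

centers = np.linspace(0, 2 * np.pi, 8, endpoint=False)
a = np.random.RandomState(0).randn(8)   # illustrative regression weights
b = np.random.RandomState(1).randn(8)

def latent(phi):
    """Two kernel-regression mappings onto two latent dimensions."""
    return np.array([a @ k_cos(phi, centers), b @ k_sin(phi, centers)])

# phi and phi + 2*pi map to the same latent point: the structure is periodic
print(np.allclose(latent(0.7), latent(0.7 + 2 * np.pi)))  # prints True
```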
Q11. What is the possible mapping of a kernel-based regression model?
One possible mapping is a kernel-based regression model, where regression on a kernel-induced feature space provides the mapping:

x_ij = Σ_{m=1}^{N} a_jm k(y_i, y_m).
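In matrix form this is just the Gram matrix of the data times a weight matrix, as in this sketch (the RBF kernel choice and the random weight values a_jm are illustrative assumptions):

```python
import numpy as np

def rbf_kernel(A, B, width=1.0):
    """Gram matrix K[i, m] = k(a_i, b_m) for an RBF kernel."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / width ** 2)

rng = np.random.RandomState(0)
Y = rng.randn(10, 6)    # 10 observations in a 6-D data space
A = rng.randn(2, 10)    # weights a_jm for a 2-D latent space (illustrative)

K = rbf_kernel(Y, Y)    # k(y_i, y_m)
X = K @ A.T             # X[i, j] = sum_m a_jm * k(y_i, y_m)
print(X.shape)          # prints (10, 2)
```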
Q12. What is the first simulation of the GPDM?
For the first simulation (depicted in green), the model is initialized to a running pose with a latent position not far from walking data.
Q13. how do you learn a walking, running and jumping model?
Although the authors have learned models of walking, running, and jumping, their framework is general: it applies to any data set where there is substantial prior knowledge about the problem domain but the available data are sparse relative to its complexity.
Q14. How do the authors introduce topological constraints in the latent space?
The authors introduce topological constraints through a prior distribution in the latent space, based on a neighborhood structure learned through a generalized locally linear embedding (LLE) (Roweis & Saul, 2000).
Q15. How do the authors force a cylindrical topology on the latent space?
To force a cylindrical topology on the latent space, the authors specify different covariances for each latent dimension:

C^cos_{k,j} = (cos(φ_i) − cos(φ_k)) (cos(φ_i) − cos(φ_j))   (7)
C^sin_{k,j} = (sin(φ_i) − sin(φ_k)) (sin(φ_i) − sin(φ_j))   (8)

with k, j ∈ η_i.
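Eqs. (7)-(8) are outer products of cosine and sine phase differences over the neighborhood η_i, as in this sketch (the toy phases and neighborhood are assumed for illustration):

```python
import numpy as np

def phase_covariances(phi, i, neighbors):
    """Local covariances of Eqs. (7)-(8) for point i with neighborhood
    eta_i, built from cos/sin differences of the phases."""
    dc = np.cos(phi[i]) - np.cos(phi[neighbors])   # cos(phi_i) - cos(phi_k)
    ds = np.sin(phi[i]) - np.sin(phi[neighbors])   # sin(phi_i) - sin(phi_k)
    C_cos = np.outer(dc, dc)   # C^cos_{k,j}
    C_sin = np.outer(ds, ds)   # C^sin_{k,j}
    return C_cos, C_sin

# toy example: 12 evenly spaced phases, neighborhood of point 0
phi = np.linspace(0, 2 * np.pi, 12, endpoint=False)
C_cos, C_sin = phase_covariances(phi, 0, np.array([1, 2, 11]))
print(C_cos.shape, np.allclose(C_cos, C_cos.T))  # prints (3, 3) True
```

Because each matrix depends on the phase only through cos or sin, the induced neighborhood structure wraps around at φ = 2π, which is what yields the cylindrical topology.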