Modeling and analysis of dynamic behaviors of web image collections
Citations
Love Thy Neighbors: Image Annotation by Exploiting Image Metadata
Style-Aware Mid-level Representation for Discovering Visual Connections in Space and Time
Dating historical color images
Reconstructing Storyline Graphs for Image Recommendation from Web Community Photos
References
The Pascal Visual Object Classes (VOC) Challenge
A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking
Information Theory, Inference, and Learning Algorithms
CONDENSATION—Conditional Density Propagation for Visual Tracking
Frequently Asked Questions (15)
Q2. What is the similarity measure between a pair of images?
The similarity measure between a pair of images is the cosine similarity, computed as the dot product of the two L2-normalized descriptors.
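A minimal sketch of this measure, assuming image descriptors are given as numpy vectors (the descriptor extraction itself is outside the scope of this snippet):

```python
import numpy as np

def cosine_similarity(a, b):
    # L2-normalize each descriptor, then take the dot product;
    # for unit-length vectors the dot product equals the cosine
    # of the angle between them.
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))
```

Identical descriptors score 1.0; orthogonal descriptors score 0.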
Q3. What is the importance sampling method for the transition model?
Importance sampling is particularly useful for the transition model because the product of Gaussian and Gamma distributions has no closed form and its normalization is not straightforward.
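A minimal sketch of this idea, under the assumption that the proposal is the Gaussian factor and each sample is weighted by the Gamma density (the paper's actual proposal and parameterization may differ); the self-normalized weights approximate the intractable product distribution:

```python
import numpy as np
from math import gamma as gamma_fn

def gamma_pdf(x, k, theta):
    # Gamma density with shape k and scale theta; zero for x <= 0.
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    pos = x > 0
    out[pos] = x[pos]**(k - 1) * np.exp(-x[pos] / theta) / (gamma_fn(k) * theta**k)
    return out

def sample_product(mu, sigma, k, theta, n=10000, rng=None):
    # Draw from the Gaussian factor N(mu, sigma^2) as the proposal,
    # then weight each sample by the Gamma factor. Normalizing the
    # weights sidesteps the unknown normalization constant of the
    # Gaussian-Gamma product.
    rng = np.random.default_rng(rng)
    xs = rng.normal(mu, sigma, n)
    w = gamma_pdf(xs, k, theta)
    w /= w.sum()
    return xs, w
```

Expectations under the product distribution are then weighted averages, e.g. `np.sum(xs * w)` for the mean.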
Q4. What is the advantage of image-based temporal analysis?
Another important advantage of image-based temporal analysis is that it conveys finer-grained information that is hardly captured by text descriptions.
Q5. What is the way to capture the subtopic evolution of a web image?
A sequential Monte Carlo based tracker is proposed to capture the subtopic evolution in the form of the similarity network of the image set.
Q6. Why are the text tags highly fluctuated?
The text tags fluctuate highly mainly because tags are subjectively assigned by different users with little consensus.
Q7. How can the authors use the proposed algorithm?
The proposed algorithm is easily parallelizable by running multiple sequential Monte Carlo trackers with different initialization and parameters.
Q8. What is the significance of temporal association in the neuroscience research?
A wide range of research supports that temporal association (i.e., linking temporally close images) is an important mechanism for recognizing objects and generalizing visual representations. [21] conducted several experiments showing that temporally correlated multiple views can easily be linked to a single representation, and [2] proposed a learning model for 3D object recognition that uses the temporal continuity in image sequences.
Q9. How can the authors observe the affinity changes of each topic in the apple image set?
As Google Trends reveals the popularity variation of query terms in search volumes, the authors can easily observe the affinity changes of each subtopic in the apple image set.
Q10. How do the authors compare the dynamic behaviors detected from images and texts?
In order to compare the dynamic behaviors detected from images and texts, the authors apply the outbreak detection method from the previous section to both the images and their associated tags.
Q11. What is the purpose of this study?
In order to show the usefulness of image-based temporal topic modeling, the authors examined subtopic evolution tracking, subtopic outbreak detection, comparison with analysis of the associated texts, and the use of temporal association for recognition improvement.
Q12. What is the speed of the analysis of the network?
The analysis of the network is also fast, since most network analysis algorithms scale with the number of nonzero elements, which is O(N log N).
Q13. What is the probability of occurrence of a subtopic?
In their interpretation, given an image stream, the authors assume that the occurrence of images of each subtopic follows a Poisson process with rate β.
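A minimal simulation of this assumption, using the standard fact that inter-arrival times of a Poisson process with rate β are i.i.d. Exponential(β) (the rate, horizon, and function name here are illustrative, not from the paper):

```python
import numpy as np

def simulate_arrivals(beta, horizon, rng=None):
    # Accumulate Exponential(beta) inter-arrival gaps until the
    # time horizon is exceeded; the collected timestamps form one
    # realization of a Poisson process with rate beta.
    rng = np.random.default_rng(rng)
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / beta)
        if t > horizon:
            return np.array(times)
        times.append(t)
```

Over a horizon T the expected number of arrivals is β·T, which is how a per-subtopic rate translates into an expected image count.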
Q14. What is the significance of the stationary probability?
The stationary probability is a popular ranking measure, and thus the images with high stationary probabilities can be thought of as temporally and visually strengthened images.
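A minimal sketch of ranking by stationary probability, assuming the similarity network is given as a dense nonnegative matrix and using plain power iteration on the row-normalized random walk (the paper may use a different solver or a sparse representation):

```python
import numpy as np

def stationary_probabilities(S, n_iter=100):
    # Turn the similarity matrix into a row-stochastic transition
    # matrix, then power-iterate the uniform distribution toward
    # the stationary distribution of the induced random walk.
    P = S / S.sum(axis=1, keepdims=True)
    pi = np.full(len(S), 1.0 / len(S))
    for _ in range(n_iter):
        pi = pi @ P
    return pi
```

Images can then be ranked by sorting on the returned probabilities; nodes that are strongly similar to many others accumulate the most stationary mass.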
Q15. How do the authors generate the top-ranked images for each topic?
For the test sets, the authors downloaded 256 top-ranked images for each topic from Google Image Search by querying the same word in Table 1.