Video mining with frequent itemset configurations
References
Distinctive Image Features from Scale-Invariant Keypoints
Mining association rules between sets of items in large databases
Finding Groups in Data: An Introduction to Cluster Analysis
Video Google: a text retrieval approach to object matching in videos
Frequently Asked Questions (16)
Q2. What future works have the authors mentioned in the paper "Video mining with frequent itemset configurations" ?
Future work includes testing on larger datasets (e.g. TRECVID), defining more interestingness measures, and stronger customization of itemset mining algorithms to video data.
Q4. What is the procedure for detecting motion groups?
For shots with considerable motion, the authors use as central words the two words closest to the spatial center of the motion group, and create two transactions covering only visual words within it.
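A simplified sketch of this transaction-building step (the function name, the data layout, and the detail that each transaction holds all words of the group are assumptions; in the paper each transaction covers the neighbourhood of a central word, restricted to words inside the motion group):

```python
from math import dist  # Python 3.8+

def motion_group_transactions(group, n_central=2):
    """`group` is a list of (visual_word_label, (x, y)) pairs for one motion
    group. Pick the n_central words closest to the group's spatial center and
    emit one transaction per central word. Here each transaction simply holds
    all visual-word labels of the group (a simplification of the paper's
    neighbourhood restriction)."""
    cx = sum(p[0] for _, p in group) / len(group)
    cy = sum(p[1] for _, p in group) / len(group)
    central = sorted(group, key=lambda w: dist(w[1], (cx, cy)))[:n_central]
    labels = {label for label, _ in group}
    return [(c[0], labels) for c in central]

# Illustrative motion group: labels are visual-word ids, coordinates in pixels.
group = [("w7", (0, 0)), ("w3", (2, 0)), ("w3", (1, 1)), ("w9", (2, 2))]
print(motion_group_transactions(group))
```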
Q5. What is the support of an itemset in the transactions-database D?
The support of an itemset A in the transactions-database D is

support(A) = |{T ∈ D | A ⊆ T}| / |D| ∈ [0, 1]   (1)

An itemset A is called frequent in D if support(A) ≥ s, where s is a user-defined threshold on the minimal support.
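Definition (1) translates directly into code; the toy transaction database below is illustrative only:

```python
# Support of an itemset A in a transaction database D, as in Eq. (1):
# support(A) = |{T in D : A is a subset of T}| / |D|
def support(itemset, transactions):
    """Fraction of transactions containing every item of `itemset`."""
    a = set(itemset)
    return sum(1 for t in transactions if a <= set(t)) / len(transactions)

# Toy database of visual-word transactions (illustrative only).
D = [{1, 2, 3}, {1, 2}, {2, 4}, {1, 2, 5}]
s = 0.5  # user-defined minimal-support threshold
print(support({1, 2}, D))       # 0.75 -- {1, 2} occurs in 3 of 4 transactions
print(support({1, 2}, D) >= s)  # True -- {1, 2} is frequent at this threshold
```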
Q6. How many features can be detected in a video?
The mean time for motion segmentation (matching + k-means clustering) was typically about 0.4 s per frame, but obviously depends on the number of features detected per frame.
Q7. How long is the runtime for the 40-neighborhood case?
While the runtime is very short for both cases, the method is faster for the 40-neighborhood case, because transactions are shorter and only shorter itemsets were frequent.
Q8. How do the authors determine the number of motion groups?
For each motion group, the authors estimate a series of bounding boxes, containing from 80% of the regions closest to the spatial median of the group progressively up to all of them.
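One way to realise this bounding-box series (a sketch only; the component-wise median, the fraction steps, and the function name are assumptions, as the paper does not spell out the procedure):

```python
def bounding_box_series(points, fractions=(0.8, 0.9, 1.0)):
    """Boxes containing the given fractions of regions closest to the
    group's spatial median (component-wise median, an assumption)."""
    xs = sorted(p[0] for p in points)
    ys = sorted(p[1] for p in points)
    mx, my = xs[len(xs) // 2], ys[len(ys) // 2]
    # Rank regions by squared distance to the median, then grow the box.
    by_dist = sorted(points, key=lambda p: (p[0] - mx) ** 2 + (p[1] - my) ** 2)
    boxes = []
    for f in fractions:
        kept = by_dist[:max(1, round(f * len(points)))]
        boxes.append((min(p[0] for p in kept), min(p[1] for p in kept),
                      max(p[0] for p in kept), max(p[1] for p in kept)))
    return boxes

# Four regions form the object; one far-away region is clutter.
pts = [(0, 0), (1, 0), (0, 1), (1, 1), (10, 10)]
boxes = bounding_box_series(pts)
print(boxes[0])   # 80% box (0, 0, 1, 1) excludes the outlier
print(boxes[-1])  # 100% box (0, 0, 10, 10) covers everything
```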
Q9. How can the authors use this method to find objects of different sizes?
Restricting the neighborhood by motion grouping has proven to be useful for detecting objects of different sizes at the same time.
Q10. How many transactions are necessary to increase the number of itemsets?
Thanks to the careful selection of the central visual words vc, the authors reduce the number of transactions and thus the runtime of the algorithm.
Q11. How does the APriori algorithm find frequent itemsets?
The well known APriori algorithm [1] takes advantage of the monotonicity property and allows us to find frequent itemsets very quickly.
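A minimal level-wise sketch of the Apriori idea (not the authors' implementation): because support is monotone, every subset of a frequent itemset is itself frequent, so size-k candidates need to be generated only from frequent (k−1)-itemsets.

```python
from itertools import chain

def apriori(transactions, min_support):
    """Level-wise frequent-itemset mining: size-k candidates are joined
    only from frequent (k-1)-itemsets, pruning the search space."""
    tsets = [set(t) for t in transactions]
    n = len(tsets)

    def frequent(candidates):
        return {c for c in candidates
                if sum(1 for t in tsets if c <= t) / n >= min_support}

    items = set(chain.from_iterable(tsets))
    level = frequent({frozenset([i]) for i in items})
    result = set(level)
    while level:
        size = len(next(iter(level))) + 1
        # Join step: union pairs of frequent itemsets into size-k candidates.
        candidates = {a | b for a in level for b in level if len(a | b) == size}
        level = frequent(candidates)
        result |= level
    return result

D = [{1, 2, 3}, {1, 2}, {2, 3}, {1, 3}]
found = apriori(D, min_support=0.5)
print(sorted(tuple(sorted(s)) for s in found))
# [(1,), (1, 2), (1, 3), (2,), (2, 3), (3,)]
```

The triple {1, 2, 3} appears in only one of four transactions, so it falls below the 0.5 support threshold and is never even counted as a candidate once any of its subsets were to become infrequent.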
Q12. Why are neighbourhoods around very frequent or very infrequent words discarded?
This is motivated by the notion that neighbourhoods containing a very infrequent word would create infrequent transactions, while neighbourhoods around extremely frequent words have a high probability of being part of clutter.
Q13. How many itemsets are mined in the 40-NN case?
In the 40-NN case, the support threshold has to be set more than a factor of 10 lower to mine even a small set of only 285 frequent itemsets.
Q14. How many times do the authors run k-means?
For each remaining timestep, the authors run k-means three times with different values for k, specifically

k(t) ∈ {k(t−1) − 1, k(t−1), k(t−1) + 1}   (2)

where k(t−1) is the number of motion groups in the previous timestep.
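The candidate-k scheme of Eq. (2) can be sketched as below. The minimal Lloyd's k-means and the complexity-penalized selection score are stand-ins: the paper does not specify here how the best of the three runs is chosen.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's k-means on 2-D points; returns (centroids, inertia)."""
    rng = random.Random(seed)
    cents = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: (p[0] - cents[i][0]) ** 2 + (p[1] - cents[i][1]) ** 2)
            clusters[j].append(p)
        cents = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else cents[i]
                 for i, cl in enumerate(clusters)]
    inertia = sum(min((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in cents)
                  for p in points)
    return cents, inertia

def track_k(points, k_prev):
    """Try k in {k_prev - 1, k_prev, k_prev + 1} as in Eq. (2) and keep the
    best run under a penalized score (inertia + penalty per cluster); the
    penalty is a hypothetical model-selection criterion."""
    best_k, best_score = None, float("inf")
    for k in (k_prev - 1, k_prev, k_prev + 1):
        if k < 1 or k > len(points):
            continue
        _, inertia = kmeans(points, k)
        score = inertia + 5.0 * k  # hypothetical complexity penalty
        if score < best_score:
            best_k, best_score = k, score
    return best_k

# Two well-separated point groups; k from the previous timestep was 3.
pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10), (10, 11), (11, 10), (11, 11)]
print(track_k(pts, 3))
```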
Q15. What is the way to mine objects?
In conclusion, the authors showed that their mining approach based on frequent itemsets is a suitable and efficient tool for video mining.
Q16. What is the difference between the itemsets?
Since the frequent itemset mining typically returns spatially and temporally overlapping itemsets, the authors merge them with a final clustering stage.
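A greedy single-link sketch of such a merging stage. The Jaccard-overlap criterion and the threshold are assumptions: the paper only states that spatially and temporally overlapping itemsets are merged in a final clustering stage.

```python
def merge_itemsets(itemsets, min_overlap=0.5):
    """Greedily merge itemsets whose Jaccard overlap reaches `min_overlap`
    (an assumed stand-in for the paper's spatial/temporal overlap measure)."""
    merged = []
    for s in map(set, itemsets):
        for m in merged:
            if len(s & m) / len(s | m) >= min_overlap:
                m |= s  # absorb the overlapping itemset into the cluster
                break
        else:
            merged.append(s)
    return merged

print(merge_itemsets([{1, 2, 3}, {2, 3, 4}, {9, 10}], min_overlap=0.4))
# [{1, 2, 3, 4}, {9, 10}]
```

A single greedy pass does not re-check transitive overlaps created by earlier merges; a full agglomerative clustering would iterate until no pair of clusters overlaps enough.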