Distinctive Image Features from Scale-Invariant Keypoints
Citations
80 citations
Cites methods from "Distinctive Image Features from Scale-Invariant Keypoints"
...When the raw descriptor collection is on the order of terabytes, as is the case when indexing tens of millions of real-world images using SIFT [18], indexing may take days or even weeks....
[...]
...Many query images are such that only a very small number of SIFT descriptors can be extracted from their contents; e.g., 1% of the images have fewer than 8 descriptors....
[...]
...We evaluate index creation and search using an image collection containing roughly 100 million images; this amounts to about 30 billion SIFT descriptors, or about 4 terabytes of data....
[...]
...SIFT descriptors were then extracted from these images, resulting in about 30 billion descriptors, i.e. 300 SIFT descriptors per image on average....
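A quick back-of-the-envelope check of the figures quoted in these excerpts, assuming the common encoding of one byte per SIFT dimension (128 bytes per descriptor, which is an assumption, not something stated in the excerpt):

```python
# Sanity-check the collection sizes quoted above.
n_images = 100_000_000        # ~100 million images
desc_per_image = 300          # ~300 SIFT descriptors per image on average
bytes_per_desc = 128          # 128 dimensions x 1 byte each (assumed encoding)

total_desc = n_images * desc_per_image
total_bytes = total_desc * bytes_per_desc

print(f"{total_desc:,} descriptors")   # 30,000,000,000 descriptors
print(f"{total_bytes / 1e12:.2f} TB")  # ~3.84 TB, i.e. "about 4 terabytes"
```

The result is consistent with both quoted numbers: 30 billion descriptors and roughly 4 TB of raw descriptor data.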
[...]
...Getting 100% accuracy is impossible, as some image variants have zero SIFT descriptors (e.g., because they are too dark)....
[...]
Cites methods from "Distinctive Image Features from Scale-Invariant Keypoints"
...Since we apply the widely used Bag-of-Words (BoW) model with local SIFT [15] features for video representation in our formulation, such selected feature dimensions, i.e., visual words, correspond to discriminative local visual patterns....
[...]
...For each key frame, we extract 128-dimensional SIFT features [15] over key points and perform BoW quantization to derive the image representations [16]....
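The pipeline described in this excerpt, extracting 128-dimensional SIFT descriptors and quantizing them against a visual-word codebook to get a bag-of-words image representation, can be sketched as follows. This is a minimal illustration with a toy random codebook; the function name `bow_histogram` is mine, not from the cited papers, and in practice the codebook would come from k-means clustering of training descriptors and the descriptors from a SIFT implementation such as OpenCV's `cv2.SIFT_create()`.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize local descriptors (e.g., 128-D SIFT) against a visual-word
    codebook and return a normalized bag-of-words histogram."""
    # Pairwise distances: shape (n_descriptors, n_words).
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)  # nearest visual word for each descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()  # normalize so the histogram sums to 1

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 128))  # toy codebook of 16 visual words
desc = rng.normal(size=(300, 128))     # ~300 descriptors per image, as quoted above
h = bow_histogram(desc, codebook)      # 16-bin BoW representation of the "image"
```

The resulting fixed-length histogram is what makes variable numbers of local descriptors per image comparable, which is the point of the BoW model in these citing papers.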
[...]
Cites background or methods from "Distinctive Image Features from Scale-Invariant Keypoints"
...[Table: communication/computation cost comparison of SSE [17], PKHE [15], and this work; garbled in extraction.] ...the CBIR algorithms used in each work: local color histograms [17], SIFT [33], and global color histograms [34]....
[...]
...In this experiment, PKHE achieved the best result, as expected due to the use of the SIFT retrieval algorithm [33]....
[...]
...grams [17], SIFT [33], and global color histograms [34]....
[...]
...SIFT features were originally designed for object recognition, and we believe that their use to search by example in image repositories (such as the ones used in our experiments and in the literature) does not leverage their full potential....
[...]
...Retrieval precision results for the PKHE system (in both experiments) were not substantially different from the other systems, even though it uses strong texture-based image features (in particular, SIFT)....
[...]
Cites methods from "Distinctive Image Features from Scale-Invariant Keypoints"
...Se et al. [30] used robust scale-invariant feature transform (SIFT) descriptors to associate features, and Davison et al. [31] employed a patch-matching algorithm and a particle-searching strategy for data association....
[...]
...Compared to SIFT [33], Harris corners are more accurate and efficient in textureless environments such as ceilings and walls....
[...]
Cites background or methods from "Distinctive Image Features from Scale-Invariant Keypoints"
...(6) Boosted Part Detectors In our model, we represent the image evidence E by a densely computed grid of local image descriptors, e.g., shape context (Belongie et al. 2001) or SIFT (Lowe 2004); see Sect....
[...]
...In particular, we compute dense appearance representations based on local image descriptors [4,33,35], and use AdaBoost [19] to train discriminative part classifiers....
[...]
...The first interesting outcome of this experiment is that the original SIFT descriptor did not perform well compared to the results obtained with shape context....
[...]
...On the other hand, it shows that SIFT- and HOG-based detectors fail to benefit from a richer image description, which is perhaps due to the fact that properties such as texture do not generalize well across object instances....
[...]
...We compare the performance of shape context descriptors as previously used in [2] with SIFT descriptors [33], and edge templates obtained using the code from [38] and integrated into our pose estimation framework....
[...]