Journal ArticleDOI

Discriminative features for image classification and retrieval

Shang Liu, Xiao Bai
01 Apr 2012 - Pattern Recognition Letters (North-Holland) - Vol. 33, Iss. 6, pp. 744-751
TL;DR: A new method improves the performance of the current bag-of-words based image classification process by introducing a pairwise image matching scheme to select discriminative features.
About: This article was published in Pattern Recognition Letters on 2012-04-01 and has received 44 citations to date. The article focuses on the topics: Visual Word & Automatic image annotation.
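
The TL;DR above mentions a pairwise image matching scheme for selecting discriminative features. The sketch below is only one plausible reading of that idea, not the authors' implementation: the helper names, the ratio-test threshold, and the use of OpenCV SIFT are assumptions. Descriptors that find consistent matches in other images of the same class are kept as "discriminative" and would then feed the usual bag-of-words quantization step.

```python
import cv2
import numpy as np

def matched_mask(descs_a, descs_b, ratio=0.75):
    """Boolean mask over descs_a marking descriptors with a confident
    (Lowe ratio-test) match in descs_b. Hypothetical helper, not the
    procedure from Liu and Bai (2012)."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    mask = np.zeros(len(descs_a), dtype=bool)
    for pair in matcher.knnMatch(descs_a, descs_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            mask[pair[0].queryIdx] = True
    return mask

def select_discriminative(images_of_one_class):
    """Keep the SIFT descriptors of each image that match at least one other
    image of the same class (assumes every image yields descriptors)."""
    sift = cv2.SIFT_create()
    descs = [sift.detectAndCompute(img, None)[1] for img in images_of_one_class]
    kept = []
    for i, d_i in enumerate(descs):
        votes = np.zeros(len(d_i), dtype=bool)
        for j, d_j in enumerate(descs):
            if i != j:
                votes |= matched_mask(d_i, d_j)
        kept.append(d_i[votes])
    return kept
```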
Citations
Journal ArticleDOI
TL;DR: It is proved that the newly defined entropy meets the common requirement of monotonicity and can equivalently characterize the existing attribute reductions in fuzzy rough set theory.

259 citations

Book ChapterDOI
01 Jan 2016
TL;DR: This chapter introduces basic notation and mathematical concepts for detecting and describing image features, discusses the properties of perfect features, and gives an overview of existing detection and description methods.
Abstract: Feature detection, description and matching are essential components of many computer vision applications, so they have received considerable attention in recent decades. Several feature detectors and descriptors have been proposed in the literature, with a variety of definitions of what kinds of points in an image are potentially interesting (i.e., distinctive attributes). This chapter introduces basic notation and mathematical concepts for detecting and describing image features. It then discusses the properties of perfect features and gives an overview of existing detection and description methods. Furthermore, it explains some approaches to feature matching. Finally, the chapter discusses the most widely used techniques for evaluating the performance of detection and description algorithms.

202 citations
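
The chapter abstract above walks through detection, description, and matching conceptually. As a minimal concrete illustration (the library, detector choice, and file names are assumptions, not taken from the chapter), the snippet below detects ORB keypoints in two images, computes binary descriptors, and matches them with a Hamming-distance brute-force matcher:

```python
import cv2

# Load two grayscale views of the same scene (file names are placeholders).
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

# Detection and description in one call.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Matching: brute-force Hamming distance with cross-checking.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} cross-checked matches, best distance {matches[0].distance}")
```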

Journal ArticleDOI
TL;DR: This paper reviews work on improving and/or applying BoW for image annotation, which automatically assigns keywords to images so that retrieval users can query images by keywords.
Abstract: Content-based image retrieval (CBIR) systems require users to query images by their low-level visual content; this not only makes it hard for users to formulate queries, but can also lead to unsatisfactory retrieval results. To this end, image annotation was proposed. The aim of image annotation is to automatically assign keywords to images, so that image retrieval users are able to query images by keywords. Image annotation can be regarded as an image classification problem: images are represented by low-level features, and supervised learning techniques are used to learn the mapping between low-level features and high-level concepts (i.e., class labels). One of the most widely used feature representations is bag-of-words (BoW). This paper reviews related work on improving and/or applying BoW for image annotation. Moreover, many recent works (from 2006 to 2012) are compared in terms of BoW feature generation methodology and experimental design. In addition, several issues in using BoW are discussed, and some important directions for future research are identified.

166 citations
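
The review above describes the standard BoW pipeline for annotation: extract local descriptors, quantize them against a visual vocabulary, and train a supervised classifier on the resulting histograms. A rough sketch of that pipeline follows; the choice of SIFT descriptors, k-means, a linear SVM from scikit-learn, and the vocabulary size are illustrative assumptions, not the survey's recommendation.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def bow_histograms(images, k=200):
    """Build a k-word visual vocabulary from SIFT descriptors and encode each
    image as a normalized histogram of visual words (assumes every image
    yields at least one descriptor)."""
    sift = cv2.SIFT_create()
    per_image = [sift.detectAndCompute(img, None)[1] for img in images]
    vocab = KMeans(n_clusters=k, n_init=10).fit(np.vstack(per_image))
    hists = []
    for descs in per_image:
        words = vocab.predict(descs)
        h = np.bincount(words, minlength=k).astype(float)
        hists.append(h / max(h.sum(), 1.0))
    return np.array(hists), vocab

# Usage sketch: train_images and train_labels stand in for an annotated set.
# X, vocab = bow_histograms(train_images)
# clf = LinearSVC().fit(X, train_labels)  # maps low-level histograms to keywords
```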

Journal ArticleDOI
TL;DR: This paper presents a novel approach to visual object classification that generates simple fuzzy classifiers from local image features and combines them through boosting meta-learning to distinguish one known class from the others.

133 citations

Journal ArticleDOI
TL;DR: A novel feature selection model with group sparsity, Deep Sparse SVM (DSSVM), which not only assigns a suitable weight to each feature dimension, as traditional feature selection models do, but also directly excludes useless features from the feature pool.

58 citations
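
The DSSVM entry above relies on group sparsity to both weight and exclude feature dimensions. As a generic illustration of that mechanism (this is the standard L2,1 proximal step, not the DSSVM model itself; all names and values are assumptions), the snippet below shrinks each row of a weight matrix as a group, driving useless feature dimensions exactly to zero:

```python
import numpy as np

def prox_l21(W, tau):
    """Proximal operator of tau * ||W||_{2,1}: each row of W is shrunk as a
    group; rows whose norm falls below tau become exactly zero, i.e. the
    corresponding feature dimension is excluded."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return W * scale

# Toy usage: 5 feature dimensions, each with 3 weights (e.g. one per class).
W = np.random.randn(5, 3)
W_sparse = prox_l21(W, tau=1.0)
kept = np.flatnonzero(np.linalg.norm(W_sparse, axis=1) > 0)
print("kept feature dimensions:", kept)
```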


Additional excerpts

  • ...Liu and Bai [36] employ feature selection model for image retrieval....


  • ...[36] S. Liu, X. Bai, Discriminative features for image classification and retrieval, Pattern Recognit....


References
Proceedings ArticleDOI
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, Li Fei-Fei
20 Jun 2009
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Abstract: The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.

49,639 citations

Journal ArticleDOI
TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
Abstract: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.

46,906 citations
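
The abstract above outlines a recognition pipeline: nearest-neighbor feature matching, Hough-transform clustering, and least-squares pose verification. The sketch below covers only a simplified version of the first and last steps, substituting RANSAC homography estimation for the Hough clustering and pose solver; the detector settings, thresholds, and file names are assumptions.

```python
import cv2
import numpy as np

img_obj = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # known object
img_scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # cluttered scene

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img_obj, None)
kp2, des2 = sift.detectAndCompute(img_scene, None)

# Nearest-neighbor matching with Lowe's ratio test.
pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

# Geometric verification: estimate a homography with RANSAC and count inliers.
if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is not None:
        print(f"{int(inlier_mask.sum())} geometrically consistent matches")
```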

Book
01 Jan 1973

20,541 citations

01 Jan 2011
TL;DR: The Scale-Invariant Feature Transform (SIFT) algorithm is a highly robust method to extract and subsequently match distinctive invariant features from images, which can then be used to reliably match objects in differing images.
Abstract: The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in differing images. The algorithm was first proposed by Lowe [12] and further developed to increase performance, resulting in the classic paper [13] that served as the foundation for SIFT, which has played an important role in robotic and machine vision in the past decade.

14,708 citations


"Discriminative features for image c..." refers methods in this paper

  • ...Famous contributions include SIFT (Lowe, 2004), PCA-SIFT (Ke and Sukthankar, 2004), SURF (Bay et al., 2008) and more recently Local Self-Similarity (LSS) (Shechtman and Irani, 2007)....




Journal ArticleDOI
TL;DR: A novel scale- and rotation-invariant detector and descriptor, coined SURF (Speeded-Up Robust Features), which approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster.

12,449 citations


"Discriminative features for image c..." refers methods in this paper

  • ...Famous contributions include SIFT (Lowe, 2004), PCA-SIFT (Ke and Sukthankar, 2004), SURF (Bay et al., 2008) and more recently Local Self-Similarity (LSS) (Shechtman and Irani, 2007)....
