Proceedings ArticleDOI

Comparison of Pixel N-Grams with Histogram, Haralick's features and Bag-of-Visual-Words for Texture Image Classification

TL;DR: The classification results using Pixel N-grams were significantly better than those using the Intensity histogram and Haralick's features, and comparable with the BoVW approach.
Abstract: Texture image classification is very useful in many domains. It has been attempted using statistical, spectral and structural approaches. A novel Pixel N-grams technique has recently emerged for image feature extraction. The aim of this paper is to analyse the efficacy of the Pixel N-grams technique for texture image classification in comparison with traditional techniques, namely the Intensity histogram, Haralick's features based on the co-occurrence matrix, and the state-of-the-art Bag-of-Visual-Words (BoVW). The experiments were carried out on the benchmark UIUC texture dataset using an SVM classifier. Classification performance was compared using F-score, Recall and Precision. The classification results using Pixel N-grams were significantly better than those using the Intensity histogram and Haralick's features, and comparable with the BoVW approach.
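
As a rough illustration of the evaluation protocol described in the abstract (an SVM classifier scored with Precision, Recall and F-score), the sketch below uses scikit-learn with placeholder feature vectors; the kernel, the train/test split and the feature dimensions are assumptions, not details taken from the paper.

```python
# Minimal sketch of SVM classification scored with precision/recall/F-score.
# X and y are placeholders standing in for extracted texture features
# (e.g., Pixel N-gram or histogram descriptors) and class labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
X = rng.random((200, 64))          # placeholder feature histograms
y = rng.integers(0, 5, size=200)   # placeholder texture class labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = SVC(kernel="linear").fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Macro-averaged scores: one value per metric across all texture classes.
print("Precision:", precision_score(y_test, y_pred, average="macro"))
print("Recall:   ", recall_score(y_test, y_pred, average="macro"))
print("F-score:  ", f1_score(y_test, y_pred, average="macro"))
```
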
Citations
Book ChapterDOI
01 Jan 2020
TL;DR: This chapter describes a novel feature extraction and image representation technique, ‘Pixel N-grams’, inspired by the ‘Character N-grams’ concept in text categorization. It demonstrates promising classification accuracy in addition to reduced computational cost, enabling a new way of performing mammographic classification on low-resource computers.
Abstract: Image classification has wide applications in many fields, including medical imaging. A major aspect of classification is extracting features that correctly represent the important variations in an image. Global image features commonly used for classification include Intensity Histograms, Haralick's features based on the Gray-level co-occurrence matrix, Local Binary Patterns and Gabor filters. This chapter describes a novel feature extraction and image representation technique, ‘Pixel N-grams’, inspired by the ‘Character N-grams’ concept in text categorization. The classification performance of Pixel N-grams is tested on various datasets, including the UIUC texture dataset, a binary shapes dataset, the miniMIAS mammography dataset, and a real-world high-resolution mammography dataset provided by an Australian radiology practice. The results are compared with other feature extraction techniques such as co-occurrence matrix features, the intensity histogram, and bag of visual words. The results demonstrate promising classification accuracy in addition to reduced computational cost, enabling mammographic classification on low-resource computers.
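
Two of the global features listed above (the intensity histogram and Local Binary Patterns) can be illustrated with a minimal scikit-image sketch; the bin counts and LBP parameters are arbitrary examples, not values used in the chapter.

```python
# Rough illustration of two common global texture features: a normalised
# intensity histogram and a uniform-LBP histogram, concatenated into one
# feature vector. Assumes scikit-image is installed.
import numpy as np
from skimage import data
from skimage.feature import local_binary_pattern

image = data.camera()  # any 8-bit grayscale image

# Intensity histogram: 32-bin distribution of gray levels.
hist, _ = np.histogram(image, bins=32, range=(0, 256), density=True)

# Uniform LBP with 8 neighbours at radius 1, summarised as a histogram.
lbp = local_binary_pattern(image, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=int(lbp.max()) + 1, density=True)

feature_vector = np.concatenate([hist, lbp_hist])
print(feature_vector.shape)
```
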

2 citations

References
Journal ArticleDOI
01 Nov 1973
TL;DR: These results indicate that the easily computable textural features based on gray-tone spatial dependencies probably have a general applicability for a wide variety of image-classification applications.
Abstract: Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks on three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
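
A small sketch of co-occurrence-based texture features in the spirit of this paper, assuming a recent scikit-image (graycomatrix/graycoprops); the offsets, angles and chosen statistics are illustrative only.

```python
# Gray-level co-occurrence matrix (GLCM) and a few Haralick-style
# statistics derived from it. Assumes scikit-image >= 0.19.
import numpy as np
from skimage import data
from skimage.feature import graycomatrix, graycoprops

image = data.camera()  # 8-bit grayscale image

# Symmetric, normalised co-occurrence matrix for one pixel offset and
# four directions (0, 45, 90, 135 degrees).
glcm = graycomatrix(image, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

# Classical co-occurrence statistics, averaged over the four directions.
for prop in ("contrast", "correlation", "energy", "homogeneity"):
    print(prop, graycoprops(glcm, prop).mean())
```
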

20,442 citations


"Comparison of Pixel N-Grams with Hi..." refers methods in this paper

  • ...Haralick’s features [9] based on the co-occurrence matrix have been quite successful for texture classification....

    [...]

Journal ArticleDOI
TL;DR: The proposed texture representation is evaluated in retrieval and classification tasks using the entire Brodatz database and a publicly available collection of 1,000 photographs of textured surfaces taken from different viewpoints.
Abstract: This paper introduces a texture representation suitable for recognizing images of textured surfaces under a wide range of transformations, including viewpoint changes and nonrigid deformations. At the feature extraction stage, a sparse set of affine Harris and Laplacian regions is found in the image. Each of these regions can be thought of as a texture element having a characteristic elliptic shape and a distinctive appearance pattern. This pattern is captured in an affine-invariant fashion via a process of shape normalization followed by the computation of two novel descriptors, the spin image and the RIFT descriptor. When affine invariance is not required, the original elliptical shape serves as an additional discriminative feature for texture recognition. The proposed approach is evaluated in retrieval and classification tasks using the entire Brodatz database and a publicly available collection of 1,000 photographs of textured surfaces taken from different viewpoints.
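
For context, a schematic Bag-of-Visual-Words pipeline is sketched below using scikit-learn; the cited work relies on affine-invariant region detectors and spin-image/RIFT descriptors, which are replaced here by raw image patches purely to keep the example self-contained.

```python
# Schematic Bag-of-Visual-Words: local descriptors -> K-means vocabulary
# -> per-image histogram of visual-word assignments.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(10)]  # placeholder textures

# 1. Collect local descriptors (here: flattened 8x8 patches).
def descriptors(img, n=50):
    patches = extract_patches_2d(img, (8, 8), max_patches=n, random_state=0)
    return patches.reshape(len(patches), -1)

all_desc = np.vstack([descriptors(img) for img in images])

# 2. Build the visual vocabulary by clustering descriptors.
vocab = KMeans(n_clusters=16, n_init=10, random_state=0).fit(all_desc)

# 3. Represent each image as a normalised histogram over visual words.
def bovw_histogram(img):
    words = vocab.predict(descriptors(img))
    hist = np.bincount(words, minlength=16).astype(float)
    return hist / hist.sum()

print(bovw_histogram(images[0]))
```
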

1,185 citations


"Comparison of Pixel N-Grams with Hi..." refers methods in this paper

  • ...edu/ponce_grp [8] has been used for the experiments....

    [...]

  • ...For the BoVW approach, the precision and recall results are not given in the paper [8] and hence are not plotted....

    [...]

  • ...The classification results for BoVW were directly taken from this work for comparison purposes [8]....

    [...]

Journal ArticleDOI
TL;DR: This paper considers invariant texture analysis, addressing approaches whose performance is not affected by translation, rotation, affine, and perspective transforms.

478 citations


"Comparison of Pixel N-Grams with Hi..." refers background in this paper

  • ...Texture can be defined as a quantitative measure of the arrangement of intensities in an image and can be modeled using statistical, spectral or structural approaches [1, 2]....

    [...]

Journal ArticleDOI
TL;DR: The increasing number of unsolicited e-mail messages (spam) reveals the need for the development of reliable anti-spam filters and the vast majority of content-based techniques rely on word-based repr...
Abstract: The increasing number of unsolicited e-mail messages (spam) reveals the need for the development of reliable anti-spam filters. The vast majority of content-based techniques rely on word-based repr...
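
The character N-gram representation that this reference builds on (and that inspired Pixel N-grams) can be demonstrated in a few lines with scikit-learn's CountVectorizer; the example strings are invented.

```python
# Count character trigrams in short text documents, analogous to counting
# pixel N-grams in an image.
from sklearn.feature_extraction.text import CountVectorizer

docs = ["cheap pills online", "meeting agenda attached"]

# Character trigrams, counted within word boundaries.
vectorizer = CountVectorizer(analyzer="char_wb", ngram_range=(3, 3))
counts = vectorizer.fit_transform(docs)

print(vectorizer.get_feature_names_out()[:10])
print(counts.toarray()[0][:10])
```
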

100 citations


"Comparison of Pixel N-Grams with Hi..." refers background in this paper

  • ...A novel technique inspired by the character N-gram concept from text retrieval was proposed for mammographic image classification [3] and is called Pixel N-grams....

    [...]

  • ...The image can then be represented with the help of the number of occurrences of these N-grams....

    [...]

  • ...Index Terms—Pixel N-grams, image classification, texture, histogram, co-occurrence matrix, Bag-of-Visual-Words. I. INTRODUCTION: Texture is an important property useful for identifying objects or regions of interest in an image....

    [...]

  • ...PIXEL N-GRAMS TECHNIQUE: The Pixel N-grams technique is inspired by the character N-gram technique from the text retrieval context....

    [...]

  • ...Similarly, Pixel N-grams are sequences of intensities of N consecutive pixels in an image.... (A minimal sketch of this idea appears after this list.)

    [...]
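
A minimal sketch of the Pixel N-gram idea as paraphrased in the excerpts above: count how often each sequence of N consecutive (quantised) pixel intensities occurs in an image. The quantisation level, scanning direction and normalisation are assumptions, not details taken from the paper.

```python
# Count pixel N-grams along image rows and return a fixed-length,
# normalised histogram over all possible N-grams.
import numpy as np
from collections import Counter

def pixel_ngram_histogram(image, n=2, levels=8):
    # Quantise 8-bit intensities so the vocabulary stays manageable
    # (levels ** n possible N-grams).
    q = (image.astype(np.int64) * levels) // 256
    counts = Counter()
    for row in q:
        for i in range(len(row) - n + 1):
            counts[tuple(row[i:i + n])] += 1
    hist = np.zeros(levels ** n, dtype=float)
    for gram, c in counts.items():
        index = 0
        for v in gram:
            index = index * levels + int(v)
        hist[index] = c
    return hist / hist.sum()

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(32, 32))
print(pixel_ngram_histogram(image, n=2, levels=8).shape)  # (64,)
```
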

Proceedings ArticleDOI
03 Dec 2010
TL;DR: This paper proposes a novel texture classification method via patch-based sparse texton learning, inspired by the great success of l1-norm minimization based sparse representation (SR), where the dictionary of textons is learned by applying SR to image patches in the training dataset.
Abstract: Texture classification is a classical yet still active topic in computer vision and pattern recognition. Recently, several new texture classification approaches that model texture images as distributions over a set of textons have been proposed. These textons are learned as the cluster centers in the image patch feature space using the K-means clustering algorithm. However, the Euclidean-distance-based K-means clustering process may not be able to well characterize the intrinsic feature space of texture textons, which is often embedded in a low-dimensional manifold. Inspired by the great success of l1-norm minimization based sparse representation (SR), in this paper we propose a novel texture classification method via patch-based sparse texton learning. Specifically, the dictionary of textons is learned by applying SR to image patches in the training dataset. The SR coefficients of the test images over the dictionary are used to construct the histograms for texture classification. Experimental results on a benchmark database validate the effectiveness of the proposed method.
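
A schematic version of patch-based sparse texton learning as described in this abstract, using scikit-learn's DictionaryLearning as a stand-in for the paper's sparse coding stage; patch sizes, dictionary size and the pooling step are placeholders.

```python
# Learn a small dictionary of "textons" from training patches via sparse
# coding, then pool the sparse coefficients of test patches into a
# histogram-like descriptor.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
train_image = rng.random((64, 64))
test_image = rng.random((64, 64))

def patches(img, n=200):
    p = extract_patches_2d(img, (6, 6), max_patches=n, random_state=0)
    return p.reshape(len(p), -1)

# Dictionary of 16 atoms learned from training patches.
learner = DictionaryLearning(n_components=16,
                             transform_algorithm="lasso_lars",
                             transform_alpha=0.1,
                             random_state=0, max_iter=10)
learner.fit(patches(train_image))

# Sparse coefficients of test patches, pooled into one descriptor.
codes = learner.transform(patches(test_image))
descriptor = np.abs(codes).sum(axis=0)
descriptor /= descriptor.sum() + 1e-12  # guard against an all-zero code
print(descriptor.shape)  # (16,)
```
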

48 citations


"Comparison of Pixel N-Grams with Hi..." refers background in this paper

  • ...Texture can be defined as a quantitative measure of the arrangement of intensities in an image and can be modeled using statistical, spectral or structural approaches [1, 2]....

    [...]