Journal ArticleDOI

Revisiting HEp-2 Cell Image Classification

TL;DR: A framework to automate the identification of antigen patterns in the cell images is presented and suggests that the algorithm is comparable with the state-of-the-art approaches.
Abstract: The immune system in Homo sapiens protects the body against diseases by identifying and attacking foreign pathogens. However, when the system misidentifies native cells as threats, it results in an auto-immune response. The auto-antibodies generated during this phenomenon may be identified through the indirect immunofluorescence test. An important constituent process of this test is the automated identification of antigen patterns in the cell images, which is the focus of this research. We perform a detailed literature review and present a framework to automate the identification of antigen patterns. The efficacy of the framework, demonstrated on the MIVIA ICPR 2012 HEp-2 Cell Contest and SNP HEp-2 Cell datasets, suggests that the algorithm is comparable with the state-of-the-art approaches.
Citations
Journal Article
01 Jan 2012-Scopus
TL;DR: This work proposes feature extraction methods for automatic recognition of staining patterns of HEp-2 images to develop a Computer-Aided Diagnosis system and support the specialists' decision.
Abstract: Indirect ImmunoFluorescence (IIF) is currently the recommended method for the detection of antinuclear autoantibodies (ANA). It is an effective technique to reveal the presence of autoimmune diseases; however, it is a subjective method and hence dependent on the experience and expertise of the physician. Moreover, inter-observer variability limits the reproducibility of IIF reading. To this end, we propose feature extraction methods for automatic recognition of staining patterns of HEp-2 images (provided as part of the ICPR 2012 HEp-2 Cells Classification Contest) to develop a Computer-Aided Diagnosis system and support the specialists' decision. We compare the performance of various individual and combined features and show that a combination of HOG (Histogram of Oriented Gradients), texture, and ROI (Region of Interest) features is best suited for our task, achieving an overall accuracy of 91.13% with a Support Vector Machine as the classifier.
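The HOG stage of the pipeline above can be sketched in miniature. The following is an illustrative orientation-histogram extractor; the function name, cell size, and bin count are assumptions rather than the paper's settings, and block normalization and the texture/ROI features are omitted:

```python
import numpy as np

def hog_features(image, cell=8, bins=9):
    """Minimal HOG: per-cell histograms of unsigned gradient
    orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation
    h, w = image.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

# A 32x32 image yields (32/8)^2 = 16 cells of 9 bins each.
img = np.random.default_rng(0).random((32, 32))
f = hog_features(img)
```

In the cited work, descriptors of this kind are combined with texture and ROI features and fed to a Support Vector Machine.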

42 citations

Journal ArticleDOI
TL;DR: The results show that the proposed features manage to capture the distinctive characteristics of the different cell types while performing at least as well as the actual deep learning-based state-of-the-art methods in terms of discrimination.
Abstract: The automated and accurate classification of the images portraying the Human Epithelial cells of type 2 (HEp-2) represents one of the most important steps in the diagnosis procedure of many autoimmune diseases. The extreme intra-class variations of the HEp-2 cell images datasets drastically complicates the classification task. We propose in this work a classification framework that, unlike most of the state-of-the-art methods, uses a deep learning-based feature extraction method in a strictly unsupervised way. We propose a deep learning-based hybrid feature learning with two levels of deep convolutional autoencoders. The first level takes the original cell images as the inputs and learns to reconstruct them, in order to capture the features related to the global shape of the cells, and the second network takes the gradients of the images, in order to encode the localized changes in intensity (gray variations) that characterize each cell type. A final feature vector is constructed by combining the latent representations extracted from the two networks, giving a highly discriminative feature representation. The created features will be fed to a nonlinear classifier whose output will represent the type of the cell image. We have tested the discriminability of the proposed features on two of the most popular HEp-2 cell classification datasets, the SNPHEp-2 and ICPR 2016 datasets. The results show that the proposed features manage to capture the distinctive characteristics of the different cell types while performing at least as well as the actual deep learning-based state-of-the-art methods in terms of discrimination.
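The paper's two-stream idea (one autoencoder on the raw images, one on their gradients, latents concatenated) can be illustrated with a deliberately tiny stand-in. The sketch below replaces the deep convolutional autoencoders with one-layer linear autoencoders trained in numpy, and uses a crude finite-difference "gradient" image; all names and sizes are illustrative assumptions:

```python
import numpy as np

def train_linear_ae(X, k, lr=0.01, epochs=200, seed=0):
    """Tiny linear autoencoder (one encoder/decoder matrix pair),
    trained by plain gradient descent on reconstruction MSE."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=0.1, size=(d, k))   # encoder
    V = rng.normal(scale=0.1, size=(k, d))   # decoder
    for _ in range(epochs):
        Z = X @ W                 # latent codes
        R = Z @ V                 # reconstruction
        E = R - X
        V -= lr * Z.T @ E / len(X)
        W -= lr * X.T @ (E @ V.T) / len(X)
    return W

rng = np.random.default_rng(1)
imgs = rng.random((64, 8, 8))                      # toy "cell images"
grads = np.abs(np.diff(imgs, axis=2, append=0.0))  # crude intensity gradients
X1 = imgs.reshape(64, -1)
X2 = grads.reshape(64, -1)
W1 = train_linear_ae(X1, k=5)
W2 = train_linear_ae(X2, k=5, seed=1)
features = np.hstack([X1 @ W1, X2 @ W2])  # concatenated hybrid features
```

The concatenated latent vector then plays the role of the "highly discriminative feature representation" fed to a nonlinear classifier.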

22 citations

Journal ArticleDOI
TL;DR: The proposed framework classifies 15 types of RBC shapes, including normal, in an automated manner with a deep AlexNet transfer learning model, achieving high classification accuracy, specificity, and precision.
Abstract: Sickle cell anemia (SCA) is a serious hematological disorder in which affected patients are frequently hospitalized throughout their lifetime and which can even cause death. Manually detecting and classifying the abnormal cells in an SCA patient's blood film through a microscope is time-consuming, tedious, prone to error, and requires a trained hematologist. Affected patients present many cell shapes that exhibit important biomechanical characteristics, so an effective way of classifying the abnormalities present in SCA will give better insight into managing the patient's care. The proposed algorithm has two phases: first, automated red blood cell (RBC) extraction to identify the RBC region of interest (ROI) in the patient's blood smear image; second, a deep learning AlexNet model to classify and predict the abnormalities present in SCA patients. The study was performed on over 9,000 single-RBC images taken from 130 SCA patients, with 750 cells per class, to develop shape-factor quantification and general multiscale shape analysis. We show that the proposed framework can classify 15 types of RBC shapes, including normal, in an automated manner with a deep AlexNet transfer learning model, achieving a classification accuracy, sensitivity, specificity, and precision of 95.92%, 77%, 98.82%, and 90%, respectively.
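The first phase (extracting the RBC ROI from the blood smear image) typically starts with a global threshold. Below is a minimal sketch using Otsu's method on a synthetic bimodal image; the specific thresholding technique is an assumption, and the subsequent removal of small particles and debris mentioned by the citing text is not shown:

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the grayscale histogram."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                       # background weight
    w1 = 1 - w0                                # foreground weight
    m = np.cumsum(hist * centers)
    m0 = m / np.where(w0 > 0, w0, 1)           # background mean
    m1 = (m[-1] - m) / np.where(w1 > 0, w1, 1) # foreground mean
    var_between = w0 * w1 * (m0 - m1) ** 2
    return centers[np.argmax(var_between)]

# Synthetic bimodal image: dark background, one bright "cell" patch.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (64, 64))
img[16:32, 16:32] = rng.normal(0.8, 0.05, (16, 16))
t = otsu_threshold(img)
mask = img > t   # candidate RBC region of interest
```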

20 citations


Cites background from "Revisiting HEp-2 Cell Image Classif..."

  • ...The threshold image contains small unwanted particles like debris and noise due to the nature of the dataset [15-17]....

    [...]

Journal ArticleDOI
09 May 2020-Sensors
TL;DR: A deep learning scheme is proposed that performs both feature extraction and cell discrimination through an end-to-end unsupervised paradigm, using a deep convolutional autoencoder (DCAE) that extracts features via an encoding–decoding scheme.
Abstract: Classifying the images that portray the Human Epithelial cells of type 2 (HEp-2) represents one of the most important steps in the diagnosis procedure of autoimmune diseases. Performing this classification manually represents an extremely complicated task due to the heterogeneity of these cellular images. Hence, an automated classification scheme appears to be necessary. However, the majority of the available methods prefer to utilize the supervised learning approach for this problem. The need for thousands of images labelled manually can represent a difficulty with this approach. The first contribution of this work is to demonstrate that classifying HEp-2 cell images can also be done using the unsupervised learning paradigm. Unlike the majority of the existing methods, we propose here a deep learning scheme that performs both the feature extraction and the cells' discrimination through an end-to-end unsupervised paradigm. We propose the use of a deep convolutional autoencoder (DCAE) that performs feature extraction via an encoding-decoding scheme. At the same time, we embed in the network a clustering layer whose purpose is to automatically discriminate, during the feature learning process, the latent representations produced by the DCAE. Furthermore, we investigate how the quality of the network's reconstruction can affect the quality of the produced representations. We have investigated the effectiveness of our method on some benchmark datasets and we demonstrate here that the unsupervised learning, when done properly, performs at the same level as the actual supervised learning-based state-of-the-art methods in terms of accuracy.
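The paper trains its clustering layer jointly with the DCAE during feature learning. As a much simpler stand-in, the discrimination step can be illustrated by running plain k-means on latent codes after training (a post-hoc variant, unlike the embedded layer); all names here are illustrative:

```python
import numpy as np

def kmeans(Z, k, iters=50, seed=0):
    """Plain Lloyd's k-means over latent codes Z (n_samples x dim)."""
    rng = np.random.default_rng(seed)
    centers = Z[rng.choice(len(Z), k, replace=False)]
    for _ in range(iters):
        # Assign each code to its nearest center, then recompute means.
        d = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = Z[labels == j].mean(0)
    return labels, centers

# Toy latent space with two well-separated "cell type" clusters.
rng = np.random.default_rng(1)
Z = np.vstack([rng.normal(0, 0.1, (30, 4)), rng.normal(3, 0.1, (30, 4))])
labels, _ = kmeans(Z, k=2)
```

Embedding the clustering objective inside the network, as the paper does, lets the latent space and the cluster assignments shape each other during training.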

13 citations

Journal ArticleDOI
TL;DR: A dynamic learning process is conducted with different networks taking different input variations in parallel in order to efficiently homogenize the features extracted from the images that have different intensity levels.
Abstract: The complete analysis of the images representing the human epithelial cells of type 2, commonly referred to as HEp-2 cells, is one of the most important tasks in the diagnosis procedure of various autoimmune diseases. The problem of the automatic classification of these images has been widely discussed since the unfolding of deep learning-based methods. Certain datasets of the HEp-2 cell images exhibit an extreme complexity due to their significant heterogeneity. We propose in this work a method that tackles specifically the problem related to this disparity. A dynamic learning process is conducted with different networks taking different input variations in parallel. In order to emphasize the localized changes in intensity, the discrete wavelet transform is used to produce different versions of the input image. The approximation and detail coefficients are fed to four different deep networks in a parallel learning paradigm in order to efficiently homogenize the features extracted from the images that have different intensity levels. The feature maps from these different networks are then concatenated and passed to the classification layers to produce the final type of the cellular image. The proposed method was tested on a public dataset that comprises images from two intensity levels. The significant heterogeneity of this dataset limits the discrimination results of some of the state-of-the-art deep learning-based methods. We have conducted a comparative study with these methods in order to demonstrate how the dynamic learning proposed in this work manages to significantly minimize this heterogeneity related problem, thus boosting the discrimination results.
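The decomposition feeding the four parallel networks can be illustrated with a one-level 2-D Haar transform, which produces exactly one approximation band and three detail bands. Haar with averaging normalization is shown for simplicity; the paper's wavelet family and normalization may differ:

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar wavelet transform: returns the
    approximation (LL) and detail (LH, HL, HH) coefficient bands."""
    a = (x[0::2] + x[1::2]) / 2      # row low-pass
    d = (x[0::2] - x[1::2]) / 2      # row high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

# On a smooth ramp image the detail bands are small and constant.
img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = haar_dwt2(img)
```

Each of the four bands would then be routed to its own network before the feature maps are concatenated for classification.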

9 citations

References
Journal ArticleDOI
TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
Abstract: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
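The matching stage described above rests on accepting a nearest-neighbor match only when it is clearly better than the runner-up (Lowe's ratio test). Below is a brute-force numpy sketch on synthetic descriptors; the paper itself uses a fast best-bin-first search rather than this exhaustive scan, and the ratio value is an assumption:

```python
import numpy as np

def ratio_test_match(desc1, desc2, ratio=0.8):
    """Accept a nearest-neighbor match only if the best distance is
    clearly smaller than the second-best distance."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j, j2 = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[j2]:
            matches.append((i, j))
    return matches

rng = np.random.default_rng(0)
desc2 = rng.random((20, 128))                          # "database" descriptors
desc1 = desc2[[3, 7]] + rng.normal(0, 0.01, (2, 128))  # noisy query copies
matches = ratio_test_match(desc1, desc2)
```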

46,906 citations

Proceedings ArticleDOI
17 Jun 2006
TL;DR: This paper presents a method for recognizing scene categories based on approximate global geometric correspondence that exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories.
Abstract: This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting "spatial pyramid" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralba’s "gist" and Lowe’s SIFT descriptors.
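The spatial pyramid itself can be sketched as concatenated visual-word histograms over successively finer grids. The level weighting used in the paper is omitted here, and the names, grid depth, and vocabulary size are illustrative assumptions:

```python
import numpy as np

def spatial_pyramid_histogram(points, words, vocab_size, levels=2):
    """Concatenate visual-word histograms over a pyramid of grids
    (1x1, 2x2, 4x4, ...) on the unit square. `points` are (x, y)
    feature locations in [0, 1); `words` are their visual-word ids."""
    feats = []
    for lvl in range(levels + 1):
        g = 2 ** lvl                                 # g x g grid at this level
        cell = np.minimum((points * g).astype(int), g - 1)
        for cx in range(g):
            for cy in range(g):
                in_cell = (cell[:, 0] == cx) & (cell[:, 1] == cy)
                hist = np.bincount(words[in_cell], minlength=vocab_size)
                feats.append(hist)
    return np.concatenate(feats)

rng = np.random.default_rng(0)
pts = rng.random((100, 2))                # feature locations
words = rng.integers(0, 10, 100)          # quantized descriptors
f = spatial_pyramid_histogram(pts, words, vocab_size=10)
```

With 10 words and levels 0–2 this yields (1 + 4 + 16) x 10 = 210 dimensions; the level-0 block is just the orderless bag-of-features histogram.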

8,736 citations


"Revisiting HEp-2 Cell Image Classif..." refers methods in this paper

  • ...The CPM descriptor, an adaptation of the Spatial Pyramid Matching descriptor [43], is composed of regional histograms of visual words....

    [...]

Journal ArticleDOI
TL;DR: In this paper, the authors present two approaches for obtaining class probabilities, which can be reduced to linear systems and are easy to implement, and show conceptually and experimentally that the proposed approaches are more stable than the two existing popular methods: voting and the method by Hastie and Tibshirani (1998).
Abstract: Pairwise coupling is a popular multi-class classification method that combines all comparisons for each pair of classes. This paper presents two approaches for obtaining class probabilities. Both methods can be reduced to linear systems and are easy to implement. We show conceptually and experimentally that the proposed approaches are more stable than the two existing popular methods: voting and the method by Hastie and Tibshirani (1998).
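Both approaches reduce to linear systems; a minimal sketch of the least-squares coupling idea, solved through its KKT system, is shown below. The formulation here is one common variant and may differ in detail from the paper's exact methods:

```python
import numpy as np

def couple(r):
    """Recover class probabilities p from pairwise estimates
    r[i, j] ~ P(class i | class i or j) by minimizing
    sum_{i<j} (r[j,i]*p_i - r[i,j]*p_j)^2 subject to sum(p) = 1,
    via the linear KKT system with one Lagrange multiplier."""
    k = len(r)
    Q = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            if i != j:
                Q[i, i] += r[j, i] ** 2
                Q[i, j] = -r[j, i] * r[i, j]
    A = np.zeros((k + 1, k + 1))   # augmented KKT matrix
    A[:k, :k] = Q
    A[:k, k] = 1
    A[k, :k] = 1
    b = np.zeros(k + 1)
    b[k] = 1                        # the constraint sum(p) = 1
    return np.linalg.solve(A, b)[:k]

# Consistent pairwise estimates built from a known distribution:
# the coupling should recover it exactly.
p_true = np.array([0.5, 0.3, 0.2])
r = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        if i != j:
            r[i, j] = p_true[i] / (p_true[i] + p_true[j])
p = couple(r)
```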

1,888 citations

Book ChapterDOI
07 May 2006
TL;DR: A fast method for the computation of covariances based on integral images is described; the performance of the covariance features is shown to be superior to other methods, and large rotations and illumination changes are also absorbed by the covariance matrix.
Abstract: We describe a new region descriptor and apply it to two problems, object detection and texture classification. The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computation of covariances based on integral images. The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space, therefore we use a distance metric involving generalized eigenvalues which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and performed extremely rapidly using the integral images. The performance of the covariance features is superior to other methods, as it is shown, and large rotations and illumination changes are also absorbed by the covariance matrix.
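The descriptor and its generalized-eigenvalue distance can be sketched directly. The per-pixel feature choice below is illustrative, the integral-image acceleration described in the paper is omitted, and the generalized eigenvalues are computed naively via a solve:

```python
import numpy as np

def region_covariance(region):
    """Covariance descriptor of an image region: the covariance of a
    per-pixel feature vector (here: x, y, intensity, |dx|, |dy|)."""
    h, w = region.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = np.gradient(region.astype(float))
    F = np.stack([xs.ravel(), ys.ravel(), region.ravel(),
                  np.abs(dx).ravel(), np.abs(dy).ravel()])
    return np.cov(F)

def covariance_distance(C1, C2):
    """Distance on the manifold of SPD matrices: sqrt of the summed
    squared logs of the generalized eigenvalues of (C1, C2)."""
    lam = np.linalg.eigvals(np.linalg.solve(C2, C1)).real
    return np.sqrt((np.log(lam) ** 2).sum())

rng = np.random.default_rng(0)
a = rng.random((16, 16))
b = a + rng.normal(0, 0.01, (16, 16))    # near-duplicate region
ramp = np.linspace(0, 1, 16)
c = np.tile(ramp, (16, 1)) + rng.normal(0, 0.05, (16, 16))  # very different region
Ca, Cb, Cc = map(region_covariance, (a, b, c))
d_close = covariance_distance(Ca, Cb)
d_far = covariance_distance(Ca, Cc)
```

Matching then reduces to a nearest-neighbor search under this distance, which the paper makes fast by computing the covariances from integral images.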

1,338 citations


Additional excerpts

  • ...[38], is utilized in the BoW framework....

    [...]

Journal Article
TL;DR: In this paper, a fast method for computation of covariance matrices based on integral images is described, which is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations.
Abstract: We describe a new region descriptor and apply it to two problems, object detection and texture classification. The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computation of covariances based on integral images. The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space, therefore we use a distance metric involving generalized eigenvalues which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and performed extremely rapidly using the integral images. The performance of the covariance features is superior to other methods, as it is shown, and large rotations and illumination changes are also absorbed by the covariance matrix.

1,057 citations