Iris tissue recognition based on GLDM feature extraction and hybrid MLPNN-ICA classifier
TL;DR: A new method of feature extraction and classification based on gray-level difference method and hybrid MLPNN-ICA classifier is proposed, which is implemented on CASIA-Iris V3 dataset and UCI machine learning repository datasets.
Abstract: Iris-based identification is an accurate and reliable way of recognizing people. The method consists of four main processing stages: segmentation, normalization, feature extraction, and matching. In this study, a new feature extraction and classification approach based on the gray-level difference method and a hybrid MLPNN-ICA classifier is proposed. The experiments are carried out on the CASIA-Iris V3 dataset and on datasets from the UCI machine learning repository.
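The gray-level difference method underlying the feature extraction stage can be sketched in a few lines: histogram the absolute gray-level differences between pixel pairs at a fixed displacement, then summarize that histogram with a few texture statistics. This is a generic GLDM sketch, not the paper's exact feature set; the displacement and the chosen statistics are illustrative.

```python
import numpy as np

def gldm_features(img, dx=1, dy=0, levels=256):
    """First-order statistics of gray-level differences at displacement
    (dx, dy). Generic GLDM sketch; not the paper's exact feature set."""
    img = np.asarray(img, dtype=np.int64)
    h, w = img.shape
    # Pixel pairs (a, b) separated by the displacement, boundaries cropped.
    a = img[max(-dy, 0):h - max(dy, 0), max(-dx, 0):w - max(dx, 0)]
    b = img[max(dy, 0):h - max(-dy, 0), max(dx, 0):w - max(-dx, 0)]
    diff = np.abs(a - b).ravel()
    p = np.bincount(diff, minlength=levels).astype(float)
    p /= p.sum()                       # probability of each difference value
    d = np.arange(levels)
    nz = p[p > 0]
    return {
        "mean": float((d * p).sum()),           # average difference
        "contrast": float((d ** 2 * p).sum()),  # second moment of differences
        "entropy": float(-(nz * np.log2(nz)).sum()),
        "asm": float((p ** 2).sum()),           # angular second moment
        "idm": float((p / (1.0 + d ** 2)).sum()),  # inverse difference moment
    }

# A flat image has zero contrast; a 0/255 checkerboard has mean difference 255.
flat = gldm_features(np.zeros((8, 8), dtype=int))
board = gldm_features(np.indices((8, 8)).sum(0) % 2 * 255)
```

A full feature vector would typically pool such statistics over several displacements and directions before feeding the classifier.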
Citations
TL;DR: Iris recognition methods are proposed that combine a two-dimensional Gabor kernel, step filtering, and polynomial filtering for feature extraction with a hybrid radial basis function neural network (RBFNN) and genetic algorithm (GA) for the matching task.
Abstract: In the new millennium, amid the chaotic situation in the world, people are threatened by multifarious terrorist attacks, and several intelligent approaches have been devised to recognize and diminish these assaults. Biometric traits have proven to be a useful way of tackling these problems. Among these traits, iris recognition systems are appropriate tools for human identification: not only is the iris pattern a well-known feature, but it also offers compact representation, texture uniqueness, and stability. Although many approaches have been published in this area, problems such as long running time and computational complexity remain. To address these obstacles, we propose an iris recognition method based on a combination of a two-dimensional Gabor kernel (2-DGK), step filtering (SF), and polynomial filtering (PF) for feature extraction, and a hybrid radial basis function neural network (RBFNN) with a genetic algorithm (GA) for the matching task. To assess the performance of the proposed method, we use two benchmarks and implement the algorithm on the CASIA-Iris V3, UBIRIS.V1, and UCI machine learning repository datasets. The experimental results show that the method is efficient for iris recognition.
38 citations
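The 2-D Gabor kernel used for feature extraction in the work above has a standard textbook form: a Gaussian envelope modulating a cosine carrier. The sketch below builds such a kernel and checks that it responds selectively to stripes of a matching orientation; all parameter values are illustrative, not the cited paper's settings (`sliding_window_view` requires NumPy >= 1.20).

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0, gamma=0.5):
    """Even-symmetric 2-D Gabor kernel: Gaussian envelope times a cosine
    carrier at wavelength `lam`, rotated by `theta`. Textbook form."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def filter_energy(img, kern):
    """Mean absolute filter response over all valid windows."""
    win = np.lib.stride_tricks.sliding_window_view(img, kern.shape)
    return float(np.abs(np.einsum("ijkl,kl->ij", win, kern)).mean())

# Vertical stripes with period 6 respond far more strongly to the
# matching orientation (theta = 0) than to the orthogonal one.
stripes = np.tile(np.cos(2 * np.pi * np.arange(40) / 6), (40, 1))
r0 = filter_energy(stripes, gabor_kernel(theta=0.0))
r90 = filter_energy(stripes, gabor_kernel(theta=np.pi / 2))
```

An iris feature vector would typically pool such responses from a small bank of orientations and wavelengths over the normalized iris strip.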
TL;DR: A comparative study of different convolutional neural network (CNN) architectures is presented, using three input modalities (gray pixels, optical flow channels, and depth maps) on two widely adopted and challenging datasets: TUM-GAID and CASIA-B.
Abstract: Identifying people in video by the way they walk (i.e., gait) is a relevant computer vision task with a noninvasive approach. Standard and current approaches typically derive gait signatures from sequences of binary energy maps of subjects extracted from images, but this process introduces a large amount of non-stationary noise, limiting their efficacy. In contrast, this paper focuses on the raw pixels, or simple functions derived from them, letting advanced learning techniques extract the relevant features. We therefore present a comparative study of different convolutional neural network (CNN) architectures using three input modalities (gray pixels, optical flow channels, and depth maps) on two widely adopted and challenging datasets: TUM-GAID and CASIA-B. In addition, we compare different early and late fusion methods for combining the information obtained from each modality. Our experimental results suggest that (1) raw pixel values are a competitive input modality compared to traditional state-of-the-art silhouette-based features (e.g., GEI), since they achieve equivalent or better results; (2) fusing raw pixel information with optical flow and depth maps achieves state-of-the-art gait recognition results at an image resolution several times smaller than previously reported; and (3) the selection and design of the CNN architecture are critical points that can make the difference between state-of-the-art and poor results.
34 citations
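The early vs. late fusion distinction studied in the work above can be illustrated with a toy sketch. Random arrays stand in for real gray, flow, and depth inputs, and the five-identity score vectors are hypothetical; only the fusion mechanics are the point here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the three modalities of one clip (real systems use
# stacks of frames; sizes and values here are arbitrary).
gray = rng.random((60, 60, 1))
flow = rng.random((60, 60, 2))    # horizontal + vertical flow channels
depth = rng.random((60, 60, 1))

# Early fusion: stack the modalities channel-wise and feed ONE network.
early_input = np.concatenate([gray, flow, depth], axis=-1)  # (60, 60, 4)

# Late fusion: run one classifier per modality, then combine the score
# vectors (here: the mean; products or learned weights are also common).
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

n_ids = 5                          # hypothetical gallery of 5 identities
scores = [softmax(rng.standard_normal(n_ids)) for _ in range(3)]
fused = np.mean(scores, axis=0)    # still a distribution over identities
```

Early fusion lets the network learn cross-modality interactions from the first layer; late fusion keeps per-modality networks independent and only merges their decisions.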
TL;DR: Wang et al. used a random projection algorithm to develop and optimize a radiomics-based machine learning model that predicts peritoneal metastasis in gastric cancer patients from a small and imbalanced computed tomography (CT) image dataset.
Abstract: Background and Objective: Non-invasively predicting the risk of cancer metastasis before surgery can play an essential role in determining which patients can benefit from neoadjuvant chemotherapy. This study investigates and tests the advantages of applying a random projection algorithm to develop and optimize a radiomics-based machine learning model that predicts peritoneal metastasis in gastric cancer patients from a small and imbalanced computed tomography (CT) image dataset. Methods: A retrospective dataset of CT images acquired from 159 patients is assembled, including 121 and 38 cases with and without peritoneal metastasis, respectively. A computer-aided detection scheme is first applied to segment primary gastric tumor volumes and initially compute 315 image features. Then, five gradient boosting machine (GBM) models, each embedded with one of five feature selection methods (a random projection algorithm, principal component analysis, least absolute shrinkage and selection operator, maximum relevance and minimum redundancy, and recursive feature elimination), along with a synthetic minority oversampling technique, are built to predict the risk of peritoneal metastasis. All GBM models are trained and tested using leave-one-case-out cross-validation. Results: The GBM model embedded with the random projection algorithm yields a significantly higher prediction accuracy (71.2%) than the other four GBM models (p < 0.05). Conclusions: This study demonstrates that CT images of primary gastric tumors contain discriminatory information for predicting the risk of peritoneal metastasis, and that a random projection algorithm is a promising method to generate an optimal feature vector, improving the performance of machine-learning-based prediction models.
24 citations
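The random projection step at the heart of the model above can be sketched in a few lines. The data here is synthetic, shaped like the abstract's 159 cases x 315 features; the target dimension `k` is an illustrative choice, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the radiomics matrix: 159 cases x 315 features.
X = rng.standard_normal((159, 315))

# Gaussian random projection: a random matrix R with N(0, 1/k) entries
# maps the 315-D features down to k dimensions in one matrix product.
k = 50                                   # illustrative target dimension
R = rng.standard_normal((315, k)) / np.sqrt(k)
X_low = X @ R                            # (159, 50) reduced feature matrix

# Johnson-Lindenstrauss intuition: pairwise distances are roughly
# preserved, so the reduced vectors can still feed a GBM classifier.
ratio = np.linalg.norm(X_low[0] - X_low[1]) / np.linalg.norm(X[0] - X[1])
```

Unlike PCA, the projection matrix is data-independent, which is part of why random projection behaves well on small datasets: there are no projection directions to overfit.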
Posted Content
TL;DR: It is demonstrated that CT images of primary gastric tumors contain discriminatory information for predicting the risk of peritoneal metastasis, and that a random projection algorithm is a promising method to generate an optimal feature vector, improving the performance of machine-learning-based prediction models.
Abstract: Background and Objective: Non-invasively predicting the risk of cancer metastasis before surgery plays an essential role in determining optimal treatment methods for cancer patients (including who can benefit from neoadjuvant chemotherapy). Although developing radiomics-based machine learning (ML) models for this purpose has attracted broad research interest, it often faces the challenge of building a high-performing and robust ML model from small and imbalanced image datasets. Methods: In this study, we explore a new approach to building an optimal ML model. A retrospective dataset of abdominal computed tomography (CT) images acquired from 159 patients diagnosed with gastric cancer is assembled; 121 cases have peritoneal metastasis (PM), while 38 do not. A computer-aided detection (CAD) scheme is first applied to segment primary gastric tumor volumes and initially compute 315 image features. Then, two gradient boosting machine (GBM) models, embedded with two different feature dimensionality reduction methods, namely principal component analysis (PCA) and a random projection algorithm (RPA), and combined with a synthetic minority oversampling technique, are built to predict the risk of PM. All GBM models are trained and tested using leave-one-case-out cross-validation. Results: The GBM model embedded with RPA yielded a significantly higher prediction accuracy (71.2%) than the one using PCA (65.2%) (p<0.05). Conclusions: The study demonstrated that CT images of primary gastric tumors contain discriminatory information for predicting the risk of PM, and that RPA is a promising method to generate an optimal feature vector, improving the performance of ML models for medical images.
22 citations
TL;DR: There is still a need to develop a robust physiology-based method to advance and improve the performance of biometric systems; the review focuses on finger vein, palm vein, fingerprint, face, lips, iris, and retina-based processing methods.
Abstract: Biometrics deals with the verification and identification of a person based on behavioural and physiological traits. This article presents recent advances in physiology-based biometric multimodalities, focusing on finger vein, palm vein, fingerprint, face, lips, iris, and retina-based processing methods. The authors also evaluate the architecture, operational modes, and performance metrics of biometric technology. The article summarizes and studies various traditional and deep-learning-based physiological biometric modalities, and presents an extensive review of the biometric pipeline for multiple modalities at the preprocessing, feature extraction, and classification levels. Challenges and future trends of existing conventional and deep learning approaches are explained in detail to guide researchers. Moreover, traditional and deep learning methods for various physiology-based biometric systems are broadly analyzed and evaluated. The comparison results and discussion indicate that there is still a need to develop a robust physiology-based method to advance and improve the performance of biometric systems.
21 citations
References
01 Jan 1998
12,940 citations
"Iris tissue recognition based on GL..." cites this reference as background:
...The second dataset, iris flowers dataset, has 150 instances with 4 features, and the last one is wine dataset, which has 178 instances with 13 features [32]....
25 Jun 2006
TL;DR: This paper presents a novel method for training RNNs to label unsegmented sequences directly, removing both the need for pre-segmented training data and the need for post-processing of network outputs into label sequences.
Abstract: Many real-world sequence learning tasks require the prediction of sequences of labels from noisy, unsegmented input data. In speech recognition, for example, an acoustic signal is transcribed into words or sub-word units. Recurrent neural networks (RNNs) are powerful sequence learners that would seem well suited to such tasks. However, because they require pre-segmented training data, and post-processing to transform their outputs into label sequences, their applicability has so far been limited. This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby solving both problems. An experiment on the TIMIT speech corpus demonstrates its advantages over both a baseline HMM and a hybrid HMM-RNN.
5,188 citations
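The core of the method described above, the CTC forward pass, sums the probability of every frame-level path that collapses (remove repeats, then blanks) to the target labeling. Below is a minimal from-scratch sketch of the standard recursion, without the log-space stabilization a real implementation would need.

```python
import numpy as np

def ctc_forward(probs, target, blank=0):
    """Total probability of all frame-level paths that collapse to
    `target`. `probs` is (T, K): per-frame label probabilities.
    Minimal sketch; only suitable for short sequences (no log-space)."""
    T = probs.shape[0]
    ext = [blank]
    for c in target:                      # interleave blanks: -a-b-
        ext += [c, blank]
    S = len(ext)
    alpha = np.zeros((T, S))
    alpha[0, 0] = probs[0, blank]
    if S > 1:
        alpha[0, 1] = probs[0, ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]
            if s > 0:
                a += alpha[t - 1, s - 1]
            # A blank may be skipped only between two *different* labels.
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]
            alpha[t, s] = a * probs[t, ext[s]]
    return alpha[-1, -1] + (alpha[-1, -2] if S > 1 else 0.0)

# 3 frames over {blank, a, b}: the probability of emitting "ab" sums
# the five alignments 0ab, a0b, aab, ab0, abb.
probs = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.2, 0.7]])
p_ab = ctc_forward(probs, [1, 2])   # 0.429
```

Training then maximizes this quantity over the network outputs, differentiating through the recursion; the paper's experiments do exactly that on TIMIT.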
Journal Article
TL;DR: A semi-supervised framework that incorporates labeled and unlabeled data in a general-purpose learner is proposed and properties of reproducing kernel Hilbert spaces are used to prove new Representer theorems that provide theoretical basis for the algorithms.
Abstract: We propose a family of learning algorithms based on a new form of regularization that allows us to exploit the geometry of the marginal distribution. We focus on a semi-supervised framework that incorporates labeled and unlabeled data in a general-purpose learner. Some transductive graph learning algorithms and standard methods including support vector machines and regularized least squares can be obtained as special cases. We use properties of reproducing kernel Hilbert spaces to prove new Representer theorems that provide theoretical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we obtain a natural out-of-sample extension to novel examples and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semi-supervised algorithms are able to use unlabeled data effectively. Finally we have a brief discussion of unsupervised and fully supervised learning within our general framework.
3,919 citations
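A minimal relative of the manifold-regularization idea above is harmonic label propagation on a graph Laplacian: unlabeled points pick up labels from the geometry of the data graph. The sketch below uses toy 1-D data and an illustrative Gaussian affinity; it is the simpler Zhu & Ghahramani-style transductive case, not the full out-of-sample framework of the cited paper.

```python
import numpy as np

# Six 1-D points; only the two endpoints carry labels (+1 and -1).
X = np.arange(6, dtype=float)
y = np.array([1.0, 0.0, 0.0, 0.0, 0.0, -1.0])
labeled = np.array([True, False, False, False, False, True])
u = ~labeled

# Gaussian affinity graph over ALL points, labeled and unlabeled alike.
W = np.exp(-((X[:, None] - X[None, :]) ** 2))
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W          # unnormalized graph Laplacian

# Harmonic solution: minimize f^T L f with f clamped on labeled points,
# which reduces to the linear system  f_u = -L_uu^{-1} L_ul f_l.
f = y.copy()
f[u] = np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, labeled)] @ y[labeled])
```

The recovered labels interpolate smoothly between the two clamped endpoints, which is the intuition behind using unlabeled data: the graph term penalizes label functions that vary quickly across dense regions.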
01 Sep 2007
TL;DR: Applying the proposed optimization algorithm, inspired by imperialistic competition, to several benchmark cost functions shows its ability to deal with different types of optimization problems.
Abstract: This paper proposes an optimization algorithm inspired by imperialistic competition. Like other evolutionary algorithms, it starts with an initial population. Population individuals, called countries, are of two types, colonies and imperialists, which together form a set of empires. Imperialistic competition among these empires forms the basis of the algorithm. During this competition, weak empires collapse and powerful ones take possession of their colonies. The competition ideally converges to a state in which only one empire exists and its colonies occupy the same position, with the same cost, as the imperialist. Applying the proposed algorithm to several benchmark cost functions shows its ability to deal with different types of optimization problems.
2,371 citations
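The algorithm described above can be sketched in a bare-bones form: assimilation moves colonies toward their imperialist, a colony that beats its imperialist swaps roles with it, and a simplified competition step hands the weakest colony to the strongest empire. Parameter values are illustrative, and the full algorithm also folds colony costs into empire power and models empire collapse, which this sketch omits.

```python
import numpy as np

def ica_minimize(cost, dim, n_countries=60, n_imperialists=6,
                 n_iters=200, beta=2.0, seed=0):
    """Bare-bones imperialist competitive algorithm (ICA) sketch."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, size=(n_countries, dim))
    costs = np.array([cost(p) for p in pos])
    order = np.argsort(costs)
    imp = pos[order[:n_imperialists]].copy()           # imperialists
    col = pos[order[n_imperialists:]].copy()           # colonies
    owner = rng.integers(0, n_imperialists, len(col))  # empire of each colony
    for _ in range(n_iters):
        # Assimilation: each colony takes a random step toward its imperialist.
        col += beta * rng.random((len(col), 1)) * (imp[owner] - col)
        # Exchange: a colony that beats its imperialist becomes the imperialist.
        for i, c in enumerate(col):
            if cost(c) < cost(imp[owner[i]]):
                imp[owner[i]], col[i] = c.copy(), imp[owner[i]].copy()
        # Simplified competition: the weakest colony overall joins the
        # empire whose imperialist currently has the lowest cost.
        worst = int(np.argmax([cost(c) for c in col]))
        best_emp = int(np.argmin([cost(p) for p in imp]))
        owner[worst] = best_emp
    best = min(imp, key=cost)
    return best, cost(best)

# Sphere benchmark: global minimum 0 at the origin.
x, fx = ica_minimize(lambda v: float(np.sum(v * v)), dim=3)
```

In the hybrid MLPNN-ICA classifier of the main paper, a search of this kind replaces or complements gradient descent for tuning the network weights.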
01 Mar 1975
TL;DR: Three standard approaches to automatic texture classification use features based on the Fourier power spectrum, on second-order gray-level statistics, and on first-order statistics of gray-level differences, respectively; the Fourier features generally performed more poorly, while the other feature sets all performed comparably.
Abstract: Three standard approaches to automatic texture classification make use of features based on the Fourier power spectrum, on second-order gray-level statistics, and on first-order statistics of gray-level differences, respectively. Feature sets of these types, all designed analogously, were used to classify two sets of terrain samples. It was found that the Fourier features generally performed more poorly, while the other feature sets all performed comparably.
1,526 citations