
Showing papers on "Linear discriminant analysis published in 2019"


Journal ArticleDOI
Fulin Luo1, Bo Du1, Liangpei Zhang1, Lefei Zhang1, Dacheng Tao2 
TL;DR: Experimental results show that SSHGDA can achieve better classification accuracies in comparison with some state-of-the-art methods and can effectively reveal the complex spatial-spectral structures of HSI and enhance the discriminating power of features for land-cover classification.
Abstract: Hyperspectral image (HSI) contains a large amount of spatial-spectral information, which poses an enormous challenge to traditional classification methods in discriminating land-cover types. Feature learning is very effective for improving classification performance. However, current feature learning approaches are mostly based on a simple intrinsic structure. To represent the complex intrinsic spatial-spectral structure of HSI, a novel feature learning algorithm, termed spatial-spectral hypergraph discriminant analysis (SSHGDA), has been proposed on the basis of spatial-spectral information, discriminant information, and hypergraph learning. SSHGDA constructs a reconstruction between-class scatter matrix, a weighted within-class scatter matrix, an intraclass spatial-spectral hypergraph, and an interclass spatial-spectral hypergraph to represent the intrinsic properties of HSI. Then, in low-dimensional space, a feature learning model is designed to compact the intraclass information and separate the interclass information. With this model, an optimal projection matrix can be obtained to extract the spatial-spectral features of HSI. SSHGDA can effectively reveal the complex spatial-spectral structures of HSI and enhance the discriminating power of features for land-cover classification. Experimental results on the Indian Pines and PaviaU HSI data sets show that SSHGDA can achieve better classification accuracies than some state-of-the-art methods.

268 citations
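SSHGDA builds on the classical Fisher criterion of compacting within-class scatter while separating between-class scatter. Below is a minimal sketch of that underlying step only (plain within/between scatter matrices and a generalized-eigenvector projection), with random arrays standing in for HSI pixels; the paper's hypergraph and spatial-spectral terms are not reproduced here.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
# Toy stand-in for per-pixel spectral features: 300 samples, 20 bands, 3 classes.
X = rng.normal(size=(300, 20))
y = rng.integers(0, 3, size=300)

mean_all = X.mean(axis=0)
d = X.shape[1]
Sw = np.zeros((d, d))   # within-class scatter
Sb = np.zeros((d, d))   # between-class scatter
for c in np.unique(y):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)
    diff = (mc - mean_all).reshape(-1, 1)
    Sb += len(Xc) * (diff @ diff.T)

# Generalized eigenproblem Sb w = lambda Sw w; keep the leading directions.
evals, evecs = eigh(Sb, Sw + 1e-6 * np.eye(d))   # small ridge for numerical stability
W = evecs[:, np.argsort(evals)[::-1][:2]]        # projection matrix (d x 2)
Z = X @ W                                        # low-dimensional features
print(Z.shape)
```

In SSHGDA, the plain scatter terms above are replaced by the reconstruction between-class scatter, the weighted within-class scatter, and the two spatial-spectral hypergraphs described in the abstract.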


Journal ArticleDOI
TL;DR: A novel feature extraction method called robust sparse linear discriminant analysis (RSLDA) is proposed to solve the above problems and achieves the competitive performance compared with other state-of-the-art feature extraction methods.
Abstract: Linear discriminant analysis (LDA) is a very popular supervised feature extraction method and has been extended to different variants. However, classical LDA has the following problems: 1) the obtained discriminant projection does not have good interpretability for features; 2) LDA is sensitive to noise; and 3) LDA is sensitive to the selection of the number of projection directions. In this paper, a novel feature extraction method called robust sparse linear discriminant analysis (RSLDA) is proposed to solve the above problems. Specifically, RSLDA adaptively selects the most discriminative features for discriminant analysis by introducing the $l_{2,1}$ norm. An orthogonal matrix and a sparse matrix are also simultaneously introduced to guarantee that the extracted features can hold the main energy of the original data and to enhance robustness to noise, so RSLDA has the potential to perform better than other discriminant methods. Extensive experiments on six databases demonstrate that the proposed method achieves competitive performance compared with other state-of-the-art feature extraction methods. Moreover, the proposed method is robust to noisy data.

261 citations
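The $l_{2,1}$ norm used by RSLDA sums the Euclidean norms of the rows of a matrix, which encourages entire rows (i.e., whole features) to become zero. A small NumPy illustration of the norm itself and of reading off which features a row-sparse projection keeps; the matrix here is synthetic rather than a projection actually learned by RSLDA.

```python
import numpy as np

def l21_norm(W):
    """Sum of the Euclidean norms of the rows of W, i.e. ||W||_{2,1}."""
    return np.sum(np.linalg.norm(W, axis=1))

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 3))
W[[1, 4, 7], :] = 0.0          # pretend the solver zeroed out three rows

print("||W||_{2,1} =", l21_norm(W))
row_energy = np.linalg.norm(W, axis=1)
selected = np.flatnonzero(row_energy > 1e-8)
print("features kept by the row-sparse projection:", selected)
```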


Journal ArticleDOI
TL;DR: The proposed CDSAE framework comprises two stages with different optimization objectives, which can learn discriminative low-dimensional feature mappings and train an effective classifier progressively, and imposes a local Fisher discriminant regularization on each hidden layer of stacked autoencoder (SAE) to train discrim inative SAE (DSAE).
Abstract: As one of the fundamental research topics in remote sensing image analysis, hyperspectral image (HSI) classification has been extensively studied so far. However, how to discriminatively learn a low-dimensional feature space, in which the mapped features have small within-class scatter and large between-class separation, is still a challenging problem. To address this issue, this paper proposes an effective framework, named compact and discriminative stacked autoencoder (CDSAE), for HSI classification. The proposed CDSAE framework comprises two stages with different optimization objectives, which can learn discriminative low-dimensional feature mappings and train an effective classifier progressively. First, we impose a local Fisher discriminant regularization on each hidden layer of a stacked autoencoder (SAE) to train a discriminative SAE (DSAE) by minimizing reconstruction error. This stage learns feature mappings in which pixels from the same land-cover class are mapped as close to each other as possible and pixels from different land-cover categories are separated by a large margin. Second, we learn an effective classifier and meanwhile update the DSAE with a local Fisher discriminant regularization embedded on top of the feature representations. Moreover, to learn a compact DSAE with as few hidden neurons as possible, we impose a diversity regularization on the hidden neurons of the DSAE to balance feature dimensionality against feature representation capability. Experimental results on three widely used HSI data sets and comprehensive comparisons with existing methods demonstrate that our proposed method is effective.

215 citations
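A highly simplified PyTorch sketch of the core CDSAE idea: a single autoencoder layer trained with a reconstruction loss plus a Fisher-style penalty on the hidden codes, pulling same-class codes toward their class mean and pushing class means apart. The layer sizes, toy data, and weights `alpha` and `beta` are assumptions, and the paper's layer-wise stacking, diversity regularizer, and classifier stage are omitted.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 50)                    # toy spectra: 256 pixels, 50 bands
y = torch.randint(0, 4, (256,))             # 4 toy land-cover classes

enc = nn.Sequential(nn.Linear(50, 16), nn.Sigmoid())
dec = nn.Linear(16, 50)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
mse = nn.MSELoss()
alpha, beta = 0.1, 0.1                      # assumed regularization weights

for epoch in range(200):
    h = enc(X)
    recon = dec(h)
    # Fisher-style terms on the hidden codes: small within-class spread,
    # large spread of class means around the overall mean.
    class_means = torch.stack([h[y == c].mean(dim=0) for c in range(4)])
    within = sum(((h[y == c] - class_means[c]) ** 2).sum() for c in range(4)) / len(X)
    between = ((class_means - h.mean(dim=0)) ** 2).sum(dim=1).mean()
    loss = mse(recon, X) + alpha * within - beta * between
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final loss:", loss.item())
```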


Journal ArticleDOI
TL;DR: The experimental results showed that the deep CNN models which require no feature engineering achieved the best recognition performance on temporal and frequency combined features in both valence and arousal dimensions, which is 3.58% higher than the performance of the best traditional BT classifier in valence dimension.
Abstract: In order to improve the accuracy of emotion recognition through end-to-end automatic learning of emotional features in the spatial and temporal dimensions of the electroencephalogram (EEG), an EEG emotional feature learning and classification method using a deep convolutional neural network (CNN) was proposed, based on the temporal features, frequency features, and their combinations from EEG signals in the DEAP dataset. Shallow machine learning models, including bagging tree (BT), support vector machine (SVM), linear discriminant analysis (LDA), and Bayesian linear discriminant analysis (BLDA), and deep CNN models were used to perform binary emotion classification experiments on the DEAP dataset in the valence and arousal dimensions. The experimental results showed that the deep CNN models, which require no feature engineering, achieved the best recognition performance on the combined temporal and frequency features in both the valence and arousal dimensions, 3.58% higher than the performance of the best traditional BT classifier in the valence dimension and 3.29% higher than that of the BT classifier in the arousal dimension.

166 citations
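For the shallow-model side of such a comparison, here is a scikit-learn sketch with random arrays standing in for the DEAP feature vectors and valence labels; the deep CNN branch, BLDA, and all EEG preprocessing are omitted.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 160))          # placeholder EEG feature vectors
y = rng.integers(0, 2, size=400)         # placeholder binary valence labels

models = {
    "BT": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```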


Journal ArticleDOI
TL;DR: A Multi-Class Combined performance metric is proposed to compare various multi-class and binary classification systems through incorporating FAR, DR, Accuracy, and class distribution parameters and a uniform distribution based balancing approach is developed to handle the imbalanced distribution of the minority class instances in the CICIDS2017 network intrusion dataset.
Abstract: The security of networked systems has become a critical universal issue that influences individuals, enterprises and governments. The rate of attacks against networked systems has increased dramatically, and the tactics used by the attackers are continuing to evolve. Intrusion detection is one of the solutions against these attacks. A common and effective approach for designing Intrusion Detection Systems (IDS) is Machine Learning. The performance of an IDS is significantly improved when the features are more discriminative and representative. This study uses two feature dimensionality reduction approaches: (i) Auto-Encoder (AE), an instance of deep learning, and (ii) Principal Component Analysis (PCA). The resulting low-dimensional features from both techniques are then used to build various classifiers such as Random Forest (RF), Bayesian Network, Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis (QDA) for designing an IDS. The experimental findings with low-dimensional features in binary and multi-class classification show better performance in terms of Detection Rate (DR), F-Measure, False Alarm Rate (FAR), and Accuracy. This research effort is able to reduce the CICIDS2017 dataset's feature dimensions from 81 to 10, while maintaining a high accuracy of 99.6% in multi-class and binary classification. Furthermore, in this paper, we propose a Multi-Class Combined performance metric $Combined_{Mc}$ with respect to class distribution to compare various multi-class and binary classification systems through incorporating FAR, DR, Accuracy, and class distribution parameters. In addition, we developed a uniform-distribution-based balancing approach to handle the imbalanced distribution of the minority class instances in the CICIDS2017 network intrusion dataset.

163 citations
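A scikit-learn sketch of the PCA branch of this pipeline: reduce the feature matrix to 10 components and train the listed classifiers. Random data replaces CICIDS2017, the Bayesian Network and Auto-Encoder branches are not shown, and the proposed Combined metric is not implemented.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 81))           # stand-in for 81 CICIDS2017 features
y = rng.integers(0, 2, size=2000)         # stand-in binary labels (attack / benign)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("RF", RandomForestClassifier(n_estimators=100, random_state=0)),
                  ("LDA", LinearDiscriminantAnalysis()),
                  ("QDA", QuadraticDiscriminantAnalysis())]:
    # Scale, project to 10 principal components, then classify.
    model = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", model.score(X_te, y_te))
```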


Journal ArticleDOI
TL;DR: Bankruptcy predictions through the trained network are shown to have a higher performance compared to methods using decision trees, linear discriminant analysis, support vector machines, multi-layer perceptron, AdaBoost, or Altman's Z′′-score.
Abstract: Convolutional neural networks are being applied to identification problems in a variety of fields, and in some areas are showing higher discrimination accuracies than conventional methods. However, applications of convolutional neural networks to financial analyses have only been reported in a small number of studies on the prediction of stock price movements. The reason for this seems to be that convolutional neural networks are more suitable for application to images and less suitable for general numerical data, including financial statements. Hence, in this research, an attempt is made to apply a convolutional neural network to the prediction of corporate bankruptcy, which in most cases is treated as a two-class classification problem. We use the financial statements (balance sheets and profit-and-loss statements) of 102 companies that have been delisted from the Japanese stock market due to de facto bankruptcy, as well as the financial statements of 2062 currently listed companies over four financial periods. In our proposed method, a set of financial ratios are derived from the financial statements and represented as a grayscale image. The image generated by this process is utilized for training and testing a convolutional neural network. Moreover, the size of the dataset is increased using weighted averages to create synthetic data points. A total of 7520 images for the bankrupt and continuing enterprises classes are used for training the convolutional neural network based on GoogLeNet. Bankruptcy predictions through the trained network are shown to have a higher performance compared to methods using decision trees, linear discriminant analysis, support vector machines, multi-layer perceptron, AdaBoost, or Altman's Z′′-score.

155 citations
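A NumPy sketch of the image-encoding step described above: scaling a vector of financial ratios to 8-bit grayscale and reshaping it into a square image that a CNN could consume. The ratio values, image size, and min-max scaling are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(0)
ratios = rng.normal(size=64)                      # placeholder financial ratios

# Min-max scale to 8-bit grayscale and arrange as an 8x8 single-channel image.
scaled = (ratios - ratios.min()) / (ratios.max() - ratios.min() + 1e-12)
image = (scaled * 255).astype(np.uint8).reshape(8, 8)

print(image.shape, image.dtype)                   # (8, 8) uint8, ready for a CNN
```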


Journal ArticleDOI
TL;DR: In this paper, a hybrid integration approach of Fisher's linear discriminant function with rotation forest (RFLDA) and bagging ensembles was used for groundwater potential assessment at the Ningtiaota area in Shaanxi, China.
Abstract: Groundwater is a vital water source in the rural and urban areas of developing and developed nations. In this study, a novel hybrid integration approach of Fisher's linear discriminant function (FLDA) with rotation forest (RFLDA) and bagging (BFLDA) ensembles was used for groundwater potential assessment in the Ningtiaota area in Shaanxi, China. A spatial database with 66 groundwater spring locations and 14 groundwater spring contributing factors was prepared; these factors were elevation, aspect, slope, plan and profile curvatures, sediment transport index, stream power index, topographic wetness index, distance to roads and streams, land use, lithology, soil and normalized difference vegetation index. The classifier attribute evaluation method based on the FLDA model was implemented to test the predictive competence of the mentioned contributing factors. The area under the curve, confidence interval at 95%, standard error, Friedman test and Wilcoxon signed-rank test were used to compare and validate the success and prediction competence of the three applied models. According to the achieved results, the BFLDA model showed the highest prediction competence, followed by the RFLDA and FLDA models, respectively. The resulting groundwater spring potential maps can be used for groundwater development plans and land use planning.

123 citations
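A scikit-learn sketch of the bagging-of-linear-discriminants idea (BFLDA) on synthetic data; random arrays stand in for the 14 conditioning factors and the spring inventory, and rotation forest is not included because it is not part of scikit-learn. Depending on the scikit-learn version, the first argument of BaggingClassifier is named estimator or base_estimator, so it is passed positionally here.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 14))        # stand-in for the 14 contributing factors
y = rng.integers(0, 2, size=200)      # stand-in spring / non-spring labels

flda = LinearDiscriminantAnalysis()
bflda = BaggingClassifier(LinearDiscriminantAnalysis(), n_estimators=100, random_state=0)

for name, model in [("FLDA", flda), ("BFLDA", bflda)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")  # AUC, as in the paper's validation
    print(f"{name}: AUC = {auc.mean():.3f}")
```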


Journal ArticleDOI
TL;DR: The statistical results demonstrate that among applied methods, random forest and quadratic discriminant analysis are, respectively, preferable with the imbalanced and balanced datasets since they show the highest efficiency in predicting the structural responses.

121 citations


Journal ArticleDOI
TL;DR: The experimental results suggest that the proposed automated diagnostic system has the potential to classify PD patients from healthy subjects and in future the proposed method can also be exploited for prodromal and differential diagnosis, which are considered challenging tasks.
Abstract: Objective: Parkinson's disease (PD) is a serious neurodegenerative disorder. It is reported that most PD patients have voice impairments, but these impairments are not perceptible to common listeners. Therefore, different machine learning methods have been developed for automated PD detection. However, these methods either lack generalization and clinically significant classification performance or face the problem of subject overlap. Methods: To overcome the problems discussed above, we attempt to develop a hybrid intelligent system that can automatically perform acoustic analysis of voice signals in order to detect PD. The proposed intelligent system uses linear discriminant analysis (LDA) for dimensionality reduction and a genetic algorithm (GA) for hyperparameter optimization of a neural network (NN), which is used as the predictive model. Moreover, to avoid subject overlap, we use leave-one-subject-out (LOSO) validation. Results: The proposed method, namely LDA-NN-GA, is evaluated in numerical experiments on multiple types of sustained phonation data in terms of accuracy, sensitivity, specificity, and Matthews correlation coefficient. It achieves a classification accuracy of 95% on the training database and 100% on the testing database using all the extracted features. However, as the dataset is imbalanced in terms of gender, we eliminated the gender-dependent features to obtain unbiased results, yielding an accuracy of 80% for the training database and 82.14% for the testing database. Conclusion: Compared with previous machine learning methods, the proposed LDA-NN-GA method shows better performance and lower complexity. Clinical Impact: The experimental results suggest that the proposed automated diagnostic system has the potential to distinguish PD patients from healthy subjects. Additionally, in the future the proposed method could also be exploited for prodromal and differential diagnosis, which are considered challenging tasks.

104 citations
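A scikit-learn sketch of leave-one-subject-out evaluation of an LDA-then-neural-network pipeline. The genetic-algorithm hyperparameter search is replaced by fixed settings, and random arrays with made-up subject IDs stand in for the phonation features.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_subjects, recs_per_subject, n_features = 30, 10, 40
X = rng.normal(size=(n_subjects * recs_per_subject, n_features))      # voice features
subjects = np.repeat(np.arange(n_subjects), recs_per_subject)         # subject IDs
y = np.repeat(rng.integers(0, 2, size=n_subjects), recs_per_subject)  # PD / healthy per subject

pipe = make_pipeline(StandardScaler(),
                     LinearDiscriminantAnalysis(n_components=1),      # dimensionality reduction
                     MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0))

# Each fold holds out every recording of one subject, avoiding subject overlap.
scores = cross_val_score(pipe, X, y, cv=LeaveOneGroupOut(), groups=subjects)
print("LOSO accuracy:", scores.mean())
```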


Journal ArticleDOI
TL;DR: All the evaluations including precision, coherence, stability, and clustering resolution should be taken into consideration when choosing an appropriate tool for cytometry data analysis and decision guidelines are provided for the general reader to more easily choose the most suitable clustering tools.
Abstract: With the expanding applications of mass cytometry in medical research, a wide variety of clustering methods, both semi-supervised and unsupervised, have been developed for data analysis. Selecting the optimal clustering method can accelerate the identification of meaningful cell populations. To address this issue, we compared three classes of performance measures, “precision” as external evaluation, “coherence” as internal evaluation, and stability, of nine methods based on six independent benchmark datasets. Seven unsupervised methods (Accense, Xshift, PhenoGraph, FlowSOM, flowMeans, DEPECHE, and kmeans) and two semi-supervised methods (Automated Cell-type Discovery and Classification and linear discriminant analysis (LDA)) are tested on six mass cytometry datasets. We compute and compare all defined performance measures against random subsampling, varying sample sizes, and the number of clusters for each method. LDA reproduces the manual labels most precisely but does not rank top in internal evaluation. PhenoGraph and FlowSOM perform better than other unsupervised tools in precision, coherence, and stability. PhenoGraph and Xshift are more robust when detecting refined sub-clusters, whereas DEPECHE and FlowSOM tend to group similar clusters into meta-clusters. The performances of PhenoGraph, Xshift, and flowMeans are impacted by increased sample size, but FlowSOM is relatively stable as sample size increases. All the evaluations, including precision, coherence, stability, and clustering resolution, should be taken into consideration together when choosing an appropriate tool for cytometry data analysis. Thus, we provide decision guidelines based on these characteristics for the general reader to more easily choose the most suitable clustering tools.

100 citations


Journal ArticleDOI
TL;DR: The authors present a nonpeaked discriminant analysis (NPDA) technique, in which cutting L1-norm is adopted as the distance metric, and an efficient iterative algorithm is designed for the optimization of the proposed objective.
Abstract: Of late, there have been many studies on robust discriminant analysis that adopt the L1-norm as the distance metric, but their results are not robust enough to gain universal acceptance. To overcome this problem, the authors of this article present a nonpeaked discriminant analysis (NPDA) technique, in which a cutting L1-norm is adopted as the distance metric. As this kind of norm can better eliminate heavy outliers in learning models, the proposed algorithm is expected to be stronger at feature extraction for data representation than existing robust discriminant analysis techniques based on the L1-norm distance metric. The authors also present a comprehensive analysis to show that the cutting L1-norm distance can be computed equally well using the difference between two special convex functions. Against this background, an efficient iterative algorithm is designed for the optimization of the proposed objective. Theoretical proofs of the convergence of the algorithm are also presented. Theoretical insights and the effectiveness of the proposed method are validated by experimental tests on several real data sets.
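One common form of such a cutting (capped) L1 distance truncates each coordinate-wise absolute deviation at a threshold, so heavy outliers stop contributing beyond it. A small NumPy sketch of that idea; the threshold and data are illustrative, and this is not necessarily the exact definition used in the paper.

```python
import numpy as np

def capped_l1_distance(x, y, eps=1.0):
    """Sum of per-coordinate absolute differences, each capped at eps."""
    return np.sum(np.minimum(np.abs(x - y), eps))

a = np.array([0.0, 0.0, 0.0])
b = np.array([0.5, 0.2, 100.0])           # last coordinate is a heavy outlier

print("L1 distance       :", np.sum(np.abs(a - b)))      # dominated by the outlier
print("capped L1 distance:", capped_l1_distance(a, b))   # outlier's influence is bounded
```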

Journal ArticleDOI
TL;DR: This paper presents a novel approach to detect and classify ice thickness based on pattern recognition through guided ultrasonic waves and Machine Learning, and considers four feature extraction methods to validate the results.

Journal ArticleDOI
TL;DR: It is hypothesised that for binary classification using metabolomics data, non-linear machine learning methods will provide superior generalised predictive ability when compared to linear alternatives, in particular when compared with the current gold standard PLS discriminant analysis.
Abstract: Metabolomics is increasingly being used in the clinical setting for disease diagnosis, prognosis and risk prediction. Machine learning algorithms are particularly important in the construction of multivariate metabolite prediction models. Historically, partial least squares (PLS) regression has been the gold standard for binary classification. Nonlinear machine learning methods such as random forests (RF), kernel support vector machines (SVM) and artificial neural networks (ANN) may be better suited to modelling possible nonlinear metabolite covariance, and thus may provide better predictive models. We hypothesise that for binary classification using metabolomics data, nonlinear machine learning methods will provide superior generalised predictive ability when compared with linear alternatives, in particular the current gold standard, PLS discriminant analysis. We compared the general predictive performance of eight archetypal machine learning algorithms across ten publicly available clinical metabolomics data sets. The algorithms were implemented in the Python programming language. All code and results have been made publicly available as Jupyter notebooks. There was only marginal improvement in predictive ability for SVM and ANN over PLS across all data sets. RF performance was comparatively poor. The use of out-of-bag bootstrap confidence intervals provided a measure of uncertainty of model prediction, such that the quality of metabolomics data was observed to be a bigger influence on generalised performance than model choice. The size of the data set, and the choice of performance metric, had a greater influence on generalised predictive performance than the choice of machine learning algorithm.
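A scikit-learn sketch of the PLS discriminant analysis baseline referred to above: PLS regression onto a 0/1 class label with a 0.5 decision threshold. Random arrays stand in for the metabolomics data, and the study's cross-validation and bootstrap machinery is omitted.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))          # placeholder metabolite intensity matrix
y = rng.integers(0, 2, size=120)         # placeholder case/control labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_tr)
pls = PLSRegression(n_components=2)
pls.fit(scaler.transform(X_tr), y_tr)

y_score = pls.predict(scaler.transform(X_te)).ravel()
y_pred = (y_score > 0.5).astype(int)     # PLS-DA: threshold the regression output
print("accuracy:", (y_pred == y_te).mean())
```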

Journal ArticleDOI
TL;DR: This work proposes an ensemble learning algorithm for automatically computing the most discriminative subset of EEG channels for internal emotion recognition and describes an EEG channel using kernel-based representations computed from the training EEG recordings.
Abstract: Among the various physiological signal acquisition methods for the study of the human brain, EEG (electroencephalography) is one of the most effective. EEG provides a convenient, non-intrusive, and accurate way of capturing brain signals in multiple channels at fine temporal resolution. We propose an ensemble learning algorithm for automatically computing the most discriminative subset of EEG channels for internal emotion recognition. Our method describes an EEG channel using kernel-based representations computed from the training EEG recordings. For ensemble learning, we formulate a graph embedding linear discriminant objective function using the kernel representations. The objective function is efficiently solved via sparse non-negative principal component analysis, and the final classifier is learned using the sparse projection coefficients. Our algorithm is useful in reducing the amount of data while improving computational efficiency and classification accuracy at the same time. Experiments on a publicly available EEG dataset demonstrate the superiority of the proposed algorithm over the compared methods.

Journal ArticleDOI
TL;DR: The simulation results based on the five runs of k-fold stratified cross-validation indicate that the proposed method yields superior accuracy (99.66%) as compared to existing schemes.

Journal ArticleDOI
Pengcheng Nie1, Jinnuo Zhang1, Xuping Feng1, Chenliang Yu, Yong He1 
TL;DR: The discriminant analysis model based on the DCNN had the advantages of reducing the labor burden and time required in cross breeding-based progeny selection, which will accelerate the progress of related research.
Abstract: The rapid and efficient selection of eligible hybrid progeny is an important step in cross breeding. However, selecting hybrid offspring that meets specific requirements can be time consuming and expensive. Here, near-infrared hyperspectral imaging technology combined with deep learning was applied to classifying hybrid seeds. Hyperspectral images in the range of 975–1648 nm were collected for a total of 6136 hybrid okra seeds and 4128 hybrid loofah seeds, each set comprising six varieties. Partial least squares discriminant analysis, a support vector machine and a deep convolutional neural network (DCNN) were used to establish discriminant analysis models, and their performances were compared among the different hybrid seed varieties. The discriminant analysis model based on the DCNN was the most stable and had the highest classification accuracy, greater than 95%. The values of features in the last layer of the DCNN were visualized using t-distributed stochastic neighbor embedding. The discriminant analysis model based on the DCNN has the advantages of reducing the labor burden and time required in cross-breeding-based progeny selection, which will accelerate the progress of related research.

Journal ArticleDOI
TL;DR: Overall, it can be concluded that multisensory data accurately identify six grades of tea.
Abstract: The instrumental evaluation of tea quality using digital sensors instead of human panel tests has attracted much attention globally. However, individual sensors do not meet the requirements of discriminant accuracy as a result of incomprehensive sensor information. Considering the major factors in the sensory evaluation of tea, this study integrated multisensor information, including spectral, image and olfaction feature information. Results: To investigate the spectral and image information obtained from hyperspectral spectrometers of different bands, principal components analysis was used for dimension reduction and different types of supervised learning algorithms (linear discriminant analysis, K-nearest neighbour and support vector machine) were selected for comparison. Spectral feature information in the near infrared region and image feature information in the visible-near infrared/near infrared region achieved greater accuracy for classification. The results indicated that a support vector machine outperformed other methods with respect to multisensor data fusion, which improved the accuracy of evaluating green tea quality compared to using individual sensor data. The overall accuracy of the calibration set increased from 75% using optimal single-sensor information to 92% using multisensor information, and the overall accuracy of the prediction set increased from 78% to 92%. Conclusion: Overall, it can be concluded that multisensor data accurately identify six grades of tea. © 2018 Society of Chemical Industry
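A scikit-learn sketch of the feature-level fusion step: PCA-reduce each sensor's feature block, concatenate the reduced blocks, and feed them to an SVM. Random arrays stand in for the spectral, image and olfaction features and the six tea grades, and the component counts are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
spectral = rng.normal(size=(n, 120))     # stand-in NIR spectral features
image = rng.normal(size=(n, 60))         # stand-in image features
olfaction = rng.normal(size=(n, 18))     # stand-in electronic-nose features
y = rng.integers(0, 6, size=n)           # six tea grades

def reduce(block, k):
    # Note: in a real pipeline the scaler and PCA should be fit inside each CV fold.
    return PCA(n_components=k).fit_transform(StandardScaler().fit_transform(block))

fused = np.hstack([reduce(spectral, 10), reduce(image, 10), reduce(olfaction, 5)])
scores = cross_val_score(SVC(kernel="rbf"), fused, y, cv=5)
print("fused-feature SVM accuracy:", scores.mean())
```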

Journal ArticleDOI
TL;DR: This paper proposes a new formulation of linear discriminant analysis via joint $L_{2,1}$-norm minimization on the objective function to induce robustness, so as to efficiently alleviate the influence of outliers and improve the robustness of the proposed method.
Abstract: Dimensionality reduction is a critical technology in the domain of pattern recognition, and linear discriminant analysis (LDA) is one of the most popular supervised dimensionality reduction methods. However, whenever the distance criterion of its objective function uses the $L_2$-norm, it is sensitive to outliers. In this paper, we propose a new formulation of linear discriminant analysis via joint $L_{2,1}$-norm minimization on the objective function to induce robustness, so as to efficiently alleviate the influence of outliers and improve the robustness of the proposed method. An efficient iterative algorithm is proposed to solve the optimization problem and is proved to be convergent. Extensive experiments performed on an artificial data set, on UCI data sets, and on four face data sets sufficiently demonstrate the efficiency of our approach compared with other methods and its robustness to outliers.

Posted Content
TL;DR: This paper first introduces the eigenvalue problem, eigendecomposition (spectral decomposition), and the generalized eigenvalue problem, and then discusses the optimization problems that lead to eigenvalue and generalized eigenvalue problems.
Abstract: This paper is a tutorial on eigenvalue and generalized eigenvalue problems. We first introduce the eigenvalue problem, eigendecomposition (spectral decomposition), and the generalized eigenvalue problem. Then, we mention the optimization problems which lead to the eigenvalue and generalized eigenvalue problems. We also provide examples from machine learning, including principal component analysis, kernel supervised principal component analysis, and Fisher discriminant analysis, which result in eigenvalue and generalized eigenvalue problems. Finally, we introduce the solutions to both eigenvalue and generalized eigenvalue problems.
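A SciPy sketch of the two problems the tutorial covers: an ordinary symmetric eigendecomposition (as in PCA of a covariance matrix) and a generalized eigenproblem $A v = \lambda B v$ (as arises in Fisher discriminant analysis); the matrices here are toy examples.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

# Ordinary eigenvalue problem: eigendecomposition of the covariance matrix (PCA).
C = np.cov(X, rowvar=False)
evals, evecs = eigh(C)                             # eigh returns eigenvalues in ascending order
print("principal eigenvalue:", evals[-1])

# Generalized eigenvalue problem A v = lambda B v with symmetric A and positive-definite B.
A = C
B = np.eye(5) + 0.1 * C                            # any symmetric positive-definite matrix
gvals, gvecs = eigh(A, B)
print("largest generalized eigenvalue:", gvals[-1])
```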

Journal ArticleDOI
TL;DR: Evaluating supervised machine learning algorithms in the classification of sagittal gait patterns for CP children with spastic diplegia shows that the ANN has the best prediction accuracy and classification performance, and the decision tree is also attractive for clinical applications due to its transparency.

Journal ArticleDOI
TL;DR: This paper proposes a semi-supervised robust discriminative classification method based on the least-squares formulation of linear discriminant analysis to detect sample-outliers and feature-noises simultaneously, using both labeled training and unlabeled testing data.
Abstract: Discriminative methods commonly produce models with relatively good generalization abilities. However, this advantage is challenged in real-world applications (e.g., medical image analysis problems), in which there often exist outlier data points ( sample-outliers ) and noises in the predictor values ( feature-noises ). Methods robust to both types of these deviations are somewhat overlooked in the literature. We further argue that denoising can be more effective, if we learn the model using all the available labeled and unlabeled samples, as the intrinsic geometry of the sample manifold can be better constructed using more data points. In this paper, we propose a semi-supervised robust discriminative classification method based on the least-squares formulation of linear discriminant analysis to detect sample-outliers and feature-noises simultaneously, using both labeled training and unlabeled testing data. We conduct several experiments on a synthetic, some benchmark semi-supervised learning, and two brain neurodegenerative disease diagnosis datasets (for Parkinson’s and Alzheimer’s diseases). Specifically for the application of neurodegenerative diseases diagnosis, incorporating robust machine learning methods can be of great benefit, due to the noisy nature of neuroimaging data. Our results show that our method outperforms the baseline and several state-of-the-art methods, in terms of both accuracy and the area under the ROC curve.

Journal ArticleDOI
TL;DR: A Multi-view Linear Discriminant Analysis Network (MvLDAN) is proposed by seeking a nonlinear discriminant and view-invariant representation shared among multiple views by employing multiple feedforward neural networks and a novel eigenvalue-based multi-view objective function.
Abstract: In many real-world applications, an object can be described from multiple views or styles, leading to the emerging multi-view analysis. To eliminate the complicated (usually highly nonlinear) view discrepancy for favorable cross-view recognition and retrieval, we propose a Multi-view Linear Discriminant Analysis Network (MvLDAN) by seeking a nonlinear discriminant and view-invariant representation shared among multiple views. Unlike existing multi-view methods which directly learn a common space to reduce the view gap, our MvLDAN employs multiple feedforward neural networks (one for each view) and a novel eigenvalue-based multi-view objective function to encapsulate as much discriminative variance as possible into all the available common feature dimensions. With the proposed objective function, the MvLDAN could produce representations possessing: 1) low variance within the same class regardless of view discrepancy, 2) high variance between different classes regardless of view discrepancy, and 3) high covariance between any two views. In brief, in the learned multi-view space, the obtained deep features can be projected into a latent common space in which the samples from the same class are as close to each other as possible (even though they are from different views), and the samples from different classes are as far from each other as possible (even though they are from the same view). The effectiveness of the proposed method is verified by extensive experiments carried out on five databases, in comparison with the 19 state-of-the-art approaches.

Journal ArticleDOI
TL;DR: This paper presents a new approach, applied to the Mini-MIAS dataset of 322 images, involving a pre-processing method and inbuilt feature extraction using K-means clustering for Speeded-Up Robust Features (SURF) selection, and demonstrates that the accuracy of the proposed automated DL method using K-means clustering with MSVM is improved compared with a decision tree model.

Journal ArticleDOI
TL;DR: The results demonstrate that the performance of the proposed methods is superior to that of existing state-of-the-art cross-view gait recognition approaches.

Journal ArticleDOI
TL;DR: An RF-based model with statistical tests for the detection of high-risk genes showed the best performance for accurate cancer classification in multi-center clinical trials.

Journal ArticleDOI
TL;DR: An end-to-end deep learning framework to realize training-free motor imagery (MI) BCI systems by employing the common spatial pattern (CSP) extracted from electroencephalography (EEG) as the handcrafted feature and proposing a separated channel convolutional network, here termed SCCN.

Journal ArticleDOI
TL;DR: The study compared the accuracy, sensitivity and specificity of different classifiers using linear features, non-linear features and their combination, and indicated that the combination of alpha power and RWE gave the highest classification accuracy of 93.33% across all the applied classifiers.
Abstract: EEG signals are non-stationary, complex and non-linear signals. During major depressive disorder (MDD) or depression, any deterioration in brain function is reflected in the EEG signals. In this paper, linear features (band power, inter-hemispheric asymmetry), non-linear features [relative wavelet energy (RWE) and wavelet entropy (WE)] and combinations of linear and non-linear features were used to classify depression patients and healthy individuals. The data set used in this analysis is a publicly available data set contributed by Mumtaz et al. (Biomed Signal Process Control 31:108–115, 2017b), consisting of 34 MDD patients and 30 healthy individuals. The classifiers used were a multi-layer perceptron neural network (MLPNN), a radial basis function network (RBFN), linear discriminant analysis (LDA) and quadratic discriminant analysis. When linear features were used, the highest classification accuracy of 91.67% was obtained by alpha power with the MLPNN classifier. When non-linear features were used, both RWE and WE provided the highest classification accuracy of 90%, with the RBFN and LDA classifiers, respectively. The highest classification accuracy of 93.33% was achieved when combining linear and non-linear features, i.e., the combination of alpha power and RWE, with the MLPNN as well as the RBFN classifier. This paper also showed that the combination of the non-linear features RWE and WE performed equally well, with the highest classification accuracy of 93.33%. The study compared the accuracy, sensitivity and specificity of different classifiers using linear features, non-linear features and their combination. The results indicated that the combination of alpha power and RWE gave the highest classification accuracy of 93.33% across all the applied classifiers.
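A sketch of the two feature families compared above, computed on a synthetic one-channel signal: alpha band power from a Welch power spectral density (SciPy) and relative wavelet energy from a discrete wavelet decomposition (PyWavelets). The sampling rate, band edges, wavelet choice and decomposition level are common defaults, not necessarily those of the paper.

```python
import numpy as np
from scipy.signal import welch
import pywt

fs = 256                                     # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)   # toy 10 Hz "alpha" signal

# Alpha (8-13 Hz) band power from the Welch power spectral density.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
alpha = (freqs >= 8) & (freqs <= 13)
alpha_power = psd[alpha].sum() * (freqs[1] - freqs[0])

# Relative wavelet energy: energy of each decomposition level divided by the total energy.
coeffs = pywt.wavedec(eeg, "db4", level=5)
energies = np.array([np.sum(c ** 2) for c in coeffs])
rwe = energies / energies.sum()

print("alpha band power:", alpha_power)
print("relative wavelet energies:", np.round(rwe, 3))
```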

Journal ArticleDOI
TL;DR: Extensive experiments show that the proposed DLRPP can learn an optimal projection matrix for data dimensionality reduction and obtain better recognition accuracy than state-of-the-art feature extraction methods.

Journal ArticleDOI
TL;DR: A new predictor, OPTICAL, is proposed that uses a combination of common spatial pattern and long short-term memory (LSTM) network for obtaining improved MI EEG signal classification and showed significant improvement in the ability to accurately classify left- and right-hand MI tasks on two publicly available datasets.
Abstract: Brain-computer interface (BCI) systems having the ability to classify brain waves with greater accuracy are highly desirable. To this end, a number of techniques have been proposed aiming to be able to classify brain waves with high accuracy. However, the ability to classify brain waves and its implementation in real-time is still limited. In this study, we introduce a novel scheme for classifying motor imagery (MI) tasks using electroencephalography (EEG) signals that can be implemented in real-time with high classification accuracy between different MI tasks. We propose a new predictor, OPTICAL, that uses a combination of common spatial pattern (CSP) and long short-term memory (LSTM) network for obtaining improved MI EEG signal classification. A sliding window approach is proposed to obtain the time-series input from the spatially filtered data, which becomes the input to the LSTM network. Moreover, instead of using the LSTM directly for classification, we use the regression-based output of the LSTM network as one of the features for classification. On the other hand, linear discriminant analysis (LDA) is used to reduce the dimensionality of the CSP variance-based features. The features in the reduced dimensional plane after performing LDA are used as input to the support vector machine (SVM) classifier together with the regression-based feature obtained from the LSTM network. The regression-based feature further boosts the performance of the proposed OPTICAL predictor. OPTICAL showed significant improvement in the ability to accurately classify left- and right-hand MI tasks on two publicly available datasets. The improvements in the average misclassification rates are 3.09% and 2.07% for BCI Competition IV Dataset I and the GigaDB dataset, respectively. The Matlab code is available at https://github.com/ShiuKumar/OPTICAL.
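A NumPy/scikit-learn sketch of the non-LSTM part of the OPTICAL pipeline: CSP filters from a generalized eigendecomposition of the two class-average covariance matrices, log-variance features, LDA dimensionality reduction, and an SVM. The synthetic trials and the number of CSP filters are placeholders, the LSTM regression feature is omitted, and for simplicity the CSP filters are fit on all trials rather than inside each cross-validation fold.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 100, 22, 500
X = rng.normal(size=(n_trials, n_channels, n_samples))   # toy EEG trials
y = rng.integers(0, 2, size=n_trials)                    # left / right hand MI labels

def avg_cov(trials):
    """Average normalized spatial covariance over a set of trials."""
    covs = [tr @ tr.T / np.trace(tr @ tr.T) for tr in trials]
    return np.mean(covs, axis=0)

C0, C1 = avg_cov(X[y == 0]), avg_cov(X[y == 1])
evals, evecs = eigh(C0, C0 + C1)                 # CSP as a generalized eigenproblem
m = 3                                            # keep 3 filters from each end of the spectrum
W = np.hstack([evecs[:, :m], evecs[:, -m:]])     # (channels x 2m) spatial filters

# Log-variance of the spatially filtered trials as features.
feats = np.array([np.log(np.var(W.T @ tr, axis=1)) for tr in X])

clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), SVC(kernel="rbf"))
scores = cross_val_score(clf, feats, y, cv=5)
print("accuracy:", scores.mean())
```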

Journal ArticleDOI
TL;DR: This paper introduces a multilinear subspace human activity recognition scheme that exploits the three radar signal variables: slow-time, fast-time, and Doppler frequency and demonstrates that the proposed algorithm yields the highest overall classification accuracy among spectrogram-based methods.
Abstract: In recent years, radar has been employed as a fall detector because of its effective sensing capabilities and penetration through walls. In this paper, we introduce a multilinear subspace human activity recognition scheme that exploits the three radar signal variables: slow-time, fast-time, and Doppler frequency. The proposed approach attempts to find the optimum subspaces that minimize the reconstruction error for different modes of the radar data cube. A comprehensive analysis of the optimization considerations is performed, such as initialization, number of projections, and convergence of the algorithms. Finally, a boosting scheme is proposed combining the unsupervised multilinear principal component analysis (PCA) with the supervised methods of linear discriminant analysis and shallow neural networks. Experimental results based on real radar data obtained from multiple subjects, different locations, and aspect angles ($0^{\circ}$, $30^{\circ}$, $45^{\circ}$, $60^{\circ}$, and $90^{\circ}$) demonstrate that the proposed algorithm yields the highest overall classification accuracy among spectrogram-based methods including predefined physical features, one- and two-dimensional PCA and convolutional neural networks.
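A NumPy sketch of the multilinear idea: unfold a slow-time x fast-time x Doppler radar data cube along each mode, compute a per-mode subspace from the leading singular vectors, and project the cube into a small core tensor. The cube, sizes and ranks are arbitrary placeholders, and the boosting with LDA and shallow neural networks is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
cube = rng.normal(size=(40, 64, 128))          # toy slow-time x fast-time x Doppler cube

def mode_unfold(tensor, mode):
    """Unfold a 3-way tensor along the given mode into a matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

ranks = (10, 16, 32)                           # assumed per-mode subspace dimensions
factors = []
for mode, r in enumerate(ranks):
    U = mode_unfold(cube, mode)
    # Leading left singular vectors give the mode-wise principal subspace.
    basis, _, _ = np.linalg.svd(U, full_matrices=False)
    factors.append(basis[:, :r])

# Project the cube onto the three bases to obtain a small core tensor.
core = np.einsum("ijk,ia,jb,kc->abc", cube, *factors)
print("core tensor shape:", core.shape)        # (10, 16, 32)
```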