Topic
Linear discriminant analysis
About: Linear discriminant analysis is a research topic. Over its lifetime, 18,361 publications have been published within this topic, receiving 603,195 citations. The topic is also known as LDA.
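LDA's core idea, projecting data onto the direction that maximizes between-class separation relative to within-class scatter, fits in a few lines. The following is a minimal two-class Fisher discriminant sketch, assuming NumPy; the function names are illustrative, not from any of the papers below.

```python
import numpy as np

def fisher_lda_direction(X, y):
    """Fisher's linear discriminant for two classes (labels 0 and 1).

    Returns the projection vector w = Sw^{-1} (mu1 - mu0), where Sw is
    the pooled within-class scatter matrix.
    """
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)
    return np.linalg.solve(Sw, mu1 - mu0)

def lda_classify(X, y, X_new):
    """Project onto w and split at the midpoint of the projected class means."""
    w = fisher_lda_direction(X, y)
    t = 0.5 * (X[y == 0].mean(axis=0) @ w + X[y == 1].mean(axis=0) @ w)
    return (X_new @ w > t).astype(int)
```

On well-separated Gaussian classes this midpoint rule recovers the Bayes-optimal linear boundary up to estimation error in the class means and scatter.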
[Chart: papers published on a yearly basis]
Papers
TL;DR: A comparative study using three chemometric techniques to evaluate both spatial and temporal changes in Suquia River water quality, with special emphasis on the improvement obtained by using discriminant analysis for this evaluation.
859 citations
TL;DR: A two-phase KFD framework is developed, i.e., kernel principal component analysis (KPCA) plus Fisher linear discriminant analysis (LDA), which provides novel insights into the nature of KFD.
Abstract: This paper examines the theory of kernel Fisher discriminant analysis (KFD) in a Hilbert space and develops a two-phase KFD framework, i.e., kernel principal component analysis (KPCA) plus Fisher linear discriminant analysis (LDA). This framework provides novel insights into the nature of KFD. Based on this framework, the authors propose a complete kernel Fisher discriminant analysis (CKFD) algorithm. CKFD can be used to carry out discriminant analysis in "double discriminant subspaces." Because it can make full use of two kinds of discriminant information, regular and irregular, CKFD is a more powerful discriminator. The proposed algorithm was tested and evaluated using the FERET face database and the CENPARMI handwritten numeral database. The experimental results show that CKFD outperforms other KFD algorithms.
856 citations
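The two-phase framework described above (KPCA followed by Fisher LDA) can be sketched roughly as follows. This is a minimal NumPy illustration assuming an RBF kernel and two classes; it is not the complete CKFD algorithm, which additionally exploits the irregular discriminant information.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """RBF (Gaussian) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kpca_project(X, n_components, gamma=0.5):
    """Phase 1: kernel PCA. Returns the training data projected into the
    leading n_components kernel principal components."""
    K = rbf_kernel(X, X, gamma)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one        # center the Gram matrix
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]       # top eigenpairs
    vals, vecs = vals[idx], vecs[:, idx]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))  # normalize coefficients
    return Kc @ alphas                                # projections (n, k)

def fisher_lda_predict(Z, y):
    """Phase 2: Fisher LDA in the KPCA space (two classes, labels 0/1)."""
    Z0, Z1 = Z[y == 0], Z[y == 1]
    m0, m1 = Z0.mean(0), Z1.mean(0)
    Sw = (Z0 - m0).T @ (Z0 - m0) + (Z1 - m1).T @ (Z1 - m1)
    Sw += 1e-6 * np.eye(Sw.shape[0])                  # small regularizer
    w = np.linalg.solve(Sw, m1 - m0)
    t = 0.5 * (m0 @ w + m1 @ w)
    return (Z @ w > t).astype(int)
```

The design point the framework makes explicit: the nonlinearity lives entirely in phase 1 (the kernel map), after which phase 2 is ordinary linear discriminant analysis in the transformed space.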
TL;DR: The random forest (RF) methodology is attractive for classification problems when the goals of the study are to produce an accurate classifier and to provide insight into the discriminative ability of individual predictor variables.
854 citations
01 Jan 2015
TL;DR: A textbook treatment of intermediate statistics using SPSS, covering topics from data coding and exploratory analysis through multiple regression, logistic regression and discriminant analysis, MANOVA, and multilevel linear modeling.
Abstract: 1. Introduction 2. Data Coding and Exploratory Analysis (EDA) 3. Imputation of Missing Data 4. Several Measures of Reliability 5. Exploratory Factor Analysis and Principal Components Analysis 6. Selecting and Interpreting Inferential Statistics 7. Multiple Regression 8. Mediation, Moderation, and Canonical Correlation 9. Logistic Regression and Discriminant Analysis 10. Factorial ANOVA and ANCOVA 11. Repeated-Measures and Mixed ANOVAs 12. Multivariate Analysis of Variance (MANOVA) 13. Multilevel Linear Modeling/Hierarchical Linear Modeling Appendix A. Getting Started With SPSS and Other Useful Procedures D. Quick, M. Myers Appendix B. Review of Basic Statistics J.M. Cumming, A. Weinberg Appendix C. Answers to Odd Interpretation Questions
854 citations
TL;DR: This work presents a new approach to feature extraction in which feature selection, feature extraction, and classifier training are performed simultaneously using a genetic algorithm, employed in combination with the k-nearest-neighbor classification rule.
Abstract: Pattern recognition generally requires that objects be described in terms of a set of measurable features. The selection and quality of the features representing each pattern affect the success of subsequent classification. Feature extraction is the process of deriving new features from original features to reduce the cost of feature measurement, increase classifier efficiency, and allow higher accuracy. Many feature extraction techniques involve linear transformations of the original pattern vectors to new vectors of lower dimensionality. While this is useful for data visualization and classification efficiency, it does not necessarily reduce the number of features to be measured since each new feature may be a linear combination of all of the features in the original pattern vector. Here, we present a new approach to feature extraction in which feature selection and extraction and classifier training are performed simultaneously using a genetic algorithm. The genetic algorithm optimizes a feature weight vector used to scale the individual features in the original pattern vectors. A masking vector is also employed for simultaneous selection of a feature subset. We employ this technique in combination with the k nearest neighbor classification rule, and compare the results with classical feature selection and extraction techniques, including sequential floating forward feature selection, and linear discriminant analysis. We also present results for the identification of favorable water-binding sites on protein surfaces.
849 citations
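The GA-based feature weighting described above can be sketched roughly as follows. This is a minimal illustration, assuming leave-one-out 1-NN accuracy as the fitness function and omitting the paper's separate masking vector (here, weights driven near zero play that role); all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def loo_knn_accuracy(X, y, w):
    """Leave-one-out 1-NN accuracy after scaling each feature by w.
    This serves as the GA fitness function."""
    Xw = X * w
    d = ((Xw[:, None, :] - Xw[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)                 # exclude the held-out point
    return (y[d.argmin(axis=1)] == y).mean()

def ga_feature_weights(X, y, pop_size=24, gens=30, sigma=0.3):
    """Evolve a nonnegative feature-weight vector maximizing 1-NN accuracy.

    Truncation selection keeps the top half each generation (elitism), so
    the best fitness never decreases; children come from uniform crossover
    plus Gaussian mutation."""
    n_feat = X.shape[1]
    pop = rng.random((pop_size, n_feat))        # initial random weights
    for _ in range(gens):
        fit = np.array([loo_knn_accuracy(X, y, w) for w in pop])
        parents = pop[np.argsort(fit)[::-1][: pop_size // 2]]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(n_feat) < 0.5     # uniform crossover
            child = np.where(mask, a, b) + rng.normal(0, sigma, n_feat)
            children.append(np.clip(child, 0.0, None))
        pop = np.vstack([parents, children])
    fit = np.array([loo_knn_accuracy(X, y, w) for w in pop])
    return pop[fit.argmax()]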