
Showing papers on "Linear discriminant analysis published in 1998"


Book
Margaret A. Nemeth1
06 Feb 1998
TL;DR: An overview of applied multivariate methods, with appendices covering matrix results, quadratic forms, eigenvalues and eigenvectors, distances and angles, miscellaneous results, the work attitudes survey, data file structure, and SPSS and SAS data entry commands.
Abstract: 1. Applied multivariate methods. An Overview of Multivariate Methods. Two Examples. Types of Variables. Data Matrices and Vectors. The Multivariate Normal Distribution. Statistical Computing. Multivariate Outliers. Multivariate Summary Statistics. Standardized Data and/or z-Scores. Exercises. 2. Sample correlations. Statistical Tests and Confidence Intervals. Summary. Exercises. 3. Multivariate data plots. Three-Dimensional Data Plots. Plots of Higher Dimensional Data. Plotting to Check for Multivariate Normality. Exercises. 4. Eigenvalues and eigenvectors. Trace and Determinant. Eigenvalues. Eigenvectors. Geometrical Descriptions (p=2). Geometrical Descriptions (p=3). Geometrical Descriptions (p>3). Exercises. 5. Principal components analysis. Reasons For Doing a PCA. Objectives of a PCA. PCA on the Variance-Covariance Matrix, Sigma. Estimation of Principal Components. Determining the Number of Principal Components. Caveats. PCA on the Correlation Matrix, P. Testing for Independence of the Original Variables. Structural Relationships. Statistical Computing Packages. Exercises. 6. Factor analysis. Objectives of an FA. Caveats. Some History on Factor Analysis. The Factor Analysis Model. Factor Analysis Equations. Solving the Factor Analysis Equations. Choosing the Appropriate Number of Factors. Computer Solutions of the Factor Analysis Equations. Rotating Factors. Oblique Rotation Methods. Factor Scores. Exercises. 7. Discriminant analysis. Discrimination for Two Multivariate Normal Populations. Cost Functions and Prior Probabilities (Two Populations). A General Discriminant Rule (Two Populations). Discriminant Rules (More Than Two Populations). Variable Selection Procedures. Canonical Discriminant Functions. Nearest Neighbour Discriminant Analysis. Classification Trees. Exercises. 8. Logistic regression methods. The Logit Transformation. Logistic Discriminant Analysis (More than Two Populations.) Exercises. 9. Cluster analysis. 
Measures of Similarity and/or Dissimilarity. Graphical Aids in Clustering. Clustering Methods. Multidimensional Scaling. Exercises. 10. Mean vectors and variance-covariance matrices. Inference Procedures for Variance-Covariance Matrices. Inference Procedures for a Mean Vector. Two Sample Procedures. Profile Analyses. Additional Two Groups Analyses. Exercises. 11. Multivariate analysis of variance. MANOVA. Dimensionality of the Alternative Hypothesis. Canonical Variates Analysis. Confidence Regions for Canonical Variates. Exercises. 12. Prediction models and multivariate regression. Multiple Regression. Canonical Correlation Analysis. Factor Analysis and Regression. Exercises. Appendices: Matrix results, quadratic forms, eigenvalues and eigenvectors, distances and angles, miscellaneous results, work attitudes survey, data file structure, SPSS data entry commands, SAS data entry commands, family control study.

982 citations


Book ChapterDOI
14 Apr 1998
TL;DR: A hybrid classifier using PCA and LDA provides a useful framework for other image recognition tasks as well and demonstrates a significant improvement when principal components rather than original images are fed to the LDA classifier.
Abstract: In this paper we describe a face recognition method based on PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The method consists of two steps: first we project the face image from the original vector space to a face subspace via PCA, second we use LDA to obtain a best linear classifier. The basic idea of combining PCA and LDA is to improve the generalization capability of LDA when only few samples per class are available. Using PCA, we are able to construct a face subspace in which we apply LDA to perform classification. Using FERET dataset we demonstrate a significant improvement when principal components rather than original images are fed to the LDA classifier. The hybrid classifier using PCA and LDA provides a useful framework for other image recognition tasks as well.
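The two-step pipeline in the abstract above (PCA projection followed by LDA) can be sketched in plain NumPy. The data here are synthetic stand-ins (random class means plus noise), not FERET faces, and the subspace size k = 10 is an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for face vectors: 3 classes, 8 samples each, 50 "pixels".
n_classes, n_per, dim = 3, 8, 50
means = rng.normal(0.0, 5.0, size=(n_classes, dim))
X = np.vstack([m + rng.normal(0.0, 1.0, size=(n_per, dim)) for m in means])
y = np.repeat(np.arange(n_classes), n_per)

# Step 1: PCA -- project the centered data onto the leading k components.
k = 10
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T                              # PCA coefficients, shape (24, 10)

# Step 2: LDA in the PCA subspace -- maximize between/within class scatter.
overall = Z.mean(axis=0)
Sw = np.zeros((k, k))
Sb = np.zeros((k, k))
for c in range(n_classes):
    Zc = Z[y == c]
    mc = Zc.mean(axis=0)
    Sw += (Zc - mc).T @ (Zc - mc)
    Sb += len(Zc) * np.outer(mc - overall, mc - overall)
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(-evals.real)
W = evecs[:, order[:n_classes - 1]].real       # Sb has rank C-1, so C-1 axes

# Classify by the nearest class centroid in the discriminant space.
D = Z @ W
centroids = np.array([D[y == c].mean(axis=0) for c in range(n_classes)])
pred = np.argmin(((D[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = float((pred == y).mean())
print(accuracy)
```

The PCA step makes Sw well-conditioned (here 24 samples in a 10-dimensional subspace), which is exactly the point of the paper: with few samples per class, LDA applied directly in the original space would face a singular within-class scatter matrix.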

670 citations


01 Jan 1998
TL;DR: A hybrid classifier using PCA and LDA provides a useful framework for other image recognition tasks as well and demonstrates a significant improvement when principal components rather than original images are fed to the LDA classifier.
Abstract: In this paper we describe a face recognition method based on PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The method consists of two steps: first we project the face image from the original vector space to a face subspace via PCA, second we use LDA to obtain a linear classifier. The basic idea of combining PCA and LDA is to improve the generalization capability of LDA when only few samples per class are available. Using FERET dataset we demonstrate a significant improvement when principal components rather than original images are fed to the LDA classifier. The hybrid classifier using PCA and LDA provides a useful framework for other image recognition tasks as well.

539 citations


Journal ArticleDOI
TL;DR: Theoretical results are applied to the problem of speech recognition, and word-error reductions are observed in systems that employ both diagonal and full covariance heteroscedastic Gaussian models tested on the TI-DIGITS database.

384 citations


Journal ArticleDOI
TL;DR: The two types of experiments showed GA to be a very effective instrument for insolvency diagnosis, even if the results obtained with LDA analysis perhaps proved to be superior to those obtained from GA.
Abstract: This study analyses the comparison between a traditional statistical methodology for bankruptcy classification and prediction, i.e. linear discriminant analysis (LDA), and an artificial intelligence algorithm known as the genetic algorithm (GA). The study was carried out at Centrale dei Bilanci, in Turin, Italy, analysing 1920 unsound and 1920 sound industrial Italian companies from 1982–1995. This paper follows our earlier examination of neural networks (NN) (see Altman et al., 1994. Corporate distress diagnosis: Comparisons using discriminant analysis and neural network. Journal of Banking and Finance XVIII, 505–529). The experiments on GA were oriented along two different lines: the genetic generation of linear functions and the genetic generation of scores based on rules. The two types of experiments showed GA to be a very effective instrument for insolvency diagnosis, even if the results obtained with LDA perhaps proved to be superior to those obtained from GA. Of particular interest, it should be noted that the results of GA were obtained in less time and with more limited contributions from the financial analyst than with LDA. Of additional interest is the relevance for credit risk management of financial institutions.

259 citations


Journal ArticleDOI
TL;DR: An empirically derived rule-based system is compared with two automated methods, linear discriminant analysis and a learning vector quantiser artificial neural network, to classify the objects as microaneurysms or otherwise.

233 citations


Journal ArticleDOI
TL;DR: The results of this study indicate the potential of using combined morphological and texture features for computer-aided classification of microcalcifications.
Abstract: We are developing computerized feature extraction and classification methods to analyze malignant and benign microcalcifications on digitized mammograms. Morphological features that described the size, contrast, and shape of microcalcifications and their variations within a cluster were designed to characterize microcalcifications segmented from the mammographic background. Texture features were derived from the spatial gray-level dependence (SGLD) matrices constructed at multiple distances and directions from tissue regions containing microcalcifications. A genetic algorithm (GA) based feature selection technique was used to select the best feature subset from the multi-dimensional feature spaces. The GA-based method was compared to the commonly used feature selection method based on the stepwise linear discriminant analysis (LDA) procedure. Linear discriminant classifiers using the selected features as input predictor variables were formulated for the classification task. The discriminant scores output from the classifiers were analyzed by receiver operating characteristic (ROC) methodology and the classification accuracy was quantified by the area, Az, under the ROC curve. We analyzed a data set of 145 mammographic microcalcification clusters in this study. It was found that the feature subsets selected by the GA-based method are comparable to or slightly better than those selected by the stepwise LDA method. The texture features (Az = 0.84) were more effective than morphological features (Az = 0.79) in distinguishing malignant and benign microcalcifications. The highest classification accuracy (Az = 0.89) was obtained in the combined texture and morphological feature space. The improvement was statistically significant in comparison to classification in either the morphological (p = 0.002) or the texture (p = 0.04) feature space alone. The classifier using the best feature subset from the combined feature space and an appropriate decision threshold could correctly identify 35% of the benign clusters without missing a malignant cluster. When the average discriminant score from all views of the same cluster was used for classification, the Az value increased to 0.93 and the classifier could identify 50% of the benign clusters at 100% sensitivity for malignancy. Alternatively, if the minimum discriminant score from all views of the same cluster was used, the Az value would be 0.90 and a specificity of 32% would be obtained at 100% sensitivity. The results of this study indicate the potential of using combined morphological and texture features for computer-aided classification of microcalcifications.
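The Az values reported above are areas under the ROC curve, which equal the probability that a randomly chosen malignant score exceeds a randomly chosen benign one (the Wilcoxon–Mann–Whitney statistic). A minimal sketch, using made-up discriminant scores rather than the study's data:

```python
import numpy as np

def roc_az(neg_scores, pos_scores):
    """Az via the Mann-Whitney statistic: P(pos > neg) + 0.5 * P(tie)."""
    neg = np.asarray(neg_scores, float)
    pos = np.asarray(pos_scores, float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Made-up scores: benign clusters score low, malignant ones high.
benign = [0.1, 0.3, 0.35, 0.4, 0.6]
malignant = [0.45, 0.7, 0.8, 0.9]
print(roc_az(benign, malignant))   # one misordered pair out of 20 -> 0.95
```

Perfect separation gives Az = 1.0; a classifier no better than chance gives Az = 0.5.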

228 citations


Journal ArticleDOI
TL;DR: An asymptotic formula for the expected (generalization) error of the Fisher classifier with the pseudo-inversion is derived which explains the peaking behaviour: with an increasing number of learning observations from one up to the number of features, the generalization error first decreases, and then starts to increase.
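When there are fewer learning observations than features, the within-class scatter matrix is singular, and the Fisher classifier falls back on the Moore–Penrose pseudo-inverse — the setting whose generalization error this paper analyzes. A toy sketch on synthetic Gaussian data (the dimensions and sample sizes are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(1)

# Fewer learning observations (6 per class) than features (20), so the
# within-class scatter matrix is singular and ordinary inversion fails;
# the Moore-Penrose pseudo-inverse is used instead.
p, n = 20, 6
X1 = rng.normal(0.0, 1.0, size=(n, p))
X2 = rng.normal(3.0, 1.0, size=(n, p))
m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)

w = np.linalg.pinv(Sw) @ (m2 - m1)             # Fisher direction via pinv
threshold = 0.5 * float(w @ (m1 + m2))

def classify(x):
    return int(float(w @ x) > threshold)       # 0 -> class 1, 1 -> class 2

train_acc = float(np.mean([classify(x) == 0 for x in X1] +
                          [classify(x) == 1 for x in X2]))
print(train_acc)
```

The peaking behaviour described in the TL;DR concerns the *generalization* error of exactly this estimator as the number of learning observations grows toward the number of features.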

207 citations


Proceedings Article
01 Dec 1998
TL;DR: A new LDA-based face recognition system is presented in this paper, where the most expressive vectors derived in the null space of the within-class scatter matrix using principal component analysis (PCA) are equal to the optimal discriminant vectors derived using LDA.
Abstract: A new LDA-based face recognition system is presented in this paper. Linear discriminant analysis (LDA) is one of the most popular linear projection techniques for feature extraction. The major drawback of applying LDA is that it may encounter the small sample size problem. In this paper, we propose a new LDA-based technique which can solve the small sample size problem. We also prove that the most expressive vectors derived in the null space of the within-class scatter matrix using principal component analysis (PCA) are equal to the optimal discriminant vectors derived in the original space using LDA. The experimental results show that the new LDA process improves the performance of a face recognition system significantly. © 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.
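The null-space idea above can be sketched numerically: in the small-sample-size regime the within-class scatter matrix Sw has a large null space, and discriminant directions can be sought inside it, where within-class scatter vanishes entirely. The data below are synthetic stand-ins, not face images:

```python
import numpy as np

rng = np.random.default_rng(2)

# Small sample size: 3 classes x 4 samples in 30 dimensions, so the
# within-class scatter matrix Sw (rank at most 9) has a large null space.
n_classes, n_per, dim = 3, 4, 30
means = rng.normal(0.0, 3.0, size=(n_classes, dim))
X = np.vstack([m + rng.normal(0.0, 1.0, size=(n_per, dim)) for m in means])
y = np.repeat(np.arange(n_classes), n_per)

overall = X.mean(axis=0)
Sw = np.zeros((dim, dim))
Sb = np.zeros((dim, dim))
for c in range(n_classes):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)
    Sb += n_per * np.outer(mc - overall, mc - overall)

# Basis of the null space of Sw: eigenvectors with (numerically) zero eigenvalue.
w_vals, w_vecs = np.linalg.eigh(Sw)
null_basis = w_vecs[:, w_vals < 1e-8]

# Inside that null space, PCA on the projected between-class scatter picks
# the directions along which the classes remain separated.
Sb_null = null_basis.T @ Sb @ null_basis
b_vals, b_vecs = np.linalg.eigh(Sb_null)
W = null_basis @ b_vecs[:, -(n_classes - 1):]  # top C-1 directions

within = float(np.trace(W.T @ Sw @ W))   # ~0: no within-class scatter left
between = float(np.trace(W.T @ Sb @ W))  # large: classes stay separated
print(within, between)
```

Along the chosen directions the Fisher criterion is effectively infinite: the denominator (within-class scatter) is zero while the numerator (between-class scatter) is not, which is what makes the null space attractive in the small sample size setting.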

185 citations


Book
01 Apr 1998
TL;DR: Covers the High School and Beyond data set and SPSS 7.5; measurement and descriptive statistics; data entry, checking, and transformations (Count, Recode, Compute); selecting and interpreting inferential statistics; crosstabulation and nonparametric association; correlation and scatterplots; factor analysis via principal components; measures of reliability; multiple regression; logistic regression and discriminant analysis; independent and paired samples t tests and equivalent non-parametric tests; and one-way, factorial, repeated measures, and multivariate ANOVA.
Abstract: Research Problem, Approaches and Questions Overview of the High School and Beyond Data Set and SPSS 7.5 Measurement and Descriptive Statistics Data Entry, Checking Data and Descriptives More Descriptive Statistics, and Checking the Normal Distribution Data Transformations - Count, Recode, Compute Selecting and Interpreting Inferential Statistics Crosstabulation and Nonparametric Association Correlation and Scatterplots Factor Analysis - Data Reduction with Principal Components Analysis Several Measures of Reliability Multiple Regression Logistic Regression and Discriminant Analysis Independent and Paired Samples t Tests and Equivalent Non-Parametric Tests One-Way ANOVA with Multiple Comparisons for Between Groups Designs Factorial ANOVA, Including Interactions and ANOVA Repeated Measures and Mixed ANOVA Multivariate Analysis of Variance (MANOVA).

183 citations


Journal ArticleDOI
TL;DR: The use of receiver operating characteristic curves for comparing competing diagnostic systems is illustrated, new estimation methods based on kernel density estimation are developed, and the statistical performance of the new method is studied.
Abstract: Receiver operating characteristic (ROC) curves are used for summarizing the performance of imperfect diagnostic systems, especially in biomedical research. These curves are also appropriate for summarizing the performance of a discriminant analysis but are under-utilized by statisticians. This article illustrates the use of these curves for comparing competing diagnostic systems, develops new estimation methods based on kernel density estimation, and studies the statistical performance of the new method. A transform of the ROC curve is further suggested based on the idea of “local population separation.” This graphic is quite generally useful for displaying the differences between two populations. The methods are applied to a dataset comprising the results of seven diagnostics for predicting cancer activity on 353 patients. The distributions of these diagnostics are not well modeled parametrically, and so either completely nonparametric or kernel density estimation seems appropriate. Construct...
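One way to realize the kernel-density idea in this abstract is to smooth each class-conditional CDF with a Gaussian kernel and trace the ROC curve over a grid of thresholds. The populations, sample sizes, and bandwidth h = 0.3 below are illustrative assumptions, not the article's data or its exact estimator:

```python
from math import erf

import numpy as np

rng = np.random.default_rng(3)
_erf = np.vectorize(erf)

def kernel_cdf(x, sample, h):
    """Smoothed CDF from a Gaussian kernel density estimate of `sample`."""
    z = (x[:, None] - np.asarray(sample)[None, :]) / (h * np.sqrt(2.0))
    return (0.5 * (1.0 + _erf(z))).mean(axis=1)

# Made-up diagnostic scores: diseased subjects score higher on average.
healthy = rng.normal(0.0, 1.0, 200)
diseased = rng.normal(1.5, 1.0, 200)

t = np.linspace(-6.0, 8.0, 2001)            # grid of decision thresholds
fpr = 1.0 - kernel_cdf(t, healthy, 0.3)     # smoothed false-positive rate
tpr = 1.0 - kernel_cdf(t, diseased, 0.3)    # smoothed true-positive rate

# Trapezoidal area under the (fpr, tpr) curve; fpr decreases along t.
auc = float(np.sum(0.5 * (tpr[:-1] + tpr[1:]) * -np.diff(fpr)))
print(round(auc, 3))
```

Unlike the empirical step-function ROC, the kernel-smoothed curve is continuous, which is what makes transforms such as the "local population separation" plot well defined.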

Journal ArticleDOI
TL;DR: In this paper, a new method, Discriminant Data Envelopment Analysis of Ratios (DR/DEA), was proposed to rank all the units on the same scale.

Proceedings ArticleDOI
16 Aug 1998
TL;DR: Two enhanced Fisher linear discriminant (EFM) models are introduced in order to improve the generalization ability of standard FLD based classifiers such as Fisherfaces; experimental data shows that the EFM models outperform the standard FLD based methods.
Abstract: We introduce two enhanced Fisher linear discriminant (FLD) models (EFM) in order to improve the generalization ability of the standard FLD based classifiers such as Fisherfaces. Similar to Fisherfaces, both EFM models first apply principal component analysis (PCA) for dimensionality reduction before proceeding with FLD type of analysis. EFM-1 implements the dimensionality reduction with the goal to balance between the need that the selected eigenvalues account for most of the spectral energy of the raw data and the requirement that the eigenvalues of the within-class scatter matrix in the reduced PCA subspace are not too small. EFM-2 implements the dimensionality reduction as Fisherfaces do. It proceeds with the whitening of the within-class scatter matrix in the reduced PCA subspace and then chooses a small set of features (corresponding to the eigenvectors of the within-class scatter matrix) so that the smaller trailing eigenvalues are not included in further computation of the between-class scatter matrix. Experimental data using a large set of faces (1,107 images drawn from 369 subjects, including duplicates acquired at a later time under different illumination) from the FERET database shows that the EFM models outperform the standard FLD based methods.
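The whitening step used by EFM-2 can be shown in isolation: a transform P is built from the eigendecomposition of the within-class scatter matrix so that the scatter becomes the identity. The matrix below is a random toy stand-in for a within-class scatter matrix in a reduced PCA subspace:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy within-class scatter matrix in a 6-dimensional reduced PCA subspace.
A = rng.normal(size=(40, 6))
Sw = A.T @ A / 40.0

# Whitening transform P satisfies P.T @ Sw @ P = I, so subsequent
# between-class comparisons are not dominated by a few high-variance
# (or blown up by a few near-zero) within-class directions.
vals, vecs = np.linalg.eigh(Sw)
P = vecs @ np.diag(vals ** -0.5)
whitened = P.T @ Sw @ P
print(np.allclose(whitened, np.eye(6)))   # prints True
```

Note the `vals ** -0.5` factor: tiny trailing eigenvalues would be amplified enormously by whitening, which is exactly why both EFM variants take care to exclude or control them.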

Journal ArticleDOI
TL;DR: The phenomenon of going bankrupt is analyzed qualitatively, and companies are classified into healthy and bankrupt-prone ones using a modification of the learning vector quantization algorithm that accommodates the Neyman–Pearson classification criterion.

Proceedings ArticleDOI
23 Jun 1998
TL;DR: The rationales behind PCA and LDA and the pros and cons of applying them to pattern classification task are illustrated and the improved performance of this combined approach is demonstrated.
Abstract: In face recognition literature, holistic template matching systems and geometrical local feature based systems have been pursued. In the holistic approach, PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) are popular ones. More recently, the combination of PCA and LDA has been proposed as a superior alternative over pure PCA and LDA. In this paper, we illustrate the rationales behind these methods and the pros and cons of applying them to pattern classification task. A theoretical performance analysis of LDA suggests applying LDA over the principal components from the original signal space or the subspace. The improved performance of this combined approach is demonstrated through experiments conducted on both simulated data and real data.

Proceedings ArticleDOI
16 Aug 1998
TL;DR: It is shown that the amount of dynamic information available to an imposter is important and that forgeries based on paper copies are easier to detect.
Abstract: This paper addresses the problem of online signature verification based on hidden Markov models (HMM). We use a novel type of digitizer tablet and pay special attention to the use of pen-tilt. We investigate the verification reliability based on different forgery types. We compare the discriminative value of the different features based on a linear discriminant analysis (LDA) and show that pen-tilt is important. On the basis of home-improved, over-the-shoulder and professional forgeries, we show that the amount of dynamic information available to an imposter is important and that forgeries based on paper copies are easier to detect. The results obtained with a database of almost 5000 signatures of 51 persons with highly skilled forgeries include equal-error rates between 1% and 1.9%.
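The equal-error rates reported above are the operating points where the false-reject rate on genuine signatures meets the false-accept rate on forgeries. A brute-force sketch over candidate thresholds, with made-up verification scores in place of HMM log-likelihoods:

```python
import numpy as np

def equal_error_rate(genuine, forgery):
    """Find the threshold where false-reject rate ~= false-accept rate."""
    genuine = np.asarray(genuine, float)
    forgery = np.asarray(forgery, float)
    best = (1.0, None)
    for t in np.unique(np.concatenate([genuine, forgery])):
        frr = np.mean(genuine < t)       # genuine signatures rejected
        far = np.mean(forgery >= t)      # forgeries accepted
        gap = abs(frr - far)
        if gap < best[0]:
            best = (gap, (frr + far) / 2.0)
    return best[1]

# Made-up verification scores (higher = more likely genuine).
genuine = [0.9, 0.85, 0.8, 0.75, 0.7, 0.65, 0.3]
forgery = [0.6, 0.5, 0.45, 0.4, 0.35, 0.2, 0.72]
eer = equal_error_rate(genuine, forgery)
print(eer)
```

A single number like the 1%–1.9% range in the abstract summarizes the whole trade-off curve at the point where the two error types are balanced.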

Journal ArticleDOI
TL;DR: A new chemometric method based on the selection of the most important variables in discriminant partial least-squares (VS-DPLS) analysis is described, a simple extension of DPLS where a small number of elements in the weight vector w is retained for each factor.
Abstract: Variable selection enhances the understanding and interpretability of multivariate classification models. A new chemometric method based on the selection of the most important variables in discriminant partial least-squares (VS-DPLS) analysis is described. The suggested method is a simple extension of DPLS where a small number of elements in the weight vector w is retained for each factor. The optimal number of DPLS factors is determined by cross-validation. The new algorithm is applied to four different high-dimensional spectral data sets with excellent results. Spectral profiles from Fourier transform infrared spectroscopy and pyrolysis mass spectrometry are used. To investigate the uniqueness of the selected variables an iterative VS-DPLS procedure is performed. At each iteration, the previously found selected variables are removed to see if a new VS-DPLS classification model can be constructed using a different set of variables. In this manner, it is possible to determine regions rather than individual variables that are important for a successful classification.
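The core of the VS step (retaining only the largest-magnitude elements of each PLS weight vector) can be sketched for a single factor. The synthetic "spectra", the informative variable positions, and the number of retained variables k = 2 are all illustrative assumptions, not the paper's procedure in full:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "spectra": 40 samples x 100 variables; only variables 10 and 55
# carry class information (class coded as y = 0 or 1).
n, p = 40, 100
y = np.repeat([0.0, 1.0], n // 2)
X = rng.normal(size=(n, p))
X[:, 10] += 2.0 * y
X[:, 55] -= 2.0 * y

# First PLS weight vector: the covariance direction between X and y.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)

# VS step: retain only the k largest-magnitude elements of w.
k = 2
keep = np.argsort(-np.abs(w))[:k]
w_sparse = np.zeros(p)
w_sparse[keep] = w[keep]

print(sorted(keep.tolist()))
```

Iterating as the paper describes (removing the selected variables and refitting) then reveals whole spectral regions, rather than single wavelengths, that support the classification.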

Journal ArticleDOI
TL;DR: This article reviews the wealth of different pattern recognition methods that have been used for magnetic resonance spectroscopy (MRS) based tumor classification and discusses different approaches in view of practical and theoretical considerations.
Abstract: This article reviews the wealth of different pattern recognition methods that have been used for magnetic resonance spectroscopy (MRS) based tumor classification. The methods have in common that the entire MR spectrum is used to develop linear and non-linear classifiers. The following issues are addressed: (i) pre-processing, such as normalization and digitization, (ii) extraction of relevant spectral features by multivariate methods, such as principal component analysis, linear discriminant analysis (LDA), and optimal discriminant vector, and (iii) classification by LDA, cluster analysis and artificial neural networks. Different approaches are compared and discussed in view of practical and theoretical considerations.

Journal ArticleDOI
TL;DR: In this article, an automated method was developed to find the boundaries of geomorphological objects and to extract the objects as groups of aggregated pixels, with the boundaries being breaks of slope on two-dimensional downslope profiles.

Journal ArticleDOI
TL;DR: In order to obtain a wider range of classifiers, five new complexity-control techniques are proposed: target value control, moving of the learning data centre into the origin of coordinates, zero weight initialization, use of an additional negative weight decay term called "anti-regularization", and use of an exponentially increasing learning step.

Journal ArticleDOI
TL;DR: An environmental information system, with the integration of statistics, GIS, expert systems and environmental models should be established to further the study in environmental geochemistry, as well as to provide decision support.


01 Jan 1998
TL;DR: The final chapter modeled the development of viewpoint invariant responses to faces from visual experience in a biological system by encoding spatio-temporal dependencies.
Abstract: In a task such as face recognition, much of the important information may be contained in the high-order relationships among the image pixels. Representations such as "Eigenfaces" (197) and "Holons" (48) are based on Principal component analysis (PCA), which encodes the correlational structure of the input, but does not address high-order statistical dependencies such as relationships among three or more pixels. Independent component analysis (ICA) is a generalization of PCA which encodes the high-order dependencies in the input in addition to the correlations. Representations for face recognition were developed from the independent components of face images. The ICA representations were superior to PCA for recognizing faces across sessions and changes in expression. ICA was compared to more than eight other image analysis methods on a task of recognizing facial expressions in a project to automate the Facial Action Coding System (62). These methods included estimation of optical flow; representations based on the second-order statistics of the full face images, such as Eigenfaces (47, 197), local feature analysis (156), and linear discriminant analysis (23); and representations based on the outputs of local filters, such as Gabor wavelet representations (50, 113) and local PCA (153). The ICA and Gabor wavelet representations achieved the best performance of 96% for classifying 12 facial actions. Relationships between the independent component representation and the Gabor representation are discussed. Temporal redundancy contains information for learning invariances. Different views of a face tend to appear in close temporal proximity as the person changes expression, pose, or moves through the environment. The final chapter modeled the development of viewpoint invariant responses to faces from visual experience in a biological system by encoding spatio-temporal dependencies.
The simulations combined temporal smoothing of activity signals with Hebbian learning (72) in a network with both feed-forward connections and a recurrent layer that was a generalization of a Hopfield attractor network. Following training on sequences of graylevel images of faces as they changed pose, multiple views of a given face fell into the same basin of attraction, and the system acquired representations of faces that were approximately viewpoint invariant.
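The PCA-versus-ICA distinction drawn above can be made concrete with a tiny blind source separation example: after whitening (the second-order, PCA-like step), a one-unit fixed-point iteration with a cubic nonlinearity exploits higher-order statistics to recover an independent source. The two-source mixture below is a toy stand-in, not face data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two independent non-Gaussian sources, linearly mixed.
n = 5000
S = np.vstack([rng.uniform(-1.0, 1.0, n),   # uniform: sub-Gaussian
               rng.laplace(0.0, 1.0, n)])   # Laplace: super-Gaussian
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# Whiten (the PCA step): decorrelate and normalize variance.
X = X - X.mean(axis=1, keepdims=True)
cov = X @ X.T / n
vals, vecs = np.linalg.eigh(cov)
Z = np.diag(vals ** -0.5) @ vecs.T @ X

# One-unit fixed-point iteration with the cube nonlinearity, which drives
# the projection toward an extremum of kurtosis (a 4th-order statistic).
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(200):
    w = (Z * (w @ Z) ** 3).mean(axis=1) - 3.0 * w
    w /= np.linalg.norm(w)

s_hat = w @ Z
# The recovered component should correlate strongly with one true source,
# something whitening alone (second-order statistics) cannot achieve.
corrs = [abs(np.corrcoef(s_hat, s)[0, 1]) for s in S]
print(max(corrs))
```

Whitening leaves an arbitrary rotation undetermined; only the higher-order step pins down which rotation separates the sources, which is the sense in which ICA "generalizes" PCA here.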

Proceedings ArticleDOI
14 Apr 1998
TL;DR: A scale invariant face detection method which combines higher-order local autocorrelation (HLAC) features extracted from a log-polar transformed image with linear discriminant analysis for "face" and "not face" classification is proposed.
Abstract: This paper proposes a scale invariant face detection method which combines higher-order local autocorrelation (HLAC) features extracted from a log-polar transformed image with linear discriminant analysis for "face" and "not face" classification. Since HLAC features of log-polar images are sensitive to shifts of a face, we utilize this property and develop a face detection method. HLAC features extracted from a log-polar image become scale and rotation invariant because scalings and rotations of a face are expressed as shifts in a log-polar image (coordinate). By combining these features with the linear discriminant analysis which is extended to treat "face" and "not face" classes, a scale invariant face detection system can be realized.
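The key property the method exploits, that scalings and rotations of the image become translations in log-polar coordinates, can be verified directly on a single point:

```python
import numpy as np

def log_polar_coords(x, y):
    """Map Cartesian (x, y) to log-polar (log r, theta)."""
    r = np.hypot(x, y)
    return np.log(r), np.arctan2(y, x)

# Scaling a point by s shifts its log-r coordinate by log(s);
# rotating it by phi shifts theta by phi. Both become translations.
x, y = 3.0, 4.0
s, phi = 2.5, 0.3

lr0, th0 = log_polar_coords(x, y)
lr1, th1 = log_polar_coords(s * x, s * y)                       # scaled
xr = x * np.cos(phi) - y * np.sin(phi)                          # rotated
yr = x * np.sin(phi) + y * np.cos(phi)
lr2, th2 = log_polar_coords(xr, yr)

print(round(lr1 - lr0, 6), round(th2 - th0, 6))   # ~log(2.5), ~0.3
```

Because the HLAC features used in the paper are sensitive to shifts, these induced shifts carry the scale and rotation information into the subsequent "face" versus "not face" discriminant analysis.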

Journal ArticleDOI
TL;DR: In this article, a combination of cluster analysis as the first step and multivariate analysis of variance and discriminant analysis is used to identify similar locations in a river, which can be used to detect sources and dischargers by means of factor analysis.

Journal ArticleDOI
TL;DR: It is shown that the first discriminant coordinates (DC) statistically increase with the ordering of modules, thus improving prediction and prioritization efforts, and separate estimation of the smoothing parameter is shown to be required.
Abstract: Prediction of fault-prone modules provides one way to support software quality engineering through improved scheduling and project control. The primary goal of our research was to develop and refine techniques for early prediction of fault-prone modules. The objective of this paper is to review and improve an approach previously examined in the literature for building prediction models, i.e. principal component analysis (PCA) and discriminant analysis (DA). We present findings of an empirical study at Ericsson Telecom AB for which the previous approach was found inadequate for predicting the most fault-prone modules using software design metrics. Instead of dividing modules into fault-prone and not-fault-prone, modules are categorized into several groups according to the ordered number of faults. It is shown that the first discriminant coordinates (DC) statistically increase with the ordering of modules, thus improving prediction and prioritization efforts. The authors also experienced problems with the smoothing parameter as used previously for DA. To correct this problem and further improve predictability, separate estimation of the smoothing parameter is shown to be required.

Journal ArticleDOI
15 May 1998-Spine
TL;DR: The use of spectral parameters to classify subjects with low back pain from those without appears to have merit, and models based on 60% of maximum voluntary contraction demonstrated results comparable with those of previous research using 40% and 80% of maximum voluntary contraction.
Abstract: Study Design. An electromyogram procedure using spectral parameters to distinguish subjects with low back pain from those without. Objectives. To add to the growing database on this procedure, to assess the possible overfitting of data in the classification model, to determine whether a model based on a contraction level of 60% of maximum voluntary contraction can produce concordance rates similar to those in models based on 40% and 80% of maximum voluntary contraction, and to develop a classification model to distinguish subjects with low back pain from those without. Summary of Background Data. Other investigators have published a series of models in which spectral parameters measured during fatiguing contractions from the paraspinal muscles have been able to classify a subject into a low back pain or non-low back pain group with a more than 80% concordance rate. Methods. Subjects with chronic low back pain (N = 21) and without (N = 18) performed a series of isometric, fatiguing back extensor contractions in which the median power frequency was measured bilaterally from T9, L3, and L5. A Student's t test was used to determine which parameters would be entered into the classification models. Discriminant analysis and logistic regression procedures were used to develop models to classify subjects and were compared for overfitting of data based on the number of input parameters. The logistic regression method used a holdout group (N = 6) for validation. Results. The discriminant analysis selected all 10 input parameters and was believed to overfit the data. Logistic regression selected two parameters and had a concordance rate of 92.4%. Five of the six subjects in the holdout group were correctly classified. Conclusions. The use of spectral parameters to classify subjects with low back pain from those without appears to have merit. Compared with discriminant analysis, logistic regression provided an equally powerful method for classifying these two groups but did not overfit the data. Models based on 60% of maximum voluntary contraction demonstrated results comparable with those of previous research using 40% and 80% of maximum voluntary contraction.

Proceedings ArticleDOI
12 May 1998
TL;DR: This paper presents a novel technique for the tracking and extraction of features from lips for the purpose of speaker identification, where syntactic information is derived from chromatic information in the lip region.
Abstract: This paper presents a novel technique for the tracking and extraction of features from lips for the purpose of speaker identification. In noisy or other adverse conditions, identification performance via the speech signal can significantly reduce, hence additional information which can complement the speech signal is of particular interest. In our system, syntactic information is derived from chromatic information in the lip region. A model of the lip contour is formed directly from the syntactic information, with no minimization procedure required to refine estimates. Colour features are then extracted from the lips via profiles taken around the lip contour. Further improvement in lip features is obtained via linear discriminant analysis (LDA). Speaker models are built from the lip features based on the Gaussian mixture model (GMM). Identification experiments are performed on the M2VTS database, with encouraging results.

Journal ArticleDOI
TL;DR: In this article, the authors quantify temporomandibular joint disk-slice information produced by magnetic resonance imaging by means of a stepwise discriminant analysis, which revealed that all three quantitative variables were descriptive and discriminant for grouping slice data into pre-established subjective categories.
Abstract: The purpose of this study was to quantify temporomandibular joint disk-slice information produced by magnetic resonance imaging by means of a stepwise discriminant analysis. One hundred ninety-four adolescents consented to magnetic resonance imaging evaluation of their temporomandibular joints. Sagittal magnetic resonance imaging slices of each joint were assigned to one of six subjective categories of disk position by an experienced maxillofacial radiologist. Standardized reference planes transferred to each magnetic resonance image from corresponding lateral cephalometric radiographs facilitated the measurement of disk length and disk displacement and the computation of ratio values of these measurements. Discriminant analysis revealed that all three quantitative variables were descriptive and discriminant for grouping slice data into pre-established subjective categories. Cross-validation and misclassification error calculations showed a 69.3% agreement between subjective and discriminant classification. Therefore quantification of disk displacement can be used in place of subjective evaluation. In addition, discriminant analysis disclosed a reduction in disk length associated with increased severity of disk displacement.

Book
01 Jan 1998
TL;DR: This book discusses chemometric methods for classification problems, class modelling, and discriminant analysis, built around the Win-DAS statistical analysis software.
Abstract: Chemometric Methods for Classification Problems. Installing Win-DAS. Getting Started with Win-DAS. Discriminant Analysis. Class Modelling. Case Studies. Appendices. Index.