Logistic discriminant analysis
References
Generalized Linear Models
Fisher discriminant analysis with kernels
Generalized Discriminant Analysis Using a Kernel Approach
Frequently Asked Questions (11)
Q2. How can the kernel trick be used to estimate the posterior probabilities?
Since MLR can be extended to the non-linear case by the kernel trick, it is natural to extend LgDA to kernel logistic discriminant analysis by using kernel MLR as the estimator of the posterior probabilities.
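As a rough illustration of this idea, the sketch below (toy data, an RBF kernel, and a plain gradient-descent fit, all illustrative rather than from the paper) represents each point by its kernel similarities to the training set and fits a softmax model on those features, so the outputs estimate the posterior probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data (all names and parameters here are illustrative).
X = np.vstack([rng.normal(-2, 1, (30, 2)), rng.normal(2, 1, (30, 2))])
t = np.repeat([0, 1], 30)
T = np.eye(2)[t]                      # one-hot targets

def rbf_features(A, B, gamma=0.5):
    """Kernel trick: represent each point by its similarities to the
    training points, k(x) = [K(x, x_1), ..., K(x, x_N)]."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf_features(X, X)                # N x N kernel feature matrix

# Softmax (multinomial logistic) regression on the kernel features,
# fitted here by plain gradient descent for simplicity.
W = np.zeros((K.shape[1], 2))
for _ in range(500):
    logits = K @ W
    P = np.exp(logits - logits.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)      # estimated posteriors P(C_k | x)
    W -= 0.1 * K.T @ (P - T) / len(X)

print(P[:3])                          # rows are posterior estimates
```

Because the model only ever sees the data through the kernel matrix, the same fitting loop handles any positive-definite kernel.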
Q3. What is the reason why the LgDA is the natural extension of LDA?
Since linear discriminant analysis (LDA) can be regarded as a linear approximation of ONDA, obtained through linear approximations of the Bayesian posterior probabilities, the proposed LgDA can be regarded as the natural extension of LDA in which the generalized linear model is substituted for the linear model of LDA.
Q4. How can LgDA be used to estimate the Bayesian posterior probabilities?
By modifying the outputs of the linear predictor by the link function, MLR can naturally estimate the Bayesian posterior probabilities.
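A minimal numeric sketch of this point, with made-up weights: the linear predictor is unbounded, but pushing it through the inverse link (softmax, for the multinomial case) yields outputs that behave like posterior probabilities:

```python
import numpy as np

# The linear predictor eta_k = w_k . x is mapped through the inverse
# link (softmax) so the outputs behave like posterior probabilities:
# they lie in (0, 1) and sum to one. The weights below are illustrative.
W = np.array([[ 1.0, -0.5],
              [-1.0,  0.5],
              [ 0.0,  0.0]])          # one weight row per class
x = np.array([2.0, 1.0])

eta = W @ x                           # linear predictors, unbounded
p = np.exp(eta - eta.max())
p /= p.sum()                          # softmax: estimated P(C_k | x)

print(eta, p, p.sum())
```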
Q5. What is the criterion for calculating the coefficient matrix of the input vector x?
Then LDA constructs a dimension-reducing linear mapping from the input feature vector x to a new feature vector y,

y = A^T x,  (1)

where A = [a_ij] is the coefficient matrix.
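A short sketch of how such a coefficient matrix is obtained in standard Fisher LDA (toy Gaussian data; the scatter-matrix construction is the textbook one, not code from the paper): the columns of A are the leading eigenvectors of Sw^{-1} Sb, and y = A^T x maximizes the Fisher discriminant criterion.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three Gaussian classes in 4-D (illustrative toy data).
means = [np.array([0, 0, 0, 0]), np.array([3, 3, 0, 0]), np.array([0, 3, 3, 0])]
X = np.vstack([rng.normal(m, 1.0, (40, 4)) for m in means])
y = np.repeat([0, 1, 2], 40)

# Within- and between-class scatter matrices.
mu = X.mean(0)
Sw = sum(np.cov(X[y == k].T, bias=True) * (y == k).sum() for k in range(3))
Sb = sum((y == k).sum() * np.outer(X[y == k].mean(0) - mu,
                                   X[y == k].mean(0) - mu) for k in range(3))

# Columns of A are the leading eigenvectors of Sw^{-1} Sb; at most
# K - 1 = 2 directions carry between-class information.
vals, vecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
order = np.argsort(-vals.real)
A = vecs[:, order[:2]].real

Y = X @ A                             # projected features y = A^T x
print(Y.shape)                        # (120, 2)
```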
Q6. What are the regularization methods of MLR?
Regularization methods for MLR have been proposed, such as the shrinkage method (regularized MLR) and locality-preserving multinomial logistic regression (LPMLR) [10].
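The shrinkage variant can be sketched as follows (toy data and an assumed shrinkage strength; LPMLR is not shown): the only change from plain MLR is the ridge term lam * W added to the gradient, which keeps the weights small.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 3-class problem (illustrative data and settings).
X = np.vstack([rng.normal(m, 1.0, (30, 2)) for m in ([-2, 0], [2, 0], [0, 3])])
T = np.eye(3)[np.repeat([0, 1, 2], 30)]

lam = 0.1                             # shrinkage strength (assumed value)
W = np.zeros((2, 3))
for _ in range(400):
    logits = X @ W
    P = np.exp(logits - logits.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)
    # Gradient of the cross-entropy plus the ridge penalty (lam/2)||W||^2:
    grad = X.T @ (P - T) / len(X) + lam * W
    W -= 0.5 * grad

print(np.linalg.norm(W))              # shrinkage keeps the weights bounded
```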
Q7. What is the criterion for the discriminant mapping?
Similarly to LDA, ONDA constructs the dimension-reducing nonlinear mapping which maximizes the discriminant criterion J. The optimal nonlinear discriminant mapping is given by

y = \sum_{k=1}^{K} P(C_k|x) u_k,  (5)

where P(C_k|x) is the Bayesian posterior probability of the class C_k given the input x.
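The mapping itself is just a posterior-weighted combination of the class representative vectors. A tiny numeric sketch (the representative vectors and posteriors below are made-up numbers; in ONDA the u_k come from an eigen problem):

```python
import numpy as np

# Illustrative representative vectors u_k (one row per class) and
# illustrative posteriors P(C_k | x) for a single input x.
U = np.array([[ 1.0,  0.0],
              [-0.5,  1.0],
              [ 0.0, -1.0]])
P = np.array([0.7, 0.2, 0.1])         # P(C_1|x), P(C_2|x), P(C_3|x)

y = P @ U                             # y = sum_k P(C_k|x) u_k, as in (5)
print(y)
```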
Q8. What is the block vector with elements?
The vector z is the block vector with elements

z_k = \sum_{j=1}^{K-1} R_{kj} X \hat{w}_j - (y_k - t_k).  (21)

Equation (19) is repeated until it converges.
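The iteration being described is iteratively reweighted least squares (IRLS). For intuition, here is the familiar two-class version (toy data; the FAQ's block equations are the multi-class generalization of this loop), where each Newton step is a weighted least-squares solve against a working response z:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-class toy data with an intercept column (illustrative).
X = np.column_stack([np.ones(80),
                     np.r_[rng.normal(-1.5, 1, 40), rng.normal(1.5, 1, 40)]])
t = np.repeat([0.0, 1.0], 40)

w = np.zeros(2)
for _ in range(20):                   # Newton/IRLS updates until convergence
    y = 1 / (1 + np.exp(-X @ w))      # current predictions
    R = y * (1 - y)                   # IRLS weights (diagonal of R)
    z = X @ w - (y - t) / np.maximum(R, 1e-9)   # working response
    # Weighted least squares step w = (X^T R X)^{-1} X^T R z,
    # with a tiny ridge added purely for numerical stability.
    w = np.linalg.solve(X.T * R @ X + 1e-6 * np.eye(2), X.T * R @ z)

print(w)
```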
Q9. How does a linear approximation of the Bayesian posterior probabilities work?
Let

L_k(x) = b_k^T x + b_{k0}  (9)

be a linear approximation of the Bayesian posterior probability which minimizes the mean square error

\epsilon_k^2 = \int \{ P(C_k|x) - L_k(x) \}^2 p(x) dx.
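In practice the minimizer of this mean-square error can be estimated by ordinary least squares on class-indicator targets, since E[indicator | x] = P(C_k|x). A sketch on toy 1-D data (all settings illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two overlapping 1-D Gaussian classes (illustrative toy data).
X = np.r_[rng.normal(-1, 1, 200), rng.normal(1, 1, 200)]
t = np.repeat([0.0, 1.0], 200)        # class-2 indicator; E[t | x] = P(C_2|x)

# Least squares on the indicator targets minimizes the sample version of
# the mean-square error between L(x) = b0 + b x and the posterior.
A = np.column_stack([np.ones_like(X), X])
b0, b = np.linalg.lstsq(A, t, rcond=None)[0]

print(b0, b)                          # fitted L(x) = b0 + b * x
```

The fitted slope is positive (the class-2 posterior increases with x) and the intercept sits near the 0.5 prior, as the symmetry of the toy setup suggests.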
Q10. what is the block matrix of the regularized MLR?
The block Hessian matrix H of the regularized MLR is defined as follows:

H = [H_{kj}],  k, j = 1, ..., K-1,

H_{kj} = X^T R_{kj} X + \lambda I   if k = j,
H_{kj} = X^T R_{kj} X               otherwise,  (24)

where I is the identity matrix.
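A sketch of assembling such a block Hessian (random stand-in data; the R_kj used here are the usual softmax weight matrices diag(p_k(delta_kj - p_j)), and the regularization strength is assumed):

```python
import numpy as np

rng = np.random.default_rng(5)

N, d, K = 50, 3, 3                    # sizes are illustrative
X = rng.normal(size=(N, d))
logits = rng.normal(size=(N, K))
P = np.exp(logits)
P /= P.sum(1, keepdims=True)          # current posterior estimates

lam = 0.1                             # regularization strength (assumed)
blocks = []
for k in range(K - 1):                # MLR fits K-1 weight vectors
    row = []
    for j in range(K - 1):
        # R_kj = diag(p_k (delta_kj - p_j)), the usual softmax weights
        r = P[:, k] * ((k == j) - P[:, j])
        Hkj = X.T * r @ X             # X^T R_kj X
        if k == j:
            Hkj = Hkj + lam * np.eye(d)   # ridge term on diagonal blocks
        row.append(Hkj)
    blocks.append(row)

H = np.block(blocks)                  # ((K-1)d) x ((K-1)d) block Hessian
print(H.shape)
```

Because the unregularized part is the Hessian of a convex cross-entropy, adding the ridge term makes H symmetric positive definite, so the Newton system is always solvable.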
Q11. What is the definition of the nonlinear discriminant mapping?
The nonlinear discriminant mapping is defined by

\tilde{y} = \sum_{k=1}^{K} \tilde{P}(C_k|x) \tilde{u}_k.  (27)

The representative vectors \tilde{u}_k of each class are determined by the following eigen equation:

\tilde{\Gamma} \tilde{U} = \tilde{P} \tilde{U} \tilde{\Lambda},  (28)

where the matrices \tilde{P}, \tilde{U} and \tilde{\Lambda} are defined as follows:

\tilde{P} = diag(\tilde{P}(C_1), ..., \tilde{P}(C_K)),
\tilde{U} = [\tilde{u}_1, ..., \tilde{u}_K],
\tilde{\Lambda} = diag(\tilde{\lambda}_1, ..., \tilde{\lambda}_K).  (29)

If the outputs of the ordinary MLR or the regularized MLR give a sufficiently good approximation of the Bayesian posterior probabilities, the nonlinear discriminant mapping defined by (27) is expected to be a good approximation of the ultimate nonlinear discriminant mapping, ONDA, in terms of the discriminant criterion.
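An eigen equation of this generalized form can be solved by reducing it to an ordinary eigenproblem. The sketch below uses made-up stand-ins for \tilde{\Gamma} and \tilde{P} (in ONDA these come from the data and the estimated posteriors) and verifies the equation afterwards:

```python
import numpy as np

# Illustrative stand-ins: Gamma would come from the ONDA construction,
# P holds the class probability weights; both are made up here.
Gamma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.5, 0.2],
                  [0.1, 0.2, 1.0]])
P = np.diag([0.5, 0.3, 0.2])

# Gamma U = P U Lambda  <=>  (P^{-1} Gamma) U = U Lambda
vals, U = np.linalg.eig(np.linalg.inv(P) @ Gamma)
vals, U = vals.real, U.real           # real for this symmetric-definite pencil
order = np.argsort(-vals)
Lam, U = np.diag(vals[order]), U[:, order]

# Each column of U is a representative vector u_k; check the equation.
print(np.allclose(Gamma @ U, P @ U @ Lam))    # True
```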