
Showing papers on "Bhattacharyya distance published in 1976"



Journal ArticleDOI
TL;DR: In this article, conditions for the attainment of the Hammersley-Chapman-Robbins bound for the variance of an unbiased estimator, in both regular and non-regular cases, are given.
Abstract: Conditions are given for the attainment of the Hammersley-Chapman-Robbins bound for the variance of an unbiased estimator, in both regular and nonregular cases. Comparisons are made between this bound and the Bhattacharyya system of bounds for a wide class of distributions and parametric functions. Sufficient conditions are provided to determine when one bound is sharper than the other one.
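For reference, the Hammersley-Chapman-Robbins bound discussed in this abstract can be stated in its standard form (a textbook statement, not quoted from the paper): for an unbiased estimator T of g(θ) based on a sample X with density f(x; θ),

```latex
\operatorname{Var}_\theta(T) \;\ge\; \sup_{h \ne 0}
\frac{\bigl[g(\theta + h) - g(\theta)\bigr]^2}
     {\mathbb{E}_\theta\!\left[\left(\dfrac{f(X;\,\theta+h)}{f(X;\,\theta)} - 1\right)^{\!2}\right]}
```

Unlike the Cramér-Rao bound, this bound requires no differentiability of f in θ, which is why it applies in the non-regular cases the paper considers.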

15 citations


Journal ArticleDOI
TL;DR: In this paper, a procedure was developed for calculating a k×n rank-k matrix B for data compression, using the Bhattacharyya bound on the probability of error and an iterative construction based on Householder transformations.
Abstract: We develop a procedure for calculating a k×n rank-k matrix B for data compression, using the Bhattacharyya bound on the probability of error and an iterative construction using Householder transformations. Two sets of remotely sensed agricultural data are used to demonstrate the application of the procedure. The results of these applications give some indication of the extent to which the Bhattacharyya bound on the probability of error is affected by such transformations for multivariate normal populations.
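The Bhattacharyya bound used by this paper has a well-known closed form for two multivariate normal populations. The following is a minimal sketch of that standard formula and the resulting error bound; the function names are illustrative and not taken from the paper:

```python
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate normal densities
    N(mu1, cov1) and N(mu2, cov2), using the standard closed form."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    # Mahalanobis-type term for the mean difference
    term_mean = 0.125 * diff @ np.linalg.solve(cov, diff)
    # Log-determinant term comparing the average covariance to the two originals
    _, logdet = np.linalg.slogdet(cov)
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    term_cov = 0.5 * (logdet - 0.5 * (logdet1 + logdet2))
    return term_mean + term_cov

def error_bound(b_dist, p1=0.5, p2=0.5):
    """Bhattacharyya upper bound on the Bayes probability of error:
    P_e <= sqrt(p1 * p2) * exp(-B)."""
    return np.sqrt(p1 * p2) * np.exp(-b_dist)
```

Compressing the data with a k×n matrix B changes the means and covariances (μ → Bμ, Σ → BΣBᵀ), so re-evaluating this distance before and after compression quantifies how much separability the transformation loses.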

11 citations


Journal ArticleDOI
01 Dec 1976-Metrika
TL;DR: In this article, a method based on density estimation by orthogonal expansions is proposed for estimating the functional Δ(f) = ∫f²(x)dx, for which Bhattacharyya and Roussas [1969] had proposed an estimator based on the kernel technique for density estimation.
Abstract: For the estimation of the functional Δ(f) = ∫f²(x)dx, Bhattacharyya and Roussas [1969] proposed an estimator based on the kernel technique for density estimation. This paper describes a method that rests instead on density estimation by orthogonal expansions. Our main result shows that the estimator considered is consistent in quadratic mean.
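The orthogonal-expansion idea behind this estimator can be sketched simply: if f has expansion coefficients c_j = E[φ_j(X)] in an orthonormal basis {φ_j}, then Δ(f) = Σ c_j², and each c_j can be estimated by a sample mean. The following is an illustrative sketch using a cosine basis on [0, 1], not the paper's actual construction:

```python
import numpy as np

def sq_density_functional(sample, n_terms=10):
    """Estimate Delta(f) = integral of f(x)^2 dx for data supported on [0, 1],
    via a truncated cosine orthonormal expansion: Delta ~ sum_j c_j^2,
    where c_j = E[phi_j(X)] is estimated by the sample mean of phi_j."""
    x = np.asarray(sample, dtype=float)
    est = 0.0
    for j in range(n_terms):
        # Orthonormal cosine basis on [0, 1]: phi_0 = 1, phi_j = sqrt(2) cos(pi j x)
        phi = np.ones_like(x) if j == 0 else np.sqrt(2) * np.cos(np.pi * j * x)
        est += phi.mean() ** 2
    return est
```

For the uniform density on [0, 1], Δ(f) = 1 exactly, which makes a convenient sanity check; truncating the expansion trades bias against the variance contributed by each estimated coefficient.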

2 citations


20 Apr 1976
TL;DR: Two algorithms were developed at Rice University for optimal linear feature extraction based on the minimization of the risk (probability) of misclassification under the assumption that the class conditional probability density functions are Gaussian.
Abstract: Two algorithms were developed at Rice University for optimal linear feature extraction based on minimizing the risk (probability) of misclassification, under the assumption that the class-conditional probability density functions are Gaussian. The present report describes the second algorithm, which is used when the dimension of the feature space is greater than one. Numerical results obtained by applying this algorithm to remotely sensed data from the Purdue C1 flight line are mentioned. Brief comparisons are made between these results and those obtained using a feature selection technique based on maximizing the Bhattacharyya distance. For the example considered, a significant improvement in classification is obtained by the present technique.
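The baseline the report compares against, feature selection by maximizing the Bhattacharyya distance, can be illustrated with a minimal per-channel sketch for two univariate Gaussian classes (the function names are hypothetical, not from the report):

```python
import numpy as np

def bhattacharyya_1d(m1, v1, m2, v2):
    """Bhattacharyya distance between the univariate normals
    N(m1, v1) and N(m2, v2), by the standard closed form."""
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * np.log(0.5 * (v1 + v2) / np.sqrt(v1 * v2)))

def best_channel(means1, vars1, means2, vars2):
    """Select the single channel whose two class-conditional Gaussians
    are farthest apart in Bhattacharyya distance."""
    d = [bhattacharyya_1d(m1, v1, m2, v2)
         for m1, v1, m2, v2 in zip(means1, vars1, means2, vars2)]
    return int(np.argmax(d)), d
```

A larger distance implies a smaller Bhattacharyya bound on the misclassification probability, which is why the distance serves as a separability criterion; the report's point is that directly minimizing the risk can do better than this proxy.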

2 citations


01 Aug 1976
TL;DR: The total Bhattacharyya distance is calculated using all N channels; the output of this subroutine is then used to construct the B matrix via one or two Householder transformations.
Abstract: The total Bhattacharyya distance is calculated using all N channels. The output of this subroutine is used to construct the B matrix using one or two Householder transformations. FORTRAN calling sequences are given.
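The abstract's subroutines are in FORTRAN, but the Householder transformation it relies on is standard and easy to sketch. The following is an illustrative construction (not the report's code) of the reflector H = I - 2vvᵀ/(vᵀv) that maps a given vector onto a multiple of the first coordinate axis:

```python
import numpy as np

def householder(x):
    """Return the Householder matrix H = I - 2 v v^T / (v^T v) that
    reflects x onto ||x|| * e1 (a multiple of the first axis)."""
    x = np.asarray(x, dtype=float)
    e1 = np.zeros_like(x)
    e1[0] = np.linalg.norm(x)
    v = x - e1
    if np.allclose(v, 0.0):
        # x is already along e1; the identity suffices
        return np.eye(len(x))
    return np.eye(len(x)) - 2.0 * np.outer(v, v) / (v @ v)
```

Because H is orthogonal and symmetric, applying one or two such reflectors reshapes the compression matrix B without changing the geometry that the Bhattacharyya distance measures.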

1 citation