
Showing papers on "Bhattacharyya distance published in 2019"


Journal ArticleDOI
TL;DR: The proposed FDM-based feature selection algorithm with a holdout technique achieves 80% and 78.57% accuracy on the 12- and 24-feature AAR datasets, respectively, exceeding the performance obtained with the original feature sets (i.e., without any feature selection).

81 citations


Journal ArticleDOI
TL;DR: This work develops an approximate Bayesian computation model-updating framework in which the Bhattacharyya distance is fully embedded, building an approximate likelihood function upon the distance within a two-step Bayesian updating procedure.

63 citations


Journal ArticleDOI
TL;DR: This work proposes a novel graph-kernel-based link prediction method, which predicts links by comparing user similarity via the structural information of signed social networks, and achieves significantly higher link-prediction accuracy and F1-score than existing methods.

57 citations


Journal ArticleDOI
TL;DR: An automatic apnea detection scheme is proposed that uses a single-lead electroencephalography (EEG) signal to discriminate between apnea patients and healthy subjects, as well as to tackle the difficult task of classifying apnea and non-apnea events within an apnea patient's recording.
Abstract: Sleep apnea, a serious sleep disorder affecting a large population, causes disruptions in breathing during sleep. In this paper, an automatic apnea detection scheme is proposed that uses a single-lead electroencephalography (EEG) signal to discriminate between apnea patients and healthy subjects, as well as to tackle the difficult task of classifying apnea and non-apnea events within an apnea patient's recording. A unique multiband, subframe-based feature extraction scheme is developed to capture the feature variation pattern within a frame of EEG data, which is shown to exhibit significantly different characteristics in apnea and non-apnea frames. Such within-frame feature variation can be well represented by statistical measures and characteristic probability density functions. It is found that Rician model parameters, along with some statistical measures, offer very robust feature quality in terms of standard performance criteria, such as the Bhattacharyya distance and the geometric separability index. For classification, the proposed features are fed to a K-nearest-neighbor classifier. Extensive experiments and analysis on three different publicly available databases show that the proposed method offers superior classification performance in terms of sensitivity, specificity, and accuracy.
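As an illustration of how the Bhattacharyya distance can act as a feature-quality criterion between two classes (as above for apnea versus non-apnea frames), a minimal sketch using the closed form for two univariate Gaussians; the feature values below are synthetic stand-ins, not data from the paper.

```python
import numpy as np

def bhattacharyya_gaussian_1d(mu1, var1, mu2, var2):
    """Closed-form Bhattacharyya distance between two univariate Gaussians."""
    return (0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
            + 0.5 * np.log((var1 + var2) / (2.0 * np.sqrt(var1 * var2))))

rng = np.random.default_rng(0)
apnea_feature = rng.normal(2.0, 1.0, 500)    # synthetic "apnea frame" feature
normal_feature = rng.normal(0.0, 1.0, 500)   # synthetic "non-apnea frame" feature
d = bhattacharyya_gaussian_1d(apnea_feature.mean(), apnea_feature.var(),
                              normal_feature.mean(), normal_feature.var())
# a larger distance indicates a more separable (higher-quality) feature
```

A distance of zero means the two class distributions coincide; larger values indicate lower Bayes error, which is why such scores serve as feature-quality criteria.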

44 citations


Journal ArticleDOI
TL;DR: A method coupling variational mode decomposition (VMD) with the whale optimization algorithm (WOA) is proposed and demonstrated for noise reduction in lidar signals; the WOA is used to obtain the optimal combination of the VMD parameters, namely the decomposition mode number K and the quadratic penalty α.
Abstract: Although lidar is a powerful active remote sensing technology, lidar echo signals are easily contaminated by noise, particularly under strong background light, which severely limits the retrieval accuracy and the effective detection range of the lidar system. In this study, a coupled variational mode decomposition (VMD) and whale optimization algorithm (WOA) approach for noise reduction in lidar signals is proposed and demonstrated. The optimal combination of the VMD parameters, the decomposition mode number K and the quadratic penalty α, was obtained using the WOA and proved critical for obtaining satisfactory results with VMD denoising. The Bhattacharyya distance was then applied to identify the relevant modes, which were reconstructed to achieve noise filtering. Simulation results show that the performance of the proposed VMD-WOA method is superior to that of the wavelet transform, empirical mode decomposition, and its variants. Experimentally, the method was successfully used to filter a lidar echo signal: the signal-to-noise ratio of the denoised signal increased to 23.92 dB, and the detection range was extended from 6 to 10 km.
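The mode-selection step can be sketched as measuring the Bhattacharyya distance between amplitude histograms of each decomposed mode and the noisy input, then keeping the closest mode(s). The stand-in "modes" below are synthetic (no actual VMD is implemented here), and the histogram binning is an illustrative choice, not the paper's configuration.

```python
import numpy as np

def bhattacharyya_hist(x, y, bins=32):
    """Bhattacharyya distance between two signals via shared-range histograms."""
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    p = np.histogram(x, bins=bins, range=(lo, hi))[0].astype(float)
    q = np.histogram(y, bins=bins, range=(lo, hi))[0].astype(float)
    p /= p.sum()
    q /= q.sum()
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient in [0, 1]
    return -np.log(max(bc, 1e-12))

t = np.linspace(0.0, 1.0, 2000)
echo = np.sin(2 * np.pi * 5 * t)                        # idealized lidar echo
noisy = echo + 0.3 * np.random.default_rng(1).normal(size=t.size)
modes = [echo, 0.3 * np.random.default_rng(2).normal(size=t.size)]  # stand-in VMD modes
dists = [bhattacharyya_hist(m, noisy) for m in modes]
relevant = modes[int(np.argmin(dists))]  # the signal-like mode lies closest to the input
```

Modes with a small distance to the input are treated as signal-bearing and summed to reconstruct the denoised signal; noise-dominated modes are discarded.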

40 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a stochastic sensitivity analysis framework using the Bhattacharyya distance as a novel uncertainty quantification metric, which is used to provide a quantitative description of the P-box in a two-level procedure for both aleatory and epistemic uncertainties.

31 citations


Journal ArticleDOI
TL;DR: A variational framework is developed to track moving objects in surgery videos, together with a robust energy functional based on the Bhattacharyya coefficient that matches the target region in the first frame of the input sequence with the subsequent frames via a similarity metric.
Abstract: Surgical procedures such as laparoscopic and robotic surgeries are popular since they are minimally invasive in nature and use miniaturized surgical instruments for small incisions. Tracking the instruments (graspers, needle drivers) and the field of view from the stereoscopic camera during surgery could further help surgeons remain focused and reduce the probability of mistakes. Tracking is widely used in computerized video surveillance, traffic monitoring, military surveillance systems, and vehicle navigation. Despite numerous efforts over the last few years, object tracking remains an open research problem, mainly due to motion blur, image noise, lack of image texture, and occlusion. Most existing object tracking methods are time-consuming and less accurate when the input video contains a high volume of information and a larger number of instruments. This paper presents a variational framework to track the motion of moving objects in surgery videos. The key contributions are as follows: (1) a denoising method using stochastic resonance in the maximal overlap discrete wavelet transform is proposed, and (2) a robust energy functional based on the Bhattacharyya coefficient is developed to match the target region in the first frame of the input sequence with the subsequent frames using a similarity metric. A modified affine-transformation-based registration is used to estimate the motion of the features, followed by an active-contour-based segmentation method to converge the contour resulting from the registration process. The proposed method has been evaluated on publicly available databases, with satisfactory results. The overlap index (OI) is used to evaluate tracking performance; the maximum OI is found to be 76% and 88% on the private and public data sequences, respectively.
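The Bhattacharyya-coefficient matching at the heart of such trackers can be sketched as follows: build a normalized histogram of the target region in the first frame, then score candidate regions in later frames with the common histogram-matching metric d = sqrt(1 - rho), where rho is the Bhattacharyya coefficient. The toy regions below are synthetic, not surgical frames.

```python
import numpy as np

def hist_model(region, bins=16):
    """Normalized intensity histogram of an image region (the target model)."""
    h, _ = np.histogram(region, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def bhattacharyya_metric(p, q):
    """Histogram-matching distance d = sqrt(1 - rho), rho = Bhattacharyya coefficient."""
    rho = np.sum(np.sqrt(p * q))
    return np.sqrt(max(1.0 - rho, 0.0))

rng = np.random.default_rng(0)
target = rng.beta(2, 5, size=(32, 32))            # target region in frame 1
candidate_same = np.clip(target + 0.02 * rng.normal(size=target.shape), 0, 1)
candidate_other = rng.beta(5, 2, size=(32, 32))   # mismatched background region

p = hist_model(target)
d_same = bhattacharyya_metric(p, hist_model(candidate_same))
d_other = bhattacharyya_metric(p, hist_model(candidate_other))
# the drifted target region scores a much smaller distance than the background
```

A tracker minimizes this distance over candidate locations in each new frame; in the variational setting above the same coefficient drives the energy functional.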

30 citations


Journal ArticleDOI
TL;DR: A strategy for constructing, learning, and performing inference with an HMM for gene selection, which leads to higher performance in cancer classification, together with a powerful procedure for combining different feature selection methods that enables more robust classification in real-world applications.

26 citations


Journal ArticleDOI
TL;DR: In this paper, a linear discriminant analysis (LDA) criterion is proposed via Bhattacharyya error bound estimation, based on a novel L1-norm formulation (L1BLDA) and an L2-norm formulation (L2BLDA).
Abstract: In this paper, we propose a novel linear discriminant analysis (LDA) criterion via Bhattacharyya error bound estimation, based on the L1-norm (L1BLDA) and the L2-norm (L2BLDA). Both L1BLDA and L2BLDA maximize the between-class scatter, measured by the weighted pairwise distances of class means, while minimizing the within-class scatter under the L1-norm and L2-norm, respectively. The proposed models avoid the small sample size (SSS) problem and have no rank limit, both of which may be encountered in LDA. It is worth mentioning that the L1-norm gives L1BLDA robust performance, and that L1BLDA is solved through an effective non-greedy alternating direction method of multipliers (ADMM), where all the projection vectors can be obtained at once. In addition, the weighting constants of L1BLDA and L2BLDA between the between-class and within-class terms are determined by the data involved, which makes L1BLDA and L2BLDA more adaptive. Experimental results on benchmark data sets as well as handwritten digit databases demonstrate the effectiveness of the proposed methods.

21 citations


Journal ArticleDOI
TL;DR: The proposed procedure is based on a k-means algorithm in which the distance between the curves is measured with a metric that generalizes the Mahalanobis distance in Hilbert spaces, considering the correlation and the variability along all the components of the functional data.
Abstract: This paper proposes a clustering procedure for samples of multivariate functions in (L^2(I))^J, with J ≥ 1. This method is based on a k-means algorithm in which the distance between the curves is measured with a metric that generalizes the Mahalanobis distance in Hilbert spaces, considering the correlation and the variability along all the components of the functional data. The proposed procedure has been studied in simulation and compared with the k-means based on other distances typically adopted for clustering multivariate functional data. In these simulations, it is shown that the k-means algorithm with the generalized Mahalanobis distance provides the best clustering performances, both in terms of mean and standard deviation of the number of misclassified curves. Finally, the proposed method has been applied to two case studies, concerning ECG signals and growth curves, where the results obtained in simulation are confirmed and strengthened.

19 citations


Journal ArticleDOI
TL;DR: A novel moving object tracking algorithm based on the modified flower pollination algorithm (MFPA) is proposed, which outperforms others in terms of efficiency and accuracy.

Journal ArticleDOI
TL;DR: As an application of information structures, measures of uncertainty for an IPSIS are investigated; to evaluate the performance of the proposed measures, an effectiveness analysis is given from a statistical standpoint.
Abstract: An information system is a database that represents relationships between objects and attributes. A set-valued information system is the generalized model of a single-valued information system. A set-valued information system that contains probability distributions and missing values is called an incomplete probability set-valued information system (IPSIS). Uncertainty measures are an effective tool for evaluation. This paper explores information structures and uncertainty measures in an IPSIS. Based on the Bhattacharyya distance, the distance between two objects in a given subsystem of an IPSIS is first proposed. Then, the tolerance relation on an object set, induced by a probability set-valued information system using this distance, is obtained. Next, the information structure of this subsystem is introduced as a set vector. Moreover, the dependence between two information structures is studied by means of the inclusion degree. Finally, as an application of information structures, measures of uncertainty for an IPSIS are investigated, and an effectiveness analysis is given from a statistical standpoint to evaluate the performance of the proposed measures. These results will be helpful for establishing a framework of granular computing and understanding the essence of uncertainty in an IPSIS.

Journal ArticleDOI
TL;DR: The problem of discriminating 1D and 2D genuine signals from signals that have been downscaled is studied with the goal of quantifying the statistical distinguishability between these two hypotheses.
Abstract: The detection of rescaling operations represents an important task in multimedia forensics. While many effective heuristics have been proposed, there is no theory on the forensic detectability revealing the conditions of more or less reliable detection. We study the problem of discriminating 1D and 2D genuine signals from signals that have been downscaled with the goal of quantifying the statistical distinguishability between these two hypotheses. This is done by assuming known signal models and deriving the expressions of statistical distances that are linked to the hypothesis testing theory, namely, the symmetrized form of Kullback–Leibler divergence known as the Jeffreys divergence, and the Bhattacharyya divergence. The analysis is performed for varying parameters of both the genuine signal model (variance and one-step correlation) and the rescaling process (rescaling factor, interpolation kernel, grid shift, and anti-alias filter), thus allowing us to reveal the insights on their influence and interplay. In addition to the signal itself, we consider the signal transformations (prefilter and covariance matrix estimators) that are often involved in practical rescaling detectors, showing that they yield similar results in terms of distinguishability. Numerical tests on synthetic and real signals confirm the main observations from the theoretical analysis.
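Both divergences named above have closed forms under the Gaussian signal models assumed in such analyses, which is what makes the theoretical distinguishability tractable. A sketch for multivariate normals follows; the dimensions and parameters are illustrative, not the paper's signal models.

```python
import numpy as np

def kl_gauss(m0, S0, m1, S1):
    """KL(N0 || N1) for multivariate normals."""
    d = m0.size
    S1inv = np.linalg.inv(S1)
    dm = m1 - m0
    return 0.5 * (np.trace(S1inv @ S0) + dm @ S1inv @ dm - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def jeffreys(m0, S0, m1, S1):
    """Jeffreys divergence: the symmetrized Kullback-Leibler divergence."""
    return kl_gauss(m0, S0, m1, S1) + kl_gauss(m1, S1, m0, S0)

def bhattacharyya_gauss(m0, S0, m1, S1):
    """Bhattacharyya distance between two multivariate normals."""
    S = 0.5 * (S0 + S1)
    dm = m1 - m0
    return (0.125 * dm @ np.linalg.inv(S) @ dm
            + 0.5 * np.log(np.linalg.det(S)
                           / np.sqrt(np.linalg.det(S0) * np.linalg.det(S1))))

m0, S0 = np.zeros(2), np.eye(2)                          # "genuine" model
m1, S1 = np.array([1.0, 0.0]), np.array([[1.0, 0.5],     # "rescaled" model
                                         [0.5, 1.0]])
d_j = jeffreys(m0, S0, m1, S1)
d_b = bhattacharyya_gauss(m0, S0, m1, S1)
```

Larger values of either quantity mean the two hypotheses are easier to distinguish in a hypothesis test, which is how the analysis maps model parameters to forensic detectability.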

Proceedings ArticleDOI
05 Jun 2019
TL;DR: This work proposes the first deep learning approach to predict Satisfied User Ratio curves for a lossy image compression scheme, using a Siamese Convolutional Neural Network, feature pooling, fully connected regression-head, and transfer learning.
Abstract: The Satisfied User Ratio (SUR) curve for a lossy image compression scheme, e.g., JPEG, characterizes the probability distribution of the Just Noticeable Difference (JND) level, the smallest distortion level that can be perceived by a subject. We propose the first deep learning approach to predict such SUR curves. Instead of the direct approach of regressing the SUR curve itself for a given reference image, our model is trained on pairs of images, original and compressed. Relying on a Siamese Convolutional Neural Network (CNN), feature pooling, a fully connected regression-head, and transfer learning, we achieved a good prediction performance. Experiments on the MCL-JCI dataset showed a mean Bhattacharyya distance between the predicted and the original JND distributions of only 0.072.

Journal ArticleDOI
TL;DR: A new method for detecting text in blurred/non-blurred images that estimates the degree of blur based on contrast variations in neighboring pixels and a low-pass filter, yielding candidate pixels for deblurring.
Abstract: Text detection in video/images is challenging due to the presence of multiple types of blur caused by defocus and motion. In this paper, we present a new method for detecting text in blurred/non-blurred images. Unlike existing methods that use deblurring or classifiers, the proposed method estimates the degree of blur in images based on contrast variations in neighboring pixels and a low-pass filter, which yields candidate pixels for deblurring. We consider the gradient value of each pixel as the weight for the degree of blur. The proposed method then performs K-means clustering on the weighted values of candidate pixels to obtain text candidates irrespective of blur type. Next, the Bhattacharyya distance is used to extract the symmetry property of text in order to remove false text candidates, which provides text components. Further, the proposed method fixes a bounding box for each text component based on nearest-neighbor criteria and the direction of the text component. Experimental results on defocus, motion, and non-blurred images and on standard datasets of curved text show that the proposed method outperforms existing methods.

Journal ArticleDOI
TL;DR: Results show that the Bhattacharyya distance achieves the best detection result among all investigated metrics, and comparison with state-of-the-art methods demonstrates the effectiveness of the method in real applications.
Abstract: Automated change-point detection in EEG signals is becoming essential for monitoring health behaviors and health status in a wide range of clinical applications. This paper presents a structural time-series analysis to capture and characterize the dynamic behavior of EEG signals, and develops a method to detect EEG change points. For a given EEG signal, the proposed method operates as follows: 1) a sub-band pass filter is first designed to capture the frequency components that characterize the dynamic behavior of the data, and the power spectrum is extracted as the EEG feature; 2) together with a sliding-window technique, an automatic 'segment-to-segment' analysis of the EEG signal is developed, with null hypothesis testing for decision making. A central challenge of the proposed method is designing a distance metric compatible with the data and problem at hand. To this end, we first collect a variety of metrics from other areas that are potentially applicable to our problem, and then compare them on the considered EEG change-point detection task. Experiments are conducted on two different data sets. Results show that the Bhattacharyya distance achieves the best detection result among all investigated metrics. Meanwhile, comparison with state-of-the-art methods demonstrates the effectiveness of the method in real applications.
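The segment-to-segment idea can be sketched by sliding a window along the signal and flagging a boundary whenever the Bhattacharyya distance between amplitude histograms of adjacent windows exceeds a threshold. The window length, threshold, and synthetic signal below are illustrative assumptions, not the paper's settings (which use power-spectrum features and a formal hypothesis test).

```python
import numpy as np

def bhatt_hist(x, y, bins=24):
    """Bhattacharyya distance between histograms of two signal segments."""
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    p = np.histogram(x, bins=bins, range=(lo, hi))[0].astype(float)
    q = np.histogram(y, bins=bins, range=(lo, hi))[0].astype(float)
    p /= p.sum()
    q /= q.sum()
    return -np.log(max(np.sum(np.sqrt(p * q)), 1e-12))

def detect_changes(signal, win=200, threshold=0.5):
    """Compare adjacent windows and flag boundaries that differ sharply."""
    hits = []
    for start in range(win, len(signal) - win + 1, win):
        if bhatt_hist(signal[start - win:start], signal[start:start + win]) > threshold:
            hits.append(start)
    return hits

rng = np.random.default_rng(3)
sig = np.concatenate([rng.normal(0, 1, 1000), rng.normal(3, 1, 1000)])  # change at 1000
hits = detect_changes(sig)
```

The distance stays near zero for same-regime window pairs and jumps at the regime boundary, so a fixed threshold separates the two cases cleanly in this toy setting.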

Journal ArticleDOI
TL;DR: A novel methodology based on the marriage between the Bhattacharyya distance and the Johnson Lindenstrauss Lemma, a technique for dimension reduction, is illustrated, providing a simple yet powerful tool that allows comparisons between data-sets representing any two distributions.
Abstract: We develop a novel methodology based on the marriage between the Bhattacharyya distance, a measure of similarity across distributions of random variables, and the Johnson–Lindenstrauss Lemma, a technique for dimension reduction. The resulting technique is a simple yet powerful tool that allows comparisons between data-sets representing any two distributions. The degree to which different entities (markets, universities, hospitals, cities, groups of securities, etc.) have different distance measures of their corresponding distributions tells us the extent to which they are different, aiding participants looking for diversification or looking for more of the same thing. We demonstrate a relationship between covariance and distance measures based on a generic extension of Stein's Lemma. We consider an asset pricing application and then briefly discuss how this methodology lends itself to numerous market-structure studies and even applications outside the realm of finance / social sciences, illustrated with a biological application. We provide numerical illustrations using security prices, volumes, and volatilities of both these variables from six different countries.
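A rough sketch of the pipeline described above, with all dimensions and data synthetic: project high-dimensional observations through a random Johnson–Lindenstrauss map, fit a Gaussian to each projected data-set, and compare the fits with the closed-form Bhattacharyya distance.

```python
import numpy as np

rng = np.random.default_rng(7)
d_high, k = 100, 8                               # original and reduced dimensions
R = rng.normal(size=(d_high, k)) / np.sqrt(k)    # JL random projection matrix

A = rng.normal(0.0, 1.0, size=(500, d_high))     # "market A" observations (synthetic)
B = rng.normal(0.2, 1.5, size=(500, d_high))     # "market B" observations (synthetic)
A_low, B_low = A @ R, B @ R                      # dimension-reduced samples

def bhatt(m0, S0, m1, S1):
    """Bhattacharyya distance between two multivariate normals."""
    S = 0.5 * (S0 + S1)
    dm = m1 - m0
    return (0.125 * dm @ np.linalg.solve(S, dm)
            + 0.5 * (np.linalg.slogdet(S)[1]
                     - 0.5 * (np.linalg.slogdet(S0)[1] + np.linalg.slogdet(S1)[1])))

d_AB = bhatt(A_low.mean(0), np.cov(A_low.T), B_low.mean(0), np.cov(B_low.T))
d_AA = bhatt(A_low.mean(0), np.cov(A_low.T), A_low.mean(0), np.cov(A_low.T))
# d_AB > d_AA == 0: the projected distributions still separate the two entities
```

Projecting first makes the covariance estimates well conditioned from limited observations, which is the practical payoff of combining the two tools.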

Journal ArticleDOI
TL;DR: The effectiveness of the proposed hierarchical region merging method for partitioning a synthetic aperture radar (SAR) image into non-overlapping scene areas is demonstrated by comparing it qualitatively and quantitatively with several state-of-the-art methods.
Abstract: In this paper, a hierarchical region merging method is proposed for partitioning a synthetic aperture radar (SAR) image into non-overlapping scene areas, such as forest regions, urban regions, and agricultural regions. The proposed method consists of two main steps: initial over-segmentation and hierarchical region merging. The over-segmentation applies the watershed transform to the thresholded Bhattacharyya-coefficient-based edge strength map (BESM), and the hierarchical region merging applies a new region merging cost weighted by a gradually increasing oriented edge strength penalty. A defect of the ratio-based edge detector widely used for homogeneous SAR images is that it fails to distinguish transitions between uniform and texture regions in high-spatial-resolution SAR images, yielding an initial over-segmentation with some regions straddling multiple uniform or texture areas. To overcome this, the Bhattacharyya coefficient is used in place of the ratio-based edge detector to extract the ESM of a SAR image, using a bi-rectangle-window configuration. Multi-scale windows are utilized to capture additional edge information. A new region merging cost is proposed based on the Kuiper's distance, weighted by a new gradually increasing oriented edge strength penalty term. The hierarchical region merging criterion is obtained as the strength of the edge penalty increases. The effectiveness of the proposed method is demonstrated by comparing it qualitatively and quantitatively with several state-of-the-art methods.

Journal ArticleDOI
Hongmin Gao1, Yang Yao1, Xiaoke Zhang1, Chenming Li1, Yang Qi1, Yongchang Wang1 
16 Mar 2019-Sensors
TL;DR: Experimental results reveal that, compared with dimensionality reduction methods that use only information entropy or only the Bhattacharyya distance as the evaluation criterion, the proposed method reaches the global optimum more easily, thereby obtaining a better band combination and higher classification accuracy.
Abstract: Information entropy and interclass separability are adopted as the evaluation criteria for dimension reduction of hyperspectral remote sensing data. However, it is rather one-sided to use either information entropy or interclass separability alone as the evaluation criterion, as this leads to a single-objective problem. In this case, the chosen optimal band combination may be unfavorable for improving subsequent classification accuracy. Thus, in this work, inter-band correlation is taken as the premise, and information entropy and interclass separability are combined as the evaluation criterion for dimension reduction. The multi-objective particle swarm optimization algorithm, which is easy to implement and characterized by rapid convergence, is adopted to search for the optimal band combination. In addition, game theory is introduced into the dimension reduction to coordinate potential conflicts when both information entropy and interclass separability are used to search for the optimal band combination. Experimental results reveal that, compared with dimensionality reduction methods that use only information entropy or only the Bhattacharyya distance as the evaluation criterion, and with the method combining multiple criteria into one by weighting, the proposed method reaches the global optimum more easily, thereby obtaining a better band combination and higher classification accuracy.

Journal ArticleDOI
TL;DR: Quantitative results on both simulated and real ultrasound images show the effectiveness of the proposed non-local means filter using Bhattacharyya distance compared to other well-known methods.
Abstract: Speckle, a multiplicative noise, is an inherent property of ultrasound imaging. It reduces the contrast and resolution of ultrasound images, thus creating a negative effect on image interpretation and diagnostic tasks. In this paper, a modified non-local means filter using the Bhattacharyya distance is proposed. In non-local means, a noise-free pixel is estimated as a weighted mean of image pixels, where the weights are calculated according to the similarity between image patches, and the similarity between patches is measured by comparing pixel intensities. In this work, instead of comparing pixel intensities to measure similarity, blocks are compared based on the Bhattacharyya distance. Quantitative results on both simulated and real ultrasound images show the effectiveness of the proposed method compared to other well-known methods.
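A 1-D sketch of the block-based idea: summarize each patch by a normalized histogram, take the Bhattacharyya distance between histograms as the patch dissimilarity, and form exponential weights from it. The weight form and all parameters here are assumptions for illustration, not the paper's exact filter.

```python
import numpy as np

def nlm_bhattacharyya(signal, patch=5, search=15, h=0.3, bins=8):
    """Non-local means for a 1-D signal where patch similarity is the
    Bhattacharyya distance between normalized patch histograms."""
    n = len(signal)
    pad = patch // 2
    x = np.pad(signal, pad, mode="reflect")
    lo, hi = signal.min(), signal.max()
    out = np.empty(n)

    def hist(p):
        counts = np.histogram(p, bins=bins, range=(lo, hi))[0].astype(float)
        return counts / counts.sum()

    for i in range(n):
        hp = hist(x[i:i + patch])
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            bc = np.sum(np.sqrt(hp * hist(x[j:j + patch])))
            w = np.exp(np.log(max(bc, 1e-12)) / h)   # exp(-D_B / h), D_B = -log(bc)
            num += w * signal[j]
            den += w
        out[i] = num / den
    return out

rng = np.random.default_rng(5)
clean = np.repeat([0.0, 1.0], 100)               # piecewise-constant scan line
noisy = clean + 0.2 * rng.normal(size=clean.size)
denoised = nlm_bhattacharyya(noisy)
```

Patches from the same flat region have overlapping histograms and large weights, while cross-edge patches get weights near zero, so edges are preserved while noise averages out.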

Proceedings ArticleDOI
01 Sep 2019
TL;DR: The Jeffries-Matusita (JM) distance improves on the Bhattacharyya distance by normalizing it between 0 and 2; this can provide good intuition on how well suited a dataset is for classification and indicate whether further feature collection is needed.
Abstract: Feature selection is one of the most important preprocessing steps in machine learning. It can be broadly divided into search-based and ranking-based methods. Ranking-based methods are very popular because they need much less computational power. There are many different ways to rank features; one way to measure the effectiveness of a feature is to evaluate its ability to separate the classes involved. Such interclass separability measures can be used directly as a feature ranking tool for binary classification problems. The Bhattacharyya distance, the most popular among them, has mainly been used in a recursive setup to select good-quality feature subsets. The Jeffries-Matusita (JM) distance improves on the Bhattacharyya distance by normalizing it between 0 and 2. In this paper, we rank features based on the JM distance. In experiments conducted over 24 public datasets, the results are comparable with mutual-information-, Relief-, and chi-squared-based measures, but are obtained in much less time. The JM distance also provides some intuition about the dataset prior to any feature selection or machine learning. A comparison of classification accuracy and JM scores on these datasets provides good intuition on how well suited a dataset is for classification and points out whether further feature collection is needed.
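A sketch of JM-based ranking for a binary problem, assuming per-class Gaussian feature distributions: compute the Bhattacharyya distance B for each feature and rank by JM = 2(1 - exp(-B)), which lies in [0, 2]. The data below are synthetic.

```python
import numpy as np

def jm_distance(x_pos, x_neg):
    """Jeffries-Matusita distance of one feature under per-class Gaussian fits."""
    m1, v1 = x_pos.mean(), x_pos.var() + 1e-12
    m2, v2 = x_neg.mean(), x_neg.var() + 1e-12
    b = (0.25 * (m1 - m2) ** 2 / (v1 + v2)
         + 0.5 * np.log((v1 + v2) / (2.0 * np.sqrt(v1 * v2))))
    return 2.0 * (1.0 - np.exp(-b))          # normalized to [0, 2]

rng = np.random.default_rng(11)
n = 400
y = rng.integers(0, 2, n).astype(bool)
X = rng.normal(size=(n, 3))
X[y, 0] += 2.0                               # strongly informative feature
X[y, 1] += 0.8                               # weakly informative feature
scores = [jm_distance(X[y, j], X[~y, j]) for j in range(3)]
ranking = np.argsort(scores)[::-1]           # most separable feature first
```

Because the score is bounded, JM values near 2 signal near-perfect separability, while values near 0 across all features suggest further feature collection is needed, which is the dataset-level intuition the abstract describes.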

Proceedings ArticleDOI
01 Jun 2019
TL;DR: A hierarchical clustering framework for categorical data based on multinomial and Bernoulli mixture models, motivated by pattern recognition applications where the features are binary or integer-valued.
Abstract: Agglomerative hierarchical clustering methods based on Gaussian probability models have recently been shown to be efficient in different applications. However, the emergence of pattern recognition applications where the features are binary or integer-valued demands extending research efforts to such data types. This paper proposes a hierarchical clustering framework for categorical data based on multinomial and Bernoulli mixture models. We compare two widely used density-based distances, namely the Bhattacharyya distance and the Kullback–Leibler divergence. The merits of our proposed framework are shown through extensive experiments on clustering text and images using the bag-of-visual-words model.

Journal ArticleDOI
TL;DR: The bit-channel ordering for a given code block length is used to generate additional bit-channel ordering relationships for larger block lengths, generalizing previously known POs, and the threshold behavior of the Bhattacharyya parameters of some bit-channels is shown by approximating the threshold values.
Abstract: We study partial orders (POs) for the synthesized bit-channels of polar codes. First, we give an alternative proof of an existing PO for bit-channels with the same Hamming weight and use the underlying idea to extend the bit-channel ordering to some additional cases. In particular, the bit-channel ordering for a given code block length is used to generate additional bit-channel ordering relationships for larger block lengths, generalizing previously known POs. Next, we consider POs especially for the binary erasure channel (BEC). We identify a symmetry property of the Bhattacharyya parameters of complementary bit-channel pairs on the BEC and provide a condition for the alignment of polarized sets of bit-channels for the BEC and general binary-input memoryless symmetric (BMS) channels. Numerical examples and further properties about the POs for the bit-channels with different Hamming weights are provided to illustrate the new POs. The bit-channels with universal ordering positions, which are independent of the channel erasure probability, are verified for all of the code block lengths. Finally, we show the threshold behavior of the Bhattacharyya parameters of some bit-channels by approximating the threshold values. The corresponding value for a bit-channel can be used to determine whether it is good or bad when the underlying channel is known.
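On the BEC, the Bhattacharyya parameter of a bit-channel equals its erasure probability and evolves under the standard polarization recursion Z⁻ = 2Z - Z², Z⁺ = Z². A minimal sketch of computing the parameters and selecting good bit-channels follows; the block length and design erasure rate are illustrative choices.

```python
def bec_bhattacharyya(n_levels, eps):
    """Bhattacharyya parameters of the 2**n_levels bit-channels of a polar
    code over a BEC with erasure probability eps."""
    z = [eps]
    for _ in range(n_levels):
        # each channel splits into a worse (minus) and a better (plus) child
        z = [v for w in z for v in (2 * w - w * w, w * w)]
    return z

z = bec_bhattacharyya(6, 0.5)                         # block length N = 64
good = sorted(range(len(z)), key=z.__getitem__)[:32]  # rate-1/2 information set
# at eps = 0.5, complementary bit-channels satisfy z[i] + z[-1 - i] == 1,
# one concrete instance of the complementary-pair symmetry discussed above
```

Bit-channels with small Z carry information; those with Z near 1 are frozen. A design like this is exact for the BEC, since Z is the erasure probability itself rather than a bound.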

Journal ArticleDOI
TL;DR: A modular framework for accurate prostate volume segmentation in TRUS is presented, broadening the range of available strategies to tackle this open problem.
Abstract: Objective : We present a new hybrid edge and region-based parametric deformable model, or active surface , for prostate volume segmentation in transrectal ultrasound (TRUS) images. Methods : Our contribution is threefold. First, we develop a new edge detector derived from the radial bas-relief approach, allowing for better scalar prostate edge detection in low contrast configurations. Second, we combine an edge-based force derived from the proposed edge detector with a new region-based force driven by the Bhattacharyya gradient flow and adapted to the case of parametric active surfaces. Finally, we develop a quasi-automatic initialization technique for deformable models by analyzing the profiles of the proposed edge detector response radially to obtain initial landmark points toward which an initial surface model is warped. Results : We validate our method on a set of 36 TRUS images for which manual delineations were performed by two expert radiation oncologists, using a wide variety of quantitative metrics. The proposed hybrid model achieved state-of-the-art segmentation accuracy. Conclusion : Results demonstrate the interest of the proposed hybrid framework for accurate prostate volume segmentation. Significance : This paper presents a modular framework for accurate prostate volume segmentation in TRUS, broadening the range of available strategies to tackle this open problem.

Journal ArticleDOI
TL;DR: The proposed system with HSH provides better face recognition performance for the target than using Spherical Harmonics, and its performance is analysed against existing face recognition techniques.
Abstract: The task of face recognition in real-world scenarios is still a challenging one. Many techniques exist for the recognition of faces in videos. The recognition task may be computationally easier, but it is susceptible to pose variations, lighting conditions, etc. This paper focuses on recognition of faces from multi-view videos using the combination of particle filtering with an Immune Genetic Algorithm (IGA) and HSH, which is insensitive to pose variations. Particle filtering along with the IGA efficiently tracks the target using an immune-system mechanism, and the recognition phase is then carried out using HSH. For recognition in video, the ensemble feature similarity is calculated, measured via the limiting Bhattacharyya distance of features in the reproducing kernel Hilbert space. The proposed system with HSH provides better performance for recognizing the face of the target than using Spherical Harmonics (SH), and its performance is analysed against existing face recognition techniques.

Journal ArticleDOI
TL;DR: A new interactive image segmentation method based on superpixels, must-link and cannot-link constraints, and improved normalised cuts is proposed, which greatly improves the accuracy of segmentation.
Abstract: Effective and efficient image segmentation is an important task in computer vision. As fully automatic image segmentation usually struggles with natural images, interactive schemes are an excellent solution. Here, to overcome the defects of SSNCut in quality and speed, the authors propose a new interactive image segmentation method based on superpixels, must-link and cannot-link constraints, and improved normalised cuts. The main contributions of their work are as follows: first, the similarity between two superpixel regions is calculated using the Bhattacharyya distance; second, the weights of the must-link and cannot-link constraints are adaptively modified. Compared to SSNCut, their method greatly improves the accuracy of segmentation. Comparative experiments on open datasets show that the proposed method obtains better results than SSNCut, GrabCut in one cut, interactive segmentation using binary partition trees, interactive graph cut, seeded region growing, and simple interactive object extraction.

Journal ArticleDOI
01 Apr 2019
TL;DR: In this article, a general construction is given based on defining a directional derivative of a function $\phi$ from one distribution to the other, whose concavity or strict concavity influences the properties of the resulting divergence.
Abstract: In previous work the authors defined the k-th order simplicial distance between probability distributions, which arises naturally from a measure of dispersion based on the squared volume of random simplices of dimension k. This theory is embedded in the wider theory of divergences and distances between distributions, which includes the Kullback–Leibler, Jensen–Shannon and Jeffreys–Bregman divergences and the Bhattacharyya distance. A general construction is given based on defining a directional derivative of a function $\phi$ from one distribution to the other, whose concavity or strict concavity influences the properties of the resulting divergence. For the normal distribution these divergences can be expressed as matrix formulas in the (multivariate) means and covariances. Optimal experimental design criteria contribute a range of functionals applied to non-negative, or positive-definite, information matrices. Not all of these can distinguish normal distributions, but sufficient conditions are given. The k-th order simplicial distance is revisited from this perspective, and the results are used to test empirically the identity of means and covariances.
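For the multivariate normal case mentioned above, the Bhattacharyya distance has a well-known closed form in the means and covariances: $D_B = \frac{1}{8}(\mu_1-\mu_2)^\top \Sigma^{-1}(\mu_1-\mu_2) + \frac{1}{2}\ln\!\big(\det\Sigma / \sqrt{\det\Sigma_1 \det\Sigma_2}\big)$ with $\Sigma = (\Sigma_1+\Sigma_2)/2$. A minimal sketch of that formula (not the authors' simplicial-distance code):

```python
import numpy as np

def bhattacharyya_normal(mu1, cov1, mu2, cov2):
    """Closed-form Bhattacharyya distance between two multivariate normals."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov = 0.5 * (cov1 + cov2)                          # average covariance
    diff = mu1 - mu2
    maha = 0.125 * diff @ np.linalg.solve(cov, diff)   # Mahalanobis-type term
    _, logdet = np.linalg.slogdet(cov)                 # log-determinants for stability
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    return maha + 0.5 * (logdet - 0.5 * (logdet1 + logdet2))

# With unit covariances, only the mean term survives: D_B = ||mu1 - mu2||^2 / 8
mu1, cov1 = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
mu2, cov2 = [2.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
d_b = bhattacharyya_normal(mu1, cov1, mu2, cov2)   # 4 / 8 = 0.5
```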


Journal ArticleDOI
TL;DR: Experimental results show that integrating the proposed pixel-wise similarity into dense image-descriptor construction yields improved peak signal-to-noise ratio and higher tracking accuracy in the multi-layered motion estimation problem, and that the proposed similarity measures give the best performance on all quantitative measures in unsupervised superpixel-based image segmentation of the MSRC and BSD300 datasets.
Abstract: This paper proposes a novel probabilistic representation of color image (PRCI) pixels and investigates its applications to similarity construction in motion estimation and image segmentation problems. The PRCI uses a mixture representation of the input image(s) as prior information and describes a given color pixel in terms of its membership in the mixture. Such a representation greatly simplifies estimating the probability density function from limited observations and allows a new probabilistic pixel-wise similarity measure to be derived from the continuous-domain Bhattacharyya coefficient. This yields a convenient expression of the similarity measure in terms of the pixel memberships. Furthermore, this pixel-wise similarity is extended to measure the similarity between two image regions. The usefulness of the proposed pixel- and region-wise similarities is demonstrated by incorporating them, respectively, into a dense image descriptor-based multi-layered motion estimation problem and an unsupervised image segmentation problem. Experimental results show that: 1) integrating the proposed pixel-wise similarity into dense image-descriptor construction yields improved peak signal-to-noise ratio performance and higher tracking accuracy in the multi-layered motion estimation problem, and 2) the proposed similarity measures give the best performance on all quantitative measures in the unsupervised superpixel-based image segmentation of the MSRC and BSD300 datasets.
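A rough sketch of the membership idea: if each pixel is represented by its posterior membership vector under a fitted mixture, two pixels can be compared with the discrete Bhattacharyya coefficient of those vectors. The 1-D two-component Gaussian mixture below is purely illustrative (assumed parameters, grayscale values in [0, 1]); the paper works with color mixtures and a continuous-domain coefficient.

```python
import numpy as np

def memberships(x, means, sigmas, weights):
    """Posterior membership of a toy 1-D pixel value in a Gaussian mixture."""
    dens = weights * np.exp(-0.5 * ((x - means) / sigmas) ** 2) / sigmas
    return dens / dens.sum()

def pixel_similarity(x1, x2, means, sigmas, weights):
    """Discrete Bhattacharyya coefficient between membership vectors, in [0, 1]."""
    m1 = memberships(x1, means, sigmas, weights)
    m2 = memberships(x2, means, sigmas, weights)
    return float(np.sum(np.sqrt(m1 * m2)))

# Assumed two-component mixture, fitted to the image beforehand
means = np.array([0.2, 0.8])
sigmas = np.array([0.1, 0.1])
weights = np.array([0.5, 0.5])

s_close = pixel_similarity(0.21, 0.25, means, sigmas, weights)  # near 1
s_far = pixel_similarity(0.21, 0.79, means, sigmas, weights)    # much smaller
```

Pixels drawn from the same mixture component get a coefficient close to 1; pixels dominated by different components get one close to 0, even before any spatial information is used.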

Journal ArticleDOI
TL;DR: Bhattacharyya bounds on the classification error probability between two species, with Raman and binary compressed Raman measurements limited by Poisson photon noise, are analyzed; they exhibit the relevant physical parameters and lead to a simple expression for the minimal number of photons necessary to upper-bound the optimal classification error probability.
Abstract: Bhattacharyya bounds on the classification error probability between two species with Raman and binary compressed Raman measurements limited by Poisson photon noise are analyzed. They make the relevant physical parameters explicit and lead to a simple expression for the minimal number of photons necessary to upper-bound the optimal classification error probability.
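For independent Poisson photon counts the Bhattacharyya coefficient factorizes across spectral channels, and its logarithm is $-\frac{1}{2}\sum_i(\sqrt{\lambda_{1,i}}-\sqrt{\lambda_{2,i}})^2$, which makes the role of the photon budget explicit: scaling all rates by $N$ tightens the error bound exponentially. The sketch below uses this standard bound with toy spectra and equal priors, not the paper's data or its binary compression scheme.

```python
import math

def poisson_bhattacharyya_bound(lams1, lams2, prior1=0.5):
    """Bhattacharyya upper bound on the two-class error probability for
    independent Poisson photon counts (one mean rate per spectral channel):
    P_e <= sqrt(P1 * P2) * exp(-1/2 * sum_i (sqrt(l1_i) - sqrt(l2_i))^2)."""
    log_bc = -0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                        for a, b in zip(lams1, lams2))
    return math.sqrt(prior1 * (1.0 - prior1)) * math.exp(log_bc)

# Toy per-channel mean photon rates standing in for two Raman spectra
s1 = [1.0, 2.0, 0.5]
s2 = [1.2, 1.6, 0.7]

# Multiplying both spectra by the photon budget N tightens the bound exponentially
bounds = [poisson_bhattacharyya_bound([n * x for x in s1],
                                      [n * x for x in s2])
          for n in (1, 10, 100)]
```

Setting the bound equal to a target error probability and solving for the budget factor gives exactly the kind of "minimal number of photons" expression the abstract refers to.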