
Showing papers on "Euclidean distance published in 2013"


Proceedings ArticleDOI
01 Dec 2013
TL;DR: This paper proposes fast super-resolution methods that make no compromise on quality: it supports the use of sparse learned dictionaries in combination with neighbor embedding methods and proposes anchored neighborhood regression.
Abstract: Recently there have been significant advances in image upscaling or image super-resolution based on a dictionary of low and high resolution exemplars. The running time of the methods is often ignored despite the fact that it is a critical factor for real applications. This paper proposes fast super-resolution methods while making no compromise on quality. First, we support the use of sparse learned dictionaries in combination with neighbor embedding methods. In this case, the nearest neighbors are computed using the correlation with the dictionary atoms rather than the Euclidean distance. Moreover, we show that most of the current approaches reach top performance for the right parameters. Second, we show that using global collaborative coding has considerable speed advantages, reducing the super-resolution mapping to a precomputed projective matrix. Third, we propose the anchored neighborhood regression: we anchor the neighborhood embedding of a low resolution patch to the nearest atom in the dictionary and precompute the corresponding embedding matrix. These proposals are contrasted with current state-of-the-art methods on standard images. We obtain similar or improved quality and one or two orders of magnitude speed improvements.
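The correlation-based neighbor search mentioned above (correlation with dictionary atoms instead of Euclidean distance) can be sketched in a few lines. This is a minimal illustration with random stand-ins for the dictionary and patch, not the authors' code; for unit-norm atoms the two rankings provably coincide.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dictionary of 64 atoms (unit-norm columns) and one LR patch feature.
D = rng.standard_normal((36, 64))
D /= np.linalg.norm(D, axis=0)          # atoms normalised, as in sparse dictionaries
x = rng.standard_normal(36)

# Neighbour search by correlation with the atoms (one matrix product) ...
corr_nn = np.argsort(-(D.T @ x))[:5]

# ... versus the classical Euclidean nearest neighbours.
eucl_nn = np.argsort(np.linalg.norm(D - x[:, None], axis=0))[:5]

# For unit-norm atoms the two rankings coincide, since
# ||d - x||^2 = ||x||^2 + 1 - 2 d.T x  is monotone in the correlation d.T x.
assert list(corr_nn) == list(eucl_nn)
```

The practical gain is that the correlation ranking needs only one matrix product, whereas the Euclidean ranking needs a full difference tensor.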

1,276 citations


Journal ArticleDOI
TL;DR: The results obtained by implementing the k-means algorithm using three different distance metrics (Euclidean, Manhattan, and Minkowski) are discussed, along with a comparative study against the basic k-means algorithm implemented with the Euclidean distance metric for two-dimensional data.
Abstract: The power of the k-means algorithm is due to its computational efficiency and the ease with which it can be used. Distance metrics are used to find similar data objects, which leads to robust algorithms for data mining functionalities such as classification and clustering. In this paper, we discuss the results obtained by implementing the k-means algorithm using three different distance metrics (Euclidean, Manhattan, and Minkowski), along with a comparative study against the basic k-means algorithm, which is implemented with the Euclidean distance metric for two-dimensional data. Results are displayed with the help of histograms.
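A minimal sketch of k-means with a pluggable Minkowski-family metric, illustrating the comparison the paper runs. This is a toy, not the paper's implementation: seeding one center in each blob is a simplification to keep the example deterministic.

```python
import numpy as np

def minkowski(a, b, p):
    # p = 1 -> Manhattan, p = 2 -> Euclidean, other p -> general Minkowski
    return np.sum(np.abs(a - b) ** p, axis=-1) ** (1.0 / p)

def kmeans(X, centers, p=2, iters=50):
    centers = centers.astype(float).copy()
    for _ in range(iters):
        # (n, k) distance table between every point and every center
        d = minkowski(X[:, None, :], centers[None, :, :], p)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels, centers

# Two well-separated 2-D blobs; any of the three metrics recovers them.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
for p in (1, 2, 3):
    labels, _ = kmeans(X, X[[0, 50]], p=p)     # one seed per blob
    assert labels[:50].min() == labels[:50].max()   # blob 1 is one cluster
    assert labels[50:].min() == labels[50:].max()   # blob 2 is the other
    assert labels[0] != labels[50]
```

On well-separated data the choice of p barely matters, which is consistent with comparative studies of this kind; differences show up on elongated or noisy clusters.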

273 citations


Journal ArticleDOI
TL;DR: Experimental results show that the derived multiplicative update rules exhibited good convergence behavior, and BSS tasks for several music sources with two microphones and three instrumental parts were evaluated successfully.
Abstract: This paper presents new formulations and algorithms for multichannel extensions of non-negative matrix factorization (NMF). The formulations employ Hermitian positive semidefinite matrices to represent a multichannel version of non-negative elements. Multichannel Euclidean distance and multichannel Itakura-Saito (IS) divergence are defined based on appropriate statistical models utilizing multivariate complex Gaussian distributions. To minimize this distance/divergence, efficient optimization algorithms in the form of multiplicative updates are derived by using properly designed auxiliary functions. Two methods are proposed for clustering NMF bases according to the estimated spatial property. Convolutive blind source separation (BSS) is performed by the multichannel extensions of NMF with the clustering mechanism. Experimental results show that 1) the derived multiplicative update rules exhibited good convergence behavior, and 2) BSS tasks for several music sources with two microphones and three instrumental parts were evaluated successfully.
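For intuition about the multiplicative updates mentioned above, here are the classical single-channel Lee-Seung updates for the Euclidean NMF cost, which the paper's multichannel algorithm generalizes. This is a textbook sketch on random non-negative data, not the paper's multichannel derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
V = np.abs(rng.standard_normal((20, 30)))   # non-negative data (e.g. a spectrogram)
K = 4
W = np.abs(rng.standard_normal((20, K)))    # bases
H = np.abs(rng.standard_normal((K, 30)))    # activations

eps = 1e-12
err = [np.linalg.norm(V - W @ H)]
for _ in range(200):
    # Lee-Seung multiplicative updates for the Euclidean cost ||V - WH||_F^2;
    # non-negativity is preserved because every factor is non-negative.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
    err.append(np.linalg.norm(V - W @ H))

# The updates are monotonically non-increasing in the Euclidean cost,
# mirroring the "good convergence behavior" reported for the multichannel rules.
assert all(b <= a + 1e-9 for a, b in zip(err, err[1:]))
```

The auxiliary-function construction the paper uses plays the same role as the majorization argument behind these updates.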

263 citations


Journal ArticleDOI
TL;DR: The proposed K-Nearest Neighbor (KNN) algorithm, used as a classifier for detection of the QRS-complex in ECG, is evaluated on two manually annotated standard databases (the CSE and MIT-BIH Arrhythmia databases); the results clearly establish the KNN algorithm as reliable and accurate for QRS detection.

260 citations


Book
20 Dec 2013
TL;DR: This book discusses Euclidean Geometry, an Elementary Treatise on the Geometry of the Triangle and the Circle and its Subgeometries, and its Applications.
Abstract: Elements of Non-Euclidean Geometry; Advanced Euclidean Geometry (formerly Titled: Modern Geometry); Geometry: Euclid and Beyond; Euclidean Geometry and Transformations; Experiencing Geometry; Elementary Mathematics from a Higher Standpoint; Euclidean Geometry in Mathematical Olympiads; Introduction to Non-Euclidean Geometry; Advanced Euclidean Geometry; Advanced Euclidean Geometry; Problems and Solutions in Euclidean Geometry; A High School First Course in Euclidean Plane Geometry; Advanced Euclidean Geometry; A Simple Non-Euclidean Geometry and Its Physical Basis; Topics in Geometry; Euclidean Geometry; Advanced Euclidean Geometry; Modern Geometry; Advanced Euclidean Geometry; Methods for Euclidean Geometry; Problem-Solving and Selected Topics in Euclidean Geometry; Advanced Euclidean Geometry; College Geometry; Geometry Transformed: Euclidean Plane Geometry Based on Rigid Motions; Excursions in Advanced Euclidean Geometry; Axiomatic Geometry; Euclidean and Non-Euclidean Geometry, International Student Edition; Advanced Euclidean Geometry: An Elementary Treatise on the Geometry of the Triangle and the Circle; Taxicab Geometry; Exploring Advanced Euclidean Geometry with GeoGebra; Advanced Euclidean Geometry (formerly Titled: Modern Geometry); Exploring Geometry, Second Edition; Advanced Euclidean Geometry (Formerly: Modern Geometry, an Elementary Treatise on the Geometry of the Triangle and the Circle); Advanced Euclidean Geometry; Geometry of Sets and Measures in Euclidean Spaces; Geometry of Complex Numbers; Geometry by Its History; Advanced Euclidean Geometry; Plane and Solid Geometry; Advanced Euclidean Geometry; Euclidean Geometry and its Subgeometries.

186 citations


Journal ArticleDOI
TL;DR: A raster-based MC-SDSS is presented that combines the analytic hierarchy process (AHP) with compromise programming methods such as TOPSIS (technique for order preference by similarity to the ideal solution) and Ideal Point Methods.

143 citations


Journal ArticleDOI
TL;DR: These metrics are a perfect and computationally cheap replacement for the root-mean-square distance (RMSD) when one has to decide whether two noise contaminated configurations are identical or not.
Abstract: In order to characterize molecular structures we introduce configurational fingerprint vectors which are counterparts of quantities used experimentally to identify structures. The Euclidean distance between the configurational fingerprint vectors satisfies the properties of a metric and can therefore safely be used to measure dissimilarities between configurations in the high dimensional configuration space. In particular we show that these metrics are a perfect and computationally cheap replacement for the root-mean-square distance (RMSD) when one has to decide whether two noise contaminated configurations are identical or not. We introduce a Monte Carlo approach to obtain the global minimum of the RMSD between configurations, which is obtained from a global minimization over all translations, rotations, and permutations of atomic indices.
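A toy illustration of the fingerprint idea above. The fingerprint used here, the sorted eigenvalue spectrum of the interatomic distance matrix, is an assumption chosen for brevity rather than the paper's exact construction, but it shares the key property: invariance to translations, rotations, and permutations of atomic indices, so the Euclidean distance between fingerprints can replace a costly RMSD minimisation over those groups.

```python
import numpy as np

def fingerprint(coords):
    # Sorted eigenvalue spectrum of the interatomic distance matrix:
    # invariant to rotation, translation and atom re-indexing.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return np.sort(np.linalg.eigvalsh(d))

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))                 # 8 "atoms" in 3-D

# The same structure, rotated, translated and with atoms re-indexed.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
B = (A @ R.T + 1.5)[rng.permutation(8)]

# Identical configurations have (numerically) zero fingerprint distance ...
assert np.linalg.norm(fingerprint(A) - fingerprint(B)) < 1e-8
# ... while a genuinely different configuration is clearly separated.
C = rng.standard_normal((8, 3))
assert np.linalg.norm(fingerprint(A) - fingerprint(C)) > 1e-3
```

Computing this distance is a single eigendecomposition per structure, versus a global minimisation over rotations and permutations for the RMSD.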

138 citations


Journal ArticleDOI
TL;DR: This paper proposes a method to directly optimize the graph Laplacian in spectral hashing; the learned graph, which better represents similarity between samples, is then applied to SH for effective binary code learning.
Abstract: The ability of fast similarity search in a large-scale dataset is of great importance to many multimedia applications. Semantic hashing is a promising way to accelerate similarity search, which designs compact binary codes for a large number of images so that semantically similar images are mapped to close codes. Retrieving similar neighbors is then simply accomplished by retrieving images that have codes within a small Hamming distance of the code of the query. Among various hashing approaches, spectral hashing (SH) has shown promising performance by learning the binary codes with a spectral graph partitioning method. However, the Euclidean distance is usually used to construct the graph Laplacian in SH, which may not reflect the inherent distribution of the data. Therefore, in this paper, we propose a method to directly optimize the graph Laplacian. The learned graph, which can better represent similarity between samples, is then applied to SH for effective binary code learning. Meanwhile, our approach, unlike metric learning, can automatically determine the scale factor during the optimization. Extensive experiments are conducted on publicly available datasets and the comparison results demonstrate the effectiveness of our approach.
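The retrieval step described above (finding codes within a small Hamming distance of the query code) is cheap to implement. A sketch with random stand-in codes, stored one bit per array element for clarity:

```python
import numpy as np

rng = np.random.default_rng(0)
codes = rng.integers(0, 2, (1000, 32)).astype(np.uint8)   # 32-bit codes, 1000 images
query = codes[42].copy()
query[:2] ^= 1                      # query differs from item 42 by two bit flips

# Hamming distance over the whole database at once: XOR, then count set bits.
ham = (codes ^ query).sum(axis=1)

# Retrieve every item within a small Hamming radius of the query code.
neighbours = np.flatnonzero(ham <= 2)
assert 42 in neighbours
```

Production systems pack codes into machine words and use popcount instructions, but the semantics are exactly this XOR-and-count.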

133 citations


Journal ArticleDOI
TL;DR: Novel algorithms are presented that perform incremental updates visiting only the cells affected by changes; they can be used to implement highly efficient collision checking and holonomic path planning for non-circular robots, and are easy to implement.

119 citations


Journal ArticleDOI
TL;DR: In this article, the authors extend their previous work on the computation of the next-to-next-to-leading order spin-orbit correction (corresponding to 3.5PN order) in the equations of motion of spinning compact binaries, deriving the corresponding spin-orbit terms in the evolution equations for the spins, the conserved integrals of the motion, and the metric regularized at the location of the particles (also obtaining the metric all over the near zone, but with lower precision).
Abstract: We extend our previous work devoted to the computation of the next-to-next-to-leading order spin–orbit correction (corresponding to 3.5PN order) in the equations of motion of spinning compact binaries by (i) deriving the corresponding spin–orbit terms in the evolution equations for the spins, the conserved integrals of the motion and the metric regularized at the location of the particles (obtaining also the metric all over the near zone but with some lower precision); (ii) performing the orbital reduction of the precession equations, near-zone metric and conserved integrals to the center-of-mass frame and then further assuming quasi-circular orbits (neglecting gravitational radiation reaction). The results are systematically expressed in terms of the spin variables with a conserved Euclidean norm instead of the original antisymmetric spin tensors of the pole–dipole formalism. This work paves the way to the future computation of the next-to-next-to-leading order spin–orbit terms in the gravitational-wave phasing of spinning compact binaries.

113 citations


Journal ArticleDOI
TL;DR: A proof is presented for the positive definiteness of the Jaccard index matrix used as a weighting matrix in the Euclidean distance between belief functions defined in Jousselme et al.

Journal ArticleDOI
01 Feb 2013-Ecology
TL;DR: Encounter probability models based on "ecological distance" (a function of both Euclidean distance and animal movement behavior in resistant landscapes) are devised and integrated into a likelihood-based estimation scheme for spatial capture-recapture models to estimate population density and the parameters of the least-cost encounter probability model.
Abstract: Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture--recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture-recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture-recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

Journal ArticleDOI
TL;DR: The k-NN classification model, which uses wind direction, air temperature, atmospheric pressure, and relative humidity parameters in a 4-tupled space, achieved the best wind speed prediction for k = 5 with the Manhattan distance metric.

Journal ArticleDOI
TL;DR: In this article, the authors propose a system using Eigen value weighted Euclidean distance as a classification technique for the recognition of various sign languages of India, built on skin filtering, hand cropping, feature extraction, and classification stages.
Abstract: Sign Language Recognition is one of the most growing fields of research today. Many new techniques have been developed recently in these fields. Here in this paper, we have proposed a system using Eigen value weighted Euclidean distance as a classification technique for recognition of various Sign Languages of India. The system comprises of four parts: Skin Filtering, Hand Cropping, Feature Extraction and Classification. 24 signs were considered in this paper, each having 10 samples, thus a total of 240 images was considered for which recognition rate obtained was 97%.

Journal ArticleDOI
TL;DR: Two novel distance measures for image matching are proposed, normalized between 0 and 1 and built on normalized cross-correlation, exploiting the fact that in natural images spatially close pixels are highly correlated.

Journal ArticleDOI
TL;DR: The benefits of the proposed selection scheme, as the number of receive antennas increases, are further substantiated by comparing its relative energy gain over the TAS method for a target uncoded Symbol Error Rate (SER).
Abstract: In this Letter, a low-complexity Euclidean distance-based method for antenna subset selection in Spatial Modulation systems is presented. The proposed method avoids the high complexity of both the optimal exhaustive search and of a recently proposed Euclidean distance-based algorithm for performing the selection. Moreover, as the number of receive antennas increases and for practical signal-to-noise ratio (SNR) values, it offers better error performance than the conventional transmit antenna selection (TAS) algorithm. In addition, the benefits of the proposed selection scheme, as the number of receive antennas increases, are further substantiated by comparing its relative energy gain over the TAS method for a target uncoded Symbol Error Rate (SER).

Proceedings ArticleDOI
01 Dec 2013
TL;DR: Although recurrence plots cannot provide the best accuracy rates for all data sets, it is demonstrated that it can be predicted ahead of time that the method will outperform the time representation with Euclidean and Dynamic Time Warping distances.
Abstract: There is a huge increase of interest for time series methods and techniques. Virtually every piece of information collected from human, natural, and biological processes is susceptible to changes over time, and the study of how these changes occur is a central issue in fully understanding such processes. Among all time series mining tasks, classification is likely to be the most prominent one. In time series classification there is a significant body of empirical research that indicates that k-nearest neighbor rule in the time domain is very effective. However, certain time series features are not easily identified in this domain and a change in representation may reveal some significant and unknown features. In this work, we propose the use of recurrence plots as representation domain for time series classification. Our approach measures the similarity between recurrence plots using Campana-Keogh (CK-1) distance, a Kolmogorov complexity-based distance that uses video compression algorithms to estimate image similarity. We show that recurrence plots allied to CK-1 distance lead to significant improvements in accuracy rates compared to Euclidean distance and Dynamic Time Warping in several data sets. Although recurrence plots cannot provide the best accuracy rates for all data sets, we demonstrate that we can predict ahead of time that our method will outperform the time representation with Euclidean and Dynamic Time Warping distances.
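CK-1 itself estimates image similarity by compressing recurrence-plot images with a video codec. As a hedged stand-in, the same compression-distance idea can be illustrated with zlib and the Normalized Compression Distance; the substitution of zlib for the MPEG-based measure is an assumption made purely for this sketch.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalised Compression Distance: small when x and y share structure that
    # a compressor can exploit, near 1 when they are unrelated.
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"abcabcabc" * 50
b_ = b"abcabcabc" * 50          # identical content compresses jointly very well
c = bytes(range(256)) * 2       # unrelated content does not

# Shared structure yields a much smaller compression distance.
assert ncd(a, b_) < ncd(a, c)
```

The appeal, as in the paper, is that no feature engineering is needed: the compressor implicitly measures shared regularities between the two inputs.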

Journal ArticleDOI
TL;DR: The results show that the algorithm outperforms previous approaches in most cases, achieving a detection rate of 98.8% and a false alarm rate as low as 0.20-0.37%, whereas for heavily structured yarns the misdetection rate can be as high as 5%.

Journal ArticleDOI
TL;DR: A new similarity measure based on the weighted Euclidean distance is proposed in order to cope with nonlinearity and to enhance estimation accuracy of LW-PLS.

Journal ArticleDOI
TL;DR: A novel, simple, and effective distance-based method is introduced for estimating the AD of developed and validated predictive counter-propagation artificial neural network (CP ANN) models, through proficient exploitation of the Euclidean distance (ED) metric in the structure-representation vector space.


Journal ArticleDOI
TL;DR: It is recommended that in most situations the accumulated-cost of a least-cost path provides a more appropriate measure of connectivity between locations as it accounts for both the distance travelled and costs traversed, and that the generation of vector least- cost paths should be reserved for visualisation purposes.
Abstract: Least-cost modelling has become a popular method for measuring connectivity. By representing the landscape as a cost-surface, least-cost paths can be calculated that represent the route of maximum efficiency between two locations as a function of the distance travelled and the costs traversed. Both the length and the accumulated-cost of a least-cost path have been used as measures of connectivity between pairs of locations. However, we are concerned that in some situations the length of a least-cost path may provide a misleading measure of connectivity as it only accounts for the distance travelled while ignoring the costs traversed, and results in a measure that may be little better than Euclidean distance. Through simulations using fractal landscapes we demonstrate that least-cost path length is often highly correlated with Euclidean distance. This indicates that least-cost path length provides a poor measure of connectivity in many situations, as it does not capture sufficient information about the ecological costs to movement represented by the cost-surface. We recommend that in most situations the accumulated-cost of a least-cost path provides a more appropriate measure of connectivity between locations as it accounts for both the distance travelled and costs traversed, and that the generation of vector least-cost paths should be reserved for visualisation purposes.
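The distinction between path length, accumulated cost, and Euclidean distance can be made concrete with a small Dijkstra sketch over a cost-surface grid. This is an illustrative toy, not the authors' implementation; the convention that a move pays the entered cell's cost times the step length is an assumption of the sketch.

```python
import heapq
import math

def accumulated_cost(cost, start, goal):
    # Dijkstra over a cost-surface grid with 8-connected moves; a diagonal
    # step pays sqrt(2) times the entered cell's cost.
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist[(r, c)]:
            continue
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + math.hypot(dr, dc) * cost[nr][nc]
                    if nd < dist.get((nr, nc), math.inf):
                        dist[(nr, nc)] = nd
                        heapq.heappush(pq, (nd, (nr, nc)))
    return math.inf

# A 3x5 surface with a high-cost band in the middle row: the cheapest route
# detours around the band, so accumulated cost exceeds the straight-line cost.
surface = [[1, 1, 1, 1, 1],
           [1, 9, 9, 9, 1],
           [1, 1, 1, 1, 1]]
acc = accumulated_cost(surface, (1, 0), (1, 4))
euclid = 4.0                                  # straight-line distance in cells
assert acc > euclid                           # connectivity worse than Euclidean
assert abs(acc - (2 + 2 * math.sqrt(2))) < 1e-9   # detour over the cost-1 cells
```

Here the least-cost path's *length* (about 4.8 cell units) barely differs from the Euclidean distance, while its *accumulated cost* carries the information about the resistant band, which is exactly the paper's argument for preferring accumulated cost.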

Journal ArticleDOI
TL;DR: The NN search problem is defined as follows: given a set S containing points in a metric space M and a query point x ∈ M, find the point in S that is closest to x.
Abstract: Comparing two signals is one of the most essential and prevalent tasks in signal processing. A large number of applications fundamentally rely on determining the answers to the following two questions: 1) How should two signals be compared? 2) Given a set of signals and a query signal, which signals are the nearest neighbors (NNs) of the query signal, i.e., which signals in the database are most similar to the query signal? The NN search problem is defined as follows: Given a set S containing points in a metric space M, and a query point x ∈ M, find the point in S that is closest to x. The problem can be extended to K-NN, i.e., determining the K signals nearest to x. In this context, the points in question are signals, such as images, videos, or other waveforms. The qualifier closest refers to a distance metric, such as the Euclidean distance or Manhattan distance between pairs of points in S. Finding the NN of the query point should be at most linear in the database size and is a well-studied problem in conventional NN settings.
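A linear-scan solution to the NN search problem as defined above, with either metric. This is the baseline sketch the abstract alludes to; practical systems add index structures or hashing to beat the linear scan.

```python
import numpy as np

def knn(S, x, k=1, metric="euclidean"):
    # Linear-scan K-NN search over the database S for query x.
    diff = S - x
    if metric == "euclidean":
        d = np.linalg.norm(diff, axis=1)
    elif metric == "manhattan":
        d = np.abs(diff).sum(axis=1)
    else:
        raise ValueError(metric)
    return np.argsort(d)[:k]          # indices of the k closest signals

rng = np.random.default_rng(0)
S = rng.standard_normal((500, 16))               # 500 signals, 16 features each
x = S[123] + 0.01 * rng.standard_normal(16)      # slightly perturbed database item

# Both metrics recover the perturbed item as the nearest neighbour.
assert knn(S, x, k=1)[0] == 123
assert knn(S, x, k=1, metric="manhattan")[0] == 123
```

The cost is one pass over the database per query, i.e. linear in the database size, matching the complexity bound stated in the abstract.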

Journal ArticleDOI
TL;DR: In this method, a spatial-constraint term derived from the image is first introduced into the objective function of GFCM, and the kernel-induced distance is then adopted in place of the Euclidean distance in the new objective function.

Journal ArticleDOI
TL;DR: Though the proposed method exploits only one training sample per class to perform classification, it might obtain a better performance than the nearest feature space method proposed in Chien and Wu, which depends on all the training samples to classify the test sample.
Abstract: In this paper, we propose a very simple and fast face recognition method and present its potential rationale. This method first selects only the nearest training sample, of the test sample, from every class and then expresses the test sample as a linear combination of all the selected training samples. Using the expression result, the proposed method can classify the testing sample with a high accuracy. The proposed method can classify more accurately than the nearest neighbor classification method (NNCM). The face recognition experiments show that the classification accuracy obtained using our method is usually 2–10% greater than that obtained using NNCM. Moreover, though the proposed method exploits only one training sample per class to perform classification, it might obtain a better performance than the nearest feature space method proposed in Chien and Wu (IEEE Trans Pattern Anal Machine Intell 24:1644–1649, 2002), which depends on all the training samples to classify the test sample. Our analysis shows that the proposed method achieves this by modifying the neighbor relationships between the test sample and training samples, determined by the Euclidean metric.
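A plausible reading of the two-step method can be sketched as follows. The class-wise residual scoring below is an assumption for illustration, not necessarily the paper's exact rule; the two stages (nearest sample per class, then linear-combination representation) follow the abstract.

```python
import numpy as np

def classify(test, train, labels):
    classes = np.unique(labels)
    # Step 1: from every class, keep only the training sample nearest to the
    # test sample (Euclidean metric).
    picks = []
    for c in classes:
        idx = np.flatnonzero(labels == c)
        picks.append(idx[np.linalg.norm(train[idx] - test, axis=1).argmin()])
    A = train[picks].T                     # one selected sample per column
    # Step 2: express the test sample as a linear combination of the selected
    # samples, then score each class by how well its own term alone matches.
    w, *_ = np.linalg.lstsq(A, test, rcond=None)
    resid = [np.linalg.norm(test - w[i] * A[:, i]) for i in range(len(classes))]
    return classes[int(np.argmin(resid))]

rng = np.random.default_rng(0)
train = np.vstack([rng.normal(0, 1, (10, 4)), rng.normal(6, 1, (10, 4))])
labels = np.array([0] * 10 + [1] * 10)

# Perturbed copies of a training sample from each class are classified correctly.
assert classify(train[3] + 0.1 * rng.standard_normal(4), train, labels) == 0
assert classify(train[15] + 0.1 * rng.standard_normal(4), train, labels) == 1
```

Compared with plain NNCM, the representation step lets the other classes' selected samples absorb shared components of the test sample, which is one way to read the "modified neighbor relationships" the analysis describes.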

Journal ArticleDOI
TL;DR: The Monte Carlo simulation results for SM-EDAS-LC are erroneous; thus, the correct results for the scheme are presented, demonstrating that its error performance in fact outperforms that of Capacity Optimized Antenna Selection for SM.
Abstract: The first contribution of this letter is to correct an error in the formulation of the Low-Complexity Euclidean Distance optimized Antenna Selection scheme for Spatial Modulation (SM), (SM-EDAS-LC). Secondly, due to a simulation error the Monte Carlo simulation results presented in the above letter for SM-EDAS-LC are erroneous, thus we present the correct results for the scheme, which demonstrates that the error performance in fact outperforms that of Capacity Optimized Antenna Selection for SM. The third contribution is to propose two modifications to SM-EDAS-LC that significantly reduce the computational complexity.

Journal ArticleDOI
TL;DR: A new technique for radiometric normalization is proposed, which uses three sequential methods for an accurate PIFs selection: spectral measures of temporal data, density scatter plot analysis (ridge method), and robust regression.
Abstract: Radiometric precision is difficult to maintain in orbital images due to several factors (atmospheric conditions, Earth-sun distance, detector calibration, illumination, and viewing angles). These unwanted effects must be removed for radiometric consistency among temporal images, leaving only land-leaving radiances, for optimum change detection. A variety of relative radiometric correction techniques were developed for the correction or rectification of images, of the same area, through use of reference targets whose reflectance do not change significantly with time, i.e., pseudo-invariant features (PIFs). This paper proposes a new technique for radiometric normalization, which uses three sequential methods for an accurate PIFs selection: spectral measures of temporal data (spectral distance and similarity), density scatter plot analysis (ridge method), and robust regression. The spectral measures used are the spectral angle (Spectral Angle Mapper, SAM), spectral correlation (Spectral Correlation Mapper, SCM), and Euclidean distance. The spectral measures between the spectra at times t1 and t2 are calculated for each pixel. After classification using threshold values, it is possible to define points with the same spectral behavior, including PIFs. The distance and similarity measures are complementary and can be calculated together. The ridge method uses a density plot generated from images acquired on different dates for the selection of PIFs. In a density plot, the invariant pixels, together, form a high-density ridge, while variant pixels (clouds and land cover changes) are spread, having low density, facilitating their exclusion. Finally, the selected PIFs are subjected to a robust regression (M-estimate) between pairs of temporal bands for the detection and elimination of outliers, and to obtain the optimal linear equation for a given set of target points.
The robust regression is insensitive to outliers, i.e., observations that appear to deviate strongly from the rest of the data in which they occur (in our case, change areas). The new sequential methods enable the selection, by different attributes, of a number of invariant targets over the brightness range of the images.

Proceedings ArticleDOI
01 Jan 2013
TL;DR: Here, facial images of three subjects with different expressions and angles are used for classification, and the results show that the Manhattan distance performs better than the Euclidean distance.
Abstract: The face expression recognition problem is challenging because different individuals display the same expression differently [1]. Here, the PCA algorithm is used for feature extraction. A distance metric, or matching criterion, is the main tool for retrieving similar images from large image databases for the above category of search. Two distance metrics, the L1 metric (Manhattan distance) and the L2 metric (Euclidean distance), have been proposed in the literature for measuring similarity between feature vectors. In content-based image retrieval systems, the Manhattan distance and Euclidean distance are typically used to determine similarities between a pair of images [2]. Here, facial images of three subjects with different expressions and angles are used for classification. Experimental results are compared, and they show that the Manhattan distance performs better than the Euclidean distance.

Journal ArticleDOI
TL;DR: It is reasoned in this paper that PSD matrices have structural constraints and that they describe a manifold in the signal space, so a more appropriate measure is the Riemannian distance on the manifold and an optimum weighting matrix is developed for the application of RD to signal classification.
Abstract: Signal classification is an important issue in many branches of science and engineering. In signal classification, a feature of the signals is often selected for similarity comparison. A distance metric must then be established to measure the dissimilarities between different signal features. Due to the natural characteristics of dynamic systems, the power spectral density (PSD) of a signal is often used as a feature to facilitate classification. We reason in this paper that PSD matrices have structural constraints and that they describe a manifold in the signal space. Thus, instead of the widely used Euclidean distance (ED), a more appropriate measure is the Riemannian distance (RD) on the manifold. Here, we develop closed-form expressions of the RD between two PSD matrices on the manifold and study some of the properties. We further show how an optimum weighting matrix can be developed for the application of RD to signal classification. These new distance measures are then applied to the classification of electroencephalogram (EEG) signals for the determination of sleep states and the results are highly encouraging.
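The standard affine-invariant Riemannian distance between symmetric positive-definite matrices has the closed form d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F. Whether this matches the paper's exact closed-form expression for PSD matrices is an assumption, but it illustrates the structural properties on the SPD manifold that the Euclidean distance lacks:

```python
import numpy as np

def riemannian_distance(A, B):
    # Affine-invariant Riemannian distance between SPD matrices:
    #   d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F
    w, V = np.linalg.eigh(A)
    A_inv_half = V @ np.diag(w ** -0.5) @ V.T
    lam = np.linalg.eigvalsh(A_inv_half @ B @ A_inv_half)
    return np.sqrt(np.sum(np.log(lam) ** 2))

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
A = X @ X.T + 4 * np.eye(4)          # two random SPD matrices standing in
Y = rng.standard_normal((4, 4))      # for PSD feature matrices
B = Y @ Y.T + 4 * np.eye(4)

assert riemannian_distance(A, A) < 1e-10              # identity of indiscernibles
d1 = riemannian_distance(A, B)
assert abs(d1 - riemannian_distance(B, A)) < 1e-8     # symmetry
# Invariance under congruence transforms, a property the Euclidean
# (Frobenius) distance between the same matrices does not have:
C = rng.standard_normal((4, 4)) + 5 * np.eye(4)       # an invertible matrix
assert abs(riemannian_distance(C @ A @ C.T, C @ B @ C.T) - d1) < 1e-6
```

The congruence invariance is what makes such a distance attractive for PSD features like spatial covariance or spectral matrices, since linear mixing of the underlying signals acts on them exactly as a congruence.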

Journal ArticleDOI
TL;DR: In the context of highly unbalanced data classes, these back-end systems can improve the performance achieved by GMMs provided that an appropriate sampling or importance weighting technique is applied, and it is shown that anchor models based on the Euclidean or cosine distances present a better alternative for enhancing performance.
Abstract: In this paper, we study the effectiveness of anchor models applied to the multiclass problem of emotion recognition from speech. In the anchor models system, an emotion class is characterized by its measure of similarity relative to other emotion classes. Generative models such as Gaussian Mixture Models (GMMs) are often used as front-end systems to generate feature vectors used to train complex back-end systems such as support vector machines (SVMs) or a multilayer perceptron (MLP) to improve the classification performance. We show that in the context of highly unbalanced data classes, these back-end systems can improve the performance achieved by GMMs provided that an appropriate sampling or importance weighting technique is applied. Furthermore, we show that anchor models based on the Euclidean or cosine distances present a better alternative for enhancing performance, because neither of these techniques is needed to overcome the problem of skewed data. The experiments conducted on the FAU AIBO Emotion Corpus, a database of spontaneous children's speech, show that anchor models significantly improve the performance of GMMs by 6.2 percent relative. We also show that the introduction of within-class covariance normalization (WCCN) improves the performance of the anchor models for both distances, but to a higher extent for the Euclidean distance, for which the results become competitive with the cosine distance.