
Showing papers on "Euclidean distance published in 1989"


Journal ArticleDOI
TL;DR: The results indicate that the De Soete procedure is effective in identifying extraneous variables which do not contribute information about the true cluster structure.
Abstract: De Soete (1986, 1988) proposed a variable weighting procedure for use when Euclidean distance is used as the dissimilarity measure with an ultrametric hierarchical clustering method. The algorithm produces weighted distances which approximate ultrametric distances as closely as possible in a least squares sense. The present simulation study examined the effectiveness of the De Soete procedure for an applications problem for which it was not originally intended: namely, whether the algorithm can be used to reduce the influence of variables which are irrelevant to the clustering present in the data. The simulation study examined the ability of the procedure to recover a variety of known underlying cluster structures. The results indicate that the algorithm is effective in identifying extraneous variables which do not contribute information about the true cluster structure. Weights near 0.0 were typically assigned to such extraneous variables. Furthermore, the variable weighting procedure was not adversely affected by the presence of other forms of error in the data. In general, it is recommended that the variable weighting procedure be used for applied analyses when Euclidean distance is employed with ultrametric hierarchical clustering methods.
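As a rough illustration of the kind of weighted Euclidean distance the De Soete procedure re-weights, here is a minimal Python sketch; the least-squares fitting of the weights themselves is not shown, and the two-cluster data and the weight values are hypothetical.

```python
import numpy as np

def weighted_euclidean(X, w):
    """Pairwise weighted Euclidean distances: d_ij = sqrt(sum_k w_k * (x_ik - x_jk)^2)."""
    diffs = X[:, None, :] - X[None, :, :]        # (n, n, p) pairwise differences
    return np.sqrt((w * diffs ** 2).sum(axis=-1))

# Toy data: variable 0 carries two clear clusters, variable 1 is pure noise.
rng = np.random.default_rng(0)
X = np.column_stack([
    np.r_[rng.normal(0, 0.1, 5), rng.normal(5, 0.1, 5)],  # informative variable
    rng.normal(0, 1.0, 10),                               # extraneous variable
])

d_equal = weighted_euclidean(X, np.array([0.5, 0.5]))   # both variables weighted equally
d_down  = weighted_euclidean(X, np.array([1.0, 0.0]))   # weight near 0.0 on the noise variable
# Down-weighting the extraneous variable shrinks within-cluster distances relative to
# between-cluster distances, which is what helps the subsequent ultrametric fit.
```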

79 citations


Journal ArticleDOI
TL;DR: In this article, the notion of Euclidean t-design is analyzed in the framework of appropriate inner product spaces of polynomial functions, and Fisher type inequalities are obtained in a simple manner by this method.

74 citations


Journal ArticleDOI
TL;DR: An Extended Two-Way Euclidean Multidimensional Scaling (MDS) model which assumes both common and specific dimensions is described and contrasted with the “standard” (Two-Way) MDS model.
Abstract: An Extended Two-Way Euclidean Multidimensional Scaling (MDS) model which assumes both common and specific dimensions is described and contrasted with the “standard” (Two-Way) MDS model. In this Extended Two-Way Euclidean model the n stimuli (or other objects) are assumed to be characterized by coordinates on R common dimensions. In addition each stimulus is assumed to have a dimension (or dimensions) specific to it alone. The overall distance between object i and object j is then defined as the square root of the ordinary squared Euclidean distance plus terms denoting the specificity of each object. The specificity, s_i, can be thought of as the sum of squares of coordinates on those dimensions specific to object i, all of which have nonzero coordinates only for object i. (In practice, we may think of there being just one such specific dimension for each object, as this situation is mathematically indistinguishable from the case in which there are more than one.) We further assume that δ_ij = F(d_ij) + e_ij, where δ_ij is the proximity value (e.g., similarity or dissimilarity) of objects i and j, d_ij is the extended Euclidean distance defined above, while e_ij is an error term assumed i.i.d. N(0, σ²). F is assumed to be either a linear function (in the metric case) or a monotone spline of specified form (in the quasi-nonmetric case). A numerical procedure alternating a modified Newton-Raphson algorithm with an algorithm for fitting an optimal monotone spline (or linear function) is used to secure maximum likelihood estimates of the parameters; associated statistics can be used to test hypotheses about the number of common dimensions, and/or the existence of specific (in addition to R common) dimensions. This approach is illustrated with applications to both artificial data and real data on judged similarity of nations.
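A minimal sketch of just the extended distance defined above; the coordinates and specificities s_i are made-up values, and the maximum likelihood fitting procedure is not reproduced.

```python
import numpy as np

def extended_euclidean(X, s):
    """Extended Euclidean distances with specificities:
    d_ij = sqrt( sum_r (x_ir - x_jr)^2 + s_i + s_j ) for i != j, and 0 on the diagonal."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # ordinary squared distances
    d2 = sq + s[:, None] + s[None, :]                         # add the two specificity terms
    np.fill_diagonal(d2, 0.0)                                 # an object is at distance 0 from itself
    return np.sqrt(d2)

X = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 0.0]])   # coordinates on R = 2 common dimensions
s = np.array([1.0, 0.25, 4.0])                       # specificities s_i >= 0
print(extended_euclidean(X, s))
```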

67 citations



Journal ArticleDOI
TL;DR: It is shown here that for some typical dissimilarities ρ, there exists an optimal partition such that the intersection of P_j with the convex hull of P_i is empty for all i ≠ j.

55 citations


Journal ArticleDOI
11 Jun 1989
TL;DR: In this paper, the error probability at large signal-to-noise ratios of rate-1/2 convolutionally encoded CPFSK (continuous phase frequency-shift keying) with an optimum non-coherent detector on an additive white Gaussian noise channel was considered.
Abstract: The authors consider the error probability at large signal-to-noise ratios of rate-1/2 convolutionally encoded CPFSK (continuous-phase frequency-shift keying) with an optimum noncoherent detector on an additive white Gaussian noise channel. The performance is given in terms of a parameter called the minimum squared normalized equivalent Euclidean distance, which plays the same role mathematically as the minimum squared normalized Euclidean distance used for coherent detectors. It is shown that by introducing convolutional coding the error performance is significantly improved. The authors propose a decoding algorithm for these convolutionally encoded CPM schemes which is based on a limited tree search algorithm and uses the maximum-likelihood decision rule for noncoherent detection. Computer simulations show that the degradation in error performance compared to the performance of the optimum coherent Viterbi detector is less than 1 dB with a relatively simple noncoherent detector on an additive white Gaussian noise channel for most of the schemes considered.
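For reference, the coherent minimum squared normalized Euclidean distance mentioned above can be computed numerically from the phase-difference trajectory of two CPFSK signals. The sketch below handles only uncoded binary CPFSK; the modulation index and difference sequence are example values, and the paper's noncoherent "equivalent" distance is not reproduced.

```python
import numpy as np

def cpfsk_d2(h, gammas, samples_per_T=1000):
    """Normalized squared Euclidean distance for binary CPFSK between two signals whose
    data sequences differ by `gammas` (entries in {+2, 0, -2}):
        d^2 = (1/T) * integral of [1 - cos(delta_phi(t))] dt,
    where delta_phi ramps linearly by pi*h*gamma over each symbol of length T."""
    dt = 1.0 / samples_per_T
    d2, phi = 0.0, 0.0                       # accumulated phase difference at the symbol start
    for g in gammas:
        t = np.arange(samples_per_T) * dt    # time within the symbol, in units of T
        d2 += np.sum(1.0 - np.cos(phi + np.pi * h * g * t)) * dt
        phi += np.pi * h * g
    return d2

# The classic MSK merge (h = 1/2, difference sequence +2, -2) gives d^2 ~ 2,
# the same normalized distance as antipodal (BPSK) signalling.
print(cpfsk_d2(0.5, [+2, -2]))
```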

36 citations


Proceedings ArticleDOI
27 Nov 1989
TL;DR: In this article, the concept of superquadric contraction and dilation is introduced and used to derive a novel interpretation of the modified super-quadric inside-outside function in terms of contraction/expansion factor.
Abstract: Evaluation criteria for superquadric models recovered from the range data discussed. Arguments are presented to support the authors' belief that both quantitative and qualitative measures are required in order to evaluate a superquadric fit. The concept of superquadric contraction and dilation is introduced and used to derive a novel interpretation of the modified superquadric inside-outside function in terms of contraction/expansion factor. The same concept also gives a close initial guess for the numerical procedure computing the minimum Euclidean distance of a point from a superquadric model. The minimum Euclidean distance map is introduced as a qualitative criterion for interpretation of fit. View-dependent qualitative measures like the contour-difference map and the Z-distance map are shown to be essential for the complete evaluation of the models. Analytical solution and techniques for the contour generator on superquadric models are presented. Finally, examples of real objects are given to generate the measures. >
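The inside-outside function referred to above is not written out in the abstract; a common form, together with the radial (along-the-ray) Euclidean distance estimate it yields as an initial guess, is sketched below. The shape parameters and test point are hypothetical, and this is not the paper's contraction/dilation derivation.

```python
import numpy as np

def superquadric_F(p, a, e1, e2):
    """Superquadric inside-outside function in canonical pose:
    F = ((|x|/a1)^(2/e2) + (|y|/a2)^(2/e2))^(e2/e1) + (|z|/a3)^(2/e1);
    F < 1 inside, F = 1 on the surface, F > 1 outside."""
    x, y, z = np.abs(np.asarray(p)) / np.asarray(a)   # abs() keeps fractional powers real
    return (x ** (2 / e2) + y ** (2 / e2)) ** (e2 / e1) + z ** (2 / e1)

def radial_distance(p, a, e1, e2):
    """Scaling p by F^(-e1/2) lands it on the surface, so the distance along the ray
    from the origin is |p| * |1 - F^(-e1/2)| -- a cheap initial guess for the true
    minimum Euclidean distance from p to the model."""
    F = superquadric_F(p, a, e1, e2)
    return np.linalg.norm(p) * abs(1.0 - F ** (-e1 / 2.0))

# Unit sphere (a = (1,1,1), e1 = e2 = 1): a point at radius 3 is distance 2 from the surface.
print(radial_distance([3.0, 0.0, 0.0], a=(1.0, 1.0, 1.0), e1=1.0, e2=1.0))
```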

35 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider constructing marginal and conditional likelihoods by using densities expressed in terms of an intrinsic choice of support measure, based on either Euclidean metric or the constant information metric.
Abstract: Conditional and marginal likelihoods constructed from parameter-dependent functions of the data have an additional parameter dependence that changes with the choice of supporting metric for the corresponding densities. We consider constructing marginal and conditional likelihoods by using densities expressed in terms of an intrinsic choice of support measure, based on either the Euclidean metric or the constant information metric. Barndorff-Nielsen's (1983) approximation to the distribution of the maximum likelihood estimate is used to approximate these conditional and marginal likelihoods. The resulting likelihoods are closely related to the modified profile likelihood of Barndorff-Nielsen (1983) and the approximate conditional likelihood of Cox & Reid (1987). The normal circle problem is examined in some detail.

35 citations


Journal ArticleDOI
TL;DR: In this article, the transforms of mathematical morphology may be generated by convolutions followed by thresholdings, which leads to a wide set of non-linear operators and structuring elements, including powerful directional transforms and an accurate iterative approximation of a circular structuring element of any size, and thus to a good approximation of the Euclidean distance.
Abstract: The transforms of mathematical morphology may be generated by convolutions followed by thresholdings. Furthermore, the convolution-thresholding pair may generalize mathematical morphology in three directions thanks to adjustments of the thresholding level, weighting of the coefficients of the convolution matrix, and iterations. This leads to a wide set of non-linear operators and structuring elements, including powerful directional transforms and an accurate iterative approximation of a circular structuring element of any size, and thus to a good approximation of the Euclidean distance.
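A minimal sketch of the convolution-then-threshold view of morphology for a binary image (binary dilation and erosion only); the weighted kernels, adjustable thresholds and iterations described above, which give the circular elements and the Euclidean distance approximation, are not shown.

```python
import numpy as np

def convolve2d_same(img, kernel):
    """Plain zero-padded 'same'-size 2-D correlation; the kernel used here is symmetric,
    so correlation and convolution coincide."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

B = np.ones((3, 3), dtype=int)            # 3x3 square structuring element
img = np.zeros((7, 7), dtype=int)
img[3, 3] = 1                             # a single foreground pixel

conv = convolve2d_same(img, B)
dilation = (conv >= 1).astype(int)        # threshold at 1: any overlap with B
erosion = (conv >= B.sum()).astype(int)   # threshold at the kernel sum: B fully contained
```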

32 citations


Journal ArticleDOI
TL;DR: A study has been made of the influence of six different metrics and four data transformations on the KNN method applied to several data sets; the results show that the Lance-Williams, Manhattan and Canberra metrics give error rates comparable to and, in some cases, better than those obtained by linear discriminant analysis.
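The TL;DR does not spell the metrics out; common forms, with the Lance-Williams distance taken in its Bray-Curtis-like form (an assumption here), and a bare-bones KNN classifier are sketched below.

```python
import numpy as np

def euclidean(x, y):
    return np.sqrt(np.sum((x - y) ** 2))

def manhattan(x, y):
    return np.sum(np.abs(x - y))

def canberra(x, y):
    num, den = np.abs(x - y), np.abs(x) + np.abs(y)
    # terms with x_k = y_k = 0 are conventionally skipped
    return np.sum(np.divide(num, den, out=np.zeros_like(num, dtype=float), where=den != 0))

def lance_williams(x, y):
    # Bray-Curtis-like form; assumes non-negative features
    return np.sum(np.abs(x - y)) / np.sum(x + y)

def knn_predict(x, X_train, y_train, k=3, metric=euclidean):
    """Majority vote over the k training points nearest to x under the chosen metric."""
    d = np.array([metric(x, xt) for xt in X_train])
    nearest = y_train[np.argsort(d)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]
```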

31 citations


Journal ArticleDOI
TL;DR: A parallel implementation of the optimum (or maximum likelihood Gaussian) classifier that uses a cellular automaton to very rapidly find the output vector with minimum Euclidean distance from the input vector is presented.
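The decision rule being parallelized is the familiar minimum Euclidean distance (nearest prototype) rule; a plain sequential sketch is shown below. The prototype vectors are made up, and the cellular-automaton search itself is not reproduced.

```python
import numpy as np

def min_distance_classify(x, prototypes):
    """Assign x to the class whose prototype has minimum Euclidean distance to it.
    With equal priors and identical isotropic Gaussian class densities this coincides
    with the maximum likelihood (optimum) Gaussian classifier."""
    d2 = np.sum((prototypes - x) ** 2, axis=1)   # squared distances suffice for the argmin
    return int(np.argmin(d2))

prototypes = np.array([[0.0, 0.0], [3.0, 3.0]])
print(min_distance_classify(np.array([2.5, 2.0]), prototypes))   # -> class 1
```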

Journal ArticleDOI
TL;DR: It is shown that for a given infinite system of linear inequalities (satisfying certain conditions), the Euclidean distance from a vector x to the solution set of the system is equivalent to the “biggest violation” by x of the system.
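The simplest instance of this equivalence, where it holds exactly, is a single half-space; the toy check below illustrates it numerically. The inequality and the point are made-up values, and the paper's infinite systems and regularity conditions are not treated.

```python
import numpy as np

# For one half-space {y : a.y <= b}, the Euclidean distance from x to the solution
# set equals the normalized violation:  d(x, S) = max(a.x - b, 0) / ||a||.
a, b = np.array([3.0, 4.0]), 1.0
x = np.array([2.0, 2.0])
violation = max(a @ x - b, 0.0)          # the "violation" of the inequality by x
print(violation / np.linalg.norm(a))     # (14 - 1) / 5 = 2.6, the distance to the half-space
```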

Proceedings ArticleDOI
01 Jan 1989
TL;DR: A method is proposed for learning in multilayer perceptrons that includes self-adapting features that make it suited to deal with a variety of problems without the need for parameter readjustments and yields substantially shorter learning times.
Abstract: A method is proposed for learning in multilayer perceptrons. It includes self-adapting features that make it suited to deal with a variety of problems without the need for parameter readjustments. The validity of the approach is benchmarked for two types of problems. The first benchmark is performed for the topologically complex parity problem. The number of binary inputs ranges from 2 (the simplest exclusive-OR problem) to 7 (a much more complex problem). The statistically averaged learning times obtained, which are reduced by two to three orders of magnitude, are compared with the best possible results obtained by conventional error backpropagation (EBP). The second problem type occurs when a high accuracy in separating example classes is needed. This corresponds to instances where different output sign patterns of the MLP are requested for slightly different input variables. When the minimum Euclidean distance ε between the classes to be separated decreases, the best learning times obtained with conventional EBP increase roughly as 1/ε². The present algorithm yields substantially shorter learning times that behave like log(1/ε).
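The parity benchmark referred to above is easy to reproduce as a dataset; a small sketch is given below. Only the data are shown, not the self-adapting learning method, which the abstract does not specify in detail.

```python
import numpy as np
from itertools import product

def parity_dataset(n_bits):
    """All 2^n binary input patterns with their parity (XOR of the bits) as the target.
    n_bits = 2 is the exclusive-OR problem; n_bits = 7 is the hardest case benchmarked."""
    X = np.array(list(product([0, 1], repeat=n_bits)), dtype=float)
    y = X.sum(axis=1) % 2            # 1 if an odd number of inputs are on
    return X, y

X, y = parity_dataset(3)             # 8 patterns, targets 0/1
```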

Proceedings ArticleDOI
11 Jun 1989
TL;DR: The authors present a polynomial-time codeword-mapping algorithm which seeks to minimize the Euclidean distance between all pairs of codewords with a relative Hamming distance of 1, permitting robust performance of vector quantizers in noisy channels without a sacrifice in coding bandwidth or quantizer performance.
Abstract: The authors present a polynomial-time codeword-mapping algorithm which seeks to minimize the Euclidean distance between all pairs of codewords with a relative Hamming distance of 1, permitting robust performance of vector quantizers in noisy channels. The algorithm is based upon network design considerations, and useful gains of several decibels in error distortion can be obtained by using the new codeword mapping, without a sacrifice in coding bandwidth or quantizer performance. Results with application to image coding are given, and it is also shown that some natural codes are good robust codes under selected conditions.
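One way to read the objective is as a cost summed over index pairs at Hamming distance 1; the sketch below evaluates that cost for candidate mappings of a toy scalar codebook. The codebook and mappings are made up, and this brute-force evaluation is not the paper's polynomial-time algorithm.

```python
import numpy as np

def hamming(i, j):
    return bin(i ^ j).count("1")

def index_assignment_cost(codebook, mapping):
    """Sum of squared Euclidean distances between code vectors whose channel indices
    differ in exactly one bit; a low cost means single-bit channel errors decode to
    nearby code vectors."""
    cost = 0.0
    for i in range(len(codebook)):
        for j in range(i + 1, len(codebook)):
            if hamming(mapping[i], mapping[j]) == 1:
                cost += np.sum((codebook[i] - codebook[j]) ** 2)
    return cost

codebook = np.array([[0.0], [1.0], [2.0], [3.0]])   # toy uniform scalar quantizer
natural = [0, 1, 2, 3]                              # natural binary index assignment
gray    = [0, 1, 3, 2]                              # Gray-code index assignment
print(index_assignment_cost(codebook, natural),     # 10.0
      index_assignment_cost(codebook, gray))        # 12.0: for this toy codebook the natural
                                                    # code is already the better robust mapping,
                                                    # echoing the abstract's closing remark
```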

Journal ArticleDOI
TL;DR: In this paper, the authors deal with local and global characterizations of Euclidean hyperspheres by using relative normalizations of locally strongly convex hypersurfaces in the Euclidean space ℝ^(n+1).
Abstract: This paper deals with local and global characterizations of Euclidean hyperspheres by using relative normalizations of locally strongly convex hypersurfaces in the Euclidean space ℝ^(n+1). In particular, we obtain characterizations of Euclidean hyperspheres in terms of affine differential geometry and in terms of differential geometry with respect to the Euclidean second fundamental form.

Journal ArticleDOI
TL;DR: In this paper, the problem of maximizing the minimum eigenvalue of a weighted sum of symmetric matrices when the Euclidean norm of the vector of weights is constrained to be unity is considered.
Abstract: The problem considered is that of maximizing, with respect to the weights, the minimum eigenvalue of a weighted sum of symmetric matrices when the Euclidean norm of the vector of weights is constrained to be unity. A procedure is given for determining the sign of the maximum of the minimum eigenvalue and for approximating the optimal weights arbitrarily accurately when that sign is positive or zero. Linear algebra, a conical hull representation of the set of $n \times n$ symmetric positive semidefinite matrices and convex programming are employed.
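A brute-force numerical sketch of the quantity being maximized, for two small symmetric matrices with the weight vector constrained to the unit circle; the matrices are made up, and this grid search stands in for, rather than reproduces, the paper's convex-programming procedure.

```python
import numpy as np

A1 = np.array([[2.0, 0.0], [0.0, -1.0]])
A2 = np.array([[-1.0, 0.0], [0.0, 2.0]])

def f(w):
    """Minimum eigenvalue of the weighted sum w1*A1 + w2*A2."""
    return np.linalg.eigvalsh(w[0] * A1 + w[1] * A2).min()

thetas = np.linspace(0.0, 2.0 * np.pi, 2001)
weights = np.column_stack([np.cos(thetas), np.sin(thetas)])   # all weight vectors have unit norm
best = max(weights, key=f)
print(best, f(best))   # the sign of f(best) tells whether some unit-norm combination is PSD
```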


Book ChapterDOI
01 Jan 1989
TL;DR: In this article, the manifold of rays and wavefronts is modelled in terms of coset spaces of the Euclidean group, and a wavization procedure is given to map the geometric (Hamiltonian) model onto the Helmholtz model.
Abstract: Euclidean optics are models of the manifold of rays and wavefronts in terms of coset spaces of the Euclidean group. One realization of this construction is the geometric model of Hamilton's optical phase space. Helmholtz optics is a second Euclidean model examined here. A wavization procedure is given to map the former one on the latter. Noneuclidean transformations of the manifold of rays are provided by Lorentz boosts that produce a global “4a” comatic aberration.


Journal ArticleDOI
TL;DR: A two-stage classification algorithm is presented which places evoked potential waveforms into clinically significant classes; the development of a dissimilarity measure is crucial to the overall success of the algorithm.


Proceedings ArticleDOI
11 Jun 1989
TL;DR: An altered form of the Fano algorithm using information derived from an independent decision-feedback equalizer is shown to achieve near-maximum-likelihood performance with a lower computational complexity than that of the Viterbi algorithm at high signal-to-noise ratios.
Abstract: A decision-aided sequential search algorithm is presented for the recovery of digital signals corrupted by intersymbol interference. An altered form of the Fano algorithm using information derived from an independent decision-feedback equalizer is shown to achieve near-maximum-likelihood performance with a lower computational complexity than that of the Viterbi algorithm at high signal-to-noise ratios. Simulated probability-of-error and computational complexity curves are presented. The Fano metric for sequential sequence estimation (SSE) on intersymbol interference (ISI) channels with Gaussian noise is derived. A suboptimum constant-bias metric, similar to the Viterbi Euclidean distance metric, is studied. The SSE algorithm is shown to be effective for channels which have severe ISI.
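The Euclidean distance metric that both the Viterbi detector and the constant-bias variant accumulate can be illustrated on a short block; the sketch below does brute-force maximum-likelihood detection with that metric. The two-tap channel, noise level and data are made-up values, and neither the Fano search nor the decision-feedback aiding is reproduced.

```python
import numpy as np
from itertools import product

def path_metric(r, symbols, h):
    """Cumulative Euclidean-distance metric for an ISI channel with taps h:
    sum_k ( r_k - sum_i h_i * a_{k-i} )^2, assuming zero initial channel state."""
    est = np.convolve(symbols, h)[:len(r)]
    return np.sum((r - est) ** 2)

h = np.array([0.8, 0.6])                                  # toy 2-tap ISI channel
true = np.array([1.0, -1.0, -1.0, 1.0, 1.0])              # transmitted +/-1 symbols
rng = np.random.default_rng(1)
r = np.convolve(true, h)[:len(true)] + 0.2 * rng.standard_normal(len(true))

# Exhaustive search over all +/-1 sequences: the minimum-metric sequence is the ML decision.
best = min(product([-1.0, 1.0], repeat=len(true)),
           key=lambda s: path_metric(r, np.array(s), h))
print(best)
```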

Journal ArticleDOI
Tang Zizhou
TL;DR: Two non-existence theorems on harmonic polynomial morphisms between Euclidean spaces are shown, as discussed by the authors.
Abstract: Two non-existence theorems on harmonic polynomial morphisms between Euclidean spaces have been shown.

Journal ArticleDOI
TL;DR: In this paper, it is shown that in every four-colouring of this graph, every colour class is everywhere dense; the chromatic number of the graph is known to be 4.
Abstract: Let Q^4 denote the graph obtained from the rational points of 4-space by connecting two points iff their Euclidean distance is one. It has been known that its chromatic number is 4. We settle a problem of P. Johnson, showing that in every four-colouring of this graph, every colour class is everywhere dense.

Journal ArticleDOI
TL;DR: An efficient algorithm for determining the maxima of the relative error in approximating the Euclidean distance by an m-neighbor distance is presented, and the existence of a still simpler and more efficient algorithm is conjectured.
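For readers unfamiliar with m-neighbor distances, the sketch below uses the usual digital-geometry definition (a step may change up to m coordinates by 1) and estimates the maximum relative error against the Euclidean distance empirically over a grid; both the definition used and the brute-force search are assumptions here, not the paper's algorithm.

```python
import numpy as np
from itertools import product

def m_neighbor_distance(x, m):
    """d_m(x, 0) in n dimensions: with up to m coordinates changing by 1 per step,
    d_m = max( max_i |x_i|, ceil( sum_i |x_i| / m ) )."""
    a = np.abs(np.asarray(x))
    return max(int(a.max()), int(np.ceil(a.sum() / m)))

n, m, R = 3, 2, 20
worst = 0.0
for x in product(range(-R, R + 1), repeat=n):
    if any(x):
        de = np.sqrt(sum(c * c for c in x))                       # Euclidean distance
        worst = max(worst, abs(m_neighbor_distance(x, m) - de) / de)
print(worst)   # empirical maximum relative error on this finite grid
```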

Journal ArticleDOI
TL;DR: It is found that from the point of view of minimum distance and spectral properties, multi-T phase codes are very similar to multi-h codes and it is shown that quaternary codes give considerable improvement over binary codes.
Abstract: The minimum Euclidean distance properties, upper bounds on the minimum distance, power density spectrum, and power-bandwidth tradeoff for M-ary phase codes where K different symbol lengths are used in a cyclical manner are considered. When these lengths are all related to each other as rational numbers, they give a finite-state Markov (trellis) description of the signal. Here, K=2 is assumed. It is found that from the point of view of minimum distance and spectral properties, multi-T phase codes are very similar to multi-h codes. It is also shown that quaternary codes give considerable improvement over binary codes.

Journal ArticleDOI
TL;DR: Several properties of these metrics are presented, including closed-form analytical expressions, minimal paths and path-tracing algorithms, circles, and error estimates with respect to the Euclidean distance.

Journal ArticleDOI
TL;DR: In this article, it was shown that γ(T) is always non-empty whenever there are at least two operators in T; unlike most concepts of joint spectrum, the set γ(T) is part of real Euclidean space.
Abstract: A. McIntosh and A. Pryde introduced, and gave some applications of, the notion of a “spectral set” γ(T) associated with each finite, commuting family of continuous linear operators T in a Banach space. Unlike most concepts of joint spectrum, the set γ(T) is part of real Euclidean space. It is shown that γ(T) is always non-empty whenever there are at least two operators in T.

Journal ArticleDOI
TL;DR: Group-theoretic methods are used to obtain a system of partial differential equations for the most general curve form invariant under a given group of transformations, which yields a complete classification of both the Euclidean invariant non-adaptive curves and a well-defined sub-class of the Euclidean invariant adaptive curves in two dimensions.

Proceedings ArticleDOI
23 May 1989
TL;DR: The current Markov model-based speech-recognition system is shown to be more effective than recognizers using the Mahalanobis distance, the Euclidean distance, or the absolute distance.
Abstract: A report is presented on a novel application of the Markov model to an automatic speech-recognition system, in which the feature vectors of speech represent the states of the Markov model, the transition probability of the states is represented by a multidimensional normal density function of the feature vector, and the DP (dynamic programming) matching algorithm is used for calculating the optimum time sequence of the states. Based on experimentation with the system in a speaker-independent mode, using a vocabulary of ten Japanese single-digit numerals, the current system is shown to be more effective than recognizers using the Mahalanobis distance, the Euclidean distance, or the absolute distance.
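For context on the distance measures the system is compared against, here is a small sketch contrasting the Euclidean and Mahalanobis distances between a feature vector and a class mean; the mean, covariance and test vector are made-up values, and the system's normal-density state model and DP matching are not reproduced.

```python
import numpy as np

def euclidean_d(x, mu):
    return np.sqrt(np.sum((x - mu) ** 2))

def mahalanobis_d(x, mu, cov):
    """Mahalanobis distance sqrt((x - mu)^T Sigma^{-1} (x - mu)); it reduces to the
    Euclidean distance when Sigma is the identity."""
    diff = x - mu
    return np.sqrt(diff @ np.linalg.inv(cov) @ diff)

mu = np.array([0.0, 0.0])
cov = np.array([[4.0, 0.0], [0.0, 0.25]])   # feature 0 varies much more than feature 1
x = np.array([2.0, 0.5])
# The Mahalanobis distance discounts the deviation along the high-variance feature.
print(euclidean_d(x, mu), mahalanobis_d(x, mu, cov))   # ~2.06 vs ~1.41
```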