
Showing papers on "Euclidean distance published in 1978"


Journal ArticleDOI
TL;DR: In this article, a family of self-dual positive definite metrics which are asymptotic to Euclidean space modulo identifications under discrete subgroups of O(4) was presented.

752 citations


Journal ArticleDOI
TL;DR: Shepard's scheme for interpolation to arbitrarily spaced bivariate data, an inverse distance formula yielding an explicit global interpolant that satisfies a maximum principle and reproduces constant functions, is generalized to any Euclidean metric and extended to include partial derivative data at the interpolation points.
Abstract: Shepard developed a scheme for interpolation to arbitrarily spaced discrete bivariate data. This scheme provides an explicit global representation for an interpolant which satisfies a maximum principle and which reproduces constant functions. The interpolation method is basically an inverse distance formula which is generalized to any Euclidean metric. These techniques extend to include interpolation to partial derivative data at the interpolation points.
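A minimal sketch of the inverse-distance idea the scheme builds on is given below. The function name, the power parameter p, and the example data are assumptions, and the paper's derivative-matching and general-metric extensions are not reproduced (replacing the plain norm with a weighted one would give a different Euclidean metric).

```python
import numpy as np

def shepard_interpolate(x, points, values, p=2.0, eps=1e-12):
    """Basic inverse-distance-weighted interpolant at a query point x.

    points : (n, d) array of scattered data sites
    values : (n,)   array of data values at those sites
    p      : power applied to the Euclidean distance (p = 2 is common)
    """
    x = np.asarray(x, dtype=float)
    d = np.linalg.norm(points - x, axis=1)    # Euclidean distances to data sites
    if np.any(d < eps):                       # query coincides with a data site
        return values[np.argmin(d)]
    w = 1.0 / d**p                            # inverse-distance weights
    return np.sum(w * values) / np.sum(w)     # normalized weights reproduce constants

# Example: interpolate scattered bivariate data at (0.5, 0.5)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([0.0, 1.0, 1.0, 2.0])
print(shepard_interpolate([0.5, 0.5], pts, vals))   # 1.0 by symmetry
```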

219 citations


Journal ArticleDOI
TL;DR: In this article, a similarity generating function composed of elevation, scatter and shape parameters is presented, together with linear models for integrating these parameters into either euclidean distance or vector-product association indices.
Abstract: The aims of this paper are: (1) to present a similarity generating function composed of elevation, scatter and shape parameters; (2) to describe linear models for integrating these parameters either for euclidean distance or vector-product association indices; and (3) to suggest a computational strategy based upon the Eckart-Young (1936) theorem that has certain advantages for minimizing the effects of measurement error in estimating profile similarity. Given these developments, the investigator may differentiate the independent contribution of each parameter to more global indices of resemblance. A brief example from the classification of psychopathology is discussed.
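The elevation/scatter/shape decomposition that such similarity functions build on can be made concrete. Below is a minimal Python sketch of the standard Cronbach-Gleser-type identity relating the mean squared Euclidean difference between two profiles to their difference in elevation (means), scatter (standard deviations), and shape (correlation); the function name and example data are illustrative, and this is not the paper's specific generating function or its Eckart-Young estimation strategy.

```python
import numpy as np

def profile_distance_components(x, y):
    """Split the mean squared Euclidean difference between two score profiles
    into elevation, scatter, and shape terms:

      mean((x - y)**2) = (Mx - My)**2 + (Sx - Sy)**2 + 2*Sx*Sy*(1 - r)

    using population standard deviations and the Pearson correlation r."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    Mx, My = x.mean(), y.mean()
    Sx, Sy = x.std(), y.std()                 # population SDs (ddof=0)
    r = np.corrcoef(x, y)[0, 1]               # shape similarity
    elevation = (Mx - My) ** 2
    scatter = (Sx - Sy) ** 2
    shape = 2 * Sx * Sy * (1 - r)
    return elevation, scatter, shape

x = np.array([3.0, 5.0, 2.0, 6.0])
y = np.array([4.0, 4.0, 3.0, 7.0])
e, s, h = profile_distance_components(x, y)
print(e + s + h, np.mean((x - y) ** 2))       # the two numbers agree
```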

125 citations




Journal ArticleDOI
TL;DR: This paper describes a computational method for weighted euclidean distance scaling which combines aspects of an “analytic” solution with an approach using loss functions, and gives essentially the same solutions as INDSCAL for two moderate-size data sets tested.
Abstract: This paper describes a computational method for weighted euclidean distance scaling which combines aspects of an “analytic” solution with an approach using loss functions. We justify this new method by giving a simplified treatment of the algebraic properties of a transformed version of the weighted distance model. The new algorithm is much faster than INDSCAL yet less arbitrary than other “analytic” procedures. The procedure, which we call SUMSCAL (subjective metric scaling), gives essentially the same solutions as INDSCAL for two moderate-size data sets tested.
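For reference, the weighted euclidean distance model that INDSCAL-type procedures fit gives each subject a set of nonnegative dimension weights applied to a common stimulus space. The sketch below computes distances under that model only; the names X and w are placeholders, and the SUMSCAL fitting procedure itself is not reproduced.

```python
import numpy as np

def weighted_euclidean_distances(X, w):
    """Pairwise distances under the weighted Euclidean model for one subject.

    X : (n, r) coordinates of n stimuli in an r-dimensional group space
    w : (r,)   nonnegative dimension weights for this subject

    d_ij = sqrt( sum_a w_a * (X[i, a] - X[j, a])**2 )
    """
    X = np.asarray(X, float)
    w = np.asarray(w, float)
    diff = X[:, None, :] - X[None, :, :]          # pairwise coordinate differences
    return np.sqrt((w * diff**2).sum(axis=-1))    # (n, n) distance matrix

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(weighted_euclidean_distances(X, [1.0, 1.0]))   # ordinary Euclidean distances
print(weighted_euclidean_distances(X, [4.0, 0.25]))  # a subject stretching dimension 1
```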

60 citations


Journal ArticleDOI
TL;DR: Human face profiles are automatically recognized by computer processes, and absolute recognition accuracy ranges from 86% to nearly 100% under various conditions for a population of 121 persons having 3 poses each on file.

58 citations


Proceedings ArticleDOI
23 Aug 1978
TL;DR: This paper examines cases in which conventional clipping of homogeneous coordinates produces errors in picture generation, with line segments marked invisible when they should be visible, and presents techniques for clipping them correctly.
Abstract: Clipping is the process of determining how much of a given line segment lies within the boundaries of the display screen. Homogeneous coordinates are a convenient mathematical device for representing and transforming objects. The space represented by homogeneous coordinates is not, however, a simple Euclidean 3-space. It is, in fact, analogous to a topological shape called a “projective plane”. The clipping problem is usually solved without consideration for the differences between Euclidean space and the space represented by homogeneous coordinates. For some constructions, this leads to errors in picture generation which show up as lines marked invisible when they should be visible. This paper will examine these cases and present techniques for correctly clipping the line segments.
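The failure mode described above, and the usual remedy, can be sketched directly. The code below is a minimal illustration rather than the paper's algorithm: it clips a homogeneous segment parametrically against an assumed clip region -1 <= x/w, y/w <= 1, and then clips the negated segment as well, since (x, y, w) and (-x, -y, -w) name the same projective point and a segment whose endpoint w values differ in sign can have two visible pieces.

```python
import numpy as np

# Boundary functions for the clip region -1 <= x/w, y/w <= 1, written directly
# on homogeneous points (x, y, w): each must be >= 0 for "inside".
BOUNDS = [
    lambda p: p[2] - p[0],   # w - x >= 0   (right edge)
    lambda p: p[2] + p[0],   # w + x >= 0   (left edge)
    lambda p: p[2] - p[1],   # w - y >= 0   (top edge)
    lambda p: p[2] + p[1],   # w + y >= 0   (bottom edge)
]

def clip_once(p0, p1):
    """Parametrically clip p0 + t*(p1 - p0), t in [0, 1], against all boundaries;
    return the surviving endpoints or None if nothing survives."""
    t0, t1 = 0.0, 1.0
    for b in BOUNDS:
        b0, b1 = b(p0), b(p1)
        if b0 < 0 and b1 < 0:
            return None                     # entirely outside this boundary
        if b0 < 0:
            t0 = max(t0, b0 / (b0 - b1))    # entering point
        elif b1 < 0:
            t1 = min(t1, b0 / (b0 - b1))    # leaving point
    if t0 > t1:
        return None
    return p0 + t0 * (p1 - p0), p0 + t1 * (p1 - p0)

def clip_homogeneous(p0, p1):
    """Clip both the segment and its negation, so that a segment wrapping
    "through infinity" (endpoint w values of opposite sign) is not wrongly
    discarded; returns a list of zero, one, or two visible pieces."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    pieces = [clip_once(p0, p1), clip_once(-p0, -p1)]
    return [seg for seg in pieces if seg is not None]

# One endpoint has negative w: two visible pieces, x in [0.5, 1] and x in [-1, -0.5].
print(clip_homogeneous([0.5, 0.0, 1.0], [0.5, 0.0, -1.0]))
```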

55 citations


Book ChapterDOI
01 Jan 1978
TL;DR: In this article, a statistical procedure is discussed that recovers, from cross-sectional survey thermometer score data, the spatial configuration of candidates and citizens, the dimensionality of the issue space, and the relative salience of these issues.
Abstract: Publisher Summary This chapter discusses a statistical procedure that recovers from cross-sectional survey thermometer score data the spatial configuration of candidates and citizens, and the dimensionality of the issue space and the relative salience of these issues. Spatial theory seeks to ascertain the policies candidates adopt in an election. The principal assumption of the theory is that candidate strategies and citizen preferences can be represented in an n-dimensional Euclidean coordinate system with a Euclidean distance metric used to describe citizen utility functions. Using various multidimensional scaling methods, political scientists are beginning to study spatial models. The most relevant examples are Rusk and Weisberg's analysis of SRC thermometer data, and Rabinowitz's nonmetric algorithm for recovering candidate spatial positions from mass survey data. The chapter focuses on an alternative algebraic scaling model and a statistical methodology for estimating the parameters of a theoretical model of election competition.
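The principal assumption quoted above, that citizen utility falls off with Euclidean distance between ideal point and candidate position, can be written as a one-line scoring rule. The sketch below is only illustrative: the linear-in-squared-distance form and the constants a and b are placeholders, not the chapter's estimated model.

```python
import numpy as np

def thermometer_score(ideal_point, candidate_position, a=100.0, b=1.0):
    """Illustrative spatial-utility rule: the reported thermometer score falls
    off with squared Euclidean distance between the citizen's ideal point and
    the candidate's position.  The constants a and b are placeholders."""
    x = np.asarray(ideal_point, float)
    theta = np.asarray(candidate_position, float)
    d2 = np.sum((x - theta) ** 2)          # squared Euclidean distance
    return a - b * d2

print(thermometer_score([1.0, -0.5], [0.0, 0.0]))   # nearer candidates score higher
```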

54 citations


Journal ArticleDOI
TL;DR: Of the similarity coefficients tested, the Canberra measure and the absolute Euclidean distance were, for at least one data set, the least successful in providing classifications similar to those obtained by the Braun-Blanquet sorting technique.
Abstract: Some characteristics of the more commonly used similarity coefficients have been discussed. Coefficients with undesirable properties include: the product moment correlation coefficient, some information statistics, the relative homogeneity function, the weighted similarity coefficient, the Euclidean (absolute) distance, Gleason's, Ellenberg's and Spatz's similarity indices, and the absolute value function. Of the six coefficients that were tested with two sets of phytosociological data, the Canberra measure and the absolute Euclidean distance were the least successful, with at least one data set, in providing classifications that were similar to the classifications obtained by the Braun-Blanquet sorting technique. The standard Euclidean distance and the similarity ratio had intermediate success. The Czekanowski coefficient, especially in its relativized form (= relative absolute value function), was the most successful. This latter coefficient is cover-weighted and therefore the Canberra measure, although it has some undesirable characteristics, may be valuable for investigating relevé similarities that are based on species with low cover. The transformation of Coetzee & Werger (1973) appears to be appropriate as a conversion of the cover-abundance values. The mean cover percent values corresponding to the cover-abundance values gave poor results. The relativized Czekanowski coefficient should be suitable at the lower syntaxonomical levels. At higher levels, qualitative coefficients may be sufficient for determining similarities between syntaxa. Qualitative coefficients will not always be successful at the lower levels — this was illustrated with the test data. In this study, the clustering procedure of group average sorting was used to construct the dendrogram. It gives an average similarity value within the dendrogram groups. These values can be used to give quantitative definitions to syntaxonomic rank.
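For readers comparing the coefficients named above, the sketch below gives textbook forms of three of them, Euclidean distance, the Canberra metric, and the Czekanowski (percentage similarity) coefficient, assuming nonnegative cover values; the relativized variants and cover-abundance transformations discussed in the paper are not reproduced.

```python
import numpy as np

def euclidean_distance(x, y):
    return float(np.sqrt(np.sum((x - y) ** 2)))

def canberra_metric(x, y):
    s = x + y
    mask = s > 0                                   # skip species absent from both releves
    return float(np.sum(np.abs(x - y)[mask] / s[mask]))

def czekanowski_similarity(x, y):
    """2 * sum(min) / sum(total): 1 for identical profiles, 0 for disjoint ones."""
    return float(2 * np.sum(np.minimum(x, y)) / np.sum(x + y))

x = np.array([5.0, 0.0, 2.0, 1.0])                 # cover values for two releves
y = np.array([4.0, 1.0, 0.0, 1.0])
print(euclidean_distance(x, y), canberra_metric(x, y), czekanowski_similarity(x, y))
```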

46 citations


Book ChapterDOI
TL;DR: In this paper, the relation between stimulus analyzability and perceived dimensional structure is discussed, and the subjective independence of dimensional combinations is evaluated via the algebraic properties of the additive difference model.
Abstract: Publisher Summary This chapter discusses the relation between stimulus analyzability and perceived dimensional structure. The combination of integral dimensions yields stimuli that are phenomenologically fused or wholistic, whereas the combination of separable dimensions yields stimuli with perceptually distinct components. Operationally, integral dimensions produce a Euclidean metric in direct distance scaling, classifications based on a distance or similarity structure in a restricted classification task, a redundancy gain in speeded classification when the dimensions are correlated, and interference in a filtering task when the dimensions are orthogonal and selective attention is required. Some dimensional combinations meet these operational criteria extremely well; value and chroma of a single Munsell chip uniformly produce results that are consistent with the operational definition of integral dimensions and, similarly, size of circle and angle of radial line are clearly separable. The evaluation of the subjective independence of dimensional combinations by the algebraic properties of the additive difference model is important in drawing the distinction between integral and separable dimensions.

Journal ArticleDOI
TL;DR: For any Feynman amplitude, where any subset of invariants and/or squared masses is scaled by a real parameter λ going to zero or infinity, the existence of an expansion in powers of λ and lnλ is proved, and a method is given for determining such an expansion as discussed by the authors.
Abstract: For any Feynman amplitude, where any subset of invariants and/or squared masses is scaled by a real parameter λ going to zero or infinity, the existence of an expansion in powers of λ and lnλ is proved, and a method is given for determining such an expansion. This is shown quite generally in euclidean metric, whatever the external momenta (exceptional or not) and the internal masses (vanishing or not) may be, and for some simple cases in minkowskian metric, assuming only finiteness of the — eventually renormalized — amplitude before scaling. The method uses what is called “Multiple Mellin representation”, the validity of which is related to a “generalized power-counting” theorem.

Journal ArticleDOI
James L. Blue1
TL;DR: This paper describes a successful version of a subprogram to find the Euclidean norm of an n-vector that is accurate, efficient, and portable, and that avoids all overflows and underflows.
Abstract: A set of Fortran subprograms for performing the basic operations of linear algebra [4, 5, 6] should include a subprogram to find the Euclidean norm of an n-vector, $\|x\| = (\sum_{i=1}^{n} x_i^2)^{1/2}$. Such a subprogram should be accurate and efficient, and should avoid all overflows and underflows. The problem appears much easier than it is. Preliminary versions of the subprogram, by several authors, failed at least two of these requirements. This paper describes a successful version which is also portable. All machine-dependent constants are combinations of the basic machine constants defined by Fox et al. [3]; therefore the programs are portable. A program incorporating the algorithm is included. To avoid overflow, large $x_i$ must be scaled down. Let $R$ be the largest positive floating-point number representable on the computer being used. Then for any $x_i$ such that $|x_i| > R^{1/2}$, $x_i^2$ will overflow, although $\|x\|$ may not overflow. A simple way of avoiding overflow is the following. Let
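The abstract breaks off while introducing the simple scaling device; the idea it points to, dividing by the largest magnitude before summing squares, can be sketched as follows. This is only the naive scaled formulation, not Blue's full algorithm, and the function name is illustrative.

```python
import numpy as np

def safe_euclidean_norm(x):
    """Euclidean norm with scaling to avoid overflow:
    ||x|| = s * sqrt(sum((x / s)**2)) with s = max |x_i|.
    This is the simple scaled formulation the abstract alludes to,
    not Blue's published algorithm."""
    x = np.asarray(x, dtype=float)
    s = np.max(np.abs(x))
    if s == 0.0:
        return 0.0
    return s * np.sqrt(np.sum((x / s) ** 2))

big = np.full(3, 1e200)                      # naive sum of squares would overflow
print(safe_euclidean_norm(big))              # ~1.732e200
print(np.sqrt(np.sum(big ** 2)))             # inf: the naive formula fails
```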

Journal ArticleDOI
TL;DR: These experiments investigated two characteristics of subjects’ multidimensional representations, their dimensional organization and metric structure, for both analyzable and integral stimuli, and assessed dimensional versus metric structure as an indicator of stimulus analyzability.
Abstract: The present experiments investigated two characteristics of subjects’ multidimensional representations: their dimensional organization and metric structure, for both analyzable and integral stimuli. In Experiment 1, subjects judged the dissimilarity between all pairs of stimuli differing in brightness and size (analyzable stimuli), while in Experiment 2, subjects made dissimilarity judgments for stimuli varying in width-height and area-shape (integral stimuli). For the brightness-size stimuli, the findings that (a) brightness judgments were independent of size (and vice versa) and (b) the best fitting scaling solution was one that depicted an orthogonal structure are strong evidence that subjects perceived brightness-size as a dimensionally organized structure. In contrast, for the rectangle stimuli, neither width-height nor area-shape contributed additively to overall dissimilarity. The results of the metric fitting were more equivocal. For all stimulus sets, the Euclidean metric yielded scaling solutions with lower stress values than the city-block metric. When bidimensional ratings were regressed on unidimensional ratings, the city-block metric yielded a slightly higher correlation coefficient than the Euclidean metric for brightness-size stimuli. The two rules of combination were equivalent for the width-height stimuli, but the Euclidean metric provided a better approximation for the area-shape stimuli. The results were discussed in terms of how subjects integrate physical dimensions for the case of integral stimuli and the superiority of dimensional vs. metric structure as an indicator of stimulus analyzability.
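The two combination rules compared in these scaling studies are special cases of the Minkowski power metric, with r = 1 the city-block rule and r = 2 the Euclidean rule. The sketch below computes distances under both; it is illustrative only and does not reproduce the stress-based fitting used in the experiments.

```python
import numpy as np

def minkowski_distance(u, v, r):
    """Minkowski power metric: r = 1 is the city-block rule, r = 2 the Euclidean rule."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(np.sum(np.abs(u - v) ** r) ** (1.0 / r))

# Two stimuli differing on two subjective dimensions (e.g., brightness and size)
a, b = [1.0, 2.0], [4.0, 6.0]
print(minkowski_distance(a, b, 1))   # 7.0  (dimensional differences simply add)
print(minkowski_distance(a, b, 2))   # 5.0  (Euclidean combination)
```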



Journal ArticleDOI
TL;DR: In this paper, a substantially improved bound for the difference between the least squares and the best linear unbiased estimators in a linear model with nonsingular covariance structure is presented.
Abstract: Haberman's bound for a norm of the difference between the least squares and the best linear unbiased estimators in a linear model with nonsingular covariance structure is examined in the particular case when the vector norm involved is taken to be the Euclidean one. In this frequently occurring case, a new, substantially improved bound is developed which, furthermore, is applicable regardless of any additional condition.

Journal ArticleDOI
TL;DR: In this article, an algorithm was developed for finding the point of smallest Euclidean norm in a compact polyhedral set; it requires knowledge of only those vertices of the set that are adjacent to a current reference vertex.


Journal ArticleDOI
TL;DR: In this paper, using the configuration centroid operator as the unperturbed Hamiltonian is found to yield a residual interaction with minimal Euclidean norm; the effective interaction is expanded in terms of orthogonal polynomials appropriate to the configuration density.

Journal ArticleDOI
TL;DR: In this paper, an infinite set of independent covariant local non-polynomial continuity equations for the classical non-linear chiral O(n) models in two-dimensional Euclidean space is derived.
Abstract: We derive an infinite set of independent covariant local non-polynomial continuity equations for the classical non-linear chiral O(n) models in two-dimensional Euclidean space.



Journal ArticleDOI
TL;DR: A model is put forward to demonstrate that subjects need not combine dimensions directly in order to generate dissimilarity judgments that are very close to either the city-block or Euclidean metrics.
Abstract: Carvellas and Schneider [J. Acoust. Soc. Am. 51, 1839–1848 (1972)], using magnitude estimates of the dissimilarity of sinusoidal tones in a multidimensional scaling program, found that their data fit a city‐block and a Euclidean metric equally well, but other research has indicated that subjects do not combine dimensions of sinusoidal tones to arrive at an overall estimate of stimulus similarity. Yet multidimensional scaling seems to require subjects to combine dimensions of similarity. In this letter a model is put forward to demonstrate that subjects need not combine dimensions directly in order to generate dissimilarity judgments that are very close to either the city‐block or Euclidean metrics. Thus it is possible that while subjects do not combine the dimensions perceptually they can adopt a cognitive strategy that leads one to believe they do.

Journal ArticleDOI
TL;DR: In this article, the authors give an overview of elementary number theory and its history, drawing on standard texts such as those of Burton, Niven and Zuckerman, Ore, Pettofrezzo and Byrkit, and Stewart.
Abstract: [1] D. M. Burton, Elementary Number Theory, Allyn and Bacon, Boston, 1976. [2] I. Niven and H. S. Zuckerman, An Introduction to the Theory of Numbers, Wiley, New York, 1966. [3] O. Ore, Number Theory and Its History, McGraw-Hill, New York, 1948. [4] A. J. Pettofrezzo and D. R. Byrkit, Elements of Number Theory, Prentice-Hall, Englewood Cliffs, N.J., 1970. [5] B. M. Stewart, Theory of Numbers, Macmillan, Boston, 1964.



Journal ArticleDOI
TL;DR: In this paper, the authors derived the Feynman rules for the complex phi 4 theory using the Euclidean path integral formulation, which is the basis for the present paper.
Abstract: The author derives the Feynman rules for the complex phi 4 theory using the Euclidean path integral formulation.