
Showing papers on "Metric (mathematics)" published in 1989


Journal ArticleDOI
01 Jan 1989
TL;DR: Experiments in which distance is applied to pairs of concepts and to sets of concepts in a hierarchical knowledge base show the power of hierarchical relations in representing information about the conceptual distance between concepts.
Abstract: Motivated by the properties of spreading activation and conceptual distance, the authors propose a metric, called distance, on the power set of nodes in a semantic net. Distance is the average minimum path length over all pairwise combinations of nodes between two subsets of nodes. Distance can be successfully used to assess the conceptual distance between sets of concepts when used on a semantic net of hierarchical relations. When other kinds of relationships, like 'cause', are used, distance must be amended but can then again be effective. The judgements of distance correlate significantly with the distance judgements that people make and help to determine whether one semantic net is better or worse than another. The authors also examine the mathematical characteristics of distance, presenting novel cases and interpretations. Experiments in which distance is applied to pairs of concepts and to sets of concepts in a hierarchical knowledge base show the power of hierarchical relations in representing information about the conceptual distance between concepts.
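
The measure itself is simple to state: the average of the shortest path lengths over all pairs drawn from the two node sets. A minimal sketch follows, assuming an unweighted, undirected semantic net stored as an adjacency dict; the example graph and helper names are ours, not the paper's.

```python
from collections import deque
from itertools import product

def shortest_path_length(graph, src, dst):
    """BFS shortest path length in an undirected semantic net; None if unreachable."""
    if src == dst:
        return 0
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        for nbr in graph.get(node, ()):
            if nbr == dst:
                return d + 1
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, d + 1))
    return None

def distance(graph, set_a, set_b):
    """Average minimum path length over all pairs (a, b) with a in set_a, b in set_b."""
    lengths = [shortest_path_length(graph, a, b) for a, b in product(set_a, set_b)]
    lengths = [l for l in lengths if l is not None]
    return sum(lengths) / len(lengths) if lengths else float("inf")

# Example: a tiny IS-A hierarchy, made bidirectional for path-length purposes
net = {"animal": ["mammal", "bird"], "mammal": ["dog", "cat"],
       "bird": ["sparrow"], "dog": [], "cat": [], "sparrow": []}
undirected = {k: set(v) for k, v in net.items()}
for k, vs in net.items():
    for v in vs:
        undirected.setdefault(v, set()).add(k)
print(distance(undirected, {"dog"}, {"sparrow"}))  # 4.0
```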

1,962 citations


Book
27 Oct 1989
TL;DR: The book treats the geometry of spaces with an indefinite metric, fundamental classes of operators in such spaces, invariant semi-definite subspaces, spectral topics and applications, and the theory of extensions of isometric and symmetric operators.
Abstract: Contents: the geometry of spaces with an indefinite metric; fundamental classes of operators in spaces with an indefinite metric; invariant semi-definite subspaces; spectral topics and some applications; theory of extensions of isometric and symmetric operators in spaces with an indefinite metric.

523 citations



Proceedings ArticleDOI
05 Nov 1989
TL;DR: It is demonstrated that the ratio cut algorithm can locate the clustering structures in the circuit and achieves as much as 70% improvement over the Kernighan-Lin algorithm in terms of the proposed ratio metric.
Abstract: A partitioning approach called ratio cut is proposed. The authors demonstrate that the ratio cut algorithm can locate the clustering structures in the circuit. Finding the optimal ratio cut is NP-complete; however, in certain cases the ratio cut can be solved by linear programming techniques via the multicommodity flow problem. They also propose a fast heuristic algorithm running in linear time with respect to the number of pins in the circuit. Experiments show good results in all tested cases, with as much as 70% improvement over the Kernighan-Lin algorithm in terms of the proposed ratio metric.
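
The ratio-cut objective itself is easy to evaluate for a given partition; the sketch below assumes the commonly used definition cut(A, B) / (|A| * |B|) (the exact normalization in the paper may differ).

```python
def ratio_cut_value(edges, part_a, part_b):
    """Ratio-cut cost of a two-way partition: (edges crossing the cut) / (|A| * |B|).
    `edges` is an iterable of (u, v) pairs; the partition sets must be disjoint."""
    a, b = set(part_a), set(part_b)
    cut = sum(1 for u, v in edges if (u in a and v in b) or (u in b and v in a))
    return cut / (len(a) * len(b))

# Example: two dense clusters joined by a single edge
edges = [(1, 2), (2, 3), (1, 3),        # cluster {1, 2, 3}
         (4, 5), (5, 6), (4, 6),        # cluster {4, 5, 6}
         (3, 4)]                        # the only crossing edge
print(ratio_cut_value(edges, {1, 2, 3}, {4, 5, 6}))  # 1/9 ≈ 0.111
```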

221 citations


Journal ArticleDOI
TL;DR: For positive two-dimensional matrices, Hilbert's projective metric and a theorem of G. Birkhoff are used to prove that Sinkhorn's original iterative procedure converges geometrically; the ratio of convergence is estimated from the given data as discussed by the authors.
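
The iterative procedure in question is Sinkhorn's alternating row/column scaling of a positive matrix. A minimal numpy sketch is given below; the function name and stopping rule are ours, and the geometric-rate analysis via Hilbert's projective metric is not reproduced here.

```python
import numpy as np

def sinkhorn_balance(A, tol=1e-12, max_iter=1000):
    """Sinkhorn's alternating row/column normalization of a positive matrix.
    Returns diagonal scalings r, c and the (approximately) doubly stochastic
    matrix diag(r) @ A @ diag(c)."""
    A = np.asarray(A, dtype=float)
    r = np.ones(A.shape[0])
    c = np.ones(A.shape[1])
    for _ in range(max_iter):
        r_new = 1.0 / (A @ c)          # rescale rows
        c_new = 1.0 / (A.T @ r_new)    # rescale columns
        if np.allclose(r_new, r, rtol=tol) and np.allclose(c_new, c, rtol=tol):
            r, c = r_new, c_new
            break
        r, c = r_new, c_new
    return r, c, np.diag(r) @ A @ np.diag(c)

A = np.array([[2.0, 1.0], [1.0, 3.0]])
_, _, S = sinkhorn_balance(A)
print(S.sum(axis=0), S.sum(axis=1))   # both close to [1, 1]
```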

209 citations


Proceedings ArticleDOI
01 Aug 1989
TL;DR: The ARPANET routing metric was revised in July 1987, resulting in substantial performance improvements, especially in terms of user delay and effective network capacity, and a move away from the strict delay metric.
Abstract: The ARPANET routing metric was revised in July 1987, resulting in substantial performance improvements, especially in terms of user delay and effective network capacity. These revisions affect only the individual link costs (or metrics) on which the PSN (packet switching node) bases its routing decisions; they do not affect the SPF ("shortest path first") algorithm employed to compute routes (installed in May 1979). The previous link metric was packet delay averaged over a ten-second interval, which performed effectively under light-to-moderate traffic conditions. In heavily loaded networks, however, it led to routing instabilities and wasted link and processor bandwidth. The revised metric constitutes a move away from the strict delay metric: it acts like a delay-based metric under light loads and like a capacity-based metric under heavy loads. It will not always result in shortest-delay paths. Since the delay metric produced shortest-delay paths only under conditions of light loading, the revised metric gives up the guarantee of shortest-delay paths under light traffic for the sake of vastly improved performance under heavy traffic.
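
The abstract describes the revised metric's qualitative behaviour but not its formula. Purely as an illustration of a link cost that interpolates between delay-based and capacity-based behaviour as load increases, here is a hypothetical blend; it is our own toy function, not the actual 1987 ARPANET link cost.

```python
def illustrative_link_cost(measured_delay, utilization, capacity_cost):
    """Hypothetical link cost that behaves like a delay metric under light load
    and like a fixed capacity-based cost under heavy load.
    NOT the actual revised ARPANET metric - purely an illustration of the idea."""
    # weight shifts smoothly from delay toward the capacity term as utilization grows
    w = min(max(utilization, 0.0), 1.0)
    return (1.0 - w) * measured_delay + w * capacity_cost

# Lightly loaded: cost tracks measured delay; heavily loaded: cost saturates
print(illustrative_link_cost(measured_delay=10.0, utilization=0.1, capacity_cost=60.0))  # 15.0
print(illustrative_link_cost(measured_delay=90.0, utilization=0.9, capacity_cost=60.0))  # 63.0
```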

198 citations


Journal ArticleDOI
TL;DR: A fast, combined solution is presented to the fundamental problems of alignment, phylogeny reconstruction, and reconstruction of ancestral sequences for homologous sequences; a novel feature is the introduction of the concept of sequence graphs.
Abstract: Among the fundamental problems in molecular evolution and in the analysis of homologous sequences are alignment, phylogeny reconstruction, and the reconstruction of ancestral sequences. This paper presents a fast, combined solution to these problems. The new algorithm gives an approximation to the minimal history in terms of a distance function on sequences. The distance function is a minimal weighted path length constructed from substitutions and insertions/deletions of segments of any length. Substitutions are weighted with an arbitrary metric on the set of nucleotides or amino acids, and indels are weighted with a gap penalty function of the form g_k = a + b·k, where k is the length of the indel and a and b are two positive numbers. A novel feature is the introduction of the concept of sequence graphs and a generalization of the traditional dynamic sequence comparison algorithm to the comparison of sequence graphs. Sequence graphs ease several computational problems: they are used to represent large sets of sequences that can then be compared simultaneously, and they allow the handling of multiple, equally good alignments, where previous methods were forced to make arbitrary choices. The method was implemented in a C program and first tested on 22 5S RNA sequences.
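
A minimal sketch of the distance function described above, using the standard Gotoh-style three-state dynamic program; the sub_cost argument stands in for the paper's arbitrary substitution metric, and sequence graphs are not handled here.

```python
import math

def weighted_edit_distance(x, y, sub_cost, a, b):
    """Minimal weighted path length between sequences x and y: substitutions are
    weighted by sub_cost(p, q), and an indel of length k costs g_k = a + b*k."""
    INF = math.inf
    n, m = len(x), len(y)
    M = [[INF] * (m + 1) for _ in range(n + 1)]   # last pair aligned
    X = [[INF] * (m + 1) for _ in range(n + 1)]   # gap in y (x char deleted)
    Y = [[INF] * (m + 1) for _ in range(n + 1)]   # gap in x (y char inserted)
    M[0][0] = 0.0
    for i in range(1, n + 1):
        X[i][0] = a + b * i
    for j in range(1, m + 1):
        Y[0][j] = a + b * j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            M[i][j] = sub_cost(x[i-1], y[j-1]) + min(M[i-1][j-1], X[i-1][j-1], Y[i-1][j-1])
            X[i][j] = min(M[i-1][j] + a + b, X[i-1][j] + b, Y[i-1][j] + a + b)
            Y[i][j] = min(M[i][j-1] + a + b, Y[i][j-1] + b, X[i][j-1] + a + b)
    return min(M[n][m], X[n][m], Y[n][m])

# 0/1 metric on nucleotides; a gap of length k costs 2 + 1*k
print(weighted_edit_distance("ACGT", "AGT", lambda p, q: 0.0 if p == q else 1.0, a=2.0, b=1.0))  # 3.0
```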

192 citations


Journal ArticleDOI
TL;DR: The use of two's complement arithmetic is proposed as an alternative to the rescaling method; it avoids any rescaling subtractions within the metric update loop.
Abstract: In the Viterbi algorithm, the negative log-likelihood estimates, accumulated distances, or path metrics are unboundedly increasing functions of time. For implementation, all variables must be confined to a finite range. The following properties of the Viterbi algorithm can be exploited for this purpose: (1) path selection depends only on differences of metrics, and (2) the difference between metrics is bounded. In the rescaling scheme, at each iteration the minimum metric is subtracted from all metrics. The use of two's complement arithmetic is proposed as an alternative to the rescaling method. This scheme avoids any kind of rescaling subtraction. Obvious advantages in implementation are hardware savings and a speedup inside the metric update loop, which is critical to the decoder's computational throughput.
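
The key observation is that bounded metric differences let path metrics be stored modulo 2^n and compared by the sign of their two's complement difference, with no rescaling subtraction. A small Python sketch of the arithmetic (WIDTH is an assumed register width):

```python
WIDTH = 8                      # number of bits used to store each path metric
MOD = 1 << WIDTH               # metrics are stored modulo 2**WIDTH

def modular_less_equal(m1, m2):
    """Compare two path metrics stored modulo 2**WIDTH.  Valid whenever the true
    difference is smaller than 2**(WIDTH-1), as guaranteed by the bounded
    metric-spread property; no rescaling subtraction is needed."""
    diff = (m1 - m2) % MOD
    # interpret diff as a signed two's complement number
    if diff >= MOD // 2:
        diff -= MOD
    return diff <= 0

# The stored values have wrapped around, but the comparison is still correct:
true_a, true_b = 250, 260                               # true accumulated metrics
print(modular_less_equal(true_a % MOD, true_b % MOD))   # True  (250 <= 260)
print(modular_less_equal(true_b % MOD, true_a % MOD))   # False
```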

189 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the metric regularity, openness, and Lipschitzian behavior of multivalued (set-valued) maps.
Abstract: We consider the properties of metric regularity, openness, and Lipschitzian behavior of multivalued maps.

150 citations


Journal ArticleDOI
15 Jan 1989-Genome
TL;DR: The mutation–selection balance, genetic variability, the continuum-of-alleles model, and the house-of-cards approximation are reviewed, with the intention of simplifying the theory as much as possible.
Abstract: Metric characters closely connected with fitness have little additive genetic variability, presumably because it is quickly exhausted under continuous directional selection on fitness. Other metric...

104 citations


Journal ArticleDOI
TL;DR: This paper considers two planar facility location problems while employing the Manhattan travel metric, and establishes that the search for an optimal solution can be restricted to a finite set of easily identifiable points.
Abstract: This paper considers two planar facility location problems while employing the Manhattan travel metric. We first consider the p-median problem in the presence of arbitrarily shaped barriers and convex forbidden regions. For this problem we establish that the search for an optimal solution can be restricted to a finite set of easily identifiable points. Next, we consider the stochastic queue median problem in the presence of arbitrarily shaped barriers. A procedure to obtain a global optimum solution for this problem is established. The results of the paper are illustrated via numerical examples. Finally, we comment on a connection between network location problems and planar location problems which use the Manhattan travel metric.
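
For the barrier-free special case, the finite candidate set is simply the grid induced by the demand points; the brute-force sketch below works under that simplification (barriers, forbidden regions, and the stochastic queue median are beyond this illustration).

```python
from itertools import combinations

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def p_median_rectilinear(demands, p):
    """Brute-force p-median under the Manhattan metric, ignoring barriers.
    Candidates are restricted to intersections of the horizontal and vertical
    lines through the demand points (a finite candidate set)."""
    xs = sorted({x for x, _ in demands})
    ys = sorted({y for _, y in demands})
    candidates = [(x, y) for x in xs for y in ys]
    best = None
    for facilities in combinations(candidates, p):
        cost = sum(min(manhattan(d, f) for f in facilities) for d in demands)
        if best is None or cost < best[0]:
            best = (cost, facilities)
    return best

demands = [(0, 0), (4, 0), (0, 4), (4, 4), (10, 10)]
print(p_median_rectilinear(demands, p=2))   # (16, ((0, 0), (10, 10)))
```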


Proceedings ArticleDOI
14 May 1989
TL;DR: The author describes an algorithm that takes two polygons as input, and computes a representation of their corresponding configuration-space obstacle, including contact information, which is exact up to the limits of floating-point arithmetic.
Abstract: The author describes an algorithm that takes two polygons as input, and computes a representation of their corresponding configuration-space obstacle, including contact information. The algorithm's output includes a full metric and topological description of the obstacle surface, as well as the set of polygon features that are in contact for each point of the surface. The representation is exact, up to the limits of floating-point arithmetic. The algorithm has been implemented and test-run on over 40 input pairs; run times varied between 12 and 135 s.
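
For the simplified case where both polygons are convex, the configuration-space obstacle of a translating robot is the Minkowski sum of the obstacle with the reflected robot. A small sketch of that special case follows; the paper's algorithm also records which polygon features are in contact at each boundary point, which is not attempted here.

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull (counter-clockwise, no collinear points)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def cspace_obstacle_convex(robot, obstacle):
    """C-space obstacle of a translating convex robot amid a convex obstacle:
    the Minkowski sum obstacle + (-robot), computed here as the hull of all
    pairwise vertex differences (simplified convex case only)."""
    sums = [(ox - rx, oy - ry) for ox, oy in obstacle for rx, ry in robot]
    return convex_hull(sums)

robot = [(0, 0), (1, 0), (0, 1)]                  # small triangle (reference at origin)
obstacle = [(3, 3), (5, 3), (5, 5), (3, 5)]       # square
print(cspace_obstacle_convex(robot, obstacle))
```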

Journal ArticleDOI
TL;DR: In this paper, the authors study flat and nonflat static models obeying the Brans-Dicke theory and RW metric, including the case which considers Bertolami's time-dependent cosmological term.
Abstract: We study flat and nonflat static models obeying the Brans-Dicke theory and RW metric, including the case which considers Bertolami's time-dependent cosmological term. We find several solutions where the density remains constant, while the gravitational constant varies with time.

Journal ArticleDOI
TL;DR: A metric based on pershapes is introduced that provides a quantitative way of measuring how similar two machines are in terms of their performance distributions.
Abstract: Measurements are presented for a large number of machines ranging from small workstations to supercomputers. The authors combine these measurements into groups of parameters which relate to specific aspects of the machine implementation, and use these groups to provide overall machine characterizations. The authors also define the concept of pershapes, which represent the level of performance of a machine for different types of computation. A metric based on pershapes is introduced that provides a quantitative way of measuring how similar two machines are in terms of their performance distributions. The metric is related to the extent to which pairs of machines have varying relative performance levels depending on which benchmark is used.
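
The pershape metric itself is defined from the machines' performance distributions. As a much-simplified, hypothetical analogue, one can measure how far two machines are from differing by a single constant speed factor across benchmarks:

```python
import numpy as np

def pershape_dissimilarity(perf_a, perf_b):
    """Hypothetical, simplified analogue of a pershape-style metric: the spread
    (std. dev.) of the log performance ratio across benchmarks.  Zero means the
    two machines differ only by a constant speed factor; larger values mean the
    relative performance depends strongly on which benchmark is used."""
    ratio = np.log(np.asarray(perf_a, float) / np.asarray(perf_b, float))
    return float(ratio.std())

# Benchmark scores (higher = faster) for hypothetical machines
workstation   = [1.0, 2.0, 0.5, 4.0]
supercomputer = [10.0, 20.0, 5.0, 40.0]      # uniformly 10x faster -> distance 0
vector_machine = [2.0, 40.0, 0.6, 80.0]      # a different performance profile
print(pershape_dissimilarity(workstation, supercomputer))   # 0.0
print(pershape_dissimilarity(workstation, vector_machine))
```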

Journal ArticleDOI
TL;DR: Calculations with a metric matrix distance geometry algorithm show that the standard implementation of the algorithm samples a very limited region of conformational space and tends to produce similar structures rather than sampling all structures consistent with the input distance information.
Abstract: Calculations with a metric matrix distance geometry algorithm were performed that show that the standard implementation of the algorithm generally samples a very limited region of conformational space. This problem is most severe when only a small amount of distance information is used as input for the algorithm. Control calculations were performed on linear peptides, disulfide-linked peptides, and a double-stranded DNA decamer where only distances defining the covalent structures of the molecules (as well as the hydrogen bonds for the base pairs in the DNA) were included as input. Since the distance geometry algorithm is commonly used to generate structures of biopolymers from distance data obtained from NMR experiments, simulations were performed on the small globular protein basic pancreatic trypsin inhibitor (BPTI) that mimic calculations performed with actual NMR data. The results on BPTI and on the control peptides indicate that the standard implementation of the algorithm has two main problems: first, that it generates extended structures; second, that it has a tendency to consistently produce similar structures instead of sampling all structures consistent with the input distance information. These results also show that use of a simple root-mean-square deviation for evaluating the quality of the structures generated from NMR data may not be generally appropriate. The main sources of these problems are identified, and our results indicate that the problems are not a fundamental property of the distance geometry algorithm but arise from the implementations presently used to generate structures from NMR data. Several possible methods for alleviating these problems are discussed.
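
The core step of the metric matrix approach, turning a distance matrix into coordinates via double centering and an eigendecomposition, looks as follows. This is a sketch of just the embedding step with exact distances; the NMR workflow with distance bounds, bound smoothing, and refinement is not reproduced.

```python
import numpy as np

def metric_matrix_embed(D, dim=3):
    """Convert a complete, exact distance matrix D into coordinates: form the
    Gram ('metric') matrix by double centering, then embed using the leading
    eigenvectors."""
    D = np.asarray(D, float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ (D ** 2) @ J                 # metric (Gram) matrix
    evals, evecs = np.linalg.eigh(G)
    idx = np.argsort(evals)[::-1][:dim]         # largest eigenvalues first
    L = np.clip(evals[idx], 0.0, None)
    return evecs[:, idx] * np.sqrt(L)

# Exact pairwise distances of a unit square embed back into the plane
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = metric_matrix_embed(D, dim=2)
D_rec = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(np.allclose(D, D_rec))                    # True
```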


Journal ArticleDOI
TL;DR: In this paper, the four-loop (O(α'^4)) metric β-function for the two-dimensional bosonic non-linear σ-model was computed using an indirect method, which avoids a large part of the explicit calculation and as a byproduct yields the corresponding four-loop low-energy string effective action.

Journal ArticleDOI
TL;DR: In this article, the authors quantify the information loss incurred by marketing researchers' ad hoc use of an ordered categorical scale Z in place of the more refined continuous variable X, and show that the loss is small (less than 10%) when the Z scale is well designed with at least five categories.
Abstract: We quantify the information loss incurred by categorizing an unobserved continuous variable X into an ordered categorical scale Z. The continuous variable is conceptualized as a true score τ, which varies across individuals, plus random error e, with both components assumed to be normally distributed. The index of metric quality is operationalized as r²(Z,τ)/r²(X,τ), where r², the squared correlation coefficient, is a descriptive measure of the power of X or Z to predict τ. The index is useful in defining limits on explanatory power (population R²) in multiple regression models in which an ordered categorical variable is regressed against a set of predictors. The index can also be used to correct correlations for the effects of ordered categorical measurement. The index of metric quality is extended to the case when several ordered categorical scales are averaged, as in the multi-item measurement of a construct. We prove theoretically that as long as the error variance is "large," the index of metric quality for the average Z̄ of ordered categorical scales goes to 1 as the number of scales becomes "large." The index for averaged data is useful in answering questions such as whether the measurement of a construct by averaging three 5-point scales is better or worse than the measurement obtained by averaging five 3-point scales. The results indicate that the loss of information from the ad hoc use of Z, as opposed to the more refined X, is small (less than 10%) when the Z scale is well designed with at least five categories. The loss would be even smaller when a multi-item average Z̄ is employed.
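
The index is straightforward to estimate by simulation. The sketch below uses one reasonable design (standard-normal true score, equal-probability cutpoints); the paper's analysis of cutpoint placement and multi-item averages is not reproduced.

```python
import numpy as np

def index_of_metric_quality(n=200_000, error_sd=1.0, categories=5, seed=0):
    """Monte Carlo estimate of r^2(Z, tau) / r^2(X, tau): tau ~ N(0,1) is the true
    score, X = tau + e is the continuous measure, and Z categorizes X into
    `categories` ordered classes at equal-probability cutpoints."""
    rng = np.random.default_rng(seed)
    tau = rng.standard_normal(n)
    X = tau + error_sd * rng.standard_normal(n)
    cuts = np.quantile(X, np.linspace(0, 1, categories + 1)[1:-1])
    Z = np.digitize(X, cuts)                       # ordered categories 0 .. categories-1
    r2 = lambda u, v: np.corrcoef(u, v)[0, 1] ** 2
    return r2(Z, tau) / r2(X, tau)

print(index_of_metric_quality(categories=5))       # around 0.9: under 10% information loss
print(index_of_metric_quality(categories=2))       # a coarser scale loses noticeably more
```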

Journal ArticleDOI
TL;DR: In this paper, a perturbative scheme to analyze the evolution of both metric and scalar field perturbations in an expanding universe is developed, which is applied to study chaotic inflation with initial metric and Scalar Field perturbation present.

Journal ArticleDOI
TL;DR: An early algorithm by Hanan is shown to have an O(n log n) time implementation using computational geometry techniques, and an extensive review of proposed heuristics is given.
Abstract: A fundamental problem in circuit design is how to connect n points in the plane, to make them electrically common, using the least amount of wire. The tree formed, a Steiner tree, is usually constructed with respect to the rectilinear metric. The problem is known to be NP-complete; an extensive review of proposed heuristics is given. An early algorithm by Hanan is shown to have an O(n log n) time implementation using computational geometry techniques. The algorithm can be modified to do sequential searching in O(n^2) total time. However, it is shown that the latter approach runs in O(n^(3/2)) expected time, for n points selected from an m × m grid. Empirical results are presented for problems up to 10,000 points.
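
A compact illustration of the two ingredients, Hanan's candidate grid and a rectilinear MST, combined into a simple greedy Steiner heuristic. This is a toy 1-Steiner-style variant for illustration only, not Hanan's O(n log n) algorithm or the paper's implementation.

```python
from itertools import product

def rect_dist(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def mst_length(points):
    """Length of a minimum spanning tree under the rectilinear metric (Prim)."""
    points = list(points)
    total = 0
    best = {p: rect_dist(points[0], p) for p in points[1:]}
    while best:
        p = min(best, key=best.get)
        total += best.pop(p)
        for q in best:
            best[q] = min(best[q], rect_dist(p, q))
    return total

def hanan_grid(terminals):
    """Hanan's candidate Steiner points: intersections of the horizontal and
    vertical lines through the terminals (excluding the terminals themselves)."""
    xs = sorted({x for x, _ in terminals})
    ys = sorted({y for _, y in terminals})
    return [p for p in product(xs, ys) if p not in set(terminals)]

def greedy_steiner(terminals):
    """Toy heuristic: repeatedly add the Hanan point that most reduces the
    rectilinear MST length, stopping when no candidate helps."""
    tree_pts = list(terminals)
    while True:
        base = mst_length(tree_pts)
        gains = [(base - mst_length(tree_pts + [s]), s) for s in hanan_grid(terminals)]
        if not gains or max(gains)[0] <= 0:
            return tree_pts, base
        tree_pts.append(max(gains)[1])

print(greedy_steiner([(0, 0), (4, 0), (2, 3)]))   # adds (2, 0); tree length drops from 9 to 7
```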

Journal ArticleDOI
TL;DR: In this paper, a unified statistical and phenomenological approach to the geometrization of classical thermodynamics is proposed, where any r-parameter probability distribution function leads to a Riemannian metric on the parameter space with components of the metric tensor represented by fluctuations of the associated stochastic variables.
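
The construction presumably in view is the standard information (Fisher-Rao) metric, stated here as a hedged reminder rather than a quotation from the paper:

```latex
g_{ij}(\theta) \;=\; \mathbb{E}\!\left[\partial_i \log p(x;\theta)\,\partial_j \log p(x;\theta)\right],
\qquad i, j = 1, \dots, r .
```

For a Gibbs (exponential) family p(x;θ) ∝ exp(−∑_i θ^i X_i(x)), this reduces to the covariance of the fluctuations, g_ij = ⟨δX_i δX_j⟩, consistent with the statement that the metric components are represented by fluctuations of the stochastic variables.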

Proceedings Article
01 Dec 1989
TL;DR: In this paper, the authors extend network-based methods of constraint satisfaction to include continuous variables, thus providing a framework for processing temporal constraints, and present algorithms for performing the following reasoning tasks: finding all feasible times that a given event can occur, finding all possible relationships between two given events and generating one or more scenarios consistent with the information provided.
Abstract: This paper extends network-based methods of constraint satisfaction to include continuous variables, thus providing a framework for processing temporal constraints. In this framework, called temporal constraint satisfaction problem (TCSP), variables represent time points and temporal information is represented by a set of unary and binary constraints, each specifying a set of permitted intervals. The unique feature of this framework lies in permitting the processing of metric information, namely, assessments of time differences between events. We present algorithms for performing the following reasoning tasks: finding all feasible times that a given event can occur, finding all possible relationships between two given events, and generating one or more scenarios consistent with the information provided. We distinguish between simple temporal problems (STPs) and general temporal problems, the former admitting at most one interval constraint on any pair of time points. We show that the STP, which subsumes the major part of Vilain and Kautz's point algebra, can be solved in polynomial time. For general TCSPs, we present a decomposition scheme that performs the three reasoning tasks considered, and introduce a variety of techniques for improving its efficiency. We also study the applicability of path consistency algorithms as preprocessing of temporal problems, demonstrate their termination and bound their complexities.
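
The polynomial-time claim for STPs rests on a shortest-path reduction: each constraint a ≤ x_j − x_i ≤ b contributes an edge of weight b from i to j and an edge of weight −a from j to i in a distance graph, and the STP is consistent iff that graph has no negative cycle. A minimal Floyd-Warshall sketch (the encoding details and example network are ours):

```python
import math

def stp_minimal_network(n, constraints):
    """Solve a Simple Temporal Problem by all-pairs shortest paths.
    `constraints` maps (i, j) -> (a, b), meaning  a <= x_j - x_i <= b.
    Returns the matrix of tightest implied bounds d[i][j] (upper bound on
    x_j - x_i), or None if the network is inconsistent (negative cycle)."""
    d = [[0.0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for (i, j), (a, b) in constraints.items():
        d[i][j] = min(d[i][j], b)        # x_j - x_i <= b
        d[j][i] = min(d[j][i], -a)       # x_i - x_j <= -a
    for k in range(n):                   # Floyd-Warshall
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    if any(d[i][i] < 0 for i in range(n)):
        return None                      # inconsistent STP
    return d

# x0 is a reference time; event 1 occurs 10-20 after x0, event 2 occurs 30-40
# after event 1 and at most 50 after x0.
cons = {(0, 1): (10, 20), (1, 2): (30, 40), (0, 2): (0, 50)}
d = stp_minimal_network(3, cons)
print(d[0][2], -d[2][0])   # 50 40  -> tightest window for x2 - x0 is [40, 50]
```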

Journal ArticleDOI
TL;DR: In this article, the existence of stationary optimal policies for general semi-Markov models with possibly unbounded rewards is proved and the corresponding dynamic programming equations are also derived, and a synthesis and extensions of earlier results are presented.

Journal ArticleDOI
TL;DR: In this article, the relationship between a class of distances and infinitesimal metrics on real and complex manifolds and their behavior under differentiable and holomorphic mappings is studied.
Abstract: In this paper we study the relationships between a class of distances and infinitesimal metrics on real and complex manifolds and their behavior under differentiable and holomorphic mappings. Some applications to Riemannian and Finsler geometry are given, and new proofs and generalizations of some results of Royden, Harris and Reiffen on the Kobayashi and Caratheodory metrics on complex manifolds are obtained. In particular we prove that on every complex manifold (finite- or infinite-dimensional) the Kobayashi distance is the integrated form of the corresponding infinitesimal metric.
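
The final statement has a compact standard formulation (paraphrased here, not quoted from the paper):

```latex
d_K(p,q) \;=\; \inf_{\gamma}\int_0^1 \kappa_M\bigl(\gamma(t),\,\gamma'(t)\bigr)\,dt ,
```

where the infimum is over piecewise C^1 curves γ in M with γ(0) = p and γ(1) = q, and κ_M denotes the infinitesimal Kobayashi (Royden) metric.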

Book ChapterDOI
01 Jan 1989
TL;DR: This chapter discusses metric analysis further, with special attention to populations of neurons, and considers the variation between and within animals to estimate the number of animals and the number of neurons required for a statistical comparison.
Abstract: The neuronal tree alters during development and aging and in several cases during disease and experimental treatment (e.g., Mrzljak et al., 1988; Coleman and Flood, 1987; De Ruiter and Uylings, 1987). In quantitative assessment, metric and topological analysis offer us important tools for estimating the type and size of these alterations. Topological analysis, dealing with the number of branchings and connectivity pattern of segments but not the physical size of individual neurons, has been dealt with in Chapter 10. Metric analysis, referring to neuronal size of individual neurons, has been dealt with in Chapter 9 of this book. In this chapter we discuss metric analysis further, with special attention to populations of neurons, and consider the variation between and within animals to estimate the number of animals and the number of neurons required for a statistical comparison.

Journal ArticleDOI
TL;DR: In this article, the general expression of the metric and almost-product structure in normal coordinates is studied for para-Kaehlerian manifolds of constant para-holomorphic sectional curvature.
Abstract: We consider the sectional curvatures for metric (J^4 = 1)-manifolds, and study in particular the general expression of the metric and almost-product structure in normal coordinates for para-Kaehlerian manifolds of constant para-holomorphic sectional curvature. We also introduce models of such spaces.

Patent
13 Oct 1989
TL;DR: In this paper, a fuzzy data comparator receives a digital data bit stream and compares each frame thereof with multiple sets of differing known data stored in a plurality of pattern memories, using a selected comparison metric.
Abstract: A fuzzy data comparator receives a digital data bit stream and compares each frame thereof with multiple sets of differing known data stored in a plurality of pattern memories, using a selected comparison metric. The results of the comparisons are accumulated as error values. A first neural postprocessing network ranks error values less than a preselected threshold. A second neural network receives the first neural network solutions and provides an expansion bus for interconnecting to additional comparators.
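
A hypothetical software analogue of the first comparison stage, using a Hamming-distance metric and a threshold; the pattern names, frame contents, and choice of metric are illustrative only.

```python
def hamming(frame, pattern):
    """Bit-level Hamming distance between two equal-length bit strings."""
    return sum(a != b for a, b in zip(frame, pattern))

def rank_matches(frame, pattern_memories, threshold):
    """Score the incoming frame against each stored pattern with the chosen
    comparison metric, then rank the error values below the threshold."""
    errors = {name: hamming(frame, pat) for name, pat in pattern_memories.items()}
    return sorted((e, name) for name, e in errors.items() if e < threshold)

patterns = {"sync_word": "10110010", "idle": "11111111", "header": "10100100"}
print(rank_matches("10110000", patterns, threshold=3))
# [(1, 'sync_word'), (2, 'header')]  -- 'idle' exceeds the threshold and is dropped
```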


Journal ArticleDOI
TL;DR: In this article, the authors trace the non-renormalizability of quantum gravity to a mismatch between the symmetries of its quadratic and cubic terms, which makes this ostensibly renormalizable system ill-defined about zero vacuum, and forces the usual expansion of the metric about a background.