
Showing papers on "Metric (mathematics) published in 1991"


Journal ArticleDOI
TL;DR: It is shown that the STP, which subsumes the major part of Vilain and Kautz's point algebra, can be solved in polynomial time; the applicability of path consistency algorithms as preprocessing of temporal problems is also studied, demonstrating their termination and bounding their complexities.

1,989 citations
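The polynomial-time solvability of the STP noted above comes from a reduction to shortest paths: each interval constraint a &lt;= x_j - x_i &lt;= b becomes a pair of weighted edges, the network is consistent exactly when the distance graph has no negative cycle, and all-pairs shortest paths (Floyd-Warshall) yields the tightest equivalent constraints. A minimal sketch of that reduction (our own illustration, not the paper's code):

```python
# Simple Temporal Problem (STP) consistency and tightening via Floyd-Warshall.
INF = float("inf")

def stp_closure(n, constraints):
    """constraints: list of (i, j, lo, hi) meaning lo <= x_j - x_i <= hi.
    Returns the matrix of tightest upper bounds d[i][j] on x_j - x_i,
    or None if the network is inconsistent."""
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, lo, hi in constraints:
        d[i][j] = min(d[i][j], hi)    # x_j - x_i <= hi
        d[j][i] = min(d[j][i], -lo)   # x_i - x_j <= -lo
    for k in range(n):                # Floyd-Warshall, O(n^3)
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    if any(d[i][i] < 0 for i in range(n)):
        return None                   # negative cycle: inconsistent
    return d

# x1 - x0 in [3, 5] and x2 - x1 in [1, 2] imply x2 - x0 in [4, 7]:
d = stp_closure(3, [(0, 1, 3, 5), (1, 2, 1, 2)])
```

The implied bound on x2 - x0 is read off as the pair (-d[2][0], d[0][2]).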


Journal ArticleDOI
TL;DR: This is a method for numerically determining local minima of differentiable functions of several variables; by suitable choice of starting values, and without modification of the procedure, linear constraints can be imposed upon the variables.
Abstract: This is a method for determining numerically local minima of differentiable functions of several variables. In the process of locating each minimum, a matrix which characterizes the behavior of the function about the minimum is determined. For a region in which the function depends quadratically on the variables, no more than N iterations are required, where N is the number of variables. By suitable choice of starting values, and without modification of the procedure, linear constraints can be imposed upon the variables.
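The "matrix which characterizes the behavior of the function about the minimum" above is the hallmark of variable-metric (quasi-Newton) methods. The sketch below is a DFP-style update in two variables with a simple backtracking step rule; the test function and step rule are illustrative, not the paper's exact procedure:

```python
# Variable-metric (DFP-style quasi-Newton) minimization sketch.
def dfp_minimize(f, grad, x0, iters=200):
    n = len(x0)
    # H approximates the inverse of the matrix characterizing the function
    # about the minimum; start from the identity.
    H = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    x = list(x0)
    g = grad(x)
    for _ in range(iters):
        if max(abs(c) for c in g) < 1e-8:
            break
        d = [-sum(H[i][j] * g[j] for j in range(n)) for i in range(n)]
        t, fx = 1.0, f(x)                       # backtracking line search
        while f([x[i] + t * d[i] for i in range(n)]) > fx and t > 1e-12:
            t *= 0.5
        s = [t * d[i] for i in range(n)]
        x_new = [x[i] + s[i] for i in range(n)]
        g_new = grad(x_new)
        y = [g_new[i] - g[i] for i in range(n)]
        sy = sum(s[i] * y[i] for i in range(n))
        Hy = [sum(H[i][j] * y[j] for j in range(n)) for i in range(n)]
        yHy = sum(y[i] * Hy[i] for i in range(n))
        if sy > 1e-12 and yHy > 1e-12:          # DFP rank-two update of H
            for i in range(n):
                for j in range(n):
                    H[i][j] += s[i] * s[j] / sy - Hy[i] * Hy[j] / yHy
        x, g = x_new, g_new
    return x

# Example: quadratic with minimum at (1, -2).
f = lambda v: (v[0] - 1) ** 2 + 2.0 * (v[1] + 2) ** 2
grad = lambda v: [2.0 * (v[0] - 1), 4.0 * (v[1] + 2)]
xmin = dfp_minimize(f, grad, [0.0, 0.0])
```

On a quadratic the update drives H toward the inverse Hessian, matching the abstract's claim that no more than N iterations are needed under exact line searches.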

1,010 citations


Journal ArticleDOI
TL;DR: Divide-and-conquer search strategies are described for satisfying proximity queries involving arbitrary distance metrics.

740 citations


Book ChapterDOI
01 Oct 1991
TL;DR: In this paper, the authors discuss aspects of this incipient general theory which are most closely related to topics of current interest in theoretical stochastic processes, aimed at theoretical probabilists.
Abstract: INTRODUCTION Many different models of random trees have arisen in a variety of applied settings, and there is a large but scattered literature on exact and asymptotic results for particular models. For several years I have been interested in what kinds of “general theory” (as opposed to ad hoc analysis of particular models) might be useful in studying asymptotics of random trees. In this paper, aimed at theoretical probabilists, I discuss aspects of this incipient general theory which are most closely related to topics of current interest in theoretical stochastic processes. No prior knowledge of this subject is assumed: the paper is intended as an introduction and survey. To give the really big picture in a paragraph, consider a tree on n vertices. View the vertices as points in abstract (rather than d-dimensional) space, but let the edges have length (= 1, as a default) so that there is metric structure: the distance between two vertices is the length of the path between them. Consider the average distance between pairs of vertices. As n → ∞ this average distance could stay bounded or could grow as order n, but almost all natural random trees fall into one of two categories. In the first (and larger) category, the average distance grows as order log n. This category includes supercritical branching processes, and most “Markovian growth” models such as those occurring in the analysis of algorithms. This paper is concerned with the second category, in which the average distance grows as order n^(1/2).
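The dichotomy described above can be probed numerically: with unit edge lengths, the distance between two vertices is the length of the path joining them, and the average over all pairs separates tree shapes sharply. A sketch (our own illustration) that computes the average pairwise distance of a tree given as a parent list, contrasting a path (average distance of order n) with a complete binary tree (order log n):

```python
# Average pairwise distance in a tree with unit edge lengths, via BFS.
from collections import deque

def avg_distance(parent):
    """parent[v] is the parent of vertex v (parent[0] is None for the root)."""
    n = len(parent)
    adj = [[] for _ in range(n)]
    for v in range(1, n):
        adj[v].append(parent[v])
        adj[parent[v]].append(v)
    total = 0
    for s in range(n):                    # BFS from every vertex
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if dist[w] < 0:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist)
    return total / (n * (n - 1))          # average over ordered pairs

n = 255
path = [None] + list(range(n - 1))                      # path graph
binary = [None] + [(v - 1) // 2 for v in range(1, n)]   # complete binary tree
ap, ab = avg_distance(path), avg_distance(binary)
```

For the path the average is exactly (n + 1)/3, i.e. order n; for the complete binary tree it is bounded by twice the depth, i.e. order log n.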

507 citations


Journal ArticleDOI
TL;DR: In this article, a local renormalisation group equation which realises infinitesimal Weyl rescalings of the metric and which is an extension of the usual Callan-Symanzik equation is described.

341 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied the critical points at infinity of the variational problem, in which the failure of the Palais-Smale condition is the main obstacle for solving equations of type (4).

317 citations


Proceedings Article
14 Jul 1991
TL;DR: In this article, a general model for temporal reasoning, capable of handling both qualitative and quantitative information, is presented, which allows the representation and processing of all types of constraints considered in the literature so far, including metric constraints and qualitative, disjunctive constraints (specifying the relative position between temporal objects).
Abstract: This paper presents a general model for temporal reasoning, capable of handling both qualitative and quantitative information. This model allows the representation and processing of all types of constraints considered in the literature so far, including metric constraints (restricting the distance between time points) and qualitative, disjunctive constraints (specifying the relative position between temporal objects). Reasoning tasks in this unified framework are formulated as constraint satisfaction problems, and are solved by traditional constraint satisfaction techniques, such as backtracking and path consistency. A new class of tractable problems is characterized, involving qualitative networks augmented by quantitative domain constraints, some of which can be solved in polynomial time using arc and path consistency.

312 citations


Journal ArticleDOI
TL;DR: A simple transformation of the metric is investigated whereby the cyclomatic complexity is divided by the size of the system in source statements, thereby determining a complexity density ratio, which is demonstrated to be a useful predictor of software maintenance productivity on a small pilot sample of maintenance projects.
Abstract: A study of the relationship between the cyclomatic complexity metric (T. McCabe, 1976) and software maintenance productivity, given that a metric that measures complexity should prove to be a useful predictor of maintenance costs, is reported. The cyclomatic complexity metric is a measure of the maximum number of linearly independent circuits in a program control graph. The current research validates previously raised concerns about the metric on a new data set. However, a simple transformation of the metric is investigated whereby the cyclomatic complexity is divided by the size of the system in source statements, thereby determining a complexity density ratio. This complexity density ratio is demonstrated to be a useful predictor of software maintenance productivity on a small pilot sample of maintenance projects.
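The transformation described above is a simple ratio: McCabe's cyclomatic complexity v(G) = E - N + 2P for a control-flow graph, divided by program size in source statements. A sketch with illustrative figures (the names and numbers are ours, not the study's data):

```python
# Complexity density ratio: cyclomatic complexity per source statement.
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's v(G) = E - N + 2P for a control-flow graph."""
    return edges - nodes + 2 * components

def complexity_density(v_g, source_statements):
    return v_g / source_statements

v = cyclomatic_complexity(edges=11, nodes=9)             # v(G) = 11 - 9 + 2 = 4
density = complexity_density(v, source_statements=200)   # 4 / 200 = 0.02
```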

225 citations


Journal ArticleDOI
01 Mar 1991
TL;DR: In this paper, the functional determinants of the conformal Laplacian (Yamabe operator) and the square of the Dirac operator are analyzed. But the analysis is restricted to conformal class of Riemannian, locally symmetric, Einstein metric on a compact 4-manifold.
Abstract: Working on the four-sphere S^4, a flat four-torus, S^2 x S^2, or a compact hyperbolic space, with a metric which is an arbitrary positive function times the standard one, we give explicit formulas for the functional determinants of the conformal Laplacian (Yamabe operator) and the square of the Dirac operator, and discuss qualitative features of the resulting variational problems. Our analysis actually applies in the conformal class of any Riemannian, locally symmetric, Einstein metric on a compact 4-manifold; and to any geometric differential operator which has positive definite leading symbol, and is a positive integral power of a conformally covariant operator.

194 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the equations of general relativity remain well defined even in the limit that the metric becomes degenerate, and that there exist smooth solutions to these equations on manifolds in which the topology of space changes.
Abstract: In a first-order formulation, the equations of general relativity remain well defined even in the limit that the metric becomes degenerate. It is shown that there exist smooth solutions to these equations on manifolds in which the topology of space changes. The metric becomes degenerate on a set of measure zero, but the curvature remains bounded. Thus if degenerate metrics play any role in quantum gravity, topology change is unavoidable.

179 citations


Journal ArticleDOI
TL;DR: In this article, a new method for generating solution-adaptive grids based on harmonic maps on Riemannian manifolds is described; its validity is assured by an existence and uniqueness theorem for one-to-one maps between multidimensional multiconnected domains.

Journal ArticleDOI
01 Dec 1991
TL;DR: Theoretical and experimental results show that the most commonly used performance metric, parallel speedup, is 'unfair', in that it favors slow processors and poorly coded programs.
Abstract: The traditional definition of 'speedup' as the ratio of sequential execution time to parallel execution time has been widely accepted. One drawback to this metric is that it tends to reward slower processors and inefficient compilation with higher speedup. It seems unfair that the goals of high speed and high speedup are at odds with each other. In this paper, the 'fairness' of parallel performance metrics is studied. Theoretical and experimental results show that the most commonly used performance metric, parallel speedup, is 'unfair', in that it favors slow processors and poorly coded programs. Two new performance metrics are introduced. The first one, sizeup, provides a 'fair' performance measurement. The second one is a generalization of speedup - the generalized speedup, which recognizes that speedup is the ratio of speeds, not times. The relation between sizeup, speedup, and generalized speedup are studied. The various metrics have been tested using a real application that runs on an nCUBE 2 multicomputer. The experimental results closely match the analytical results.
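The "unfairness" established above is easy to reproduce with a toy cost model: if parallel execution carries a fixed overhead, a slower processor makes computation dominate that overhead and so reports a higher traditional speedup. The cost model and numbers below are illustrative assumptions, not the paper's nCUBE 2 measurements:

```python
# Traditional speedup = T(1 processor) / T(p processors), with a toy model
# in which parallel time is compute_time / procs plus a fixed overhead.
def speedup(work, per_op_time, procs, overhead):
    t1 = work * per_op_time
    tp = work * per_op_time / procs + overhead
    return t1 / tp

# Same program, same overhead, 16 processors; only processor speed differs.
fast = speedup(work=10**6, per_op_time=1e-8, procs=16, overhead=1e-3)
slow = speedup(work=10**6, per_op_time=1e-6, procs=16, overhead=1e-3)
# The 100x slower processor reports the better speedup.
```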

Patent
Kazuhiro Okanoue1
01 Apr 1991
TL;DR: In this paper, a branch metric is obtained by summing branch metric coefficients derived from channel estimates respectively with the output of the matched filters, or by summing branch metric coefficients derived from a vector sum of channel estimates with the matched filter outputs.
Abstract: In a space diversity receiver, matched filters and a like number of channel estimators are respectively coupled to diversity antennas to receive sequentially coded symbol sequences. A branch metric calculator receives the outputs of the matched filters and the estimates from the channel estimators to calculate a branch metric of the received sequences for coupling to a maximum likelihood (ML) estimator. The branch metric is obtained by summing branch metric coefficients derived from channel estimates respectively with the output of the matched filters or by summing branch metric coefficients derived from a vector sum of channel estimates with the matched filter outputs. In another embodiment, adaptive channel estimators are provided for deriving channel estimates from received sequences and the output of an ML estimator. First branch metrics are derived from the received sequences and supplied to a branch metric quality estimator in which quality estimates of the channels are derived from the first branch metrics. An evaluation circuit evaluates the first branch metrics according to the quality estimates and produces a second branch metric for coupling to the ML estimator.
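A minimal real-valued sketch of the summing idea above: for each hypothesized symbol, per-antenna branch metrics (squared error between the matched-filter output and the channel estimate applied to the symbol) are summed across the diversity branches before maximum-likelihood selection. The symbol alphabet, gains, and outputs below are illustrative assumptions, not the patent's embodiment:

```python
# Diversity branch metric: sum of per-antenna squared errors (real-valued sketch).
def branch_metric(symbol, outputs, channel_estimates):
    return sum((r - h * symbol) ** 2 for r, h in zip(outputs, channel_estimates))

def ml_symbol(candidates, outputs, channel_estimates):
    # Maximum-likelihood choice = smallest summed branch metric.
    return min(candidates, key=lambda s: branch_metric(s, outputs, channel_estimates))

# Two antennas, BPSK symbols +1/-1, channel gains 0.9 and 1.1, and noisy
# matched-filter outputs consistent with +1 having been sent:
best = ml_symbol([+1, -1], outputs=[0.8, 1.2], channel_estimates=[0.9, 1.1])
```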

Proceedings ArticleDOI
09 Apr 1991
TL;DR: The use of locally weighted regression in memory-based robot learning is explored; the author explains how an appropriate distance metric or measure of similarity can be found, and how the distance metric is used.
Abstract: The use of locally weighted regression in memory-based robot learning is explored. A local model is formed to answer each query, using a weighted regression in which close points (similar experiences) are weighted more than distant points (less relevant experiences). This approach implements a philosophy of modeling a complex function with many simple local models. The author explains how an appropriate distance metric or measure of similarity can be found, and how the distance metric is used. How irrelevant input variables and terms in the local model are detected is also explained. An example from the control of a robot arm is used to compare this approach with other robot control and learning techniques.
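A minimal one-dimensional sketch of locally weighted regression as described above: each query fits its own weighted linear model, with nearby stored experiences weighted more than distant ones. The Gaussian kernel and bandwidth below are illustrative choices, not the paper's distance metric:

```python
# 1-D locally weighted (linear) regression with a Gaussian distance kernel.
import math

def lwr_predict(query, xs, ys, bandwidth=0.5):
    w = [math.exp(-((x - query) / bandwidth) ** 2) for x in xs]  # closeness weights
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw                # weighted means
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    sxx = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
    sxy = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
    slope = sxy / sxx if sxx > 1e-12 else 0.0                    # weighted LS line
    return my + slope * (query - mx)

# Stored "experiences" sampled from y = 3x + 1; the local fit recovers the line.
xs = [i / 10 for i in range(21)]       # 0.0 .. 2.0
ys = [3 * x + 1 for x in xs]
pred = lwr_predict(1.0, xs, ys)
```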

Journal ArticleDOI
TL;DR: An efficient, non-iterative, polynomial-time approximation algorithm is proposed that minimizes the proximal uniformity cost function, establishes correspondence over n frames, and combines the qualities of the gradient- and token-based methods for motion correspondence.
Abstract: Given n frames taken at different time instants and m points in each frame, the problem of motion correspondence is to map a point in one frame to another point in the next frame such that no two points map onto the same point. This problem is combinatorially explosive; one needs to introduce constraints to limit the search space. We propose a proximal uniformity constraint to solve the correspondence problem. According to this constraint, most objects in the real world follow smooth paths and cover a small distance in a small time. Therefore, given a location of a point in a frame, its location in the next frame lies in the proximity of its previous location. Further, resulting trajectories are smooth and uniform and do not show abrupt changes in velocity vector over time. An efficient, non-iterative polynomial time approximation algorithm which minimizes the proximal uniformity cost function and establishes correspondence over n frames is proposed. It is argued that any method using smoothness of motion alone cannot operate correctly without assuming correct initial correspondence, the correspondence in the first two frames. Therefore, we propose the use of gradient based optical flow for establishing the initial correspondence. This way the proposed approach combines the qualities of the gradient and token based methods for motion correspondence. The algorithm is then extended to take care of restricted cases of occlusion. A metric called distortion measure for measuring the goodness of solution to this n frame correspondence problem is also proposed. The experimental results for real and synthetic sequences are presented to support our claims.
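A toy sketch of the proximal uniformity idea above: given a point's positions in the previous two frames, a candidate position in the next frame is scored by how far away it is (proximity) plus how much it changes the velocity vector (uniformity). The cost form and weights are our illustration, not the paper's exact cost function:

```python
# Proximal-uniformity-style score for a candidate correspondence.
import math

def proximal_uniformity_cost(prev, cur, cand, w_prox=1.0, w_unif=1.0):
    # Proximity term: distance covered between frames.
    dist = math.hypot(cand[0] - cur[0], cand[1] - cur[1])
    # Uniformity term: change in the velocity vector.
    vel_change = math.hypot((cand[0] - cur[0]) - (cur[0] - prev[0]),
                            (cand[1] - cur[1]) - (cur[1] - prev[1]))
    return w_prox * dist + w_unif * vel_change

# A point moving right at unit speed: (0,0) -> (1,0). The candidate that
# continues the smooth path beats a nearer candidate that reverses direction.
prev, cur = (0.0, 0.0), (1.0, 0.0)
smooth = proximal_uniformity_cost(prev, cur, (2.0, 0.0))   # dist 1.0, no velocity change
reverse = proximal_uniformity_cost(prev, cur, (0.5, 0.0))  # dist 0.5, velocity change 1.5
```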

Journal ArticleDOI
TL;DR: In this article, the authors consider what kind of reader will need a reference on linear operators in spaces with an indefinite metric: open-minded people will always try to seek new things and information from many sources.
Abstract: Well, someone can decide for themselves what they want to do and need to do, but sometimes such a person will need a reference on linear operators in spaces with an indefinite metric. Open-minded people will always try to seek new things and information from many sources. On the contrary, people with closed minds will always think that they can manage by their own principles. So, what kind of person are you?

Journal ArticleDOI
TL;DR: An iterative scheme for improving camera calibration, based on these results, is derived, and its performance is demonstrated on real data.


Journal ArticleDOI
TL;DR: In this paper, the validity of the classical Peano theorem is noted for differential equations on the metric space (En, dp) of normal fuzzy convex sets in Rn, where dp is the p-th mean of the Hausdorff distances between corresponding level sets.

Journal ArticleDOI
TL;DR: In this article, a metric of perceived lightness of object colors, L**, is derived and evaluated; based on CIELAB parameters, it is designed to predict the Helmholtz-Kohlrausch effect whereby chromatic object colors appear lighter than achromatic objects of the same luminance factor.
Abstract: A metric of perceived lightness of object colors, L**, is derived and evaluated. This metric, based on CIELAB parameters, is designed to predict the Helmholtz-Kohlrausch effect whereby chromatic object colors appear lighter than achromatic objects of the same luminance factor. Models were fitted to published data and then tested with results from a new lightness-matching experiment that is described in this article. The L** metric predicts perceived lightness to within the interobserver variability and provides a simple, practical measure of this appearance attribute.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the scalar field on 2-dimensional manifolds whose metric changes signature and which admit a spacelike isometry; choosing the wave equation so that there will be a conserved Klein-Gordon product implicitly determines the junction conditions one needs to impose in order to obtain global solutions.
Abstract: We consider the (massless) scalar field on 2-dimensional manifolds whose metric changes signature and which admit a spacelike isometry. Choosing the wave equation so that there will be a conserved Klein-Gordon product implicitly determines the junction conditions one needs to impose in order to obtain global solutions. The resulting mix of positive and negative frequencies produced by the presence of Euclidean regions depends only on the total width of the regions, and not on the detailed form of the metric.

Book ChapterDOI
01 Feb 1991
TL;DR: A natural definition of the distance between curves is given, together with algorithms to calculate this distance between two polygonal chains in d-dimensional space for arbitrary d.
Abstract: The often explored problem to approximate a given polygonal chain has been considered from a computational geometric point of view only for a short time. To model it reasonably we give a natural definition of the distance between curves. Furthermore we give algorithms to calculate this distance between two polygonal chains in the d-dimensional space for arbitrary d. With known methods this yields polynomial time algorithms to approximate polygonal chains. These algorithms find an optimal solution under some constraints. We will show that this solution is only by a constant factor worse than the global optimum.
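The natural curve distance alluded to above is the Fréchet distance. A commonly used discrete analogue for polygonal chains, computed over their vertex sequences by dynamic programming, gives the flavor (this is a standard construction we supply for illustration, not the chapter's algorithm):

```python
# Discrete Frechet distance between two polygonal chains (vertex sequences).
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    @lru_cache(maxsize=None)
    def c(i, j):
        # c(i, j): cheapest max leash length coupling P[0..i] with Q[0..j].
        if i == 0 and j == 0:
            return math.dist(P[0], Q[0])
        if i == 0:
            return max(c(0, j - 1), math.dist(P[0], Q[j]))
        if j == 0:
            return max(c(i - 1, 0), math.dist(P[i], Q[0]))
        return max(min(c(i - 1, j), c(i, j - 1), c(i - 1, j - 1)),
                   math.dist(P[i], Q[j]))
    return c(len(P) - 1, len(Q) - 1)

# Two parallel chains one unit apart: the distance is 1.0.
P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 1), (2, 1)]
dist = discrete_frechet(P, Q)
```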

Proceedings ArticleDOI
14 Apr 1991
TL;DR: Using the EM algorithm, the authors restore a blurred image and quantify the improvement in image quality with both the new metric and the mean square error (MSE).
Abstract: A new image quality metric consistent with the properties of the human visual system is derived. Using the EM algorithm, the authors restore a blurred image and quantify the improvement in image quality with both the new metric and the mean square error (MSE). From these results, the advantages of the new metric are obvious. The EM algorithm is modified according to the underlying mathematical structure of the new metric, which results in improved performance.

Patent
17 Dec 1991
TL;DR: In this article, a metric conversion operator is executed upon a relatively low spatial resolution file, thereby resulting in a substantial reduction in processing overhead, as advantage is taken of the availability of the reduced size of the lower spatial resolution base file within the hierarchical database, so that metric conversion may be performed on the relatively small number of pixels within the base file, prior to up-converting the image to a relatively high spatial resolution image, such as a 2048 X 3072 pixel image for driving a high resolution digital thermal color printer.
Abstract: A mechanism for facilitating metric conversion of digitized images intended for use with a multi-resolution, multi-application environment is integrated into the encoding and decoding mechanisms of the hierarchical database, such that stored residual image files contain metric change information. In a preferred embodiment of the invention, the metric conversion operator is executed upon a relatively low spatial resolution file, resulting in a substantial reduction in processing overhead: advantage is taken of the reduced size of the lower spatial resolution base file within the hierarchical database, so that the metric conversion may be performed on the relatively small number of pixels within the base file before the image is up-converted to a relatively high spatial resolution image, such as a 2048 x 3072 pixel image for driving a high-resolution digital thermal color printer.
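The savings described above come from applying the per-pixel metric conversion to the small base file and up-converting afterwards, rather than converting at full resolution. A toy operation count makes the point; the 2048 x 3072 target comes from the text, while the 512 x 768 base resolution is an assumed example of ours:

```python
# Operation count: metric conversion at full resolution vs. on the base file.
def conversion_ops(width, height):
    return width * height            # one metric-conversion operation per pixel

full = conversion_ops(3072, 2048)    # convert after up-conversion
base = conversion_ops(768, 512)      # convert the base file, then up-convert
savings = full / base                # 16x fewer conversion operations
```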

Journal ArticleDOI
TL;DR: It is proved that uniformity of a TCM scheme can also be defined under this new distance, and the results obtained are shown to hold for channels with phase offset or independent, amplitude-only fading.
Abstract: The class of uniform trellis-coded modulation (TCM) techniques is defined, and simple explicit conditions for uniformity are derived. Uniformity is shown to depend on the metric properties of the two subconstellations resulting from the first step in set partitioning, as well as on the assignment of binary labels to channel symbols. The uniform distance property and uniform error property, which are both derived from uniformity but are not equivalent, are discussed. The derived concepts are extended to encompass transmission over a (not necessarily Gaussian) memoryless channel in which the metric used for detection may not be maximum likelihood. An appropriate distance measure is defined that generalizes the Euclidean distance. It is proved that uniformity of a TCM scheme can also be defined under this new distance. The results obtained are shown to hold for channels with phase offset or independent, amplitude-only fading. Examples are included to illustrate the applicability of the results.

Journal ArticleDOI
01 Oct 1991-Networks
TL;DR: The role of L1-embeddability in the feasibility problem for multi-commodity flows is described and the Ford and Fulkerson theorem for the existence of a single commodity flow can be restated as an inequality that must be valid for all cut metrics.
Abstract: A finite metric (or more properly semimetric) on n points is a nonnegative vector d = (dij) 1 ⩽ i < j ⩽ n that satisfies the triangle inequality dij ⩽ dik + djk. The L1 (or Manhattan) distance ‖x − y‖1 between two vectors x = (xi) and y = (yi) in Rm is given by ‖x − y‖1 = ∑1⩽i⩽m |xi − yi|. A metric d is L1-embeddable if there exist vectors z1, z2,…, zn in Rm for some m, such that dij = ‖zi − zj‖1 for 1 ⩽ i < j ⩽ n. A cut metric is a metric with all distances zero or one and corresponds to the incidence vector of a cut in the complete graph on n vertices. The cut cone Hn is the convex cone formed by taking all nonnegative combinations of cut metrics. It is easily shown that a metric is L1-embeddable if and only if it is contained in the cut cone. In this expository paper, we provide a unified setting for describing a number of results related to L1-embeddability and the cut cone. We collect and describe results on the facial structure of the cut cone and the complexity of testing the L1-embeddability of a metric. One of the main sections of the paper describes the role of L1-embeddability in the feasibility problem for multi-commodity flows. The Ford and Fulkerson theorem for the existence of a single commodity flow can be restated as an inequality that must be valid for all cut metrics. A more general result, known as the Japanese theorem, gives a condition for the existence of a multicommodity flow. This theorem gives an inequality that must be satisfied by all metrics. For multicommodity flows involving a small number of terminals, it is known that the condition of the Japanese theorem can be replaced with one of the Ford–Fulkerson type. We review these results and show that the existence of such Ford–Fulkerson-type conditions for flows with few terminals depends critically on the fact that certain metrics are L1-embeddable.
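As the survey above notes, a metric is L1-embeddable exactly when it lies in the cut cone; in particular, any distances realized by vectors under the Manhattan norm form a semimetric. A small sketch (our own illustration) that builds the distance matrix of an L1-embedded point set and checks the triangle inequality:

```python
# L1 (Manhattan) distances from an embedding, plus a semimetric check.
def l1(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

def is_semimetric(d, n):
    # Nonnegative, symmetric construction; check d[i][k] <= d[i][j] + d[j][k].
    return all(d[i][k] <= d[i][j] + d[j][k]
               for i in range(n) for j in range(n) for k in range(n))

points = [(0.0, 0.0), (1.0, 0.0), (1.0, 2.0)]
n = len(points)
d = [[l1(points[i], points[j]) for j in range(n)] for i in range(n)]
```

Testing whether an arbitrary given semimetric admits such an embedding (membership in the cut cone) is the hard direction discussed in the paper.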

01 Jan 1991
TL;DR: In this article, a metric for describing line segments is presented, which measures how well two line segments can be replaced by a single longer one, depending on collinearity and nearness of the line segments.
Abstract: This correspondence presents a metric for describing line segments. This metric measures how well two line segments can be replaced by a single longer one. This depends for example on collinearity and nearness of the line segments. The metric is constructed using a new technique based on so-called neighborhood functions. The behavior of the metric depends on the neighborhood function chosen. In this correspondence, an appropriate choice for the case of line segments is presented. The quality of the metric is verified by using it in a simple clustering algorithm that groups line segments found by an edge detection algorithm in an image. The fact that the clustering algorithm can detect long linear structures in an image shows that the metric is a good measure for the groupability of line segments.
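The actual metric above is built from neighborhood functions; as an illustrative stand-in of our own (not the paper's construction), one can score a pair of segments by how well the single segment spanning their two farthest endpoints replaces them, via the maximum perpendicular deviation of all four endpoints from that spanning segment. The score is small exactly for nearly collinear, nearby segments:

```python
# Toy "mergeability" score for two 2-D line segments.
import math
from itertools import combinations

def point_line_dist(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    return num / math.hypot(bx - ax, by - ay)

def merge_deviation(seg1, seg2):
    pts = [seg1[0], seg1[1], seg2[0], seg2[1]]
    # Candidate replacement: the segment spanning the two farthest endpoints.
    a, b = max(combinations(pts, 2), key=lambda ab: math.dist(*ab))
    return max(point_line_dist(p, a, b) for p in pts)

collinear = merge_deviation(((0, 0), (1, 0)), ((2, 0), (3, 0)))  # perfectly mergeable
offset = merge_deviation(((0, 0), (1, 0)), ((2, 1), (3, 1)))     # parallel but offset
```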


Proceedings Article
14 Jul 1991
TL;DR: The formalization of the temporal distance model of Dechter, Meiri, and Pearl is extended and methods for using dates as reference intervals and for meeting the challenge of repeated activities, such as weekly recurring appointments are developed.
Abstract: Reasoning about one's personal schedule of appointments is a common but surprisingly complex activity. Motivated by the novel application of planning and temporal reasoning techniques to this problem, we have extended the formalization of the temporal distance model of Dechter, Meiri, and Pearl. We have developed methods for using dates as reference intervals and for meeting the challenge of repeated activities, such as weekly recurring appointments.

Journal ArticleDOI
TL;DR: A new algorithm for phase retrieval from the Fourier modulus is presented, which differs from the iterative transform algorithm in both the choice of error metric and the use of a conjugate gradient minimization search.
Abstract: A new algorithm for phase retrieval from the Fourier modulus is presented. The technique differs from the iterative transform algorithm in both the choice of error metric and the use of a conjugate gradient minimization search. Results are presented for noisy simulated data and also for measured photon-limited data.