
Showing papers on "Metric (mathematics)" published in 1992


Journal ArticleDOI
TL;DR: In this article, the authors establish a general criterion for a set of non-Hermitian operators to constitute a consistent quantum mechanical system, which allows for the normal quantum-mechanical interpretation.

734 citations


Journal ArticleDOI
TL;DR: A principal-components procedure was employed to reduce simple multicollinear complexity metrics to uncorrelated measures on orthogonal complexity domains, which were then used to classify programs into alternate groups depending on the metric values of the program.
Abstract: The use of the statistical technique of discriminant analysis as a tool for the detection of fault-prone programs is explored. A principal-components procedure was employed to reduce simple multicollinear complexity metrics to uncorrelated measures on orthogonal complexity domains. These uncorrelated measures were then used to classify programs into alternate groups, depending on the metric values of the program. The criterion variable for group determination was a quality measure of faults or changes made to the programs. The discriminant analysis was conducted on two distinct data sets from large commercial systems. The basic discriminant model was constructed from deliberately biased data to magnify differences in metric values between the discriminant groups. The technique was successful in classifying programs with a relatively low error rate. While the use of linear regression models has produced models of limited value, this procedure shows great promise for use in the detection of program modules with potential for faults.
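
As an illustration of the two-stage procedure (principal components to decorrelate the metrics, then discriminant classification on the orthogonal domains), here is a minimal Python sketch; the synthetic data, number of metrics, and labels are hypothetical stand-ins, not the paper's commercial data sets.

```python
# Sketch: PCA + linear discriminant analysis for fault-prone module
# classification, assuming a table of correlated complexity metrics
# (e.g. LOC, cyclomatic complexity, Halstead volume) and fault labels.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.lognormal(size=(200, 5))                         # hypothetical metrics
faults = (X[:, 0] * 0.5 + rng.normal(size=200)) > 1.0    # hypothetical labels

# Reduce multicollinear metrics to uncorrelated domain scores.
Z = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))

# Classify modules into fault-prone vs. not-fault-prone groups.
lda = LinearDiscriminantAnalysis().fit(Z, faults)
print("apparent error rate:", 1 - lda.score(Z, faults))
```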

458 citations


Journal ArticleDOI
TL;DR: A review of the book Topics in Metric Fixed Point Theory by Kazimierz Goebel and W. A. Kirk (Cambridge University Press, 1990), a monograph on fixed point theory in metric spaces.
Abstract: By Kazimierz Goebel and W. A. Kirk: 244 pp., £30.00, ISBN 0 521 38289 0 (Cambridge University Press, 1990).

295 citations


Journal ArticleDOI
01 Sep 1992
TL;DR: An approximation algorithm is given, and it is shown that the worst-case ratio of the cost of the solutions to the optimal cost is better than previously known ratios in graphs and in the rectilinear metric on the plane.
Abstract: For a set S contained in a metric space, a Steiner tree of S is a tree that connects the points in S. Finding a minimum cost Steiner tree is an NP-hard problem in Euclidean and rectilinear metrics as well as in graphs. We give an approximation algorithm and show that the worst-case ratio of the cost of our solutions to the optimal cost is better than previously known ratios in graphs and in the rectilinear metric on the plane. Our method offers a trade-off between the running time and the ratio; on one hand it always allows one to improve the ratio, on the other it allows one to obtain previously known ratios with much greater efficiency. We use properties of optimal rectilinear Steiner trees to obtain a significantly better ratio and running time in the rectilinear metric.
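
The paper's algorithm itself is more involved; as a baseline sketch, the classical minimum-spanning-tree heuristic below attains the well-known ratio 2 in any metric (this is the kind of previously known ratio the paper improves on). The terminal coordinates are made up for illustration.

```python
# Sketch: classical 2-approximation for the metric Steiner tree problem.
# A minimum spanning tree over the terminal set S costs at most twice
# the optimal Steiner tree; this is the baseline the paper improves.
def mst_cost(points, dist):
    """Prim's algorithm over the complete graph on `points`."""
    in_tree = {points[0]}
    rest = set(points[1:])
    total = 0.0
    while rest:
        d, v = min((dist(u, w), w) for u in in_tree for w in rest)
        total += d
        in_tree.add(v)
        rest.remove(v)
    return total

# Rectilinear (L1) metric on the plane, as in the paper's setting.
l1 = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])
terminals = [(0, 0), (4, 0), (0, 3), (4, 3)]
print(mst_cost(terminals, l1))  # >= optimal Steiner cost, <= 2x optimal
```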

268 citations


Journal ArticleDOI
TL;DR: It is shown that, depending on the metrization protocol used, metric matrix distance geometry can have very good sampling properties indeed, both for the unconstrained model system and the NMR-structure case.
Abstract: In this paper, we present a reassessment of the sampling properties of the metric matrix distance geometry algorithm, which is in widespread use in the determination of three-dimensional structures from nuclear magnetic resonance (NMR) data. To this end, we compare the conformational space sampled by structures generated with a variety of metric matrix distance geometry protocols. As test systems we use an unconstrained polypeptide, and a small protein (rabbit neutrophil defensin peptide 5) for which only few tertiary distances had been derived from the NMR data, allowing several possible folds of the polypeptide chain. A process called ‘metrization’ in the preparation of a trial distance matrix has a very large effect on the sampling properties of the algorithm. It is shown that, depending on the metrization protocol used, metric matrix distance geometry can have very good sampling properties indeed, both for the unconstrained model system and the NMR-structure case. We show that the sampling properties are to a great degree determined by the way in which the first few distances are chosen within their bounds. Further, we present a new protocol (‘partial metrization’) that is computationally more efficient but has the same excellent sampling properties. This novel protocol has been implemented in an expanded new release of the program X-PLOR with distance geometry capabilities.
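
A minimal sketch of the 'metrization' step under discussion, assuming distance bounds are given as matrices: distances are fixed one at a time within their bounds, and triangle-inequality smoothing re-tightens the remaining bounds. Partial metrization applies the expensive smoothing only to the first few choices, which the abstract notes dominate the sampling. This is an illustrative reconstruction, not the X-PLOR code.

```python
# Sketch of 'metrization': fix pairwise distances one at a time within
# [lower, upper] bounds, re-tightening the remaining bounds by triangle-
# inequality smoothing. Only the first `n_metrized` picks get the O(n^3)
# smoothing ('partial metrization'). Illustrative reconstruction only.
import random

def metrize(n, lower, upper, n_metrized):
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    random.shuffle(pairs)
    for k, (i, j) in enumerate(pairs):
        d = random.uniform(lower[i][j], upper[i][j])
        lower[i][j] = lower[j][i] = upper[i][j] = upper[j][i] = d
        if k < n_metrized:  # the first few choices dominate the sampling
            for c in range(n):          # Floyd-style bound smoothing
                for a in range(n):
                    for b in range(n):
                        upper[a][b] = min(upper[a][b],
                                          upper[a][c] + upper[c][b])
                        lower[a][b] = max(lower[a][b],
                                          lower[a][c] - upper[c][b])
    return upper  # now a fully specified, triangle-consistent matrix
```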

201 citations


Journal ArticleDOI
TL;DR: This paper presents approximation algorithms for median problems in metric spaces and fixed-dimensional Euclidean space that use a new method for transforming an optimal solution of the linear program relaxation of the s-median problem into a provably good integral solution.
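
The LP relaxation referred to is, in its standard form, the following (a standard formulation of the s-median problem, not details specific to this paper's rounding):

```latex
% Standard LP relaxation of the s-median problem (metric costs c_ij):
% x_ij = fraction of client j assigned to candidate median i,
% y_i  = fraction to which median i is opened.
\begin{aligned}
\min\quad & \sum_{i,j} c_{ij}\, x_{ij} \\
\text{s.t.}\quad & \sum_{i} x_{ij} = 1 \quad \forall j, \\
& x_{ij} \le y_i \quad \forall i, j, \\
& \sum_{i} y_i \le s, \qquad x_{ij},\, y_i \ge 0.
\end{aligned}
```

The integral problem restricts x and y to {0, 1}; the paper's method rounds an optimal fractional solution of this relaxation into a provably good integral one.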

191 citations


Journal ArticleDOI
TL;DR: The packing and covering problem for the metric space B_q^n consisting of q-ary words of length n and provided with the deletion and insertion metric is considered, and partitions of B_2^n and of the permutation set S_n into perfect codes capable of correcting single deletions are given.
Abstract: The packing and covering problem for the metric space B_q^n consisting of q-ary words of length n and provided with the deletion and insertion metric is considered. For any n = 1, 2, ..., partitions of B_2^n and of the permutation set S_n (S_n ⊂ B_n^n) into perfect codes capable of correcting single deletions are given. In connection with the problem of finding perfect codes capable of correcting s deletions, a problem of constructing ordered Steiner systems is stated, and a solution of this problem for some values of parameters is presented. For n = 3 and any q, as well as for n = 4 and any even q, perfect codes in B_q^n capable of correcting single deletions are constructed. The asymptotic behaviour of the maximum cardinality of a code in B_q^n capable of correcting single deletions as q/n → ∞ is found.
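
As background for the binary case, the classical Varshamov–Tenengolts construction partitions the binary words of length n into n + 1 classes, each a single-deletion-correcting code; the sketch below verifies this property exhaustively for small n. This is a known construction in this area, not the paper's q-ary one.

```python
# Sketch: Varshamov-Tenengolts codes VT_a(n) partition binary words of
# length n into n+1 classes, each capable of correcting one deletion.
from itertools import combinations

def vt_class(x):
    """Checksum a = sum of i * x_i (1-indexed) mod (n + 1)."""
    return sum(i * b for i, b in enumerate(x, start=1)) % (len(x) + 1)

def deletions(x):
    """All words obtainable from x by deleting one symbol."""
    return {x[:i] + x[i + 1:] for i in range(len(x))}

n = 6
words = [tuple((w >> i) & 1 for i in range(n)) for w in range(2 ** n)]
code = [w for w in words if vt_class(w) == 0]  # the class a = 0

# Single-deletion correction: no two codewords share a deletion result.
for u, v in combinations(code, 2):
    assert deletions(u).isdisjoint(deletions(v))
print(f"VT_0({n}) has {len(code)} codewords; deletion balls are disjoint")
```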

133 citations


Journal ArticleDOI
TL;DR: The theory of spaces with (α, β)-metric has been developed into a fruitful branch of Finsler geometry as discussed by the authors; the present paper is a comprehensive collection of results on the theory, together with certain additional remarks given in the last three sections.

127 citations


Proceedings ArticleDOI
08 Nov 1992
TL;DR: The DS quality measure, a general metric for evaluation of clustering algorithms, is established and motivates the RW-ST algorithm, a self-tuning clustering method based on random walks in the circuit netlist, which efficiently captures a globally good circuit clustering.
Abstract: The complexity of next-generation VLSI systems will exceed the capabilities of top-down layout synthesis algorithms, particularly in netlist partitioning and module placement. Bottom-up clustering is needed to “condense” the netlist so that the problem size becomes tractable to existing optimization methods. In this paper, we establish the DS quality measure, the first general metric for evaluation of clustering algorithms. The DS metric in turn motivates our RW-ST algorithm, a new self-tuning clustering method based on random walks in the circuit netlist. RW-ST efficiently captures a globally good circuit clustering. When incorporated within a two-phase iterative Fiduccia-Mattheyses partitioning strategy, the RW-ST clustering method improves bisection width by an average of 17% over previous matching-based methods.
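
A minimal sketch of the random-walk idea on a toy graph (hypothetical adjacency, walk length, and threshold; the authors' RW-ST method is self-tuning and considerably more sophisticated): nodes revisited often on short walks from a seed are grouped into that seed's cluster.

```python
# Sketch: grouping netlist nodes by short-random-walk visit frequency.
# Hypothetical parameters -- not the authors' RW-ST code.
import random
from collections import Counter

def walk_cluster(adj, seed, walks=500, length=4, threshold=0.5):
    """Nodes visited on many short walks from `seed` join its cluster."""
    visits = Counter()
    for _ in range(walks):
        node = seed
        for _ in range(length):
            node = random.choice(adj[node])
            visits[node] += 1
    return {v for v, c in visits.items() if c / walks >= threshold}

adj = {  # toy netlist graph: two dense triangles joined by one edge
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
print(walk_cluster(adj, 0))  # typically {0, 1, 2}
```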

123 citations


Journal ArticleDOI
TL;DR: The notion of genericity for members of a permutation group G on a set Ω is introduced in this paper; the idea is that a member of G should be generic if it is in some sense typical.
Abstract: Let G be a permutation group on the set Ω. Usually we take |Ω| = ℵ0. We seek a notion of genericity for members of G. The idea is that a member of G should be generic if it is in some sense typical. We argue that the following is the correct definition. Suppose that G is endowed with a metric so that it becomes a complete metric space. It follows that the Baire category theorem holds, and we may use the notions of meagre and comeagre sets
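
The standard such metric on the symmetric group of a countable set, stated for concreteness (a standard fact, not spelled out in the excerpt): with Ω = ℕ and g, h ∈ Sym(Ω),

```latex
d(g,h) =
\begin{cases}
0, & g = h, \\
2^{-n}, & n = \min\{\, m : g(m) \neq h(m) \ \text{or}\ g^{-1}(m) \neq h^{-1}(m) \,\}.
\end{cases}
```

The inverse terms are needed for completeness; with them, Sym(Ω) becomes a complete metric (indeed Polish) group, so the Baire category theorem applies and "generic" can be read as "lying in a comeagre set".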

117 citations


Journal ArticleDOI
TL;DR: The adaptive fuzzy leader clustering (AFLC) architecture is a hybrid neural-fuzzy system that learns online in a stable and efficient manner and successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets.
Abstract: A modular, unsupervised neural network architecture that can be used for clustering and classification of complex data sets is presented. The adaptive fuzzy leader clustering (AFLC) architecture is a hybrid neural-fuzzy system that learns online in a stable and efficient manner. The system uses a control structure similar to that found in the adaptive resonance theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two-stage process: a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid position from fuzzy C-means (FCM) system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The AFLC algorithm is applied to the Anderson iris data and laser-luminescent finger image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets.
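
The fuzzy C-means updates referenced above are the standard ones, quoted here as background (fuzzifier m > 1, C clusters; the AFLC-specific ART-style control structure is only described qualitatively in the abstract):

```latex
u_{ik} = \Bigg(\sum_{j=1}^{C}
   \Big(\frac{\lVert x_k - c_i\rVert}{\lVert x_k - c_j\rVert}\Big)^{2/(m-1)}
 \Bigg)^{-1},
\qquad
c_i = \frac{\sum_k u_{ik}^{m}\, x_k}{\sum_k u_{ik}^{m}}.
```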

Journal ArticleDOI
TL;DR: The combination of the common criticality metric and cruciality is shown to provide information about the ‘uncertainty impact’ and ‘controllable benefit’ in the network.
Abstract: The common definition of ‘criticality’ in stochastic networks is insufficiently general, and often counter-intuitive. An alternative metric, ‘cruciality’ has been proposed. The combination of the common criticality metric and cruciality is shown to provide information about the ‘uncertainty impact’ and ‘controllable benefit’ in the network.

Journal ArticleDOI
TL;DR: In this paper, it was shown that there are restrictions on the possible changes of topology of space sections of the universe if this topology change takes place in a compact region which has a Lorentzian metric and spinor structure.
Abstract: It is shown that there are restrictions on the possible changes of topology of space sections of the universe if this topology change takes place in a compact region which has a Lorentzian metric and spinor structure. In particular, it is impossible to create a single wormhole or attach a single handle to a spacetime but it is kinematically possible to create such wormholes in pairs. Another way of saying this is that there is a ℤ2 invariant for a closed oriented 3-manifold Σ which determines whether Σ can be the spacelike boundary of a compact manifold M which admits a Lorentzian metric and a spinor structure. We evaluate this invariant in terms of the homology groups of Σ and find that it is the mod 2 Kervaire semi-characteristic.

Proceedings ArticleDOI
01 Jan 1992
TL;DR: In this paper, a self-tuning clustering method based on random walks in the circuit netlist is proposed to improve bisection width by an average of 17% over previous matching-based methods.
Abstract: It is pointed out that the complexity of next-generation VLSI systems will exceed the capabilities of top-down layout synthesis algorithms, particularly in netlist partitioning and module placement. Bottom-up clustering is needed to condense the netlist so that the problem size becomes tractable to existing optimization methods. Here, the DS quality measure, a general metric for evaluation of clustering algorithms, is established. The DS metric in turn motivates the RW-ST algorithm, a self-tuning clustering method based on random walks in the circuit netlist. RW-ST efficiently captures a globally good circuit clustering. When incorporated within a two-phase iterative Fiduccia-Mattheyses partitioning strategy, the RW-ST clustering method improves bisection width by an average of 17% over previous matching-based methods.

Journal ArticleDOI
TL;DR: CASE aims at greater automation of software production; the software improvement paradigms of the Software Engineering Institute's (SEI) Software Process Capability Maturity Model (CMM) and the University of Maryland's Tailoring a Measurement Environment (TAME) project are also examined.
Abstract: activity in the developed world strives not only to maintain status quo activities and lifestyles, but to improve on them. In particular, the application of technology has been focused on improvement. Technology is central to organized society's efforts to improve the lot of individuals and organizations, regardless of one's opinions of the success or failure of instances of technological application or of the ultimate nature of improvement. This ethos of improvement or "doing better" has strongly influenced attitudes toward software development and maintenance. From the software crisis of the mid-1960s, well described in [6], to the present day, many concepts, methodologies, languages, tools and techniques have been introduced with the aim of improving the software process and its products. Particular initiatives which we will examine here are CASE and the software improvement paradigms of the Software Engineering Institute (SEI) Software Process Capability Maturity Model (CMM) [10, 13, 14] and the University of Maryland's Tailoring a Measurement Environment (TAME) project [1, 2]. CASE aims at greater automation of software production. Just as CAD/CAM has brought integrated design tools to the engineering of physical systems, CASE is bringing analogous tools to the more abstract engineering of software. Ultimately, the motivation for tool use is economic: for competitive advantage. There are many aspects to competitive advantage, including time-to-market, productivity, quality, product differentiation, distribution and support. Software engineering, however, has a narrower scope, comprising software definition, design, production, and maintenance. CASE aims to improve these activities through the use and integration of software tools. Software improvement has recently received more explicit emphasis, together with a firmer conceptual and empirical basis, through the work of the SEI on the CMM [10, 13, 14] and the work of Basili and Rombach on the TAME project [1, 2]. Central to both of these major research efforts has been the characterization and improvement of the software process. There are differences in the two improvement paradigms, which

Journal ArticleDOI
TL;DR: The analysis of Voronoi polyhedra for liquid water and hydrogen sulphide, at different temperatures, has been performed by using the molecular configurations generated by computer simulation of the liquids with realistic potential models as discussed by the authors.
Abstract: The analysis of Voronoi polyhedra for liquid water and hydrogen sulphide, at different temperatures, has been performed by using the molecular configurations generated by computer simulation of the liquids with realistic potential models. Some topological and metric properties of the Voronoi polyhedra have been calculated and their distributions are studied. In addition, the cross correlations between pairs of metric quantities are also investigated. The latter correlations are found to be more relevant for a clear distinction between the two systems examined here. In particular, the cross correlation between the potential energy of a molecule and the volume of the corresponding Voronoi polyhedron makes it clear that the interpretation of the anomalous physical properties of water in terms of local volume has to be revised.

Journal ArticleDOI
TL;DR: This paper shows how a qualitative kinematic analysis can be based solely on symbolic reasoning and evaluation of predicates on metric dimensions, which allows symbolic reasoning about kinematics without explicit numerical representations of object dimensions, and automatic generation of operators relating kinematic goals to shape modifications which may achieve them.

01 Jan 1992
TL;DR: This work presents a new technique for curve and surface design that combines a geometrically based specification with constrained optimization (minimization) of a fairness functional, and demonstrates the superiority of curvature variation as a fairness metric and the efficacy of optimization as a tool in shape design, albeit at significant computational cost.
Abstract: Traditionally, methods for the design of free-form curves and surfaces focus on achieving a specific level of inter-element continuity. These methods use a combination of heuristics and constructions to achieve an ultimate shape. Though shapes constructed using these methods are technically continuous, they have been shown to lack fairness, possessing undesirable blemishes such as bulges and wrinkles. Fairness is closely related to the smooth and minimal variation of curvature. In this work we present a new technique for curve and surface design that combines a geometrically based specification with constrained optimization (minimization) of a fairness functional. The difficult problem of achieving inter-element continuity is solved simply by incorporating it into the minimization via appropriate penalty functions. Where traditional fairness measures are based on strain energy, we have developed a better measure of fairness: the variation of curvature. In addition to producing objects of clearly superior quality, minimizing the variation of curvature makes it trivial to model regular shapes such as circles and cyclides, a class of surfaces that includes spheres, cylinders, cones, and tori. In this thesis we introduce: curvature variation as a fairness metric, the minimum variation curve (MVC), the minimum variation network (MVN), and the minimum variation surface (MVS). MVC minimize the arc length integral of the square of the arc length derivative of curvature while interpolating a set of geometric constraints consisting of position, and optionally tangent direction and curvature. MVN minimize the same functional while interpolating a network of geometric constraints consisting of surface position, tangent plane, and surface curvatures. Finally, MVS are obtained by spanning the openings of the MVN while minimizing a surface functional that measures the variation of surface curvature. We present the details of the techniques outlined above and describe the trade-offs between some alternative approaches. Solutions to difficult interpolation problems and comparisons with traditional methods are provided. Both demonstrate the superiority of curvature variation as a fairness metric and the efficacy of optimization as a tool in shape design, albeit at significant computational cost.
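
Written out, the MVC functional is a direct transcription of "the arc length integral of the square of the arc length derivative of curvature" (κ is curvature, s arc length); curves of constant curvature such as circles have dκ/ds = 0 and are exact minimizers, which is why regular shapes become trivial to model:

```latex
E_{\mathrm{MVC}} \;=\; \int \Big(\frac{d\kappa}{ds}\Big)^{2}\, ds .
```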

Journal ArticleDOI
F. Gygi
01 Aug 1992 - EPL
TL;DR: In this article, the plane-wave method for electronic-structure calculations is reformulated using generalized curvilinear coordinates, and the search for the solutions of the Schrodinger equation is then cast into an optimization problem in which both the plane wave expansion coefficients and the coordinate system (or the Riemannian metric tensor) are treated as variational parameters.
Abstract: The plane-wave method for electronic-structure calculations is reformulated using generalized curvilinear coordinates. The search for the solutions of the Schrodinger equation is then cast into an optimization problem in which both the plane-wave expansion coefficients and the coordinate system (or the Riemannian metric tensor) are treated as variational parameters. This allows the effective plane-wave energy cut-off to vary in the unit cell in an unbiased way. The method is tested in the calculation of the lowest bound state of an "atom" represented by a Gaussian potential well, showing that the relaxation of the metric dramatically improves the convergence of the plane-wave expansion of the solutions.

Patent
28 May 1992
TL;DR: In this paper, a genetic algorithm search is applied to determine an optimum set of values (e.g., interconnection weights in a neural network), each value being associated with a pair of elements drawn from a universe of N elements, N an integer greater than zero, where the utility of any possible set of said values may be measured.
Abstract: A genetic algorithm search is applied to determine an optimum set of values (e.g., interconnection weights in a neural network), each value being associated with a pair of elements drawn from a universe of N elements, N an integer greater than zero, where the utility of any possible set of said values may be measured. An initial possible set of values is assembled, the values being organized in a matrix whose rows and columns correspond to the elements. A genetic algorithm operator is applied to generate successor matrices from said matrix. Matrix computations are performed on the successor matrices to generate measures of the relative utilities of the successor matrices. A surviving matrix is selected from the successor matrices on the basis of these metrics. The steps are repeated until the metric of the surviving matrix is satisfactory.
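
A minimal sketch of the claimed iteration, with a hypothetical utility function standing in for the application-specific measure (the patent describes the loop only abstractly):

```python
# Sketch of the patent's loop: mutate a matrix of pairwise values,
# score successors, keep the best. The fitness function here is a
# hypothetical stand-in for the application-specific utility measure.
import numpy as np

rng = np.random.default_rng(1)
N = 6
target = rng.normal(size=(N, N))          # hypothetical utility target

def fitness(M):
    return -np.abs(M - target).sum()      # higher is better

M = np.zeros((N, N))                      # initial possible set of values
for generation in range(500):
    # Genetic operator: point mutations producing successor matrices.
    successors = [M + rng.normal(scale=0.1, size=(N, N)) for _ in range(20)]
    best = max(successors, key=fitness)
    if fitness(best) > fitness(M):        # surviving-matrix selection
        M = best
    if fitness(M) > -1.0:                 # stop once the metric is satisfactory
        break
print("final fitness:", fitness(M))
```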


Journal Article
TL;DR: Khintchine's theorem and its extensions are fundamental in the theory of metric Diophantine approximation as discussed by the authors, and they relate the size of the set of ψ-approximable points (defined in (2) below) to a property of the function ψ.
Abstract: Khintchine's theorem and its extensions are fundamental in the theory of metric Diophantine approximation. The theorems relate the size of the set of ψ-approximable points (defined in (2) below) to a property of the function ψ. For example, in its one-dimensional form, Khintchine's theorem asserts that if the function ψ : ℕ → ℝ+ is decreasing, then either almost all or almost no real numbers x satisfy
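
For reference, the truncated sentence is leading to the classical statement, which is standard: for decreasing ψ : ℕ → ℝ+, the Lebesgue measure of the set of ψ-approximable x ∈ [0, 1] obeys a zero-one law governed by Σψ(q):

```latex
\lambda\big(\{\, x \in [0,1] : |x - p/q| < \psi(q)/q
       \ \text{for infinitely many rationals } p/q \,\}\big)
= \begin{cases}
1, & \text{if } \sum_{q} \psi(q) = \infty, \\
0, & \text{if } \sum_{q} \psi(q) < \infty.
\end{cases}
```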

01 Jan 1992
TL;DR: A metric based on pershapes is introduced that provides a quantitative way of measuring how similar two machines are in terms of their performance distributions; the metric is related to the extent to which pairs of machines have varying relative performance levels depending on which benchmark is used.
Abstract: Runs of a benchmark or a suite of benchmarks are inadequate either to characterize a given machine or to predict the running time of some benchmark not included in the suite. Further, the observed results are quite sensitive to the nature of the benchmarks, and the relative performance of two machines can vary greatly depending on the benchmarks used. In this dissertation we propose and investigate a new approach to CPU performance evaluation. The main idea is to represent machine performance and program execution in terms of a high level abstract machine model. The model is machine-independent and thus is valid on any uniprocessor. We have developed tools to measure the performance of a variety of machines, from workstations to supercomputers. We have also characterized the execution of many large applications, including the SPEC and Perfect benchmark suites. By merging these machine and program characterizations, we can estimate execution times quite accurately for arbitrary machine-program combinations. Another aspect of the research has consisted in characterizing the effectiveness of optimizing compilers. Another contribution of this dissertation is to propose and investigate new metrics for machine and program similarity and the information that can be derived from them. We define the concept of pershapes, which represent the level of performance of a machine for different types of computation. We introduce a metric based on pershapes that provides a quantitative way of measuring how similar two machines are in terms of their performance distributions. This metric is related to the extent to which pairs of machines have varying relative performance levels depending on which benchmark is used. A similar metric for programs allows us to compare and cluster them according to their dynamic behavior. All this information helps to identify those parameters in machines and programs which are the most important in determining their execution times. Further, it provides a way for designers and users to identify potential bottlenecks in machines, compilers, and applications.

Journal ArticleDOI
01 Apr 1992
TL;DR: In this paper, Chain recurrence and attraction in non-compact spaces is studied in the context of dynamical systems on arbitrary metric spaces, where the point of view taken in the above-mentioned paper was that the given metric was of primary importance rather than the topology that it generated.
Abstract: This paper and Chain recurrence and attraction in noncompact spaces [Ergodic Theory Dynamical Systems (to appear)] are concerned with the question of extending certain results obtained by C. Conley for dynamical systems on compact spaces to systems on arbitrary metric spaces. The basic result is the analogue of Conley's theorem that characterizes the chain recurrent set of f in terms of the attractors of f and their basins of attraction. The point of view taken in the above-mentioned paper was that the given metric was of primary importance rather than the topology that it generated. The purpose of this note is to give results that depend on the topology induced by a metric rather than on the particular choice of the metric.

Journal ArticleDOI
TL;DR: It is shown that if a change of spatial topology is mediated by a spacetime with an everywhere-nonsingular metric of Lorentzian signature which admits a spinor structure, then the Kervaire semi-characteristic of the boundary plus the kink number of the LorentZian metric on the boundary must vanish modulo 2.
Abstract: We show that if a change of spatial topology is mediated by a spacetime with an everywhere-non-singular metric of Lorentzian signature which admits a spinor structure, then the Kervaire semicharacteristic of the boundary plus the kink number of the Lorentzian metric on the boundary must vanish modulo 2. The kink number is a measure of how many times the light cone tips over on the boundary. It vanishes if the boundary is everywhere spacelike. This result gives a generalization of a previous selection rule: The number of wormholes plus the number of kinks created during a topology change is conserved modulo 2.

Patent
04 Aug 1992
TL;DR: In this article, a performance measure of an ATR is made from ancillary target data and the ATR output, and a parameter of theATR is varied to determine the change in the performance due to the parameter variation.
Abstract: A performance measure of an ATR is made from ancillary target data and the ATR output. A parameter of the ATR is varied to determine the change in ATR performance due to the parameter variation. Separate performance models in the form of quadratic equations provide performance as a function of the parameter and metrics. The performance model is partially differentiated with respect to the parameter. The partial differentiation allows solution for the estimated metric.

Journal ArticleDOI
TL;DR: A new metric is proposed that is close to the Euclidean distance and computationally more efficient, which is helpful when the dimension of the data set is large; it is demonstrated on a randomly generated data set in the context of clustering.

Book ChapterDOI
08 Jul 1992
TL;DR: The question of whether one can get around this cubic lower bound is examined, and it is shown that under the L1 and L∞ metrics, the time to compute the minimum Hausdorff distance between two point sets is O(n² log² n).
Abstract: We consider the following geometric pattern matching problem: find the minimum Hausdorff distance between two point sets under translation with L1 or L∞ as the underlying metric. Huttenlocher, Kedem, and Sharir have shown that this minimum distance can be found by constructing the upper envelope of certain Voronoi surfaces. Further, they show that if the two sets are each of cardinality n, then the complexity of the upper envelope of such surfaces is Ω(n³). We examine the question of whether one can get around this cubic lower bound, and show that under the L1 and L∞ metrics, the time to compute the minimum Hausdorff distance between two point sets is O(n² log² n).
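
For concreteness, the quantity whose minimum over translations is sought is the Hausdorff distance; a brute-force evaluation for a single placement under L∞ is sketched below (the paper's contribution is the O(n² log² n) minimization over all translations, not this naive check).

```python
# Brute-force Hausdorff distance under the L-infinity metric for one
# fixed placement of the two point sets. The paper minimizes this
# quantity over all translations t of A in O(n^2 log^2 n) time.
def linf(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def hausdorff(A, B, d=linf):
    h = lambda X, Y: max(min(d(x, y) for y in Y) for x in X)
    return max(h(A, B), h(B, A))   # symmetric Hausdorff distance

A = [(0, 0), (1, 0), (0, 1)]
B = [(0, 0), (1, 1)]
print(hausdorff(A, B))  # 1
```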

Proceedings ArticleDOI
01 Dec 1992
TL;DR: In this paper, the authors present a technique, called true zeroing, that permits direct, quantitative, and fair comparison of parallel program performance metrics, including Gprof, Critical Path, and Quartz/NPT.
Abstract: The authors present a novel technique, called true zeroing, that permits direct, quantitative, and fair comparison of parallel program performance metrics. This technique was applied to three programs that include both numeric and symbolic applications. Three existing metrics, Gprof, Critical Path, and Quartz/NPT, and several new variations were compared. The result of this comparison was that while Critical Path provided the best overall guidance, it was not universally better than the other metrics. Because there is no single universal metric, future parallel performance systems need to support multiple metrics. The authors present a set of recommendations to tool builders based on the experience gained during this case study.

Journal ArticleDOI
TL;DR: It is shown that relative complexity gives feedback on the same complexity domains that many other metrics do, and developers can save time by choosing one metric to do the work of many.
Abstract: A relative complexity technique that combines the features of many complexity metrics to predict performance and reliability of a computer program is presented. Relative complexity aggregates many similar metrics into a linear compound metric that describes a program. Since relative complexity is a static measure, it is expanded by measuring relative complexity over time to find a program's functional complexity. It is shown that relative complexity gives feedback on the same complexity domains that many other metrics do. Thus, developers can save time by choosing one metric to do the work of many.
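
One plausible reading of the "linear compound metric" is a weighted sum of standardized metrics with weights taken from the dominant principal component; the sketch below implements that reading (an interpretation under stated assumptions, not the authors' exact procedure).

```python
# Sketch: relative complexity as a linear compound of standardized
# metrics, weighted by the first principal component. An interpretation
# of the technique, not the authors' exact procedure.
import numpy as np

def relative_complexity(metrics):
    """metrics: programs x raw-complexity-metrics matrix."""
    Z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)
    # Weights = dominant eigenvector of the correlation matrix.
    eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
    w = eigvecs[:, -1]                 # eigh sorts eigenvalues ascending
    return Z @ w                       # one compound score per program

rng = np.random.default_rng(2)
raw = rng.lognormal(size=(10, 4))      # hypothetical metric values
print(relative_complexity(raw).round(2))
```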