
Showing papers on "Metric (mathematics) published in 1998"


Journal ArticleDOI
TL;DR: An example-based learning approach for locating vertical frontal views of human faces in complex scenes and shows empirically that the distance metric adopted for computing difference feature vectors, and the "nonface" clusters included in the distribution-based model, are both critical for the success of the system.
Abstract: We present an example-based learning approach for locating vertical frontal views of human faces in complex scenes. The technique models the distribution of human face patterns by means of a few view-based "face" and "nonface" model clusters. At each image location, a difference feature vector is computed between the local image pattern and the distribution-based model. A trained classifier determines, based on the difference feature vector measurements, whether or not a human face exists at the current image location. We show empirically that the distance metric we adopt for computing difference feature vectors, and the "nonface" clusters we include in our distribution-based model, are both critical for the success of our system.

2,013 citations
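As a rough illustration of the difference-feature idea (a minimal sketch with generic Euclidean distances and hypothetical centroid arrays, not the paper's two-value Mahalanobis-plus-Euclidean metric or its cluster-fitting procedure):

```python
import numpy as np

def difference_features(x, face_centroids, nonface_centroids):
    """Distances from a flattened, normalized image patch x to each model cluster.

    A minimal sketch of the distribution-based idea: the feature vector is the
    vector of distances from the patch to the "face" and "nonface" cluster
    centers.  The paper's actual two-value (Mahalanobis + Euclidean) metric is
    not reproduced here.
    """
    centers = np.vstack([face_centroids, nonface_centroids])
    return np.linalg.norm(centers - x, axis=1)  # one distance per cluster

# A downstream binary classifier is then trained on difference_features(...)
# computed for labeled face / nonface patches.
```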


Proceedings ArticleDOI
04 Jan 1998
TL;DR: This paper uses the Earth Mover's Distance to exhibit the structure of color-distribution and texture spaces by means of Multi-Dimensional Scaling displays, and proposes a novel approach to the problem of navigating through a collection of color images, which leads to a new paradigm for image database search.
Abstract: We introduce a new distance between two distributions that we call the Earth Mover's Distance (EMD), which reflects the minimal amount of work that must be performed to transform one distribution into the other by moving "distribution mass" around. This is a special case of the transportation problem from linear optimization, for which efficient algorithms are available. The EMD also allows for partial matching. When used to compare distributions that have the same overall mass, the EMD is a true metric, and has easy-to-compute lower bounds. In this paper we focus on applications to image databases, especially color and texture. We use the EMD to exhibit the structure of color-distribution and texture spaces by means of Multi-Dimensional Scaling displays. We also propose a novel approach to the problem of navigating through a collection of color images, which leads to a new paradigm for image database search.

1,828 citations
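Since the EMD is a transportation problem, a small linear-programming sketch conveys the idea; this assumes equal total mass and a precomputed ground-distance matrix D, and is far less efficient than the specialized solvers the paper relies on:

```python
import numpy as np
from scipy.optimize import linprog

def emd(p, q, D):
    """Earth Mover's Distance between histograms p and q (equal total mass)
    with ground-distance matrix D[i, j], solved as a transportation LP."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    D = np.asarray(D, float)
    m, n = len(p), len(q)
    c = D.reshape(-1)                        # cost of moving mass from bin i to bin j
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1       # mass leaving supplier i
    for j in range(n):
        A_eq[m + j, j::n] = 1                # mass arriving at consumer j
    b_eq = np.concatenate([p, q])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun / p.sum()                 # normalize by total mass moved
```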


Book
01 Jan 1998
TL;DR: A group-theoretical approach to hydrodynamics is proposed, in which the authors treat hydrodynamics as the differential geometry of diffeomorphism groups; the principle of least action implies that the motion of a fluid is described by geodesics on the group in the right-invariant Riemannian metric given by the kinetic energy.
Abstract: A group theoretical approach to hydrodynamics considers hydrodynamics to be the differential geometry of diffeomorphism groups. The principle of least action implies that the motion of a fluid is described by the geodesics on the group in the right-invariant Riemannian metric given by the kinetic energy. Investigation of the geometry and structure of such groups turns out to be useful for describing the global behavior of fluids for large time intervals.

1,574 citations
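For reference, the standard statement behind this (paraphrased, not quoted from the book): for an ideal incompressible fluid, geodesics of the right-invariant kinetic-energy metric on the group of volume-preserving diffeomorphisms satisfy the Euler equations,

```latex
\partial_t v + (v \cdot \nabla)\, v = -\nabla p, \qquad \nabla \cdot v = 0 .
```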


Journal ArticleDOI
TL;DR: This paper describes an approach that integrates the grid-based and topological mapping paradigms, gaining the advantages of both worlds: accuracy/consistency and efficiency.

1,140 citations


Journal ArticleDOI
TL;DR: The theory of quasiconformal maps in metric spaces that satisfy certain bounds on their mass and geometry has been studied in this article, and the main message is that such a theory is both relevant and viable.
Abstract: This paper develops the foundations of the theory of quasiconformal maps in metric spaces that satisfy certain bounds on their mass and geometry. The principal message is that such a theory is both relevant and viable. The first main issue is the problem of definition, which we next describe. Quasiconformal maps are commonly understood as homeomorphisms that distort the shape of infinitesimal balls by a uniformly bounded amount. This requirement makes sense in every metric space. Given a homeomorphism f from a metric space X to a metric space Y , then for x∈X and r>0 set
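The abstract is truncated at this point; the quantity usually introduced here (reconstructed from the standard definition of quasiconformality, not quoted from the paper) is

```latex
H_f(x, r) = \frac{\sup\{\, d_Y(f(x), f(y)) : d_X(x, y) \le r \,\}}
                 {\inf\{\, d_Y(f(x), f(y)) : d_X(x, y) \ge r \,\}},
\qquad
H_f(x) = \limsup_{r \to 0} H_f(x, r),
```

with f called quasiconformal when H_f is uniformly bounded.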

1,003 citations


Journal ArticleDOI
TL;DR: The stochastic model allows us to learn a string-edit distance function from a corpus of examples and is applicable to any string classification problem that may be solved using a similarity function against a database of labeled prototypes.
Abstract: In many applications, it is necessary to determine the similarity of two strings. A widely-used notion of string similarity is the edit distance: the minimum number of insertions, deletions, and substitutions required to transform one string into the other. In this report, we provide a stochastic model for string-edit distance. Our stochastic model allows us to learn a string-edit distance function from a corpus of examples. We illustrate the utility of our approach by applying it to the difficult problem of learning the pronunciation of words in conversational speech. In this application, we learn a string-edit distance with nearly one-fifth the error rate of the untrained Levenshtein distance. Our approach is applicable to any string classification problem that may be solved using a similarity function against a database of labeled prototypes.

897 citations
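For concreteness, a standard dynamic-programming computation of the untrained (unit-cost Levenshtein) edit distance that the paper's learned costs replace; this is textbook code, not the authors' stochastic model:

```python
def levenshtein(a, b):
    """Classic edit distance: minimum number of insertions, deletions and
    substitutions turning string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution (free if equal)
        prev = cur
    return prev[-1]

assert levenshtein("kitten", "sitting") == 3
```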


Proceedings ArticleDOI
04 Jan 1998
TL;DR: A self-calibration method is presented which efficiently deals with all kinds of constraints on the internal camera parameters and a practical method is proposed which can retrieve metric reconstruction from image sequences obtained with uncalibrated zooming/focusing cameras.
Abstract: In this paper the feasibility of self-calibration in the presence of varying internal camera parameters is under investigation. A self-calibration method is presented which efficiently deals with all kinds of constraints on the internal camera parameters. Within this framework a practical method is proposed which can retrieve metric reconstruction from image sequences obtained with uncalibrated zooming/focusing cameras. The feasibility of the approach is illustrated on real and synthetic examples.

896 citations


Proceedings ArticleDOI
23 Feb 1998
TL;DR: This work proposes a modification of the so called "FastMap", to map sequences into points, with little compromise of "recall" (typically zero), and a fast linear test, to help to discard quickly many of the false alarms that FastMap will typically introduce.
Abstract: Fast similarity searching in large time sequence databases has typically used Euclidean distance as a dissimilarity metric. However, for several applications, including matching of voice, audio and medical signals (e.g., electrocardiograms), one is required to permit local accelerations and decelerations in the rate of sequences, leading to a popular, field tested dissimilarity metric called the "time warping" distance. From the indexing viewpoint, this metric presents two major challenges: (a) it does not lead to any natural indexable "features", and (b) comparing two sequences requires time quadratic in the sequence length. To address each problem, we propose to use: (a) a modification of the so called "FastMap", to map sequences into points, with little compromise of "recall" (typically zero); and (b) a fast linear test, to help us discard quickly many of the false alarms that FastMap will typically introduce. Using both ideas in cascade, our proposed method achieved up to an order of magnitude speed-up over sequential scanning on both real and synthetic datasets.

771 citations
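A baseline sketch of the quadratic-time time-warping distance mentioned above, for 1-D sequences; the FastMap projection and the linear-time lower-bound filter that make it indexable are not shown:

```python
import numpy as np

def dtw(x, y):
    """Dynamic time warping distance between 1-D sequences x and y,
    computed by the standard O(len(x) * len(y)) recurrence."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```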


Journal ArticleDOI
TL;DR: In this paper, a third use of singular vectors is proposed as part of a strategy to target adaptive observations to “sensitive” parts of the atmosphere using unmanned aircraft, though calculations in this paper are motivated by the upstream component of the Fronts and Atlantic Storm-Track Experiment.
Abstract: Singular vectors of the linearized equations of motion have been used to study the instability properties of the atmosphere–ocean system and its related predictability. A third use of these singular vectors is proposed here: as part of a strategy to target adaptive observations to “sensitive” parts of the atmosphere. Such observations could be made using unmanned aircraft, though calculations in this paper are motivated by the upstream component of the Fronts and Atlantic Storm-Track Experiment. Oceanic applications are also discussed. In defining this strategy, it is shown that there is, in principle, no freedom in the choice of inner product or metric for the singular vector calculation. However, the correct metric is dependent on the purpose for making the targeted observations (to study precursor developments or to improve forecast initial conditions). It is argued that for predictability studies, where both the dynamical instability properties of the system and the specification of the opera...

484 citations


Journal ArticleDOI
Charles H. Bennett, Peter Gacs, Ming Li, Paul M. B. Vitányi, Wojciech H. Zurek
TL;DR: It is shown that the information distance is a universal cognitive similarity distance and investigated the maximal correlation of the shortest programs involved, the maximal uncorrelation of programs, and the density properties of the discrete metric spaces induced by the information distances.
Abstract: While Kolmogorov (1965) complexity is the accepted absolute measure of information content in an individual finite object, a similarly absolute notion is needed for the information distance between two individual objects, for example, two pictures. We give several natural definitions of a universal information metric, based on length of shortest programs for either ordinary computations or reversible (dissipationless) computations. It turns out that these definitions are equivalent up to an additive logarithmic term. We show that the information distance is a universal cognitive similarity distance. We investigate the maximal correlation of the shortest programs involved, the maximal uncorrelation of programs (a generalization of the Slepian-Wolf theorem of classical information theory), and the density properties of the discrete metric spaces induced by the information distances. A related distance measures the amount of nonreversibility of a computation. Using the physical theory of reversible computation, we give an appropriate (universal, antisymmetric, and transitive) measure of the thermodynamic work required to transform one object into another object by the most efficient process. Information distance between individual objects is needed in pattern recognition where one wants to express effective notions of "pattern similarity" or "cognitive similarity" between individual objects and in thermodynamics of computation where one wants to analyze the energy dissipation of a computation from a particular input to a particular output.

479 citations
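In symbols, the universal information distance takes the familiar max form (standard notation, up to the additive logarithmic term mentioned above; this rendering is mine, not quoted from the paper):

```latex
E(x, y) = \max\{\, K(x \mid y),\; K(y \mid x) \,\},
```

where K(· | ·) denotes conditional Kolmogorov complexity.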


Proceedings ArticleDOI
18 Oct 1998
TL;DR: This work presents a natural extension of the original error metric that can account for a wide range of vertex attributes and can rapidly produce high quality approximations of complex polygonal surface models.
Abstract: There are a variety of application areas in which there is a need for simplifying complex polygonal surface models. These models often have material properties such as colors, textures, and surface normals. Our surface simplification algorithm, based on iterative edge contraction and quadric error metrics, can rapidly produce high quality approximations of such models. We present a natural extension of our original error metric that can account for a wide range of vertex attributes.
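A minimal numeric sketch of the basic geometric quadric error; the attribute-extended metric of the paper adds colour/texture/normal dimensions and is not reproduced here:

```python
import numpy as np

def plane_quadric(a, b, c, d):
    """Fundamental quadric of the plane ax + by + cz + d = 0 (with a^2+b^2+c^2 = 1)."""
    p = np.array([a, b, c, d])
    return np.outer(p, p)

def quadric_error(Q, v):
    """Error of vertex v = (x, y, z) under accumulated quadric Q,
    i.e. vbar^T Q vbar with vbar the homogeneous form of v."""
    vbar = np.append(v, 1.0)
    return float(vbar @ Q @ vbar)

# Q for a vertex is the sum of plane_quadric(...) over its incident triangles;
# an edge contraction picks the position minimizing quadric_error(Q1 + Q2, v).
```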

Proceedings ArticleDOI
01 Jun 1998
TL;DR: The results of experiments demonstrate that the Pyramid-Technique outperforms the X-tree and the Hilbert R-tree by a factor of up to 14 (number of page accesses) and up to 2500 (total elapsed time) for range queries.
Abstract: In this paper, we propose the Pyramid-Technique, a new indexing method for high-dimensional data spaces. The Pyramid-Technique is highly adapted to range query processing using the maximum metric Lmax. In contrast to all other index structures, the performance of the Pyramid-Technique does not deteriorate when processing range queries on data of higher dimensionality. The Pyramid-Technique is based on a special partitioning strategy which is optimized for high-dimensional data. The basic idea is to divide the data space first into 2d pyramids sharing the center point of the space as a top. In a second step, the single pyramids are cut into slices parallel to the basis of the pyramid. These slices form the data pages. Furthermore, we show that this partition provides a mapping from the given d-dimensional space to a 1-dimensional space. Therefore, we are able to use a B+-tree to manage the transformed data. As an analytical evaluation of our technique for hypercube range queries and uniform data distribution shows, the Pyramid-Technique clearly outperforms index structures using other partitioning strategies. To demonstrate the practical relevance of our technique, we experimentally compared the Pyramid-Technique with the X-tree, the Hilbert R-tree, and the Linear Scan. The results of our experiments using both synthetic and real data demonstrate that the Pyramid-Technique outperforms the X-tree and the Hilbert R-tree by a factor of up to 14 (number of page accesses) and up to 2500 (total elapsed time) for range queries.
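A small sketch of the point-to-pyramid-value mapping described above (assumes points normalized to [0,1]^d; B+-tree storage and range-query processing are not shown):

```python
import numpy as np

def pyramid_value(v):
    """Map a point v in [0,1]^d to its 1-D pyramid value.

    The space is split into 2d pyramids meeting at the centre point; the key
    is pyramid number plus the height of the point inside that pyramid."""
    v = np.asarray(v, dtype=float)
    d = len(v)
    dist = np.abs(0.5 - v)
    jmax = int(np.argmax(dist))            # dimension defining the pyramid
    i = jmax if v[jmax] < 0.5 else jmax + d
    return i + dist[jmax]                  # integer part = pyramid, fraction = height

# The (pyramid_value, point) pairs are then managed in an ordinary B+-tree.
```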

Proceedings ArticleDOI
23 Jun 1998
TL;DR: The novel contributions are that in a stratified context the various forms of providing metric information can be represented as circular constraints on the parameters of an affine transformation of the plane, providing a simple and uniform framework for integrating constraints.
Abstract: We describe the geometry constraints and algorithmic implementation for metric rectification of planes. The rectification allows metric properties, such as angles and length ratios, to be measured on the world plane from a perspective image. The novel contributions are: first, that in a stratified context the various forms of providing metric information (a known angle, two equal though unknown angles, or a known length ratio) can all be represented as circular constraints on the parameters of an affine transformation of the plane, which provides a simple and uniform framework for integrating constraints; second, direct rectification from right angles in the plane; third, it is shown that metric rectification enables calibration of the internal camera parameters; fourth, vanishing points are estimated using a Maximum Likelihood estimator; fifth, an algorithm for automatic rectification. Examples are given for a number of images, and applications are demonstrated for texture map acquisition and metric measurements.

ReportDOI
01 Dec 1998
TL;DR: An SVM-based face recognition algorithm is developed that generates a similarity metric between faces learned from examples of differences between faces, and it is compared with a principal component analysis (PCA) based algorithm on a difficult set of images from the FERET database.
Abstract: Face recognition is a K-class problem, where K is the number of known individuals, while support vector machines (SVMs) are a binary classification method. By reformulating the face recognition problem and reinterpreting the output of the SVM classifier, we developed an SVM-based face recognition algorithm. The face recognition problem is formulated as a problem in difference space, which models dissimilarities between two facial images. In difference space we formulate face recognition as a two-class problem. The classes are: dissimilarities between faces of the same person, and dissimilarities between faces of different people. By modifying the interpretation of the decision surface generated by the SVM, we generated a similarity metric between faces that is learned from examples of differences between faces. The SVM-based algorithm is compared with a principal component analysis (PCA) based algorithm on a difficult set of images from the FERET database. Performance was measured for both verification and identification scenarios. The identification performance for SVM is 77-78% versus 54% for PCA. For verification, the equal error rate is 7% for SVM and 13% for PCA.
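A rough sketch of the difference-space formulation using a generic SVM (scikit-learn here is an arbitrary stand-in; the pair sampling, kernel and decision-surface reinterpretation are placeholders rather than the report's procedure):

```python
import numpy as np
from sklearn.svm import SVC

def difference_pairs(X, y, n_pairs=1000, seed=0):
    """Build difference vectors x_i - x_j labelled +1 (same person) / -1 (different)."""
    rng = np.random.default_rng(seed)
    pairs, labels = [], []
    for _ in range(n_pairs):
        i, j = rng.integers(0, len(X), size=2)
        pairs.append(X[i] - X[j])
        labels.append(1 if y[i] == y[j] else -1)
    return np.array(pairs), np.array(labels)

# X: (n_images, n_features) face representations, y: person ids
# D, t = difference_pairs(X, y)
# clf = SVC(kernel="rbf").fit(D, t)
# similarity(a, b) ~ clf.decision_function([a - b])   # larger => more likely same person
```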

Journal Article
TL;DR: It is explained how the chi-squared statistic compensates for the implicit assumption of a Euclidean distance measure being the shortest path between two points in high-dimensional space and how the Bhattacharyya metric requires no such standardization and has no singularity problems with zero-count data.
Abstract: This paper highlights advantageous properties of the Bhattacharyya metric over the chi-squared statistic for comparing frequency distributed data. The original interpretation of the Bhattacharyya metric as a geometric similarity measure is reviewed and it is pointed out that this derivation is independent of the use of the Bhattacharyya measure as an upper bound on the probability of misclassification in a two-class problem. The affinity between the Bhattacharyya and Matusita measures is described and we suggest use of the Bhattacharyya measure for comparing histogram data. We explain how the chi-squared statistic compensates for the implicit assumption of a Euclidean distance measure being the shortest path between two points in high-dimensional space. By using the square-root transformation the Bhattacharyya metric requires no such standardization and by its multiplicative nature has no singularity problems (unlike those caused by the denominator of the chi-squared statistic) with zero-count data.
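For two normalized histograms the quantities discussed above are straightforward to compute; a small sketch (the Matusita distance shown is one of the square-root-based measures the paper relates to the Bhattacharyya coefficient):

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient of two histograms (after normalization) and the
    Matusita distance derived from it; the square-root transformation means zero
    bins cause no singularities, unlike the chi-squared denominator."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    coeff = float(np.sum(np.sqrt(p * q)))                               # in [0, 1]
    matusita = float(np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))   # = sqrt(2 - 2*coeff)
    return coeff, matusita
```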

Journal ArticleDOI
TL;DR: In this paper, the authors focus on minimum contrast estimators on sieves, which are commonly used in practice as D-dimensional linear spaces generated by some basis: piecewise polynomials, wavelets, Fourier, etc.
Abstract: This paper, which we dedicate to Lucien Le Cam for his seventieth birthday, has been written in the spirit of his pioneering works on the relationships between the metric structure of the parameter space and the rate of convergence of optimal estimators. It has been written in his honour as a contribution to his theory. It contains further developments of the theory of minimum contrast estimators elaborated in a previous paper. We focus on minimum contrast estimators on sieves. By a 'sieve' we mean some approximating space of the set of parameters. The sieves which are commonly used in practice are D-dimensional linear spaces generated by some basis: piecewise polynomials, wavelets, Fourier, etc. It was recently pointed out that nonlinear sieves should also be considered since they provide better spatial adaptation (think of histograms built from any partition of [0,1] into D subintervals as a typical example). We introduce some metric assumptions which are closely related to the notion of finite-dimensional metric space in the sense of Le Cam. These assumptions are satisfied by the examples of practical interest and allow us to compute sharp rates of convergence for minimum contrast estimators.

Journal ArticleDOI
TL;DR: This paper constructs a distance between deformations defined through a metric giving the cost of infinitesimal deformations, and proposes a numerical scheme to solve a variational problem involving this distance and leading to a sub-optimal gradient pattern matching.
Abstract: In a previous paper, it was proposed to see the deformations of a common pattern as the action of an infinite dimensional group. We show in this paper that this approach can be applied numerically for pattern matching in image analysis of digital images. Using Lie group ideas, we construct a distance between deformations defined through a metric giving the cost of infinitesimal deformations. Then we propose a numerical scheme to solve a variational problem involving this distance and leading to a sub-optimal gradient pattern matching. Its links with fluid models are established.

Journal ArticleDOI
TL;DR: R-MUSIC can easily extract multiple asynchronous dipolar sources that are difficult to find using the original MUSIC scan; it is also applied to the more general IT model, with results shown for combinations of fixed, rotating, and synchronous dipoles.
Abstract: The multiple signal classification (MUSIC) algorithm can be used to locate multiple asynchronous dipolar sources from electroencephalography (EEG) and magnetoencephalography (MEG) data. The algorithm scans a single-dipole model through a three-dimensional (3-D) head volume and computes projections onto an estimated signal subspace. To locate the sources, the user must search the head volume for multiple local peaks in the projection metric. This task is time consuming and subjective. Here, the authors describe an extension of this approach which they refer to as recursive MUSIC (R-MUSIC). This new procedure automatically extracts the locations of the sources through a recursive use of subspace projections. The new method is also able to locate synchronous sources through the use of a spatio-temporal independent topographies (IT) model. This model defines a source as one or more nonrotating dipoles with a single time course. Within this framework, the authors are able to locate fixed, rotating, and synchronous dipoles. The recursive subspace projection procedure that they introduce here uses the metric of canonical or subspace correlations as a multidimensional form of correlation analysis between the model subspace and the data subspace. By recursively computing subspace correlations, the authors build up a model for the sources which accounts for a given set of data. They demonstrate here how R-MUSIC can easily extract multiple asynchronous dipolar sources that are difficult to find using the original MUSIC scan. The authors then demonstrate R-MUSIC applied to the more general IT model and show results for combinations of fixed, rotating, and synchronous dipoles.
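A generic numerical sketch of subspace (canonical) correlations between two subspaces, the kind of metric the recursive scan maximizes; this is standard linear algebra, not the authors' R-MUSIC code:

```python
import numpy as np

def subspace_correlations(A, B):
    """Canonical (subspace) correlations between the column spans of A and B,
    i.e. the cosines of the principal angles between the two subspaces."""
    Qa, _ = np.linalg.qr(A)                 # orthonormal basis of the model subspace
    Qb, _ = np.linalg.qr(B)                 # orthonormal basis of the signal subspace
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.clip(s, 0.0, 1.0)
```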

Book
31 Dec 1998
TL;DR: A collection of foundational chapters on fuzzy set theory and fuzzy topology, including axiomatic foundations of fixed-based fuzzy topology (Hoehle and Sostak) and powerset-operator and categorical foundations (Rodabaugh).
Abstract: 1. Many-Valued Logic and Fuzzy Set Theory S. Gottwald. 2. Powerset Operator Foundations for Poslat Fuzzy Set Theories and Topologies S.E. Rodabaugh. 3. Axiomatic Foundations of Fixed-Based Fuzzy Topology U. Hoehle, A.P. Sostak. 4. Categorical Foundations of Variable-Basis Fuzzy Topology S.E. Rodabaugh. 5. Characterization of L-Topologies by L-Valued Neighborhoods U. Hoehle. 6. Separation Axioms: Extension of Mappings and Embedding of Spaces T. Kubiak. 7. Separation Axioms: Representation Theorems, Compactness, and Compactifications S.E. Rodabaugh. 8. Uniform Spaces W. Kotze. 9. Extensions of Uniform Space Notions M.H. Burton, J. Gutierrez Garcia. 10. Fuzzy Real Lines and Dual Real Lines as Poslat Topological, Uniform, and Metric Ordered Semirings with Unity S.E. Rodabaugh. 11. Fundamentals of Generalized Measure Theory E.P. Klement, S. Weber. 12. On Conditioning Operators U. Hoehle, S. Weber. 13. Applications of Decomposable Measures E. Pap. 14. Fuzzy Random Variables Revisited D.A. Ralescu.

01 Apr 1998
TL;DR: The development of a metric that predicts controller workload as a function of air traffic characteristics in a volume of airspace is essential to the development of both air traffic management automation and air traffic procedures.
Abstract: The definition of a metric of air traffic controller workload based on air traffic characteristics is essential to the development of both air traffic management automation and air traffic procedures. Dynamic density is a proposed concept for a metric that includes both traffic density (a count of aircraft in a volume of airspace) and traffic complexity (a measure of the complexity of the air traffic in a volume of airspace). It was hypothesized that a metric that includes terms that capture air traffic complexity will be a better measure of air traffic controller workload than current measures based only on traffic density. A weighted linear dynamic density function was developed and validated operationally. The proposed dynamic density function includes a traffic density term and eight traffic complexity terms. A unit-weighted dynamic density function was able to account for an average of 22% of the variance in observed controller activity not accounted for by traffic density alone. A comparative analysis of unit weights, subjective weights, and regression weights for the terms in the dynamic density equation was conducted. The best predictor of controller activity was the dynamic density equation with regression-weighted complexity terms.

Journal Article
TL;DR: In this article, a compact Lie group acts ergodically on a unital C^*-algebra, and several ways of using this structure to define metrics on the state space of $A$ are considered.
Abstract: Let a compact Lie group act ergodically on a unital $C^*$-algebra $A$. We consider several ways of using this structure to define metrics on the state space of $A$. These ways involve length functions, norms on the Lie algebra, and Dirac operators. The main thrust is to verify that the corresponding metric topologies on the state space agree with the weak-$*$ topology.
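One of the constructions alluded to, in its standard Connes-style form (paraphrased, not quoted from the paper): a Dirac-type operator D acting on a Hilbert space carrying A induces a metric on the state space via

```latex
\rho(\varphi, \psi) = \sup\bigl\{\, |\varphi(a) - \psi(a)| \;:\; a \in A,\ \|[D, a]\| \le 1 \,\bigr\} .
```

The paper's main question is when the topology this metric induces agrees with the weak-* topology.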

Proceedings ArticleDOI
08 Nov 1998
TL;DR: This paper derandomizes the use of Bartal's algorithm and obtains the first deterministic approximation algorithms for buy-at-bulk network design and vehicle routing, based on a novel view of probabilistic approximation of metric spaces as a deterministic optimization problem via linear programming.
Abstract: Y. Bartal (1996, 1998) gave a randomized polynomial time algorithm that given any n point metric G, constructs a tree T such that the expected stretch (distortion) of any edge is at most O(log n log log n). His result has found several applications and in particular has resulted in approximation algorithms for many graph optimization problems. However approximation algorithms based on his result are inherently randomized. In this paper we derandomize the use of Bartal's algorithm in the design of approximation algorithms. We give an efficient polynomial time algorithm that given a finite n point metric G, constructs O(n log n) trees and a probability distribution μ on them such that the expected stretch of any edge of G in a tree chosen according to μ is at most O(log n log log n). Our result establishes that finite metrics can be probabilistically approximated by a small number of tree metrics. We obtain the first deterministic approximation algorithms for buy-at-bulk network design and vehicle routing; in addition we subsume results from our earlier work on derandomization. Our main result is obtained by a novel view of probabilistic approximation of metric spaces as a deterministic optimization problem via linear programming.

Book
01 Aug 1998
TL;DR: This book develops physics in five dimensions, covering induced-matter theory, classical and other tests in 5D, cosmology and astrophysics in 5D, 5D electromagnetism, and the canonical metric and fifth force.
Abstract: Concepts and theories of physics. Induced-matter theory. The classical and other tests in 5D. Cosmology and astrophysics in 5D. 5D electromagnetism. The canonical metric and fifth force. Canonical solutions and physical quantities. Retrospect and prospect.

Journal ArticleDOI
TL;DR: The computational model BX is used to give domain-theoretic proofs of Banach's fixed point theorem and of two classical results of Hutchinson: on a complete metric space, every hyperbolic iterated function system has a unique non-empty compact attractor, and every iterated function system with probabilities has a unique invariant measure with bounded support.
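An illustration of the attractor statement for one concrete hyperbolic IFS (the three Sierpinski contractions; a plain "chaos game" iteration, unrelated to the paper's domain-theoretic machinery):

```python
import numpy as np

def chaos_game(n=20000, seed=0):
    """Iterate a randomly chosen contraction of a hyperbolic IFS; the points
    accumulate on its unique compact attractor (here the Sierpinski triangle)."""
    corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
    rng = np.random.default_rng(seed)
    x = np.array([0.25, 0.25])
    pts = np.empty((n, 2))
    for k in range(n):
        c = corners[rng.integers(3)]
        x = (x + c) / 2.0              # one of the three contractions w_i(x)
        pts[k] = x
    return pts
```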

Proceedings Article
01 Jan 1998
TL;DR: A model for quantifying and reasoning about trust in IT equipment is proposed; the model consists of a belief model and a set of operators for combining beliefs.
Abstract: This paper proposes a model for quantifying and reasoning about trust in IT equipment. Trust is considered to be a subjective belief, and the model consists of a belief model and a set of operators for combining beliefs. Security evaluation is discussed as a method for determining trust. Trust may also be based on other types of evidence, such as ISO 9000 certification, and the model can be used to quantify and compare the contribution to the total trust that each type of evidence provides.

Proceedings Article
01 Jul 1998
TL;DR: This paper poses the mapping problem as a statistical maximum likelihood problem, and devises an efficient algorithm for search in likelihood space that integrates two phases: a topological and a metric mapping phase.
Abstract: The problem of concurrent mapping and localization has received considerable attention in the mobile robotics community. Existing approaches can largely be grouped into two distinct paradigms: topological and metric. This paper proposes a method that integrates both. It poses the mapping problem as a statistical maximum likelihood problem, and devises an efficient algorithm for search in likelihood space. It presents a novel mapping algorithm that integrates two phases: a topological and a metric mapping phase. The topological mapping phase solves a global position alignment problem between potentially indistinguishable, significant places. The subsequent metric mapping phase produces a fine-grained metric map of the environment in floating-point resolution. The approach is demonstrated empirically to scale up to large, cyclic, and highly ambiguous environments.

Patent
16 Dec 1998
TL;DR: In this article, a coarse placer is used in conjunction with other automatic design tools such as a detailed placer and an automatic wire router to generate coarse placement of cells on a 2-dimensional silicon chip or circuit board.
Abstract: A computer implemented process for automatic creation of integrated circuit (IC) geometry using a computer. The present invention includes a general unconstrained non-linear optimization method to generate coarse placement of cells on a 2-dimensional silicon chip or circuit board. In one embodiment, the coarse placer can also be used to automatically size cells, insert and size buffers, and aid in timing driven structuring of the placed circuit. The coarse placer is used in conjunction with other automatic design tools such as a detailed placer and an automatic wire router. A master objective function (MOF) is defined which evaluates a particular cell placement. A non-linear optimization process finds an assignment of values to the function variables which minimizes the MOF. The MOF is a weighted sum of functions which evaluate various metrics. An important metric for consideration is the density metric, which measures how well spread out the cells are in the placement. Other component functions are wire-length, which measures total linear wire-length, delay, which measures circuit timing, and power, which measures circuit power consumption. The barrier metric penalizes placements with cells outside the allowed placement region. A conjugate-gradient process utilizes both the MOF and its gradient to determine a next cell placement. The gradient is the vector of partial derivatives of the MOF with respect to all variables. The non-linear optimization process calls the MOF and gradient function subroutines and uses the results to minimize the MOF.
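A toy sketch of the master-objective-function idea in the patent's terms: a weighted sum of a wire-length term and a spreading (density-like) penalty, minimized by a conjugate-gradient routine. The nets, weights and penalty below are illustrative placeholders, not the patented formulation:

```python
import numpy as np
from scipy.optimize import minimize

nets = [(0, 1), (1, 2), (0, 2)]          # hypothetical 2-pin nets between three cells

def mof(flat_xy, w_wire=1.0, w_spread=0.1):
    """Toy master objective function: weighted wire-length plus a crude
    spreading penalty that discourages cells from piling up."""
    xy = flat_xy.reshape(-1, 2)
    wire = sum(np.sum((xy[a] - xy[b]) ** 2) for a, b in nets)
    diffs = xy[:, None, :] - xy[None, :, :]
    spread = np.sum(np.exp(-np.sum(diffs ** 2, axis=-1)))
    return w_wire * wire + w_spread * spread

x0 = np.random.default_rng(0).random(3 * 2)      # random initial (x, y) for 3 cells
res = minimize(mof, x0, method="CG")             # gradient approximated numerically here
placement = res.x.reshape(-1, 2)
```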

Journal Article
TL;DR: A model for the geometry of spatial relations was calibrated for a set of 59 English-language spatial predicates to provide a basis for high-level spatial query languages that exploit natural-language terms and serves as a model for processing such queries.
Abstract: Spatial relations are the basis for many selections users perform when they query geographic information systems (GISs). Although such query languages use natural-language-like terms, the formal definitions of those spatial relations rarely reflect the same meaning people would apply when they communicate among each other. To bridge the gap between computational models for spatial relations and people's use of spatial terms in their natural languages, a model for the geometry of spatial relations was calibrated for a set of 59 English-language spatial predicates. The model distinguishes topological and metric properties. The calibration from sketches that were drawn by 34 human subjects identifies ten groups of spatial terms with similar properties and provides a mapping from spatial terms onto significant geometric parameters and their values. The calibration's results reemphasize the importance of topological over metric properties in the selection of English-language spatial terms. The model provides a basis for high-level spatial query languages that exploit natural-language terms and serves as a model for processing such queries.

Patent
30 Jul 1998
TL;DR: In this article, a method to measure channel quality in terms of signal to noise ratio for the transmission of coded signals over fading channels was proposed, where a Viterbi decoder metric for the Maximum Likelihood path was used as a channel quality measure.
Abstract: A system and method to measure channel quality in terms of signal to noise ratio for the transmission of coded signals over fading channels. A Viterbi decoder metric for the Maximum Likelihood path is used as a channel quality measure. This Euclidean distance metric is filtered in order to smooth out short term variations. The filtered or averaged metric is a reliable channel quality measure which remains consistent across different coded modulation schemes and at different mobile speeds. The filtered metric is mapped to the signal to noise ratio per symbol using a threshold based scheme. This implicit signal to noise ratio estimate is used for mobile assisted handoff and data rate adaptation in the transmitter.

Journal ArticleDOI
TL;DR: This paper evaluates the Chidamber and Kemerer metrics using Kitchenham's metric-evaluation framework, finds deficiencies in some of the C&K metrics, and proposes another metric suite for object-oriented programming.