
Showing papers on "Metric (mathematics) published in 2001"


Book ChapterDOI
04 Jan 2001
TL;DR: This paper examines the behavior of the commonly used Lk norm and shows that the problem of meaningfulness in high dimensionality is sensitive to the value of k, which means that the Manhattan distance metric is consistently preferable to the Euclidean distance metric for high dimensional data mining applications.
Abstract: In recent years, the effect of the curse of high dimensionality has been studied in great detail on several problems such as clustering, nearest neighbor search, and indexing. In high dimensional space the data becomes sparse, and traditional indexing and algorithmic techniques fail from an efficiency and/or effectiveness perspective. Recent research results show that in high dimensional space, the concept of proximity, distance or nearest neighbor may not even be qualitatively meaningful. In this paper, we view the dimensionality curse from the point of view of the distance metrics which are used to measure the similarity between objects. We specifically examine the behavior of the commonly used Lk norm and show that the problem of meaningfulness in high dimensionality is sensitive to the value of k. For example, this means that the Manhattan distance metric (L1 norm) is consistently preferable to the Euclidean distance metric (L2 norm) for high dimensional data mining applications. Using the intuition derived from our analysis, we introduce and examine a natural extension of the Lk norm to fractional distance metrics. We show that the fractional distance metric provides more meaningful results both from the theoretical and empirical perspective. The results show that fractional distance metrics can significantly improve the effectiveness of standard clustering algorithms such as the k-means algorithm.
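The loss of contrast between nearest and farthest neighbours described above can be illustrated in a few lines. The sketch below is not the authors' code; it simply computes the relative contrast (Dmax - Dmin)/Dmin of Lk distances for uniform random data at several dimensionalities and values of k, including a fractional one.

```python
# Illustrative sketch (not the authors' code): compare how the relative
# contrast (Dmax - Dmin) / Dmin of L_k distances behaves as dimensionality
# grows, for integer and fractional values of k.
import numpy as np

def lk_distance(a, b, k):
    """Minkowski-style distance with exponent k (fractional k allowed)."""
    return np.sum(np.abs(a - b) ** k, axis=-1) ** (1.0 / k)

def relative_contrast(dim, n_points=1000, k=2.0, seed=0):
    rng = np.random.default_rng(seed)
    data = rng.random((n_points, dim))      # uniform data in [0, 1]^dim
    query = rng.random(dim)
    d = lk_distance(data, query, k)
    return (d.max() - d.min()) / d.min()

for dim in (2, 20, 200):
    for k in (0.3, 1.0, 2.0):
        print(f"dim={dim:4d}  k={k:3.1f}  contrast={relative_contrast(dim, k=k):.3f}")
```

Smaller exponents tend to preserve more contrast as the dimensionality grows, which is the intuition behind the fractional metrics studied in the paper.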

1,614 citations


Journal ArticleDOI
TL;DR: A principled distance between two basic probability assignments (BPAs) (or two bodies of evidence) is introduced, based on a quantification of the similarity between sets within the Dempster–Shafer theory of evidence.

832 citations


Journal ArticleDOI
01 Apr 2001
TL;DR: This paper presents a new method for simultaneous localization and mapping that exploits the topology of the robot's free space to localize the robot on a partially constructed map using the generalized Voronoi graph (GVG).
Abstract: This paper presents a new method for simultaneous localization and mapping that exploits the topology of the robot's free space to localize the robot on a partially constructed map. The topology of the environment is encoded in a topological map; the particular topological map used in this paper is the generalized Voronoi graph (GVG), which also encodes some metric information about the robot's environment. In this paper, we present the low-level control laws that generate the GVG edges and nodes, thereby allowing for exploration of an unknown space. With these prescribed control laws, the GVG can be viewed as an arbitrator for a hybrid control system that determines when to invoke a particular low-level controller from a set of controllers all working toward the high-level capability of mobile robot exploration. The main contribution, however, is using the graph structure of the GVG, via a graph matching process, to localize the robot. Experimental results verify the described work.

681 citations


Journal ArticleDOI
TL;DR: Here Shepard's theory is recast in a more general Bayesian framework and it is shown how this naturally extends his approach to the more realistic situation of generalizing from multiple consequential stimuli with arbitrary representational structure.
Abstract: Shepard has argued that a universal law should govern generalization across different domains of perception and cognition, as well as across organisms from different species or even different planets. Starting with some basic assumptions about natural kinds, he derived an exponential decay function as the form of the universal generalization gradient, which accords strikingly well with a wide range of empirical data. However, his original formulation applied only to the ideal case of generalization from a single encountered stimulus to a single novel stimulus, and for stimuli that can be represented as points in a continuous metric psychological space. Here we recast Shepard's theory in a more general Bayesian framework and show how this naturally extends his approach to the more realistic situation of generalizing from multiple consequential stimuli with arbitrary representational structure. Our framework also subsumes a version of Tversky's set-theoretic model of similarity, which is conventionally thought of as the primary alternative to Shepard's continuous metric space model of similarity and generalization. This unification allows us not only to draw deep parallels between the set-theoretic and spatial approaches, but also to significantly advance the explanatory power of set-theoretic models.
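As a rough illustration of the Bayesian recasting described above (not the authors' model), the sketch below implements generalization over one-dimensional interval hypotheses with a size-principle likelihood; the hypothesis grid and example stimuli are arbitrary choices for the demo.

```python
# Minimal sketch (my own illustration, not the authors' code) of Bayesian
# generalization over one-dimensional interval hypotheses with the "size
# principle": smaller consistent hypotheses get higher likelihood, and the
# generalization gradient is obtained by averaging over the posterior.
import numpy as np

def generalization(train_points, test_points, lo=0.0, hi=10.0, step=0.25):
    # Hypothesis space: all intervals [a, b] on a grid (an assumption here).
    grid = np.arange(lo, hi + step, step)
    hyps = [(a, b) for a in grid for b in grid if b > a]
    train = np.asarray(train_points)

    post = []
    for a, b in hyps:
        if np.all((train >= a) & (train <= b)):
            size = b - a
            post.append(size ** (-len(train)))   # size-principle likelihood
        else:
            post.append(0.0)                     # inconsistent hypothesis
    post = np.asarray(post)
    post /= post.sum()

    # p(y in C | examples) = posterior mass of hypotheses containing y
    return [float(sum(p for (a, b), p in zip(hyps, post) if a <= y <= b))
            for y in test_points]

print(generalization(train_points=[4.0, 4.5, 5.0], test_points=[4.2, 6.0, 9.0]))
```

With more tightly clustered examples the gradient falls off faster, which is the qualitative behaviour the Bayesian account predicts.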

681 citations


Proceedings ArticleDOI
16 Apr 2001
TL;DR: A distributed clustering algorithm, MOBIC, is proposed based on the use of this mobility metric for selection of clusterheads, and it is demonstrated that it leads to more stable cluster formation than the "least clusterhead change" version of the well-known Lowest-ID clustering algorithm.
Abstract: We present a novel relative mobility metric for mobile ad hoc networks (MANETs). It is based on the ratio of power levels due to successive receptions at each node from its neighbors. We propose a distributed clustering algorithm, MOBIC, based on the use of this mobility metric for selection of clusterheads, and demonstrate that it leads to more stable cluster formation than the "least clusterhead change" version of the well known Lowest-ID clustering algorithm (Chiang et al., 1997). We show reduction of as much as 33% in the rate of clusterhead changes owing to the use of the proposed technique. In a MANET that uses scalable cluster-based services, network performance metrics such as throughput and delay are tightly coupled with the frequency of cluster reorganization. Therefore, we believe that using MOBIC can result in a more stable configuration, and thus yield better performance.
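One plausible reading of the relative mobility metric, sketched below for illustration only: each neighbour contributes the dB ratio of received powers over two successive receptions, and a node aggregates these per-neighbour values. The variance-based aggregation here is an assumption, not a quotation from the paper.

```python
# Hedged sketch of a relative-mobility computation in the spirit of MOBIC:
# for each neighbour, compare received power over two successive receptions,
# then aggregate across neighbours. The exact aggregation used in the paper
# (e.g. variance of the per-neighbour values) is assumed here, not quoted.
import math

def relative_mobility(prev_rx_power, curr_rx_power):
    """Per-neighbour relative mobility in dB from two successive receptions."""
    return 10.0 * math.log10(curr_rx_power / prev_rx_power)

def aggregate_mobility(power_pairs):
    """Aggregate metric for one node: variance of the per-neighbour values."""
    vals = [relative_mobility(p_old, p_new) for p_old, p_new in power_pairs]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# Example: three neighbours, (previous, current) received power in mW.
print(aggregate_mobility([(1.0e-6, 1.2e-6), (2.0e-6, 1.5e-6), (0.8e-6, 0.8e-6)]))
```

A low aggregate value suggests the node is moving coherently with its neighbourhood and is therefore a stable clusterhead candidate.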

680 citations


01 Jan 2001
TL;DR: The Digital Video Quality (DVQ) metric as discussed by the authors is based on the Discrete Cosine Transform (DCT) and incorporates aspects of early visual processing, including light adaptation, luminance and chromatic channels, spatial and temporal filtering, spatial frequency channels, contrast masking, and probability summation.
Abstract: The growth of digital video has given rise to a need for computational methods for evaluating the visual quality of digital video. We have developed a new digital video quality metric, which we call DVQ (Digital Video Quality). Here we provide a brief description of the metric, and give a preliminary report on its performance. DVQ accepts a pair of digital video sequences, and computes a measure of the magnitude of the visible difference between them. The metric is based on the Discrete Cosine Transform. It incorporates aspects of early visual processing, including light adaptation, luminance and chromatic channels, spatial and temporal filtering, spatial frequency channels, contrast masking, and probability summation. It also includes primitive dynamics of light adaptation and contrast masking. We have applied the metric to digital video sequences corrupted by various typical compression artifacts, and compared the results to quality ratings made by human observers.

376 citations


Proceedings ArticleDOI
01 Dec 2001
TL;DR: A gait-recognition technique that recovers static body and stride parameters of subjects as they walk is presented, and an expected confusion metric related to mutual information is derived, as opposed to reporting a percent correct on a limited database.
Abstract: A gait-recognition technique that recovers static body and stride parameters of subjects as they walk is presented. This approach is an example of an activity-specific biometric: a method of extracting identifying properties of an individual or of an individual's behavior that is applicable only when a person is performing that specific action. To evaluate our parameters, we derive an expected confusion metric (related to mutual information), as opposed to reporting a percent correct with a limited database. This metric predicts how well a given feature vector will filter identity in a large population. We test the utility of a variety of body and stride parameters recovered in different viewing conditions on a database consisting of 15 to 20 subjects walking at both an angled and frontal-parallel view with respect to the camera, both indoors and out. We also analyze motion-capture data of the subjects to discover whether confusion in the parameters is inherently a physical or a visual measurement error property.

368 citations


Journal ArticleDOI
TL;DR: In this paper, the authors fully extend to the Heisenberg group endowed with its intrinsic Carnot-Caratheodory metric and perimeter the classical De Giorgi rectifiability and divergence theorems.
Abstract: In this paper, we fully extend to the Heisenberg group endowed with its intrinsic Carnot-Caratheodory metric and perimeter the classical De Giorgi rectifiability and divergence theorems.

364 citations


Journal ArticleDOI
TL;DR: In this paper, the geometric interpretation of the expected value and the variance in real Euclidean space is used as a starting point to introduce metric counterparts on an arbitrary finite dimensional Hilbert space.
Abstract: The geometric interpretation of the expected value and the variance in real Euclidean space is used as a starting point to introduce metric counterparts on an arbitrary finite dimensional Hilbert space. This approach allows us to define general reasonable properties for estimators of parameters, like metric unbiasedness and minimum metric variance, resulting in a useful tool to better understand the logratio approach to the statistical analysis of compositional data, whose natural sample space is the simplex.
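For readers unfamiliar with the logratio approach mentioned above, the sketch below shows one common concrete device, the centred logratio (clr) transform and the associated Aitchison distance on the simplex. It is an illustration of the setting, not the paper's construction.

```python
# Illustration (my own, not the paper's notation): the centred logratio (clr)
# transform maps compositions from the simplex into a Euclidean space, where
# ordinary means and variances can be interpreted as "metric" counterparts.
import numpy as np

def clr(x):
    """Centred logratio transform of a composition (parts must be positive)."""
    x = np.asarray(x, dtype=float)
    g = np.exp(np.mean(np.log(x)))          # geometric mean of the parts
    return np.log(x / g)

def aitchison_distance(x, y):
    """Distance between two compositions via their clr images."""
    return float(np.linalg.norm(clr(x) - clr(y)))

print(aitchison_distance([0.1, 0.3, 0.6], [0.2, 0.2, 0.6]))
```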

362 citations


Proceedings ArticleDOI
03 Oct 2001
TL;DR: In this paper, a modified Karhunen-Loeve transform is proposed for 3D object retrieval, which takes into account not only vertices or polygon centroids from the 3D models but all points in the polygons of the objects.
Abstract: We present tools for 3D object retrieval in which a model, a polygonal mesh, serves as a query and similar objects are retrieved from a collection of 3D objects. Algorithms proceed first by a normalization step (pose estimation) in which models are transformed into a canonical coordinate frame. Second, feature vectors are extracted and compared with those derived from normalized models in the search space. Using a metric in the feature vector space nearest neighbors are computed and ranked. Objects thus retrieved are displayed for inspection, selection, and processing. For the pose estimation we introduce a modified Karhunen-Loeve transform that takes into account not only vertices or polygon centroids from the 3D models but all points in the polygons of the objects. Some feature vectors can be regarded as samples of functions on the 2-sphere. We use Fourier expansions of these functions as uniform representations allowing embedded multi-resolution feature vectors. Our implementation demonstrates and visualizes these tools.
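A much simplified sketch of the pose-normalization step: ordinary PCA on the mesh vertices. The paper's modified Karhunen-Loeve transform integrates over all points of the polygons rather than just the vertices, so treat this as an approximation for illustration only.

```python
# Simplified sketch: pose normalisation by an ordinary vertex-based PCA.
# The paper's modified Karhunen-Loeve transform accounts for all points of
# the polygons (not just vertices); this vertex-only version is an
# approximation for illustration.
import numpy as np

def normalize_pose(vertices):
    """Translate to the centroid and rotate into the principal axes frame."""
    v = np.asarray(vertices, dtype=float)
    centered = v - v.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]        # largest-variance axis first
    return centered @ eigvecs[:, order]

cube = np.array([[x, y, z] for x in (0, 2) for y in (0, 1) for z in (0, 0.5)])
print(normalize_pose(cube).round(3))
```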

345 citations


Proceedings ArticleDOI
02 Apr 2001
TL;DR: A new distance function Dtw-lb that consistently underestimates the time warping distance and also satisfies the triangular inequality is devised, achieving significant speedups of up to 43 times with real-world S&P 500 stock data and up to 720 times with very large synthetic data.
Abstract: This paper proposes a novel method for similarity search that supports time warping in large sequence databases. Time warping enables finding sequences with similar patterns even when they are of different lengths. Previous methods for processing similarity search that supports time warping fail to employ multi-dimensional indexes without false dismissal since the time warping distance does not satisfy the triangular inequality. Our primary goal is to innovate on search performance without permitting any false dismissal. To attain this goal, we devise a new distance function Dtw-lb that consistently underestimates the time warping distance and also satisfies the triangular inequality. Dtw-lb uses a 4-tuple feature vector that is extracted from each sequence and is invariant to time warping. For efficient processing of similarity search, we employ a multi-dimensional index that uses the 4-tuple feature vector as indexing attributes and Dtw-lb as a distance function. The extensive experimental results reveal that our method achieves significant speedup of up to 43 times with real-world S&P 500 stock data and up to 720 times with very large synthetic data.
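The lower-bounding idea can be sketched as follows (an illustration in the spirit of the paper; the exact definition of Dtw-lb may differ): extract a 4-tuple (first, last, max, min) from each sequence and take the largest absolute component difference, which never exceeds the time warping distance, so pruning with it introduces no false dismissals.

```python
# Hedged sketch of a 4-tuple lower bound for the time-warping distance:
# extract (first, last, max, min) from each sequence and take the largest
# absolute difference. The precise definition of D_tw-lb in the paper may
# differ; this is an illustration of the idea.
import math

def four_tuple(seq):
    return (seq[0], seq[-1], max(seq), min(seq))

def d_tw_lb(seq_a, seq_b):
    fa, fb = four_tuple(seq_a), four_tuple(seq_b)
    return max(abs(x - y) for x, y in zip(fa, fb))

def dtw(seq_a, seq_b):
    """Classical dynamic-time-warping distance with |.| as the base cost."""
    n, m = len(seq_a), len(seq_b)
    d = [[math.inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

a, b = [1, 2, 3, 4, 2], [1, 1, 2, 4, 4, 2]
assert d_tw_lb(a, b) <= dtw(a, b)   # lower bound => no false dismissals
print(d_tw_lb(a, b), dtw(a, b))
```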

Journal ArticleDOI
TL;DR: A new digital video quality metric is presented, which is based on the discrete cosine transform and incorporates aspects of early visual processing, including light adaptation, luminance, and chromatic channels; spatial and temporal filtering; spatial frequency channels; contrast masking; and probability summation.
Abstract: The growth of digital video has given rise to a need for computational methods for evaluating the visual quality of digital video. We have developed a new digital video quality metric, which we call DVQ (digital video quality) (A. B. Watson, in Human Vision, Visual Processing, and Digital Display VIII, Proc. SPIE 3299, 139-147 (1998)). Here, we provide a brief description of the metric, and give a preliminary report on its performance. DVQ accepts a pair of digital video sequences, and computes a measure of the magnitude of the visible difference between them. The metric is based on the discrete cosine transform. It incorporates aspects of early visual processing, including light adaptation, luminance, and chromatic channels; spatial and temporal filtering; spatial frequency channels; contrast masking; and probability summation. It also includes primitive dynamics of light adaptation and contrast masking. We have applied the metric to digital video sequences corrupted by various typical compression artifacts, and compared the results to quality ratings made by human observers. © 2001 SPIE and IS&T. (DOI: 10.1117/1.1329896)

Journal ArticleDOI
TL;DR: The paper presents applications of the method to complex, steady-state and time-dependent problems which highlight its anisotropic, feature-capturing abilities.

Proceedings ArticleDOI
Kentaro Toyama1, Andrew Blake1
07 Jul 2001
TL;DR: A new exemplar-based, probabilistic paradigm for visual tracking is presented, which provides alternatives to standard learning algorithms by allowing the use of metrics that are not embedded in a vector space and eliminates any need for an assumption of probabilistic pixelwise independence.
Abstract: A new exemplar-based, probabilistic paradigm for visual tracking is presented. Probabilistic mechanisms are attractive because they handle fusion of information, especially temporal fusion, in a principled manner. Exemplars are selected representatives of raw training data, used here to represent probabilistic mixture distributions of object configurations. Their use avoids tedious hand-construction of object models and problems with changes of topology. Using exemplars in place of a parameterized model poses several challenges, addressed here with what we call the "Metric Mixture" (M²) approach. The M² model has several valuable properties. Principally, it provides alternatives to standard learning algorithms by allowing the use of metrics that are not embedded in a vector space. Secondly, it uses a noise model that is learned from training data. Lastly, it eliminates any need for an assumption of probabilistic pixelwise independence. Experiments demonstrate the effectiveness of the M² model in two domains: tracking walking people using chamfer distances on binary edge images, and tracking mouth movements by means of a shuffle distance.
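A generic chamfer distance between binary edge maps, one of the two distances mentioned in the abstract, can be sketched with a distance transform. This assumes SciPy and is not the authors' implementation.

```python
# Sketch of a chamfer distance between binary edge maps. Uses SciPy's
# Euclidean distance transform; a generic formulation, not the authors' exact one.
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_distance(template_edges, image_edges):
    """Mean distance from each template edge pixel to the nearest image edge."""
    # distance_transform_edt returns, for every non-zero pixel, the distance
    # to the nearest zero pixel; inverting the edge map therefore yields the
    # distance to the nearest edge pixel.
    dist_to_image_edge = distance_transform_edt(~image_edges.astype(bool))
    template = template_edges.astype(bool)
    return float(dist_to_image_edge[template].mean())

img = np.zeros((32, 32), dtype=bool); img[10, 5:25] = True     # image edges
tpl = np.zeros((32, 32), dtype=bool); tpl[12, 5:25] = True     # template edges
print(chamfer_distance(tpl, img))   # ~2.0: the template is shifted by two rows
```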

Journal ArticleDOI
TL;DR: This paper proposes to extend the explanatory level for phenotypic evolution from fitness considerations alone to include the topological structure of phenotype space as induced by the genotype-phenotype map, and introduces the mathematical concepts and tools necessary to formalize the notion of an accessibility pre-topology relative to which the authors can speak of continuity in the genotype-phenotype map and in evolutionary trajectories.

Proceedings ArticleDOI
01 Apr 2001
TL;DR: This paper examines the average page quality over time of pages downloaded during a web crawl of 328 million unique pages and uses the connectivity-based metric PageRank to measure the quality of a page.
Abstract: This paper examines the average page quality over time of pages downloaded during a web crawl of 328 million unique pages. We use the connectivity-based metric PageRank to measure the quality of a page. We show that traversing the web graph in breadth-first search order is a good crawling strategy, as it tends to discover high-quality pages early on in the crawl.
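For concreteness, here is a minimal PageRank power iteration in the standard formulation (not the crawler's own code) over a toy link graph.

```python
# Minimal PageRank power iteration (standard formulation) to make the
# "connectivity-based quality" idea concrete.
import numpy as np

def pagerank(out_links, damping=0.85, iters=100):
    """out_links: dict node -> list of nodes it links to."""
    nodes = sorted(out_links)
    idx = {n: i for i, n in enumerate(nodes)}
    n = len(nodes)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        new = np.full(n, (1.0 - damping) / n)
        for src, targets in out_links.items():
            if targets:
                share = damping * rank[idx[src]] / len(targets)
                for t in targets:
                    new[idx[t]] += share
            else:                       # dangling node: spread rank uniformly
                new += damping * rank[idx[src]] / n
        rank = new
    return dict(zip(nodes, rank))

print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}))
```

Averaging such scores over the pages fetched in each crawl batch gives the kind of quality-over-time curve the paper studies.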

Proceedings Article
07 Jul 2001
TL;DR: It is shown how this relatively simple algorithm, coupled with an external file and a diversity approach based on geographical distribution, can efficiently generate the Pareto fronts of several difficult test functions.
Abstract: In this paper, we propose a micro genetic algorithm with three forms of elitism for multiobjective optimization. We show how this relatively simple algorithm, coupled with an external file and a diversity approach based on geographical distribution, can efficiently generate the Pareto fronts of several difficult test functions (both constrained and unconstrained). A metric based on the average distance to the Pareto optimal set is used to compare our results against two evolutionary multiobjective optimization techniques recently proposed in the literature.
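A standard instance of an "average distance to the Pareto optimal set" metric is the generational distance sketched below; the exact formula used in the paper is not reproduced here.

```python
# Sketch of an "average distance to the Pareto-optimal set" style metric
# (generational distance is one standard form).
import numpy as np

def generational_distance(obtained_front, true_front):
    obtained = np.asarray(obtained_front, dtype=float)
    true = np.asarray(true_front, dtype=float)
    # For each obtained point, distance to its nearest true Pareto point.
    dists = np.linalg.norm(obtained[:, None, :] - true[None, :, :], axis=-1)
    return float(dists.min(axis=1).mean())

true_front = [[x, 1 - x] for x in np.linspace(0, 1, 101)]
found_front = [[0.1, 0.95], [0.5, 0.52], [0.9, 0.12]]
print(generational_distance(found_front, true_front))
```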

Proceedings ArticleDOI
02 Apr 2001
TL;DR: It is shown that by pushing the task of handling obstacles into COD-CLARANS instead of abstracting it at the distance function level, more optimization can be done in the form of a pruning function E'.
Abstract: Clustering in spatial data mining groups similar objects based on their distance, connectivity, or their relative density in space. In the real world there exist many physical obstacles such as rivers, lakes and highways, and their presence may affect the result of clustering substantially. We study the problem of clustering in the presence of obstacles and define it as a COD (Clustering with Obstructed Distance) problem. As a solution to this problem, we propose a scalable clustering algorithm, called COD-CLARANS. We discuss various forms of pre-processed information that could enhance the efficiency of COD-CLARANS. In the strictest sense, the COD problem can be treated as a change in distance function and thus could be handled by current clustering algorithms by changing the distance function. However, we show that by pushing the task of handling obstacles into COD-CLARANS instead of abstracting it at the distance function level, more optimization can be done in the form of a pruning function E'. We conduct various performance studies to show that COD-CLARANS is both efficient and effective.

Journal ArticleDOI
TL;DR: Probability Binning, as shown here, provides a useful metric for determining the probability that two or more flow cytometric data distributions are different, and can be used to rank distributions to identify which are most similar or dissimilar.
Abstract: Background While several algorithms for the comparison of univariate distributions arising from flow cytometric analyses have been developed and studied for many years, algorithms for comparing multivariate distributions remain elusive. Such algorithms could be useful for comparing differences between samples based on several independent measurements, rather than differences based on any single measurement. It is conceivable that distributions could be completely distinct in multivariate space, but unresolvable in any combination of univariate histograms. Multivariate comparisons could also be useful for providing feedback about instrument stability, when only subtle changes in measurements are occurring. Methods We apply a variant of Probability Binning, described in the accompanying article, to multidimensional data. In this approach, hyper-rectangles of n dimensions (where n is the number of measurements being compared) comprise the bins used for the chi-squared statistic. These hyper-dimensional bins are constructed such that the control sample has the same number of events in each bin; the bins are then applied to the test samples for chi-squared calculations. Results Using a Monte-Carlo simulation, we determined the distribution of chi-squared values obtained by comparing sets of events from the same distribution; this distribution of chi-squared values was identical to that obtained for the univariate algorithm. Hence, the same formulae can be used to construct a metric, analogous to a t-score, that estimates the probability with which distributions are distinct. As for univariate comparisons, this metric scales with the difference between two distributions, and can be used to rank samples according to similarity to a control. We apply the algorithm to multivariate immunophenotyping data, and demonstrate that it can be used to discriminate distinct samples and to rank samples according to a biologically meaningful difference. Conclusion Probability binning, as shown here, provides a useful metric for determining the probability with which two or more multivariate distributions represent distinct sets of data. The metric can be used to identify the similarity or dissimilarity of samples. Finally, as demonstrated in the accompanying paper, the algorithm can be used to gate on events in one sample that are different from a control sample, even if those events cannot be distinguished on the basis of any combination of univariate or bivariate displays. Cytometry 45:47–55, 2001. Published 2001 Wiley-Liss, Inc.
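A hedged sketch of the multivariate binning idea: recursively split the control sample at the median of its most variable dimension until each hyper-rectangle holds roughly the same number of control events, then compute a chi-squared-style statistic for a test sample. The splitting rule and normalisation below are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch of multivariate probability binning with recursive median splits.
import numpy as np

def build_bins(control, max_events_per_bin):
    bounds = [(np.full(control.shape[1], -np.inf), np.full(control.shape[1], np.inf))]
    samples = [control]
    bins = []
    while samples:
        s = samples.pop()
        lo, hi = bounds.pop()
        if len(s) <= max_events_per_bin:
            bins.append((lo, hi, len(s)))
            continue
        dim = int(np.argmax(s.var(axis=0)))      # most variable dimension
        cut = float(np.median(s[:, dim]))
        left, right = s[s[:, dim] <= cut], s[s[:, dim] > cut]
        if len(left) == 0 or len(right) == 0:    # degenerate split: stop here
            bins.append((lo, hi, len(s)))
            continue
        lo_r, hi_l = lo.copy(), hi.copy()
        hi_l[dim], lo_r[dim] = cut, cut
        samples += [left, right]
        bounds += [(lo, hi_l), (lo_r, hi)]
    return bins

def chi2_statistic(bins, test):
    total = len(test)
    control_total = sum(n for _, _, n in bins)
    stat = 0.0
    for lo, hi, n_ctrl in bins:
        inside = np.all((test > lo) & (test <= hi), axis=1)
        observed = inside.sum()
        expected = total * n_ctrl / control_total
        if expected > 0:
            stat += (observed - expected) ** 2 / expected
    return stat

rng = np.random.default_rng(1)
control = rng.normal(size=(2000, 3))
test = rng.normal(loc=0.3, size=(2000, 3))       # shifted test sample
bins = build_bins(control, max_events_per_bin=125)
print(len(bins), chi2_statistic(bins, test))
```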

Book
15 Aug 2001
TL;DR: The Nature of Survey Data, The Measurement of Variables, Error in Measurement, The Data Matrix, Statistical Procedures for Analysing the Data Matrix, Tables and Charts for Categorical Variables, Tables and Charts for Metric Variables, Data Reduction for Categorical Variables, Data Reduction for Metric Variables, Statistical Inference for Categorical Variables, Statistical Inference for Metric Variables, Testing Hypotheses and Explaining Relationships, Strategies for Analysing a Data Matrix, and Analysing Qualitative Data.
Abstract: The Nature of Survey Data The Measurement of Variables Error in Measurement The Data Matrix Statistical Procedures for Analysing the Data Matrix Tables and Charts for Categorical Variables Tables and Charts for Metric Variables Data Reduction for Categorical Variables Data Reduction for Metric Variables Statistical Inference for Categorical Variables Statistical Inference for Metric Variables Testing Hypotheses and Explaining Relationships Strategies for Analysing a Data Matrix Analysing Qualitative Data

Journal ArticleDOI
TL;DR: It is shown that there are simple methods for estimating and modeling the covariance or variogram components of the product-sum model using data from realizations of spatial-temporal random fields.

Patent
16 Mar 2001
TL;DR: In this paper, the authors proposed a method for controlling a communication parameter in a channel through which data is transmitted between a transmit unit with M transmit antennas (18A-18M) and a receive unit with N receive antennas (34A-34N) by selecting from among proposed mapping schemes an applied mapping scheme.
Abstract: The present invention provides a method for controlling a communication parameter in a channel through which data is transmitted between a transmit unit with M transmit antennas (18A-18M) and a receive unit with N receive antennas (34A-34N) by selecting from among proposed mapping schemes an applied mapping scheme (26) according to which the data is converted into symbols and assigned to transmit signals TSp, p=1...M, which are transmitted from the M transmit antennas. The selection of the mapping scheme is based on a metric (40); in one embodiment the metric is a minimum Euclidean distance dmin,rx of the symbols when received, in another embodiment the metric is a probability of error P(e) in the symbol when received. The method can be employed in communication systems using multi-antenna transmit and receive units of various types including wireless/cellular communication systems, using multiple access techniques such as TDMA, FDMA, CDMA and OFDMA.
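The d_min-based selection can be illustrated as follows (a toy sketch, not taken from the patent): given a known channel matrix H, score each candidate mapping by the minimum pairwise Euclidean distance of its received symbol set and pick the largest. The flat-channel model and names below are assumptions made for the example.

```python
# Illustrative sketch (not from the patent): pick the candidate symbol mapping
# whose received constellation has the largest minimum pairwise Euclidean
# distance, given a known channel matrix H.
import itertools
import numpy as np

def d_min_received(symbols, H):
    """Minimum Euclidean distance between distinct received symbol vectors."""
    received = [H @ s for s in symbols]
    return min(np.linalg.norm(a - b)
               for a, b in itertools.combinations(received, 2))

def select_mapping(candidate_mappings, H):
    return max(candidate_mappings,
               key=lambda name: d_min_received(candidate_mappings[name], H))

H = np.array([[1.0, 0.2], [0.1, 0.9]])           # example 2x2 channel
qpsk_pair = [np.array([i, q]) for i in (-1, 1) for q in (-1, 1)]
scaled = [0.5 * s for s in qpsk_pair]            # a lower-power alternative
print(select_mapping({"qpsk": qpsk_pair, "qpsk_half_power": scaled}, H))
```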

Journal ArticleDOI
TL;DR: A self-organizing map (SOM) is computed in the new metric to explore financial statements of enterprises and represents the (local) directions in which the probability of bankruptcy changes the most.
Abstract: We introduce a method for deriving a metric, locally based on the Fisher information matrix, into the data space. A self-organizing map (SOM) is computed in the new metric to explore financial statements of enterprises. The metric measures local distances in terms of changes in the distribution of an auxiliary random variable that reflects what is important in the data. In this paper the variable indicates bankruptcy within the next few years. The conditional density of the auxiliary variable is first estimated, and the change in the estimate resulting from local displacements in the primary data space is measured using the Fisher information matrix. When a self-organizing map is computed in the new metric it still visualizes the data space in a topology-preserving fashion, but represents the (local) directions in which the probability of bankruptcy changes the most.
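The local metric can be made concrete with a logistic model standing in for the paper's conditional density estimator (an assumption for illustration): squared local distances are measured as dx^T J(x) dx, where J(x) is the Fisher information of the auxiliary class variable given x.

```python
# Sketch of the Fisher-metric idea with a logistic model standing in for the
# paper's conditional density estimator: d^2(x, x+dx) ~ dx^T J(x) dx, where
# J(x) is the Fisher information of the auxiliary variable given x.
import numpy as np

def fisher_information(x, w, b):
    """J(x) for a logistic model p(c=1|x) = sigmoid(w.x + b): J = p(1-p) w w^T."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return p * (1.0 - p) * np.outer(w, w)

def local_distance(x, dx, w, b):
    J = fisher_information(x, w, b)
    return float(np.sqrt(dx @ J @ dx))

w, b = np.array([2.0, -0.5]), 0.1
x = np.array([0.3, 1.2])
print(local_distance(x, np.array([0.1, 0.0]), w, b),   # step along the more informative direction
      local_distance(x, np.array([0.0, 0.1]), w, b))   # step along a less informative one
```

Steps that change the predicted class probability strongly count as long, which is exactly the behaviour the SOM exploits in the new metric.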

Proceedings ArticleDOI
02 Apr 2001
TL;DR: A family of metric access methods is proposed that are fast and easy to implement on top of existing access methods, such as sequential scan, R-trees and Slim-trees; the idea is to elect a set of objects as foci and gauge all other objects by their distances from this set.
Abstract: Designing a new access method inside a commercial DBMS is cumbersome and expensive. We propose a family of metric access methods that are fast and easy to implement on top of existing access methods, such as sequential scan, R-trees and Slim-trees. The idea is to elect a set of objects as foci, and gauge all other objects by their distances from this set. We show how to define the foci set cardinality, how to choose appropriate foci, and how to perform range and nearest-neighbor queries using them, without false dismissals. The foci increase the pruning of distance calculations during the query processing. Furthermore, we index the distances from each object to the foci to reduce even triangular inequality comparisons. Experiments on real and synthetic datasets show that our methods match or outperform existing methods. They are up to 10 times faster, and perform up to 10 times fewer distance calculations and disk accesses. In addition, they scale up well, exhibiting sub-linear performance with growing database size.
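The pruning mechanism can be sketched generically: with each object's distances to the foci precomputed, the triangle inequality yields a lower bound on the query-object distance, so many exact distance computations are skipped. This is an illustration only, not the authors' implementation.

```python
# Sketch of foci-based pruning: |d(q,f) - d(o,f)| <= d(q,o) for every focus f,
# so the maximum over foci is a lower bound that can prune objects cheaply.
import numpy as np

def range_query(objects, foci_dists, query, foci, radius, dist):
    q_to_foci = np.array([dist(query, f) for f in foci])
    results = []
    for obj, obj_to_foci in zip(objects, foci_dists):
        lower_bound = np.max(np.abs(q_to_foci - obj_to_foci))
        if lower_bound > radius:
            continue                      # pruned without an exact distance
        if dist(query, obj) <= radius:    # exact check only for survivors
            results.append(obj)
    return results

dist = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
rng = np.random.default_rng(0)
objects = rng.random((1000, 8))
foci = objects[:3]                                     # naive foci choice
foci_dists = np.array([[dist(o, f) for f in foci] for o in objects])
print(len(range_query(objects, foci_dists, rng.random(8), foci, 0.6, dist)))
```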

Proceedings ArticleDOI
07 Oct 2001
TL;DR: The method uses multidimensional scaling and hierarchical cluster analysis to model the semantic categories into which human observers organize images; an image similarity metric that embodies the results is devised, and a prototype system is developed.
Abstract: We propose a method for semantic categorization and retrieval of photographic images based on low-level image descriptors. In this method, we first use multidimensional scaling (MDS) and hierarchical cluster analysis (HCA) to model the semantic categories into which human observers organize images. Through a series of psychophysical experiments and analyses, we refine our definition of these semantic categories, and use these results to discover a set of low-level image features to describe each category. We then devise an image similarity metric that embodies our results, and develop a prototype system, which identifies the semantic category of the image and retrieves the most similar images from the database. We tested the metric on a new set of images, and compared the categorization results with that of human observers. Our results provide a good match to human performance, thus validating the use of human judgments to develop semantic descriptors.

Journal ArticleDOI
TL;DR: The variational method is applied to denoise and restore general nonflat image features; Riemannian objects such as the metric, distance and Levi-Civita connection play important roles in the models.
Abstract: We develop both mathematical models and computational algorithms for variational denoising and restoration of nonflat image features. Nonflat image features are those that live on Riemannian manifolds, instead of on Euclidean spaces. Familiar examples include the orientation feature (from optical flows or gradient flows) that lives on the unit circle S1, the alignment feature (from fingerprint waves or certain texture images) that lives on the real projective line $\mathbb{RP}^1$, and the chromaticity feature (from color images) that lives on the unit sphere S2. In this paper, we apply the variational method to denoise and restore general nonflat image features. Mathematical models for both continuous image domains and discrete domains (or graphs) are constructed. Riemannian objects such as the metric, distance and Levi-Civita connection play important roles in the models. Computational algorithms are also developed for the resulting nonlinear equations. The mathematical framework can be applied to res

Proceedings ArticleDOI
09 Jan 2001
TL;DR: A natural integer programming formulation is introduced and it is shown that the integrality gap of its linear relaxation either matches or improves the ratios known for several cases of the metric labeling problem studied until now, providing a unified approach to solving them.
Abstract: We consider approximation algorithms for the metric labeling problem. This problem was introduced in a recent paper by Kleinberg and Tardos [20], and captures many classification problems that arise in computer vision and related fields. They gave an O(log k log log k) approximation for the general case where k is the number of labels and a 2-approximation for the uniform metric case. More recently, Gupta and Tardos [15] gave a 4-approximation for the truncated linear metric, a natural non-uniform metric motivated by practical applications to image restoration and visual correspondence. In this paper we introduce a natural integer programming formulation and show that the integrality gap of its linear relaxation either matches or improves the ratios known for several cases of the metric labeling problem studied until now, providing a unified approach to solving them. In particular, we show that the integrality gap of our LP is bounded by O(log k log log k) for general metric and 2 for the uniform metric thus matching the ratios in [20]. We also develop an algorithm based on our LP that achieves a ratio of 2 + √2 ≈ 3.414 for the truncated linear metric improving the ratio given in [15]. Our algorithm uses the fact that the integrality gap of our LP is 1 on a linear metric. We believe that our formulation has the potential to provide improved approximation algorithms for the general case and other useful special cases. Finally, our formulation admits general non-metric distance functions. This leads to a non-trivial approximation guarantee for a non-metric case that arises in practice [21], namely the truncated quadratic distance function. We note here that there are non-metric distance functions for which no bounded approximation ratio is achievable.

Journal ArticleDOI
TL;DR: This work proposes two new RC delay metrics called delay via two moments (D2M) and effective capacitance metric (ECM), which are virtually as simple and fast as the Elmore metric, but more accurate.
Abstract: For performance optimization tasks such as floorplanning, placement, buffer insertion, wire sizing, and global routing, the Elmore resistance-capacitance (RC) delay metric remains popular due to its simple closed form expression, fast computation speed, and fidelity with respect to simulation. More accurate delay computation methods are typically central processing unit intensive and/or difficult to implement. To bridge this gap between accuracy and efficiency/simplicity, we propose two new RC delay metrics called delay via two moments (D2M) and effective capacitance metric (ECM), which are virtually as simple and fast as the Elmore metric, but more accurate. D2M uses two moments of the impulse response in a simple formula that has high accuracy at the far end of RC lines. ECM captures resistive shielding effects by modeling the downstream capacitance by an "effective capacitance." In contrast, the Elmore metric models this as a lumped capacitance, thereby ignoring resistive shielding. Although not as accurate as D2M, ECM yields consistent performance and may be well-suited to optimization due to its Elmore-like recursive construction.
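For reference, the moment-based metrics can be written in a few lines. The D2M formula below (ln 2 · m1² / √m2) is the commonly stated form and should be checked against the paper before being relied upon; the example moments are made up.

```python
# Sketch of moment-based delay metrics for an RC node. Elmore delay is -m1;
# D2M is commonly stated as ln(2) * m1^2 / sqrt(m2), with m1 and m2 the first
# two moments of the impulse response at the node. Treat the exact D2M form
# as an assumption to be checked against the paper.
import math

def elmore_delay(m1):
    return -m1

def d2m_delay(m1, m2):
    return math.log(2.0) * (m1 * m1) / math.sqrt(m2)

# Example moments for a far-end node of an RC line (illustrative numbers only).
m1, m2 = -2.5e-10, 9.0e-20
print(f"Elmore: {elmore_delay(m1):.3e} s   D2M: {d2m_delay(m1, m2):.3e} s")
```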

Patent
24 May 2001
TL;DR: In this paper, a method of handing off a network session between two different access technologies, in response to a quality of service metric, while maintaining the session is described in the claims, specification and drawings.
Abstract: One aspect of the present invention includes a method of handing off a network session between two different access technologies, in response to a quality of service metric, while maintaining the session. Additional aspects of the present invention are described in the claims, specification and drawings.

Journal ArticleDOI
TL;DR: Two common approaches to averaging rotations are compared and it is shown that the two approximative methods can be derived from natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
Abstract: In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold and re-normalization or orthogonalization must be applied to obtain proper rotations. These latter steps have been viewed as ad hoc corrections for the errors introduced by assuming a vector space. The article shows that the two approximative methods can be derived from natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
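The "barycenter of quaternions" estimate discussed above can be sketched as follows (a generic illustration, with sign alignment added since q and -q encode the same rotation); it is the approximate method the paper relates to the Riemannian metric, not a full Riemannian mean.

```python
# Sketch of quaternion averaging by barycenter plus re-normalisation, the
# approximate estimate analysed in the paper (generic illustration only).
import numpy as np

def average_quaternions(quats):
    quats = np.asarray(quats, dtype=float)
    ref = quats[0]
    aligned = np.array([q if np.dot(q, ref) >= 0 else -q for q in quats])
    mean = aligned.mean(axis=0)
    return mean / np.linalg.norm(mean)        # the re-normalisation step

# Two small rotations about the z-axis (unit quaternion [cos(a/2), 0, 0, sin(a/2)]).
q1 = np.array([np.cos(0.05), 0.0, 0.0, np.sin(0.05)])
q2 = np.array([np.cos(0.15), 0.0, 0.0, np.sin(0.15)])
print(average_quaternions([q1, q2]))          # ~ rotation by the mean angle
```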