
Showing papers on "Euclidean distance published in 1993"


Proceedings ArticleDOI
19 Oct 1993
TL;DR: It is shown that a cepstral-based algorithm exhibits a high degree of independence from background-noise levels, and that successful speech end-pointing can be achieved by thresholding cepstral distance measures.
Abstract: This paper reviews algorithms which rely on the analysis of time-domain samples to provide energy and zero-crossing rates, together with more recent algorithms that use different methods for speech detection. We then examine a different approach using cepstral analysis, showing a high degree of amplitude and noise-level independence. We show that a cepstral-based algorithm exhibits a high degree of independence from background-noise levels and that successful speech end-pointing can be achieved by thresholding cepstral distance measures. Through the use of a noise codebook we are able to provide a successful reference for Euclidean distance measures in the voice detection algorithm.
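The noise-codebook idea can be sketched in a few lines: a frame is declared speech when its cepstral vector is far, in Euclidean distance, from every entry of a noise codebook. The frame data, codebook size, and threshold below are toy stand-ins, not the paper's actual features:

```python
import numpy as np

def detect_speech(frame_cepstra, noise_codebook, threshold):
    # Euclidean distance from every frame to every noise codeword
    d = np.linalg.norm(frame_cepstra[:, None, :] - noise_codebook[None, :, :], axis=2)
    # a frame is speech if even the nearest noise codeword is far away
    return d.min(axis=1) > threshold

rng = np.random.default_rng(0)
noise_cb = rng.normal(0.0, 0.1, size=(4, 2))        # hypothetical noise codebook
noise_frames = rng.normal(0.0, 0.1, size=(20, 2))   # frames of background noise
speech_frames = rng.normal(3.0, 0.1, size=(20, 2))  # frames far from the noise cluster
frames = np.vstack([noise_frames, speech_frames])
flags = detect_speech(frames, noise_cb, threshold=1.0)
print(flags[:20].any(), flags[20:].all())  # noise rejected, speech detected
```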

208 citations


Journal ArticleDOI
01 Jul 1993
TL;DR: Two combined unequal error protection (UEP) coding and modulation schemes are proposed, based on partitioning a signal constellation into disjoint subsets in which the most important data sequence is encoded, using most of the available redundancy, to specify a sequence of subsets.
Abstract: Two combined unequal error protection (UEP) coding and modulation schemes are proposed. The first method multiplexes different coded signal constellations, with each coded constellation providing a different level of error protection. In this method, a codeword specifies the multiplexing rule and the choice of the codeword from a fixed codebook is used to convey additional important information. The decoder determines the multiplexing rule before decoding the rest of the data. The second method is based on partitioning a signal constellation into disjoint subsets in which the most important data sequence is encoded, using most of the available redundancy, to specify a sequence of subsets. The partitioning and code construction is done to maximize the minimum Euclidean distance between two different valid subset sequences. This leads to ways of partitioning the signal constellations into subsets. The less important data selects a sequence of signal points to be transmitted from the subsets. A side benefit of the proposed set partitioning procedure is a reduction in the number of nearest neighbors, sometimes even over the uncoded signal constellation.
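The distance-growing effect of set partitioning can be illustrated on 16-QAM (a standard textbook example, not necessarily the constellation used in the paper): one partitioning step into four subsets doubles the minimum intra-subset Euclidean distance.

```python
import itertools
import math

# 16-QAM: all points with coordinates in {-3, -1, 1, 3}
levels = [-3, -1, 1, 3]
constellation = [(x, y) for x in levels for y in levels]

def min_distance(points):
    return min(math.dist(p, q) for p, q in itertools.combinations(points, 2))

# one partitioning step: keep every other level on each axis,
# giving four disjoint subsets (cosets of a coarser lattice)
subsets = {}
for (x, y) in constellation:
    key = (((x + 3) // 2) % 2, ((y + 3) // 2) % 2)
    subsets.setdefault(key, []).append((x, y))

print(min_distance(constellation))                  # 2.0 in the full constellation
print([min_distance(s) for s in subsets.values()])  # 4.0 within every subset
```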

200 citations


Proceedings ArticleDOI
03 Nov 1993
TL;DR: It is proved that for any dimension d there exists a polynomial time algorithm for counting integral points in polyhedra in the d-dimensional Euclidean space.
Abstract: We prove that for any dimension d there exists a polynomial time algorithm for counting integral points in polyhedra in the d-dimensional Euclidean space. Previously such algorithms were known for dimensions d=1,2,3, and 4 only.
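For intuition only: the obvious brute-force count enumerates a bounding box and is exponential in the dimension d, which is exactly what the polynomial-time result above avoids. A small sketch:

```python
import itertools
import numpy as np

def count_integral_points(A, b, box):
    """Count integer points x with A @ x <= b by enumerating a bounding
    box -- exponential in the dimension, unlike the polynomial-time
    algorithm of the paper."""
    A, b = np.asarray(A), np.asarray(b)
    ranges = [range(lo, hi + 1) for lo, hi in box]
    return sum(1 for x in itertools.product(*ranges)
               if np.all(A @ np.array(x) <= b))

# the triangle x >= 0, y >= 0, x + y <= 4 contains 15 lattice points
A = [[-1, 0], [0, -1], [1, 1]]
b = [0, 0, 4]
print(count_integral_points(A, b, box=[(0, 4), (0, 4)]))  # 15
```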

193 citations


Journal ArticleDOI
S.-W. Ra1, J.-K. Kim1
TL;DR: A new fast search algorithm for vector quantization using the mean of image vectors is proposed; simulations show that the number of calculations can be reduced to as little as a fourth of the number achievable by the partial distance method.
Abstract: A new fast search algorithm for vector quantization using the mean of image vectors is proposed. The codevectors are sorted according to their component means, and the search for the codevector having the minimum Euclidean distance to a given input vector starts with the one having the minimum mean distance to it, making use of our observation that the two codevectors are close to each other in most real images. The search is then made to terminate as soon as a simple yet novel test reports that any remaining vector in the codebook must have a larger Euclidean distance. Simulations show that the number of calculations can be reduced to as little as a fourth of the number achievable by an algorithm known as the partial distance method.
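A rough sketch of a mean-ordered search with an early-exit test. The bound used here, ||x − c||² ≥ k·(mean(x) − mean(c))², follows from the Cauchy-Schwarz inequality; it illustrates the idea but is not necessarily the paper's exact termination test:

```python
import numpy as np

def nearest_codevector(x, codebook):
    k = x.shape[0]
    means = codebook.mean(axis=1)
    order = np.argsort(np.abs(means - x.mean()))   # nearest mean first
    best_i, best_d2 = -1, np.inf
    for i in order:
        # Cauchy-Schwarz bound: ||x - c||^2 >= k * (mean(x) - mean(c))^2,
        # so once the mean gap alone exceeds the best distance, stop.
        if k * (means[i] - x.mean()) ** 2 >= best_d2:
            break
        d2 = np.sum((x - codebook[i]) ** 2)
        if d2 < best_d2:
            best_i, best_d2 = i, d2
    return best_i

rng = np.random.default_rng(1)
codebook = rng.random((256, 16))
x = rng.random(16)
fast = int(nearest_codevector(x, codebook))
brute = int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))
print(fast == brute)  # the fast search matches exhaustive search
```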

190 citations


Journal ArticleDOI
TL;DR: This work suggests variants for 2, 3 and arbitrary dimensions that are separable and suitable for various parallel architectures, and presents a 4-scan algorithm for 3-dimensional images.

154 citations


10 Sep 1993
TL;DR: In this paper, the Euclidean distance minimization in the complex plane optimizes a wide class of correlation metrics for filters implemented on realistic devices, including spatial light modulators, additive input noise (white or colored), spatially nonuniform filter modulators and additive correlation detection noise (including signal-dependent noise).
Abstract: Minimizing a Euclidean distance in the complex plane optimizes a wide class of correlation metrics for filters implemented on realistic devices. The algorithm searches over no more than two real scalars (gain and phase). It unifies a variety of previous solutions for special cases (e.g., a maximum signal-to-noise ratio with colored noise and a real filter and a maximum correlation intensity with no noise and a coupled filter). It extends optimal partial information filter theory to arbitrary spatial light modulators (fully complex, coupled, discrete, finite contrast ratio, and so forth), additive input noise (white or colored), spatially nonuniform filter modulators, and additive correlation detection noise (including signal-dependent noise). An appendix summarizes the algorithm as it is implemented in available computer code.

143 citations


Journal ArticleDOI
TL;DR: Minimizing a Euclidean distance in the complex plane optimizes a wide class of correlation metrics for filters implemented on realistic devices and extends optimal partial information filter theory to arbitrary spatial light modulators.
Abstract: Minimizing a Euclidean distance in the complex plane optimizes a wide class of correlation metrics for filters implemented on realistic devices. The algorithm searches over no more than two real scalars (gain and phase). It unifies a variety of previous solutions for special cases (e.g., a maximum signal-to-noise ratio with colored noise and a real filter and a maximum correlation intensity with no noise and a coupled filter). It extends optimal partial information filter theory to arbitrary spatial light modulators (fully complex, coupled, discrete, finite contrast ratio, and so forth), additive input noise (white or colored), spatially nonuniform filter modulators, and additive correlation detection noise (including signal-dependent noise).

107 citations


Journal ArticleDOI
TL;DR: The skeletonization algorithm includes a beautifying step and a pruning step, which favour the use of the skeleton for shape analysis tasks, and is driven by the Euclidean distance map of the pattern.

97 citations


Proceedings ArticleDOI
Gabriel Taubin1
11 May 1993
TL;DR: A more complex, and better, approximation to the Euclidean distance from a point to an algebraic curve or surface is introduced, and it is shown that this new approximate distance produces results of the same quality as those based on the exact Euclidean distance, and much better than those obtained using other available methods.
Abstract: The author describes a new method to improve the algebraic surface fitting process by better approximating the Euclidean distance from a point to the surface. In the past they have used a simple first order approximation of the Euclidean distance from a point to an implicit curve or surface which yielded good results in the case of unconstrained algebraic curves or surfaces, and reasonable results in the case of bounded algebraic curves and surfaces. However, experiments with the exact Euclidean distance have shown the limitations of this simple approximation. Here, a more complex, and better, approximation to the Euclidean distance from a point to an algebraic curve or surface is introduced. It is shown that this new approximate distance produces results of the same quality as those based on the exact Euclidean distance, and much better than those obtained using other available methods.

96 citations


Journal ArticleDOI
TL;DR: A weighted Euclidean distance model for analyzing three-way proximity data is proposed that incorporates a latent class approach, removes the rotational invariance of the classical multidimensional scaling model while retaining psychologically meaningful dimensions, and drastically reduces the number of parameters in the traditional INDSCAL model.
Abstract: A weighted Euclidean distance model for analyzing three-way proximity data is proposed that incorporates a latent class approach. In this latent class weighted Euclidean model, the contribution to the distance function between two stimuli is per dimension weighted identically by all subjects in the same latent class. This model removes the rotational invariance of the classical multidimensional scaling model while retaining psychologically meaningful dimensions, and drastically reduces the number of parameters in the traditional INDSCAL model. The probability density function for the data of a subject is posited to be a finite mixture of spherical multivariate normal densities. The maximum likelihood function is optimized by means of an EM algorithm; a modified Fisher scoring method is used to update the parameters in the M-step. A model selection strategy is proposed and illustrated on both real and artificial data.

92 citations



Journal ArticleDOI
TL;DR: In this article, the expected distance between two uniformly distributed random points in a rectangle or in a rectangular parallelepiped is computed under three different metrics: the Manhattan metric, the Euclidean metric, and the Chebychev metric.
Abstract: In this paper, the expected distance between two uniformly distributed random points in a rectangle or in a rectangular parallelepiped is computed under three different metrics: the Manhattan metric, the Euclidean metric, and the Chebychev metric.
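The three expected distances are easy to check by Monte Carlo in the unit square, where the exact values are 2/3 (Manhattan), ≈0.5214 (Euclidean), and 7/15 (Chebychev):

```python
import numpy as np

# Monte Carlo estimate of the expected distance between two uniform
# random points in the unit square, under three metrics.
rng = np.random.default_rng(0)
n = 200_000
p, q = rng.random((n, 2)), rng.random((n, 2))
d = np.abs(p - q)

manhattan = d.sum(axis=1).mean()               # exact value: 2/3
euclidean = np.hypot(d[:, 0], d[:, 1]).mean()  # exact value: ~0.5214
chebyshev = d.max(axis=1).mean()               # exact value: 7/15
print(manhattan, euclidean, chebyshev)
```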

Journal ArticleDOI
TL;DR: The Letter presents analogue VLSI circuits which can calculate the Euclidean norm and compute programmable-width basis functions of this norm, and which can be combined with existing ‘conventional’ neural circuits to implement complete RBF networks.
Abstract: Radial basis function (RBF) networks are finding increasing use in applications involving multidimensional function interpolation and pattern classification. The major obstacle to the wider use of RBFs is the complex nature of their calculations, in particular the requirement to evaluate Euclidean distance repeatedly. Almost no hardware implementations have been reported. The Letter presents analogue VLSI circuits which can calculate the Euclidean norm, and compute programmable width basis functions of this norm. These novel circuits can be combined with existing ‘conventional’ neural circuits, to implement complete RBF networks.
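In software, the computation these circuits implement amounts to the following (a generic RBF evaluation with Gaussian basis functions of the Euclidean distance; the circuits' exact basis-function shape may differ):

```python
import numpy as np

def rbf_predict(x, centers, widths, weights):
    # Euclidean distances to all centres -- the costly step that the
    # analogue circuits are designed to compute
    dist = np.linalg.norm(x - centers, axis=1)
    phi = np.exp(-(dist / widths) ** 2)  # programmable-width Gaussian basis
    return phi @ weights

# toy single-output network with three 2-D centres
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
widths = np.array([0.5, 0.5, 0.5])
weights = np.array([1.0, -1.0, 2.0])
y = rbf_predict(np.array([0.0, 0.0]), centers, widths, weights)
print(y)
```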

Journal ArticleDOI
TL;DR: In this paper, the standard problem of extending a mapping to the whole space is shown to be solvable for any closed subset for finite-dimensional metric compacta and -complexes.
Abstract: It is determined under what conditions the standard problem of extension of a mapping to the whole space is solvable for any closed subset . For finite-dimensional metric compacta and -complexes this is equivalent to the system of inequalities -. The result is applied to finding conditions for general position of a compactum in a Euclidean space.

Journal ArticleDOI
TL;DR: This work proposes and demonstrates skeletonization using path-based metrics, which are a better approximation of the Euclidean metric, and achieves good performance on sequential processors by processing each pixel only once in the calculation of binary and grey-value skeletons.
Abstract: A metric defines the distance between any two points. The “natural” metrics of the digital world do not approximate the Euclidean metric of the continuous world well. Skeletonization (sometimes named topology preserving shrinking or homotopic thinning) is one example in which this leads to unacceptable results. In the present work we propose and demonstrate skeletonization using path-based metrics which are a better approximation of the Euclidean metric. Moreover, we achieve a good performance on sequential processors by processing each pixel only once in the calculations of binary (Hilditch) and grey-value (upper) skeletons.
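One family of path-based metrics that approximates the Euclidean metric better than the city-block or chessboard metrics is the chamfer (3,4) distance, computable in two raster scans. This is an illustrative example of the general idea, not necessarily the metric used in the paper:

```python
import numpy as np

def chamfer_distance(binary, a=3, b=4):
    """Two-pass chamfer (3,4) distance transform: a path-based metric
    (distances scaled by ~3) approximating the Euclidean metric."""
    big = 10**6
    h, w = binary.shape
    d = np.where(binary, 0, big).astype(np.int64)
    for y in range(h):              # forward raster scan
        for x in range(w):
            if y > 0:
                d[y, x] = min(d[y, x], d[y - 1, x] + a)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y - 1, x - 1] + b)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y - 1, x + 1] + b)
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x - 1] + a)
    for y in range(h - 1, -1, -1):  # backward raster scan
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y, x] = min(d[y, x], d[y + 1, x] + a)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y + 1, x - 1] + b)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y + 1, x + 1] + b)
            if x < w - 1:
                d[y, x] = min(d[y, x], d[y, x + 1] + a)
    return d

img = np.zeros((9, 9), dtype=bool)
img[4, 4] = True                 # single feature pixel
d = chamfer_distance(img)
print(d[4, 7], d[7, 7])          # 9 (3 axial steps * 3), 12 (3 diagonal steps * 4)
```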

Proceedings ArticleDOI
01 Jul 1993
TL;DR: This work gives an algorithm to compute a non-crossing matching whose total length is at least 2/π of the longest (possibly crossing) matching, and shows that the ratio 2/π between the non-crossing and crossing matching is the best possible.
Abstract: We study some geometric maximization problems in the Euclidean plane under the non-crossing constraint. Given a set V of 2n points in general position in the plane, we investigate the following geometric configurations using straight-line segments and the Euclidean norm: (i) longest non-crossing matching, (ii) longest non-crossing hamiltonian path, (iii) longest non-crossing spanning tree. We propose simple and efficient algorithms to approximate these structures within a constant factor of optimality. Somewhat surprisingly, we also show that our bounds are within a constant factor of optimality even without the non-crossing constraint. For instance, we give an algorithm to compute a non-crossing matching whose total length is at least 2/π of the longest (possibly crossing) matching, and show that the ratio 2/π between the non-crossing and crossing matching is the best possible. Perhaps due to their utter simplicity, our methods also seem more general and amenable to applications in other similar contexts.

Journal ArticleDOI
TL;DR: A finite recursive algorithm is proposed that is not based on the simplicial decomposition of convex sets and does not require solving systems of linear equations to find the minimum Euclidean norm point.
Abstract: For a given pair of finite point sets P and Q in some Euclidean space we consider two problems: Problem 1 of finding the minimum Euclidean norm point in the convex hull of P, and Problem 2 of finding a minimum Euclidean distance pair of points in the convex hulls of P and Q. We propose a finite recursive algorithm for these problems. The algorithm is not based on the simplicial decomposition of convex sets and does not require solving systems of linear equations.
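For contrast with the paper's finite recursive algorithm, Problem 1 can also be approximated by a simple iterative Frank-Wolfe scheme, sketched here purely for illustration:

```python
import numpy as np

def min_norm_point(P, iters=20000):
    """Frank-Wolfe iteration for the minimum Euclidean-norm point in
    conv(P) -- an illustrative approximate solver, not the paper's
    finite recursive algorithm."""
    P = np.asarray(P, dtype=float)
    x = P[0].copy()
    for k in range(1, iters + 1):
        s = P[np.argmin(P @ x)]          # vertex minimizing <x, p>
        x += (2.0 / (k + 2)) * (s - x)   # standard diminishing step size
    return x

# nearest point of conv{(2,0), (0,2), (2,2)} to the origin is (1, 1)
x = min_norm_point([(2.0, 0.0), (0.0, 2.0), (2.0, 2.0)])
print(x)  # close to [1, 1]
```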

Patent
Lee-Fang Wei1
19 Apr 1993
TL;DR: In this paper, a trellis-coded modulation system is presented in which the output of the encoder is used to select a subset of a multidimensional QAM constellation, such that the minimum squared Euclidean distance between valid sequences of successive selected subsets is maximized.
Abstract: A trellis-coded modulation system is provided in which the output of the trellis encoder is used to select a subset of a multidimensional QAM constellation. The selection process is performed such that a) the minimum squared Euclidean distance between valid sequences of successive selected subsets is maximized, b) the resulting code is rotationally invariant, and c) the selected subset corresponding to a transition of the trellis encoder from a present state i to a different next state j is different from the selected subset that corresponds to a transition of the trellis encoder from a present state j to a next state i.

Proceedings ArticleDOI
15 Jun 1993
TL;DR: A transformation metric to measure the similarity between 3-D models and 2-D images is proposed and a simple, closed-form solution for this metric is presented.
Abstract: A transformation metric to measure the similarity between 3-D models and 2-D images is proposed. The transformation metric measures the amount of affine deformation applied to the object to produce the given image. A simple, closed-form solution for this metric is presented. This solution is optimal in transformation space, and it is used to bound the image metric from both above and below. The transformation metric can be used in several different ways in recognition and classification tasks.

Proceedings ArticleDOI
14 Sep 1993
TL;DR: A new weighted minimum distance classifier which uses the discriminatory power and variance of features is presented; the weights increase the interclass separability while they decrease the intraclass dissimilarity.
Abstract: This paper presents a new weighted minimum distance classifier which uses the discriminatory power and variance of features. The weights increase the interclass separability while they decrease the intraclass dissimilarity. Two examples are given to show the effectiveness of the method.
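The abstract does not spell out the weighting formula; one plausible reading, sketched below with hypothetical choices, weights each feature by between-class separation over within-class variance and classifies by weighted minimum Euclidean distance to the class means:

```python
import numpy as np

def fit_weights(X, y):
    # per-feature weight: between-class separation of the means over the
    # average within-class variance (an assumed formula -- the paper's
    # exact weighting may differ)
    classes = np.unique(y)
    mus = np.array([X[y == c].mean(axis=0) for c in classes])
    within = np.array([X[y == c].var(axis=0) for c in classes]).mean(axis=0)
    return mus.var(axis=0) / (within + 1e-12), mus

def classify(x, mus, w):
    # weighted minimum-Euclidean-distance rule
    return int(np.argmin(((mus - x) ** 2 * w).sum(axis=1)))

rng = np.random.default_rng(2)
# feature 0 separates the classes; feature 1 is pure noise
X0 = rng.normal([0.0, 0.0], [0.3, 3.0], size=(100, 2))
X1 = rng.normal([2.0, 0.0], [0.3, 3.0], size=(100, 2))
X, y = np.vstack([X0, X1]), np.array([0] * 100 + [1] * 100)
w, mus = fit_weights(X, y)
print(w[0] > w[1], classify(np.array([1.8, -2.5]), mus, w))
```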

Journal ArticleDOI
TL;DR: In this article, two forms of a mathematical model for tasks involving three alternatives are given, and the model can be applied to triad discrimination, preferential choice and two-stimulus identification.

Proceedings ArticleDOI
23 Jun 1993
TL;DR: An algorithm for computing the Euclidean distance from the boundary of a given digitized shape is presented and the distance is calculated with sub-pixel accuracy.
Abstract: An algorithm for computing the Euclidean distance from the boundary of a given digitized shape is presented. The distance is calculated with sub-pixel accuracy. The algorithm is based on an equal distance contour evolution process. The moving contour is embedded as a level set in a time varying function of higher dimension. This representation of the evolving contour makes possible the use of an accurate and stable numerical scheme, due to Osher and Sethian.

02 Jan 1993
TL;DR: This thesis presents an algorithm to find a k-link path with Euclidean length at most 1 + ε times the length of the shortest k-link path.
Abstract: A bicriteria optimal path simultaneously satisfies two bounds on two measures of path quality. The complexity of finding such a path depends on the particular choices of path quality. This thesis studies bicriteria path problems in a geometric setting using several pairs of path quality, including: path length measured according to different norms (L_p and L_q); Euclidean length within two or more classes of regions; total turn and Euclidean length; total turn and number of links; and Euclidean length and number of links. For several cases, finding the bicriteria optimal path is shown to be NP-hard. These NP-hard cases include minimizing path length in two different norms, minimizing travel through two regions, and minimizing length and total turn. In the last case, an O(E n^2 N^2) pseudo-polynomial time algorithm to find an approximate answer is presented. In contrast, when the two measures of path quality are total turn and number of links, an O(E^3 n log^2 n) exact algorithm is given. A main result of this thesis examines minimizing the Euclidean length and number of links of a path. When the geometric setting of this problem is a polygon without holes, this thesis presents an O(n^3 k^3 log(Nk/ε^{1/k})) algorithm to find a k-link path with Euclidean length at most 1 + ε times the length of the shortest k-link path. A faster algorithm for a relaxed case, when the output path is allowed to have 2k links, is presented for a polygon with or without holes. Finally, some approximation algorithms are outlined for finding a minimum link path among polyhedral obstacles.

Book ChapterDOI
13 Sep 1993
TL;DR: This paper presents an efficient technique for dense stereo correspondence using a new polychromatic block matching; the I1I2I3 color space provides the best information for stereo when using the Euclidean distance for color measurement.
Abstract: Only few problems in computer vision have been investigated more vigorously than stereo. Nevertheless, almost all methods use only gray values and most of them are feature-based techniques, i.e., they produce only sparse depth maps. This paper presents an efficient technique for dense stereo correspondence using a new Polychromatic Block Matching. Four different color models (RGB, XYZ, I1I2I3, HSI) and three different color measures have been investigated with regard to their suitability for stereo matching. As a result the I1I2I3 color space provides the best information for stereo when using the Euclidean distance for color measurement.
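Block matching with a Euclidean color distance reduces to minimizing a sum of squared color differences over candidate disparities. A toy sketch on a synthetic RGB pair (all names and parameters are illustrative, and no particular color space is assumed):

```python
import numpy as np

def block_match_disparity(left, right, row, col, half, max_disp):
    """Choose the disparity minimizing the summed squared Euclidean
    color distance (SSD) over a (2*half+1)-pixel-square block."""
    ref = left[row - half:row + half + 1, col - half:col + half + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1].astype(float)
        cost = np.sum((ref - cand) ** 2)  # squared color distances summed over the block
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# synthetic rectified pair: the right view is the left view shifted by 3 pixels
rng = np.random.default_rng(3)
left = rng.integers(0, 256, size=(21, 40, 3))
right = np.roll(left, -3, axis=1)
print(block_match_disparity(left, right, row=10, col=20, half=3, max_disp=8))  # recovers the shift
```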

Proceedings ArticleDOI
17 Jan 1993
TL;DR: A class of algorithms performing Maximum Likelihood Sequence Detection under various structural and complexity constraints (BSC or AWGN) is derived; the Viterbi Algorithm (VA) is the unconstrained solution, denoted SA(1,S).
Abstract: A class of algorithms performing Maximum Likelihood Sequence Detection under various structural and complexity constraints is derived (BSC or AWGN). Complexity is measured by the number of paths used. By partitioning the S states into C classes and selecting B paths in each class, the signals closest to the received one are selected, and hence NP=BC paths are used. This class of algorithms is named SA(B,C) (SA = Search Algorithm), and the Viterbi Algorithm (VA) is the unconstrained solution, denoted SA(1,S). An analysis method concerning the probability of the first error event at large SNR is developed for the whole SA(B,C) family and results in an analysis tool named the Vector Euclidean Distance (VED), of which the traditional Euclidean Distance (ED) is a scalar special case. The smallest number of paths resulting in the same asymptotic detection performance as the VA is calculated for several classes of trellis codes.

Journal ArticleDOI
TL;DR: In this paper, the authors give a short proof of the classical Stallings theorem that every finite n-dimensional cellular complex embeds up to homotopy in the 2n-dimensional Euclidean space.
Abstract: We give a short proof of the classical Stallings theorem that every finite n-dimensional cellular complex embeds up to homotopy in the 2n-dimensional Euclidean space. As an application we solve a problem of M. Kreck.

Journal ArticleDOI
TL;DR: An optical-disk-based system for handwritten character recognition that computes the Euclidean distance between an unknown input and 650 stored patterns at a demonstrated rate of 26,000 pattern comparisons/s is demonstrated.
Abstract: We describe two optical systems based on the radial basis function approach to pattern classification. An optical-disk-based system for handwritten character recognition is demonstrated. The optical system computes the Euclidean distance between an unknown input and 650 stored patterns at a demonstrated rate of 26,000 pattern comparisons/s. The ultimate performance of this system is limited by optical-disk resolution to 10^11 binary operations/s. An adaptive system is also presented that facilitates on-line learning and provides additional robustness.



Proceedings ArticleDOI
01 Jul 1993
TL;DR: It is shown that it is not possible to transform, by means of a bijection from the plane into itself, the computation of such Voronoi diagrams to the computation of Euclidean Voronoi diagrams (except in the trivial case of the distance being affinely equivalent to the Euclidean distance).
Abstract: Voronoi diagrams in the plane for strictly convex distances have been studied in [3], [5] and [7]. These distances induce the usual topology in the plane and, moreover, the Voronoi diagrams they produce enjoy many of the good properties of Euclidean Voronoi diagrams. Nevertheless, we show (Th. 1) that it is not possible to transform, by means of a bijection from the plane into itself, the computation of such Voronoi diagrams to the computation of Euclidean Voronoi diagrams (except in the trivial case of the distance being affinely equivalent to the Euclidean distance). The same applies if we want to compute just the topological shape of a Voronoi diagram of at least four points (Th. 2). Moreover, for any strictly convex distance not affinely equivalent to the Euclidean distance, new, non-Euclidean shapes appear for Voronoi diagrams, and we show a general construction of a nine-point Voronoi diagram with non-Euclidean shape (Th. 3).