
Showing papers on "Bounding overwatch published in 2004"


Journal ArticleDOI
19 Feb 2004
TL;DR: In the novel approach presented here, a nonlinear transformation of the measurement equation into a higher dimensional space is performed, which yields a tight, possibly complex-shaped, bounding set in a closed-form representation whose parameters can be determined analytically for the measurement step.
Abstract: In this paper, the problem of recursive robot localization based on relative bearing measurements is considered, where unknown but bounded measurement uncertainties are assumed. A common approach is to approximate the resulting set of feasible states by simple-shaped bounding sets such as axis-aligned boxes, and to calculate the optimal parameters of this approximation based on the measurements and prior knowledge. In the novel approach presented here, a nonlinear transformation of the measurement equation into a higher-dimensional space is performed. This yields a tight, possibly complex-shaped bounding set in a closed-form representation whose parameters can be determined analytically for the measurement step. It is shown that the new bound is superior to commonly used outer bounds.

87 citations
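
For contrast with the paper's closed-form lifted bound (not reproduced here), the following is a minimal sketch of the baseline it improves on: outer-bounding the feasible set of one bounded-error bearing measurement by an axis-aligned box. The landmark position, noise bound, and grid resolution are made-up illustration values, and the grid test is a crude stand-in for the analytic intersection a real set-membership filter would use.

```python
import numpy as np

def box_update_bearing(box, landmark, bearing, eps, n=200):
    """Outer-bound the positions in `box` = (xmin, xmax, ymin, ymax) whose
    bearing to `landmark` lies within `bearing` +/- `eps` (illustrative
    grid test; all parameters are made-up values)."""
    xs = np.linspace(box[0], box[1], n)
    ys = np.linspace(box[2], box[3], n)
    X, Y = np.meshgrid(xs, ys)
    ang = np.arctan2(landmark[1] - Y, landmark[0] - X)
    err = np.angle(np.exp(1j * (ang - bearing)))   # wrap difference to (-pi, pi]
    feas = np.abs(err) <= eps
    if not feas.any():
        return None                                # measurement inconsistent with prior
    dx, dy = xs[1] - xs[0], ys[1] - ys[0]          # pad by one cell: keep it an outer bound
    return (max(box[0], X[feas].min() - dx), min(box[1], X[feas].max() + dx),
            max(box[2], Y[feas].min() - dy), min(box[3], Y[feas].max() + dy))

print(box_update_bearing((0.0, 4.0, 0.0, 4.0), (5.0, 5.0), np.pi / 4, 0.1))
```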


Journal ArticleDOI
Mark Huber
TL;DR: Bounding chains are a technique that offers three benefits to Markov chain practitioners: a theoretical bound on the mixing time of the chain under restricted conditions, experimental bounds on the mixing time that are provably accurate, and construction of perfect sampling algorithms when used in conjunction with protocols such as coupling from the past.
Abstract: Bounding chains are a technique that offers three benefits to Markov chain practitioners: a theoretical bound on the mixing time of the chain under restricted conditions, experimental bounds on the mixing time of the chain that are provably accurate and construction of perfect sampling algorithms when used in conjunction with protocols such as coupling from the past. Perfect sampling algorithms generate variates exactly from the target distribution without the need to know the mixing time of a Markov chain at all. We present here the basic theory and use of bounding chains for several chains from the literature, analyzing the running time when possible. We present bounding chains for the transposition chain on permutations, the hard core gas model, proper colorings of a graph, the antiferromagnetic Potts model and sink free orientations of a graph.

74 citations
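
The perfect-sampling idea is easiest to see in the monotone special case, where the bounding chain degenerates to tracking just the top and bottom trajectories. Below is a minimal coupling-from-the-past sketch for a lazy reflecting random walk on {0, ..., n}; it is not one of the paper's bounding chains (colorings, Potts model, etc.), only an illustration of the coalescence mechanism.

```python
import random

def cftp_lazy_walk(n=10):
    """Coupling from the past for a lazy reflecting random walk on {0,...,n}.
    Top and bottom chains share the same random moves from time -T to 0; T is
    doubled until they coalesce, and the coalesced state is an exact draw from
    the stationary (here uniform) distribution."""
    moves = []                     # moves[0] is the move at time -1, moves[1] at -2, ...
    T = 1
    while True:
        while len(moves) < T:      # extend randomness further into the past, reusing old moves
            moves.append(random.choice([-1, 0, 1]))
        lo, hi = 0, n              # start the extreme chains at time -T
        for t in range(T - 1, -1, -1):
            lo = min(max(lo + moves[t], 0), n)
            hi = min(max(hi + moves[t], 0), n)
        if lo == hi:               # every intermediate state is sandwiched: coalescence
            return lo
        T *= 2

print([cftp_lazy_walk(5) for _ in range(10)])
```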


Journal ArticleDOI
TL;DR: It is concluded that the specific methodology of Ha-Duong et al. suffers from logical gaps in the definition and construction of inputs, and hence should not be used in the form proposed.
Abstract: The bounding analysis methodology described by Ha-Duong et al. (this issue) is logically incomplete and invites serious misuse and misinterpretation, as their own example and interpretation illustrate. A key issue is the extent to which these problems are inherent in their methodology and resolvable by a logically complete assessment (such as Monte Carlo or Bayesian risk assessment), as opposed to being general problems in any risk-assessment methodology. Here I attempt to apportion the problems between those inherent in the proposed bounding analysis and those that are more general, such as reliance on questionable expert elicitations. I conclude that the specific methodology of Ha-Duong et al. suffers from logical gaps in the definition and construction of inputs, and hence should not be used in the form proposed. Furthermore, the labor required to do a sound bounding analysis is great enough that one may as well skip that analysis and carry out a more logically complete probabilistic analysis, one that will better inform the consumer of the appropriate level of uncertainty. If analysts insist on carrying out a bounding analysis in place of more thorough assessments, extensive analyses of sensitivity to inputs and assumptions will be essential to display uncertainties, arguably more essential than they would be in full probabilistic analyses.

36 citations
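
To make the contrast concrete, here is a toy comparison of the two styles of analysis on a made-up two-factor risk model: a pure bounding analysis propagates only worst-case intervals, while a Monte Carlo analysis yields percentiles that convey the level of uncertainty. All numbers and distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: harm = exposure * potency, both uncertain (invented numbers).
exp_lo, exp_hi = 1e3, 1e5
pot_lo, pot_hi = 1e-6, 1e-4

# Bounding analysis: propagate only the interval endpoints.
print("interval bound:", (exp_lo * pot_lo, exp_hi * pot_hi))

# Probabilistic analysis: propagate full (assumed) distributions.
exposure = rng.lognormal(np.log(1e4), 1.0, 100_000)
potency = rng.lognormal(np.log(1e-5), 1.0, 100_000)
print("MC 5th-95th percentiles:", np.percentile(exposure * potency, [5, 95]))
```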


Journal ArticleDOI
TL;DR: A new, fast, bottom-up approach for updating oriented bounding box hierarchies for articulated humanoid models that approximates existing techniques by assuming a major body axis.
Abstract: We present a new, fast, bottom-up approach for updating oriented bounding box hierarchies for articulated humanoid models. The algorithm approximates existing techniques by assuming a major body axis. The existence of a major axis allows bounding boxes in a hierarchy to be merged approximately, but with sufficient accuracy. For scenarios with close proximity, a speed-up by a factor of 2 on average is achieved compared to existing techniques.

16 citations
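
A minimal sketch of the kind of shortcut the abstract describes: if two child boxes share an orientation frame (for example, one aligned to the major body axis), merging them reduces to an interval hull in that frame, with no refitting. This is a simplified illustration, not the paper's full algorithm; the shared-frame assumption is exactly the approximation being made.

```python
import numpy as np

def merge_obbs_shared_frame(R, c1, h1, c2, h2):
    """Merge two OBBs that share orientation frame R (columns = box axes).
    c1, c2: world-space centers; h1, h2: half-extents in frame coordinates.
    Returns (center, half_extents) of an enclosing box in the same frame."""
    p1, p2 = R.T @ c1, R.T @ c2            # centers in frame coordinates
    lo = np.minimum(p1 - h1, p2 - h2)      # interval hull along each frame axis
    hi = np.maximum(p1 + h1, p2 + h2)
    return R @ ((lo + hi) / 2), (hi - lo) / 2

R = np.eye(3)                              # shared major-axis frame (illustrative)
c, h = merge_obbs_shared_frame(R,
                               np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0]),
                               np.array([3.0, 0.0, 0.0]), np.array([1.0, 2.0, 1.0]))
print(c, h)                                # center [1.5 0 0], half-extents [2.5 2. 1.]
```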


01 Jan 2004
TL;DR: An algorithm that combines different bounding strategies in order both to speed up and to reduce the number of interference tests is presented; k-DOPs proved to be more efficient than other bounding-volume strategies, such as oriented and axis-aligned bounding boxes.
Abstract: Collision detection is of great importance in a wide spectrum of disciplines, such as virtual reality, computer graphics, simulation of physical systems, robotics, solid modelling, and so on. In particular, some applications need to determine in real time whether a collision occurs in order to guarantee interactive behavior of the system. This paper presents an algorithm that combines different bounding strategies in order both to speed up and to reduce the number of interference tests. A recursive space subdivision is performed with uniform grids to bound static objects within the environment; on the other hand, sphere trees, at different levels of detail, are used to encapsulate moving objects. The proposed methodology is compared with an approach that uses discrete orientation polytopes (k-DOPs). k-DOPs proved to be more efficient than other bounding-volume strategies, such as oriented and axis-aligned bounding boxes. A theoretical evaluation of the algorithm's complexity as well as experimental results are provided.

16 citations
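
For reference, a k-DOP is just a set of slab intervals along fixed directions, and the overlap test is interval disjointness per direction. The sketch below shows a 2-D 8-DOP (four fixed directions, eight bounding half-planes); the direction set and the triangle data are illustrative choices.

```python
import numpy as np

# Four fixed directions -> eight bounding half-planes: a 2-D 8-DOP.
DIRS = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])

def kdop(points):
    """Slab intervals: min/max of the vertex projections per direction."""
    proj = points @ DIRS.T
    return proj.min(axis=0), proj.max(axis=0)

def kdop_overlap(a, b):
    """Conservative test: the k-DOPs overlap iff every slab pair overlaps.
    (Overlap does not guarantee the enclosed objects intersect.)"""
    (alo, ahi), (blo, bhi) = a, b
    return bool(np.all(alo <= bhi) and np.all(blo <= ahi))

tri1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
tri2 = np.array([[2.0, 2.0], [3.0, 2.0], [2.0, 3.0]])
print(kdop_overlap(kdop(tri1), kdop(tri2)))   # False: a separating slab exists
```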


Book ChapterDOI
01 Jan 2004
TL;DR: It is proposed to use the spatial, temporal and thematic knowledge about features' relationships as elements to approximate indeterminate boundaries (IB), which are approximated as a set of locations in space-time with a set of properties characterizing those locations.
Abstract: We propose to use the spatial, temporal and thematic knowledge about features' relationships as elements to approximate indeterminate boundaries (IB). IB are approximated as a set of locations in space-time with a set of properties characterizing those locations.

12 citations



Proceedings ArticleDOI
16 Jun 2004
TL;DR: This work presents algorithms to generate the cell decomposition, to map from weights to cells, and to efficiently compute the necessary data structures for deriving bounding volumes from the weight vectors rather than the generated geometry.
Abstract: Bounding volumes are crucial for culling in interactive graphics applications. For dynamic shapes, computing a bounding volume for each frame can be very expensive. We analyze the situation for a particular class of dynamic geometry, namely, shapes resulting from the linear interpolation of several base shapes. The space of weights for the linear combination can be decomposed into cells so that in each cell a particular vertex is maximal (resp. minimal) in a given direction. This cell decomposition of the weight space allows deriving bounding volumes from the weight vectors rather than the generated geometry. We present algorithms to generate the cell decomposition, to map from weights to cells, and to efficiently compute the necessary data structures. This approach to computing bounding volumes for dynamic shapes proves beneficial if the geometry representation is large compared to the number of base shapes.

8 citations
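
A much simpler (and looser) bound than the paper's weight-space cell decomposition can make the problem concrete: for convex weights, every blended vertex coordinate lies between the per-coordinate extremes of the base shapes, so one box covers all possible blends. The sketch below shows only this conservative baseline, under the assumption that weights are nonnegative and sum to 1.

```python
import numpy as np

def blend_bound(base_shapes):
    """Axis-aligned box enclosing EVERY convex blend of the base shapes:
    with weights w_i >= 0 summing to 1, each blended vertex coordinate lies
    between the per-coordinate min and max over the bases."""
    V = np.stack(base_shapes)          # (num_bases, num_verts, 3)
    return V.min(axis=(0, 1)), V.max(axis=(0, 1))

bases = [np.random.rand(100, 3) for _ in range(4)]   # illustrative base shapes
lo, hi = blend_bound(bases)
print(lo, hi)
```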


Proceedings Article
21 Apr 2004
TL;DR: The performance of the Taylor model methods is compared with other state-of-the-art validated tools, including centered forms and mean value forms, and with the computation of remainder bounds via high-order interval automatic differentiation.
Abstract: Taylor model methods represent a combination of high-order multivariate automatic differentiation and the simultaneous computation of an interval remainder bound enclosing the approximation error over a given domain. This method allows a far-reaching suppression of the dependency problem common to interval methods, and can thus often be used for precise range-bounding problems. We compare the performance of the method with other state-of-the-art validated tools, including centered forms and mean value forms. We also compare with the computation of remainder bounds via high-order interval automatic differentiation.

6 citations
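
The dependency problem, and how Taylor models suppress it, fits in a few lines: naive intervals forget that the two occurrences of x in x - x are the same variable, while a Taylor model keeps a symbolic polynomial part that cancels exactly. The degree-1 model below is a toy stand-in for real Taylor model arithmetic (e.g., as implemented in tools like COSY), shown only to illustrate the mechanism.

```python
# A toy one-variable, degree-1 "Taylor model": (a0, a1, remainder interval),
# representing a0 + a1*x + R for x in DOM. Real Taylor model arithmetic
# carries high-order multivariate polynomials; this is only the mechanism.
DOM = (-1.0, 1.0)

def isub(a, b):                       # interval subtraction [a] - [b]
    return (a[0] - b[1], a[1] - b[0])

def tm_sub(t, u):                     # Taylor model subtraction: polynomials cancel symbolically
    (a0, a1, ra), (b0, b1, rb) = t, u
    return (a0 - b0, a1 - b1, isub(ra, rb))

def tm_range(t):                      # enclose a0 + a1*x + R over DOM
    a0, a1, r = t
    lin = sorted((a1 * DOM[0], a1 * DOM[1]))
    return (a0 + lin[0] + r[0], a0 + lin[1] + r[1])

x_tm = (0.0, 1.0, (0.0, 0.0))         # the identity function x, with zero remainder
print(tm_range(tm_sub(x_tm, x_tm)))   # (0.0, 0.0): x - x cancels exactly
print(isub(DOM, DOM))                 # (-2.0, 2.0): naive intervals suffer dependency
```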


Journal ArticleDOI
TL;DR: Parameter estimation of an autoregressive moving average (ARMA) model is discussed in this paper using a bounding approach, in which a set of model parameters and a structure error correspond to each input.

5 citations
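
In the set-membership setting the TL;DR refers to, a parameter vector is feasible if the equation errors it implies never exceed the assumed bound. A minimal membership test for a hypothetical ARMA(1,1)-type structure is sketched below; the model form, bound, and data generation are all illustrative assumptions, not details from the paper.

```python
import numpy as np

def arma_feasible(y, u, a, b, c, bound, e0=0.0):
    """Set-membership test: (a, b, c) is feasible if the equation errors
    implied by  y_t = a*y_{t-1} + b*u_{t-1} + c*e_{t-1} + e_t  satisfy
    |e_t| <= bound for all t (hypothetical ARMA(1,1)-type structure)."""
    e = e0
    for t in range(1, len(y)):
        e = y[t] - a * y[t - 1] - b * u[t - 1] - c * e
        if abs(e) > bound:
            return False
    return True

rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, 50)
e = rng.uniform(-0.1, 0.1, 50)                  # unknown-but-bounded noise
y = np.zeros(50)
for t in range(1, 50):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + 0.3 * e[t - 1] + e[t]
print(arma_feasible(y, u, 0.8, 0.5, 0.3, 0.1, e0=e[0]))   # True: the true parameters
print(arma_feasible(y, u, 0.1, 0.5, 0.3, 0.1, e0=e[0]))   # typically False
```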


Proceedings ArticleDOI
27 Jun 2004
TL;DR: Two indexing approaches using Grid files and R*-trees are developed and a number of spatial queries are illustrated based on these indexing schemes.
Abstract: We review the basic approach to modelling spatial uncertainty with fuzzy minimum bounding rectangles. Then, two indexing approaches using Grid files and R*-trees are developed. Finally, a number of spatial queries are illustrated based on these indexing schemes.
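
A coarse sketch of the underlying idea: represent an object with an indeterminate extent by a core rectangle (membership 1) inside a support rectangle (membership falls to 0 outside), and answer window queries qualitatively. This two-rectangle simplification is a hypothetical stand-in for the paper's continuously graded fuzzy MBRs.

```python
def mbr_intersects(a, b):
    """Standard MBR overlap test for (xmin, ymin, xmax, ymax) rectangles."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def fuzzy_mbr_query(core, support, window):
    """Qualitative window query against a two-rectangle fuzzy MBR:
    membership is 1 inside `core` and falls to 0 outside `support`."""
    if not mbr_intersects(support, window):
        return "no"           # window misses even the indeterminate fringe
    if mbr_intersects(core, window):
        return "definitely"   # window reaches the certain region
    return "possibly"         # window touches only the fringe

print(fuzzy_mbr_query((2, 2, 4, 4), (1, 1, 5, 5), (4.5, 4.5, 6, 6)))  # possibly
```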

01 Jul 2004
TL;DR: The theory of the partial order and its properties is developed; it is shown how the order relates to monotonicity of matrices and to lumpability, and how it can be efficiently applied to certain compositionally defined models.
Abstract: An ongoing challenge in Markovian analysis is the so-called “state-space explosion,” which is the exponential growth of the size of the state space of a model as the model size increases. We introduce a partial order on the states of a model to facilitate aggregation of states in order to reduce model size while bounding the error introduced by the aggregation. The partial order implies that the current and future behavior of the model is better in one state than another. We develop the theory of the partial order and its properties, show how it is related to monotonicity of matrices and to lumpability, and show how it can be efficiently applied to certain compositionally defined models.
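
A minimal sketch of state aggregation on a transition matrix: blocks of states are collapsed by summing outgoing probability into blocks and averaging across the states within a block. When the partition is lumpable the aggregation is exact; otherwise it introduces the kind of error the paper's partial order is designed to bound. Uniform within-block weighting is an assumption made here for simplicity.

```python
import numpy as np

def aggregate(P, partition):
    """Collapse a transition matrix by a partition (list of index lists):
    sum outgoing probability into blocks, average over states in a block.
    Exact when the partition is lumpable; otherwise an approximation."""
    k = len(partition)
    Q = np.zeros((k, k))
    for i, bi in enumerate(partition):
        for j, bj in enumerate(partition):
            Q[i, j] = P[np.ix_(bi, bj)].sum(axis=1).mean()
    return Q

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.2, 0.3, 0.5]])          # states 1 and 2 behave identically
print(aggregate(P, [[0], [1, 2]]))       # [[0.5 0.5] [0.2 0.8]]: exact here
```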


Journal ArticleDOI
TL;DR: A number of properties of the algorithm are highlighted, such as the decrease of a normed estimation error, the shrinkage of the parameters' outer-bounding set, and the acceptability of the final output error.


Journal ArticleDOI
28 Jun 2004
TL;DR: One of the programming problems in the 2002 Pacific Northwest regional ACM ICPC contest provides a new way to teach backtracking and also provides a very powerful example of a forward-looking bounding function.
Abstract: One of the programming problems in the 2002 Pacific Northwest regional ACM ICPC contest provides a new way to teach backtracking and also provides a very powerful example of a forward-looking bounding function. This article presents the problem, the bounding function, and timing information of implementations with and without the bounding function. It also provides the URL for access to the programs themselves.
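
The contest problem itself is not reproduced here, but the pattern it teaches is easy to isolate: backtracking plus a forward-looking bound that prunes a branch as soon as an optimistic estimate of what remains cannot beat the best solution found so far. The subset-sum variant below is an illustrative stand-in, not the ACM ICPC problem.

```python
def best_sum_at_most(values, cap):
    """Backtracking with a forward-looking bound: abandon a branch once an
    optimistic estimate (take everything that remains) cannot beat `best`."""
    values = sorted(values, reverse=True)
    suffix = [0] * (len(values) + 1)
    for i in range(len(values) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + values[i]     # total mass still available
    best = 0
    def go(i, total):
        nonlocal best
        best = max(best, total)
        if i == len(values) or total + suffix[i] <= best:
            return                                # bound: no improvement possible
        if total + values[i] <= cap:
            go(i + 1, total + values[i])          # branch: take item i
        go(i + 1, total)                          # branch: skip item i
    go(0, 0)
    return best

print(best_sum_at_most([8, 6, 5, 3], 14))         # 14 (= 8 + 6)
```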

Journal ArticleDOI
TL;DR: A novel algorithm is proposed, based on solving families of relaxed linear programming problems, that allows the incorporation of additional constraints derived from physical insight; this provides tighter bounds while avoiding explicit enumeration of all possible mode trajectories.
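
The core move, bounding a quantity by optimizing over a relaxed feasible set instead of enumerating discrete choices, can be shown with a toy LP. The constraints below are invented for illustration; in the paper's setting, the relaxed polytope would come from dropping mode-trajectory enumeration and adding physically motivated constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Toy relaxed feasible set (invented): x1 + x2 <= 4, -x1 + 2*x2 <= 2, x >= 0.
A = np.array([[1.0, 1.0],
              [-1.0, 2.0]])
b = np.array([4.0, 2.0])
c = np.array([1.0, 0.0])       # objective selects the state variable x1

lo = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])   # min x1
hi = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])  # max x1
print("x1 bounded in [%.2f, %.2f]" % (lo.fun, -hi.fun))          # [0.00, 4.00]
```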

Posted Content
TL;DR: Using Talagrand's concentration inequalities for empirical processes, new sharper bounds on the generalization error of combined classifiers are obtained that take into account both the empirical distribution of "classification margins" and an "approximate dimension" of the classifiers.
Abstract: A problem of bounding the generalization error of a classifier f in H, where H is a "base" class of functions (classifiers), is considered. This problem frequently occurs in computer learning, where efficient algorithms of combining simple classifiers into a complex one (such as boosting and bagging) have attracted a lot of attention. Using Talagrand's concentration inequalities for empirical processes, we obtain new sharper bounds on the generalization error of combined classifiers that take into account both the empirical distribution of "classification margins" and an "approximate dimension" of the classifiers and study the performance of these bounds in several experiments with learning algorithms.
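
The empirical margin distribution that drives such bounds is straightforward to compute for a weighted vote of base classifiers: the normalized margin of each example lies in [-1, 1], and the bound improves as more mass sits above a threshold. The weak-classifier predictions and weights below are made-up illustration data.

```python
import numpy as np

def margins(weak_preds, alphas, y):
    """Normalized classification margins of a weighted vote:
    margin_i = y_i * sum_t alpha_t * h_t(x_i) / sum_t |alpha_t|, in [-1, 1]."""
    f = np.tensordot(alphas, weak_preds, axes=1) / np.sum(np.abs(alphas))
    return y * f

# Three weak classifiers' +/-1 predictions on five points (made-up data).
H = np.array([[1, 1, -1, 1, -1],
              [1, -1, -1, 1, 1],
              [1, 1, 1, -1, -1]])
y = np.array([1, 1, -1, 1, -1])
print(margins(H, np.array([0.5, 0.3, 0.2]), y))   # all positive: every point correct
```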