
Showing papers on "Bounding overwatch published in 2010"


Proceedings Article
11 Aug 2010
TL;DR: This work builds a prototype system that demonstrates that radio distance bounding protocols can be implemented to match the strict processing that these protocols require, and implements a prover that is able to receive, process and transmit signals in less than 1ns.
Abstract: One of the main obstacles for the wider deployment of radio (RF) distance bounding is the lack of platforms that implement these protocols. We address this problem and we build a prototype system that demonstrates that radio distance bounding protocols can be implemented to match the strict processing that these protocols require. Our system implements a prover that is able to receive, process and transmit signals in less than 1ns. The security guarantee that a distance bounding protocol built on top of this system therefore provides is that a malicious prover can, at most, pretend to be about 15cm closer to the verifier than it really is. To enable such fast processing at the prover, we use specially implemented concatenation as the prover's processing function and show how it can be integrated into a distance bounding protocol. Finally, we show that functions such as XOR and the comparison function, that were used in a number of previously proposed distance bounding protocols, are not best suited for the implementation of radio distance bounding.

235 citations
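The 1 ns / 15 cm relationship claimed above follows directly from the round-trip timing used in distance bounding: any unaccounted processing delay at the prover shortens the apparent distance by half the distance light travels in that time. A minimal sketch (the function name is illustrative, not from the paper):

```python
# Relate prover processing delay to the maximum distance-cheating margin.
# The verifier measures round-trip time, so a delay of t seconds hidden by a
# malicious prover makes it appear c * t / 2 metres closer than it really is.
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_advantage_m(processing_delay_s: float) -> float:
    """Maximum distance (metres) a prover can shave off its apparent
    position given an unaccounted processing delay of processing_delay_s."""
    return C * processing_delay_s / 2.0

# A prover that processes in under 1 ns bounds the margin to about 15 cm.
margin = distance_advantage_m(1e-9)
```

This confirms the abstract's figure: 299792458 × 10⁻⁹ / 2 ≈ 0.15 m.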


Posted Content
07 Jul 2010
TL;DR: An equivalence result for network capacity is described: a collection of demands can be met on the given network if and only if it can be met on another network where each noisy link is replaced by a noiseless bit pipe with throughput equal to the noisy link's capacity.
Abstract: A family of equivalence tools for bounding network capacities is introduced. Part I treats networks of point-to-point channels. The main result is roughly as follows. Given a network of noisy, independent, memoryless point-to-point channels, a collection of communication demands can be met on the given network if and only if it can be met on another network where each noisy channel is replaced by a noiseless bit pipe with throughput equal to the noisy channel capacity. This result was known previously for the case of a single-source multicast demand. The result given here treats general demands -- including, for example, multiple unicast demands -- and applies even when the achievable rate region for the corresponding demands is unknown in the noiseless network. In part II, definitions of upper and lower bounding channel models for general channels are introduced. By these definitions, a collection of communication demands can be met on a network of independent channels if it can be met on a network where each channel is replaced by its lower bounding model and only if it can be met on a network where each channel is replaced by its upper bounding model. This work derives general conditions under which a network of noiseless bit pipes is an upper or lower bounding model for a multiterminal channel. Example upper and lower bounding models for broadcast, multiple access, and interference channels are given. It is then shown that bounding the difference between the upper and lower bounding models for a given channel yields bounds on the accuracy of network capacity bounds derived using those models. By bounding the capacity of a network of independent noisy channels by the network coding capacity of a network of noiseless bit pipes, this approach represents one step towards the goal of building computational tools for bounding network capacities.

36 citations


Journal ArticleDOI
TL;DR: The concept of a bounding operation is introduced and a new definition of the rate of convergence for geometric branch-and-bound methods is proposed and justified by some numerical experiments using the Weber problem on the plane with some negative weights.
Abstract: Geometric branch-and-bound solution methods, in particular the big square small square technique and its many generalizations, are popular solution approaches for non-convex global optimization problems. Most of these approaches differ in the lower bounds they use which have been compared empirically in a few studies. The aim of this paper is to introduce a general convergence theory which allows theoretical results about the different bounds used. To this end we introduce the concept of a bounding operation and propose a new definition of the rate of convergence for geometric branch-and-bound methods. We discuss the rate of convergence for some well-known bounding operations as well as for a new general bounding operation with an arbitrary rate of convergence. This comparison is done from a theoretical point of view. The results we present are justified by some numerical experiments using the Weber problem on the plane with some negative weights.

31 citations


Journal ArticleDOI
TL;DR: This paper modifies Jane and Laih's (2008) exact and direct algorithm to provide sequences of upper bounds and lower bounds that converge to the NP-hard multi-state two-terminal reliability.

21 citations


Patent
09 Feb 2010
TL;DR: In this article, a caption detection system was proposed, where all detected caption boxes over time for one caption area are identical, thereby reducing temporal instability and inconsistency, by grouping candidate pixels in the 3D spatio-temporal space and generating a 3D bounding box for each caption area.
Abstract: A caption detection system wherein all detected caption boxes over time for one caption area are identical, thereby reducing temporal instability and inconsistency. This is achieved by grouping candidate pixels in the 3D spatiotemporal space and generating a 3D bounding box for one caption area. 2D bounding boxes are obtained by slicing the 3D bounding boxes, thereby reducing temporal instability as all 2D bounding boxes corresponding to a caption area are sliced from one 3D bounding box and are therefore identical over time.

19 citations
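The slicing idea in the caption-detection patent can be illustrated with a small sketch: group candidate pixels in (x, y, t) space, take a single 3D axis-aligned bounding box, and obtain each frame's 2D caption box by slicing it. All names below are illustrative; this is a simplified model of the technique, not the patent's implementation.

```python
# One 3D bounding box over the spatio-temporal pixel group guarantees that
# every 2D slice (per-frame caption box) is identical over time.

def bounding_box_3d(pixels):
    """pixels: iterable of (x, y, t) candidate caption pixels."""
    xs, ys, ts = zip(*pixels)
    return (min(xs), max(xs)), (min(ys), max(ys)), (min(ts), max(ts))

def slice_2d(box3d, t):
    """2D caption box at frame t; the same box for every t inside the span."""
    (x0, x1), (y0, y1), (t0, t1) = box3d
    if not (t0 <= t <= t1):
        return None  # caption not present in this frame
    return (x0, x1), (y0, y1)
```

Because every frame's box is cut from the same 3D box, temporal jitter in the detected caption area disappears by construction.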


Proceedings Article
11 Jul 2010
TL;DR: A new class of partitioning heuristics from first-principles geared for likelihood queries are derived, demonstrating their impact on a number of benchmarks for probabilistic reasoning and showing that the results are competitive (often superior) to state-of-the-art bounding schemes.
Abstract: Mini-Bucket Elimination (MBE) is a well-known approximation algorithm deriving lower and upper bounds on quantities of interest over graphical models. It relies on a procedure that partitions a set of functions, called bucket, into smaller subsets, called mini-buckets. The method has been used with a single partitioning heuristic throughout, so the impact of the partitioning algorithm on the quality of the generated bound has never been investigated. This paper addresses this issue by presenting a framework within which partitioning strategies can be described, analyzed and compared. We derive a new class of partitioning heuristics from first-principles geared for likelihood queries, demonstrate their impact on a number of benchmarks for probabilistic reasoning and show that the results are competitive (often superior) to state-of-the-art bounding schemes.

18 citations
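The partitioning step that the MBE paper studies can be sketched as follows. This shows the classic greedy scope-based heuristic (pack functions into mini-buckets whose combined scope stays within an i-bound), which is the single heuristic the abstract says has been used throughout; it is not the paper's new likelihood-geared heuristics. Function and variable names are illustrative.

```python
# Greedy mini-bucket partitioning: each function is represented by its scope
# (a set of variable names); functions are packed into the first mini-bucket
# whose union scope stays within the i-bound, else a new mini-bucket opens.

def partition_bucket(scopes, i_bound):
    """Partition a list of function scopes into mini-buckets whose union
    scope has at most i_bound variables. Returns [ [union_scope, indices] ]."""
    mini_buckets = []
    for idx, scope in enumerate(scopes):
        for mb in mini_buckets:
            if len(mb[0] | scope) <= i_bound:
                mb[0] |= scope
                mb[1].append(idx)
                break
        else:
            mini_buckets.append([set(scope), [idx]])
    return mini_buckets
```

Different packing orders and tie-breaking rules yield different partitions, which is precisely the degree of freedom whose impact on bound quality the paper investigates.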


Proceedings ArticleDOI
25 Jun 2010
TL;DR: This paper presents a new approach to conservative bounding of displaced Bézier patches that combines efficient normal bounding techniques, min-max mipmap hierarchies and oriented bounding boxes, which results in substantially faster convergence for the bounding volumes of displaced surfaces.
Abstract: In this paper, we present a new approach to conservative bounding of displaced Bezier patches. These surfaces are expected to be a common use case for tessellation in interactive and real-time rendering. Our algorithm combines efficient normal bounding techniques, min-max mipmap hierarchies and oriented bounding boxes. This results in substantially faster convergence for the bounding volumes of displaced surfaces, prior to tessellation and displacement shading. Our work can be used for different types of culling, ray tracing, and to sort higher order primitives in tiling architectures. For our hull shader implementation, we report performance benefits even for moderate tessellation rates.

18 citations


Posted Content
TL;DR: This work derives general conditions under which a network of noiseless bit pipes is an upper or lower bounding model for a multiterminal channel and bounds on the accuracy of network capacity bounds derived using those models are shown.
Abstract: A family of equivalence tools for bounding network capacities is introduced. Part I treats networks of point-to-point channels. The main result is roughly as follows. Given a network of noisy, independent, memoryless point-to-point channels, a collection of communication demands can be met on the given network if and only if it can be met on another network where each noisy channel is replaced by a noiseless bit pipe with throughput equal to the noisy channel capacity. This result was known previously for the case of a single-source multicast demand. The result given here treats general demands -- including, for example, multiple unicast demands -- and applies even when the achievable rate region for the corresponding demands is unknown in the noiseless network. In part II, definitions of upper and lower bounding channel models for general channels are introduced. By these definitions, a collection of communication demands can be met on a network of independent channels if it can be met on a network where each channel is replaced by its lower bounding model and only if it can be met on a network where each channel is replaced by its upper bounding model. This work derives general conditions under which a network of noiseless bit pipes is an upper or lower bounding model for a multiterminal channel. Example upper and lower bounding models for broadcast, multiple access, and interference channels are given. It is then shown that bounding the difference between the upper and lower bounding models for a given channel yields bounds on the accuracy of network capacity bounds derived using those models. By bounding the capacity of a network of independent noisy channels by the network coding capacity of a network of noiseless bit pipes, this approach represents one step towards the goal of building computational tools for bounding network capacities.

16 citations


Posted Content
TL;DR: This paper formulates a new trace property called Secure Distance Bounding (SDB) that protocol executions must satisfy, and classifies the scenarios in which these protocols can operate, considering the (dis)honesty of nodes and the location of the attacker in the network.
Abstract: Distance bounding protocols are used by nodes in wireless networks to calculate upper bounds on their distances to other nodes. However, dishonest nodes in the network can turn the calculations both illegitimate and inaccurate when they participate in protocol executions. It is important to analyze protocols for the possibility of such violations. Past efforts to analyze distance bounding protocols have only been manual. However, automated approaches are important since they are quite likely to find flaws that manual approaches cannot, as witnessed in literature for analysis pertaining to key establishment protocols. In this paper, we use the constraint solver tool to automatically analyze distance bounding protocols. We first formulate a new trace property called Secure Distance Bounding (SDB) that protocol executions must satisfy. We then classify the scenarios in which these protocols can operate considering the (dis)honesty of nodes and location of the attacker in the network. Finally, we extend the constraint solver so that it can be used to test protocols for violations of SDB in these scenarios and illustrate our technique on some published protocols.

16 citations


Journal ArticleDOI
TL;DR: This article develops a practical method for computing the peak outputs of linear time-invariant systems for a class of possible sets characterised with many bounding conditions on the two- and/or the infinity-norms of the inputs and their slopes.
Abstract: The evaluation of peak outputs is an essential component of control systems design in which the outputs are required to remain within their prescribed bounds in the presence of all possible inputs. Characterising a possible set with many bounding conditions can reduce conservatism, thereby yielding a better design. However, this gives rise to difficulty in computing the peak outputs using analytical techniques. This article develops a practical method for computing the peak outputs of linear time-invariant systems for a class of possible sets characterised with many bounding conditions on the two- and/or the infinity-norms of the inputs and their slopes. The original infinite-dimensional convex optimisation problem is approximated as a large-scale convex programme defined in a Euclidean space with sparse matrices, which can be solved efficiently in practice. A case study of control design for a building subject to seismic disturbances is reinvestigated, where three bounding conditions are used. The numeri...

15 citations


Book ChapterDOI
20 Sep 2010
TL;DR: It is proved that the bounding problem is not a k-safety property for any k (even when q is fixed, for the Shannon-entropy-based definition with the uniform distribution), and therefore is not amenable to the self-composition technique that has been successfully applied to checking non-interference.
Abstract: Researchers have proposed formal definitions of quantitative information flow based on information theoretic notions such as the Shannon entropy, the min entropy, the guessing entropy, and channel capacity. This paper investigates the hardness of precisely checking the quantitative information flow of a program according to such definitions. More precisely, we study the "bounding problem" of quantitative information flow, defined as follows: Given a program M and a positive real number q, decide if the quantitative information flow of M is less than or equal to q. We prove that the bounding problem is not a k-safety property for any k (even when q is fixed, for the Shannon-entropy-based definition with the uniform distribution), and therefore is not amenable to the self-composition technique that has been successfully applied to checking non-interference. We also prove complexity theoretic hardness results for the case when the program is restricted to loop-free boolean programs. Specifically, we show that the problem is PP-hard for all the definitions, showing a gap with non-interference which is coNP-complete for the same class of programs. The paper also compares the results with the recently proved results on the comparison problems of quantitative information flow.

Patent
07 Apr 2010
TL;DR: In this paper, an efficient normal bounding technique was used, together with min-max mipmap hierarchies and oriented bounding boxes, to provide substantially faster convergence for the bounding volumes of the displaced surface.
Abstract: Hierarchical bounding of displaced parametric surfaces may be a very common use case for tessellation in interactive and real-time rendering. An efficient normal bounding technique may be used, together with min-max mipmap hierarchies and oriented bounding boxes. This provides substantially faster convergence for the bounding volumes of the displaced surface, without tessellating and displacing the surface in some embodiments. This bounding technique can be used for different types of culling, ray tracing, and to sort higher order primitives in tiling architectures.

Journal ArticleDOI
TL;DR: An improved method of candidate attainable region (AR) construction, based on an existing bounding hyperplanes approach, is presented in this paper, which uses a plane rotation about existing extreme points.
Abstract: An improved method of candidate attainable region (AR) construction, based on an existing bounding hyperplanes approach, is presented. The method uses a plane rotation about existing extreme points...

Journal ArticleDOI
TL;DR: A culling algorithm is presented that generalizes the Separating Axis Theorem to non-parallel axes, based on the well-known concept of support planes, demonstrating high culling efficiency and, in application, a significant improvement in timing performance with different types of bounding volumes and support plane mappings for rigid body simulations.
Abstract: In this paper we present a new method for improving the performance of the widely used bounding volume hierarchies for collision detection. The major contribution of our work is a culling algorithm that serves as a generalization of the Separating Axis Theorem for non-parallel axes, based on the well-known concept of support planes. We also provide a rigorous definition of support plane mappings and implementation details regarding the application of the proposed method to commonly used bounding volumes. The paper describes the theoretical foundation and an overall evaluation of the proposed algorithm, demonstrating its high culling efficiency and, in application, a significant improvement in timing performance with different types of bounding volumes and support plane mappings for rigid body simulations.
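The core test that this paper generalizes can be sketched in a few lines: two convex point sets are disjoint if their projections onto some axis do not overlap. The sketch below shows only the classical per-axis version; the paper's contribution is extending it to non-parallel axes via support planes. Names are illustrative.

```python
# Separating Axis Theorem primitive: project both convex shapes onto a
# candidate axis and check whether the projection intervals are disjoint.

def project(points, axis):
    """Interval (min, max) of the shape's projection onto the axis."""
    dots = [sum(p * a for p, a in zip(pt, axis)) for pt in points]
    return min(dots), max(dots)

def separated_on_axis(shape_a, shape_b, axis):
    """True if the axis separates the two convex point sets."""
    a_lo, a_hi = project(shape_a, axis)
    b_lo, b_hi = project(shape_b, axis)
    return a_hi < b_lo or b_hi < a_lo
```

Finding one separating axis suffices to cull a pair of bounding volumes, which is why cheap axis tests dominate BVH traversal cost.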


Proceedings ArticleDOI
14 Oct 2010
TL;DR: A novel Bayesian estimator for the minimum bounding axis-aligned rectangle of a point set based on noisy measurements is derived and is applied to the problem of group target and extended object tracking.
Abstract: In this paper, a novel Bayesian estimator for the minimum bounding axis-aligned rectangle of a point set based on noisy measurements is derived. Each given measurement stems from an unknown point and is corrupted with additive Gaussian noise. Extreme value theory is applied in order to derive a linear measurement equation for the problem. The new estimator is applied to the problem of group target and extended object tracking. Instead of estimating each single group member or point feature explicitly, the basic idea is to track a summarizing shape, namely the minimum bounding rectangle, of the group. Simulation results demonstrate the feasibility of the estimator.
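For contrast with the Bayesian estimator above, the naive minimum bounding rectangle of a point set is simply the componentwise min and max. With noisy measurements this estimator is biased (noise pushes extremes outward), which is exactly what motivates the paper's extreme-value-theory treatment. A minimal baseline sketch, with illustrative names:

```python
# Naive axis-aligned minimum bounding rectangle of a 2D point set.
# Under measurement noise the min/max statistics overestimate the true
# rectangle, hence the need for a noise-aware (Bayesian) estimator.

def min_bounding_rect(points):
    """Return ((x_min, y_min), (x_max, y_max)) for a list of (x, y) points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys)), (max(xs), max(ys))
```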

Journal ArticleDOI
TL;DR: The paper presents a scheme for computing lower and upper bounds on the posterior marginals in Bayesian networks with discrete variables that uses the cutset conditioning principle to tighten existing bounding schemes and to facilitate anytime behavior, utilizing a fixed number of cutset tuples.
Abstract: The paper presents a scheme for computing lower and upper bounds on the posterior marginals in Bayesian networks with discrete variables. Its power lies in its ability to use any available scheme that bounds the probability of evidence or posterior marginals and enhance its performance in an anytime manner. The scheme uses the cutset conditioning principle to tighten existing bounding schemes and to facilitate anytime behavior, utilizing a fixed number of cutset tuples. The accuracy of the bounds improves as the number of used cutset tuples increases and so does the computation time. We demonstrate empirically the value of our scheme for bounding posterior marginals and probability of evidence using a variant of the bound propagation algorithm as a plug-in scheme.

Journal Article
TL;DR: The treatment of Fack and McKay (2007) is extended to allow the graph of sets and relations to be an arbitrary directed graph and a special case that frequently occurs in bounding tails of distributions is analysed in detail.
Abstract: The method of switchings is a standard tool for enumerative and probabilistic applications in combinatorics. In its simplest form, it analyses a relation between two sets to estimate the ratio of their sizes. In a more complicated setting, there is a family of sets connected by some relations. By bounding properties of the relations, bounds can be inferred on the relative sizes of the sets. In this paper we extend the treatment of Fack and McKay (2007) to allow the graph of sets and relations to be an arbitrary directed graph. A special case that frequently occurs in bounding tails of distributions is analysed in detail.

Book ChapterDOI
16 Jun 2010
TL;DR: An algorithm which is capable of globally solving a well-constrained transcendental system over some sub-domain D, isolating all roots, is presented, which is demonstrated on curve-curve and curve-surface intersection problems.
Abstract: We present an algorithm which is capable of globally solving a well-constrained transcendental system over some sub-domain $D\subset \mathbb R^n$, isolating all roots. Such a system consists of n unknowns and n regular functions, where each may contain non-algebraic (transcendental) functions like sin, $\exp$ or log. Every equation is considered as a hyper-surface in $\mathbb R^n$ and thus a bounding cone of its normal field can be defined over a small enough sub-domain of D. A simple test that checks the mutual configuration of these bounding cones is used that, if satisfied, guarantees at most one zero exists within the given domain. Numerical methods are then used to trace the zero. If the test fails, the domain is subdivided. Every equation is handled as an expression tree, with polynomial functions at the leaves, prescribing the domain. The tree is processed from its leaves, for which simple bounding cones are constructed, to its root, which allows a final bounding cone of the normal field of the whole expression to be built efficiently. The algorithm is demonstrated on curve-curve and curve-surface intersection problems.

Proceedings ArticleDOI
10 Feb 2010
TL;DR: A discrete collision detection algorithm to detect self-collisions between deformable objects is presented; it is built using a Bounding Volume Hierarchy (BVH) and a feature-based method.
Abstract: A discrete collision detection algorithm to detect self-collisions between deformable objects is presented; it is built using a Bounding Volume Hierarchy (BVH) and a feature-based method. The deformations are represented by the features of the mesh, which are within the bounding volumes, and consequently the updating time for the BVH is reduced. The algorithm compares the minimum bounded geometry, the 1-ring, with the other spheres of the hierarchy in order to cull away bounding volumes (BVs) that are far apart. The 3D objects utilised are surface-based and are deformed by warping, control points of splines, and a mass-spring model.
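The culling primitive behind a sphere-based BVH like the one above is a single distance test: two bounding spheres can be discarded as a non-colliding pair when the distance between their centres exceeds the sum of their radii. A minimal sketch (names are illustrative):

```python
# Sphere-sphere overlap test used to cull far-apart bounding volumes.
# Comparing squared distance against the squared radius sum avoids a sqrt.

def spheres_overlap(c1, r1, c2, r2):
    """True if the spheres (centre, radius) intersect or touch."""
    d2 = sum((a - b) ** 2 for a, b in zip(c1, c2))
    return d2 <= (r1 + r2) ** 2
```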

Book ChapterDOI
01 Jan 2010
TL;DR: This chapter develops a powerful bounding method for linear multistage stochastic programs with a generalized nonconvex dependence on the random parameters and establishes bounds on the recourse functions as well as compact bounding sets for the optimal decisions.
Abstract: The design and analysis of efficient approximation schemes are of fundamental importance in stochastic programming research Bounding approximations are particularly popular for providing strict error bounds that can be made small by using partitioning techniques In this chapter we develop a powerful bounding method for linear multistage stochastic programs with a generalized nonconvex dependence on the random parameters Thereby, we establish bounds on the recourse functions as well as compact bounding sets for the optimal decisions We further demonstrate that our bounding methods facilitate the reliable solution of important real-life decision problems To this end, we solve a stochastic optimization model for the management of nonmaturing accounts and compare the bounds on maximum profit obtained with different partitioning strategies

Book ChapterDOI
11 Jul 2010
TL;DR: In this paper, the authors used non-uniform information about variables to make a more precise tuning, resulting in a slight improvement on upper bounding the 3-SAT threshold for various models of formulas.
Abstract: We give a new insight into the upper bounding of the 3-SAT threshold by the first moment method. The best criteria developed so far to select the solutions to be counted discriminate among neighboring solutions on the basis of uniform information about each individual free variable. What we mean by uniform information, is information which does not depend on the solution: e.g. the number of positive/negative occurrences of the considered variable. What is new in our approach is that we use non uniform information about variables. Thus we are able to make a more precise tuning, resulting in a slight improvement on upper bounding the 3-SAT threshold for various models of formulas defined by their distributions.

Posted Content
TL;DR: A new insight is given into the upper bounding of the 3-SAT threshold by the first moment method, which uses non-uniform information about variables to make a more precise tuning, resulting in a slight improvement on upper bounding the 3-SAT threshold for various models of formulas defined by their distributions.
Abstract: We give a new insight into the upper bounding of the 3-SAT threshold by the first moment method. The best criteria developed so far to select the solutions to be counted discriminate among neighboring solutions on the basis of uniform information about each individual free variable. What we mean by uniform information, is information which does not depend on the solution: e.g. the number of positive/negative occurrences of the considered variable. What is new in our approach is that we use non uniform information about variables. Thus we are able to make a more precise tuning, resulting in a slight improvement on upper bounding the 3-SAT threshold for various models of formulas defined by their distributions.

Journal ArticleDOI
TL;DR: This article studies the weak frequency interval detection problem: finding the smallest interval that includes all possible frequencies of an unknown parameter vector. The problem is solved via a finite number of eigenvalue problems associated with the vertices of a polyhedron.

Journal Article
TL;DR: A new two-stage collision detection technique is proposed that combines the advantages of bounding volume hierarchies and ray tracing, and achieves better accuracy and efficiency in point collision detection.
Abstract: To detect collisions between objects in a virtual scene in real time, a new two-stage collision detection technique was proposed by combining the advantages of bounding volume hierarchies and ray-tracing algorithms. Non-intersecting bounding volumes are quickly pruned out with the bounding volume hierarchy method in a pre-processing stage, and the pre-processed results are passed to a subsequent precise detection model. The ray-tracing method is used to quickly search for collision points in one-dimensional space and to return the parameters necessary for collision response, such as point distances and surface normal vectors. To avoid redundant computation, the input data for both the pre-processing stage and the accurate collision detection stage are stored in the same data structure, i.e. an octree. Simulation experiments demonstrate that the new two-stage technique achieves better accuracy and efficiency in point collision detection.

Proceedings ArticleDOI
10 Dec 2010
TL;DR: The analysis shows that the simplicity of a bounding box is in tension with how tightly it wraps the enclosed object.
Abstract: Collision detection algorithms based on bounding boxes are an important class of collision detection methods. The principles and effectiveness of the axis-aligned bounding box (AABB), oriented bounding box (OBB), and discrete orientation polytope (k-DOP) methods are discussed in detail. The analysis shows that the simplicity of a bounding box is in tension with how tightly it wraps the enclosed object. These results are useful for designing virtual scenes.
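The AABB end of the simplicity/tightness trade-off discussed above is easy to see in code: the AABB overlap test is one interval comparison per axis, whereas OBB and k-DOP tests are costlier but wrap objects more tightly. A minimal sketch with illustrative names:

```python
# Axis-aligned bounding box overlap: boxes intersect iff their intervals
# overlap on every axis. This cheapness is the AABB's main appeal; its cost
# is a looser fit around rotated or elongated objects.

def aabb_overlap(min_a, max_a, min_b, max_b):
    """True if the boxes [min_a, max_a] and [min_b, max_b] intersect."""
    return all(lo_a <= hi_b and lo_b <= hi_a
               for lo_a, hi_a, lo_b, hi_b in zip(min_a, max_a, min_b, max_b))
```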

Book ChapterDOI
15 Mar 2010
TL;DR: Stochastic comparisons of multidimensional Markov processes for which quantitative analysis could be intractable if there is no specific solution form are studied, with an intuitive event based formalism.
Abstract: Quality of performance measure bounds is crucial for the accurate dimensioning of computer network resources. We study stochastic comparisons of multidimensional Markov processes for which quantitative analysis could be intractable if there is no specific solution form. On a partially ordered state space, different stochastic orderings can be defined, such as the strong ordering or the less constrained weak ordering. The goal of the present paper is to compare these two orderings with respect to the quality of the derived bounds. We propose to study a system similar to a Jackson network except that queues have finite capacity. Different bounding systems are built either in the sense of the strong ordering with hard constraints, or in the sense of the weak ordering with weaker ones. The proofs of the stochastic comparisons are done using the coupling and the increasing set methods, with an intuitive event-based formalism. The qualities of the bounding systems are compared with regard to blocking probabilities.

Book ChapterDOI
21 Sep 2010
TL;DR: Comparison with state-of-the-art methods shows that the proposed model is faster than most modern approaches, while the results are qualitatively as precise as theirs.
Abstract: In this article, a complete visual hull model is introduced. The proposed model is based on the bounding edge representation, which is one of the fastest visual hull models. However, the bounding edge model has fundamental drawbacks which make it inapplicable in some environments. The proposed model produces a refined result representing a complete triangular mesh surface of the visual hull. Further, comparison with state-of-the-art methods shows that the proposed model is faster than most modern approaches, while the results are qualitatively as precise as theirs. Of interest is that the proposed model can be computed in parallel, distributed over the camera network, with no bandwidth penalty for the network. Consequently, the execution time decreases dramatically with the number of camera nodes.

Proceedings ArticleDOI
23 Dec 2010
TL;DR: Volume bounding box decomposition, which approximates non-convex sets by a number of bounding boxes, is applied to analyze trajectories of the surgeon's wrist and to recognize different hand gestures.
Abstract: This paper explores the possibility of applying volume bounding box decomposition to the analysis of a surgeon's hand movements and gesture recognition during laparoscopic surgery. Volume bounding box decomposition approximates non-convex sets by a number of bounding boxes, which yields a fast and easy way to verify whether a given point belongs to a certain set. This feature is applied to analyze trajectories of the surgeon's wrist and to recognize different hand gestures.
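The membership test that makes this decomposition fast can be sketched directly: a non-convex region is stored as a list of axis-aligned boxes, and a wrist position belongs to the region iff it falls inside any box. Names below are illustrative, not from the paper.

```python
# Point-membership test against a bounding-box decomposition of a
# non-convex set: linear in the number of boxes, constant time per box.

def in_box(point, lo, hi):
    """True if the point lies inside the axis-aligned box [lo, hi]."""
    return all(l <= p <= h for p, l, h in zip(point, lo, hi))

def in_decomposition(point, boxes):
    """True if the point lies in any box of the decomposition."""
    return any(in_box(point, lo, hi) for lo, hi in boxes)
```

Classifying a trajectory then reduces to checking which gesture's box set each wrist sample falls into.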

Posted Content
TL;DR: In this article, a hierarchy of generalized eigenvalue problems associated with some Hankel matrices is used to compute the smallest box that contains the support of a measure on Rn.
Abstract: Given all moments of the marginals of a measure on Rn, one provides (a) explicit bounds on its support and (b), a numerical scheme to compute the smallest box that contains the support. It consists of solving a hierarchy of generalized eigenvalue problems associated with some Hankel matrices.