
Showing papers on "Bounding overwatch published in 2005"


Proceedings ArticleDOI
12 Dec 2005
TL;DR: In this paper, the estimated domains are represented by zonotopes, a particular polytope defined as the linear image of a unit interval vector (i.e. unit hypercube).
Abstract: A state bounding observer aims at computing domains that are guaranteed to contain the set of states consistent with both the uncertain model and the uncertain measurements. In this paper, the estimated domains are represented by zonotopes. A zonotope is a particular polytope defined as the linear image of a unit interval vector (i.e., a unit hypercube). Results on the validated integration of ordinary differential equations are used to guarantee the inclusion of sampling errors. The main loop of the observation algorithm consists of a one-step prediction with a limitation of the domain complexity, followed by a correction using the measurements. The observer is applied to a Lotka-Volterra predator-prey model.
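As an illustration of the representation (a minimal sketch, not the paper's observer; the [-1, 1]^m generator convention and the interval-hull computation are standard but assumed here):

```python
import numpy as np

# Minimal zonotope sketch: Z = {c + G @ u : u in [-1, 1]^m}, i.e. the
# linear image of a unit hypercube. Not the paper's observer loop.
class Zonotope:
    def __init__(self, center, generators):
        self.c = np.asarray(center, dtype=float)      # center, shape (n,)
        self.G = np.asarray(generators, dtype=float)  # generators, shape (n, m)

    def linear_map(self, A):
        """Exact image of the zonotope under x -> A @ x."""
        return Zonotope(A @ self.c, A @ self.G)

    def interval_hull(self):
        """Tightest axis-aligned box containing the zonotope."""
        r = np.abs(self.G).sum(axis=1)                # per-axis radius
        return self.c - r, self.c + r

# Example: a 2-D zonotope with three generators.
Z = Zonotope([1.0, 0.0], [[1.0, 0.5, 0.0], [0.0, 0.5, 1.0]])
lo, hi = Z.interval_hull()   # lo = [-0.5, -1.5], hi = [2.5, 1.5]
```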

137 citations


Journal ArticleDOI
TL;DR: It is shown how various stability assumptions can be employed for bounding the bias and variance of estimators of the expected error, and an extension of the bounded-difference inequality for "almost always" stable algorithms is proved.
Abstract: The problem of proving generalization bounds for the performance of learning algorithms can be formulated as a problem of bounding the bias and variance of estimators of the expected error. We show how various stability assumptions can be employed for this purpose. We provide a necessary and sufficient stability condition for bounding the bias and variance for the Empirical Risk Minimization algorithm, and various sufficient conditions for bounding bias and variance of estimators for general algorithms. We discuss settings in which it is possible to obtain exponential bounds, and we prove an extension of the bounded-difference inequality for "almost always" stable algorithms.
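For reference, the classical bounded-difference (McDiarmid) inequality that the paper extends reads as follows (the standard statement for independent variables; the paper's "almost always" variant is not reproduced here): if changing any single coordinate changes $f$ by at most $c_i$, then for independent $X_1,\dots,X_n$,

\[
\Pr\bigl( f(X_1,\dots,X_n) - \mathbb{E}\, f(X_1,\dots,X_n) \ge t \bigr) \le \exp\!\Bigl( -\tfrac{2 t^2}{\sum_{i=1}^{n} c_i^2} \Bigr).
\]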

82 citations


Journal ArticleDOI
TL;DR: The Nonlinear Set Membership (NSM) method, recently proposed by the authors, is taken, assuming that the nonlinear regression function, representing the difference between the system to be identified and a linear approximation, has gradient norm bounded by a constant γ.
Abstract: In this note, the problem of the quality of identified models of nonlinear systems, measured by the errors in simulating the system behavior for future inputs, is investigated. Models identified by classical methods that minimize the prediction error do not necessarily give a "small" simulation error on future inputs, and even boundedness of this error is not guaranteed. In order to investigate the simulation error boundedness (SEB) property of identified models, a Nonlinear Set Membership (NSM) method recently proposed by the authors is taken, assuming that the nonlinear regression function, representing the difference between the system to be identified and a linear approximation, has gradient norm bounded by a constant γ. Moreover, the noise sequence is assumed unknown but bounded by a constant ε. The NSM method yields validation conditions, useful to derive "validated regions" within which to suitably choose the bounding constants γ and ε. Moreover, the method allows an "optimal" estimate of the true system to be derived. If the chosen linear approximation is asymptotically stable (a necessary condition for the SEB property), a sufficient condition on γ is derived in the present note, guaranteeing that the identified optimal NSM model has the SEB property. If values of γ in the validated region exist that satisfy the sufficient condition, the previous results can be used to give guidelines for choosing the bounding constants γ and ε, additional to the ones required for assumption validation and useful for obtaining models with "low" simulation errors. A numerical example, representing a mass-spring-damper system with a nonlinear damper and input saturation, demonstrates the effectiveness of the presented approach.
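The flavor of the resulting set-membership bounds can be sketched as follows (a hedged illustration in the Milanese-Novara style; the paper's validation machinery for choosing γ and ε is not reproduced):

```python
import numpy as np

# Set-membership interval bounds on an unknown gamma-Lipschitz function,
# given samples with eps-bounded noise. Illustration only; gamma and eps
# are assumed given rather than validated as in the paper.
def nsm_bounds(x, X, y, gamma, eps):
    """X: (m, n) sample inputs, y: (m,) noisy outputs, x: (n,) query."""
    d = np.linalg.norm(X - x, axis=1)        # distances to the data points
    upper = np.min(y + eps + gamma * d)      # each sample caps f from above
    lower = np.max(y - eps - gamma * d)      # ... and supports it from below
    return lower, upper, 0.5 * (lower + upper)   # interval and central estimate
```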

28 citations


Journal ArticleDOI
TL;DR: This paper presents a roadmap for a few strategies that provide optimal or near-optimal (time-wise) solutions to the problem of computing the minimum-angle bounding cone of a set of three-dimensional vectors, which are also simple to implement.
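A naive baseline (not one of the paper's optimal strategies) illustrates the problem: take the mean direction as the cone axis and the largest angular deviation as the half-angle; the minimum-angle cone can only be tighter.

```python
import numpy as np

# Naive bounding cone: mean direction as axis, maximum angular
# deviation as half-angle. A baseline, not the paper's algorithm.
def bounding_cone(vectors):
    V = np.asarray(vectors, dtype=float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)  # normalize inputs
    axis = V.mean(axis=0)
    axis = axis / np.linalg.norm(axis)                # assumes a nonzero mean
    half_angle = np.max(np.arccos(np.clip(V @ axis, -1.0, 1.0)))
    return axis, half_angle
```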

26 citations


Journal ArticleDOI
TL;DR: It is shown that the problem of minimizing the sum of arbitrary-norm real distances to misclassified points, from a pair of parallel bounding planes of a classification problem, leads to a simple parameterless linear program.
Abstract: We show that the problem of minimizing the sum of arbitrary-norm real distances to misclassified points, from a pair of parallel bounding planes of a classification problem, divided by the margin (distance) between the two bounding planes, leads to a simple parameterless linear program. This constitutes a linear support vector machine (SVM) that simultaneously minimizes empirical error of misclassified points while maximizing the margin between the bounding planes. Nonlinear kernel SVMs can be similarly represented by a parameterless linear program in a typically higher dimensional feature space. Data Mining Institute Technical Report 03-01, March 2003. Revised March 2004.
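The shape of such a formulation can be sketched as a generic 1-norm soft-margin LP (with an explicit trade-off parameter C; the paper's contribution is precisely to remove that parameter, so this is an illustration of the LP-SVM idea rather than the paper's program):

```python
import numpy as np
from scipy.optimize import linprog

# Generic 1-norm soft-margin SVM as a linear program: minimize
# ||w||_1 + C * sum(xi) subject to y_i * (w . x_i - gamma) + xi_i >= 1.
def lp_svm(X, y, C=1.0):
    m, n = X.shape
    # variables: [p (n), q (n), g_pos, g_neg, xi (m)], all >= 0,
    # with w = p - q and plane offset gamma = g_pos - g_neg
    c = np.concatenate([np.ones(2 * n), [0.0, 0.0], C * np.ones(m)])
    A = np.hstack([-y[:, None] * X, y[:, None] * X,
                   y[:, None], -y[:, None], -np.eye(m)])
    res = linprog(c, A_ub=A, b_ub=-np.ones(m), bounds=(0, None))
    w = res.x[:n] - res.x[n:2 * n]
    gamma = res.x[2 * n] - res.x[2 * n + 1]
    return w, gamma   # classify a new point as sign(w . x - gamma)
```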

24 citations


Proceedings ArticleDOI
12 May 2005
TL;DR: The results show that the method is efficient and works well even for large objects; GI-COLLIDE is also compared with the most important collision detection techniques.
Abstract: A new collision detection algorithm, GI-COLLIDE, is presented. The algorithm works with geometry images and encodes bounding spheres as a perfectly balanced, mip-map-like hierarchical data structure. The largest geometry image in the hierarchy stores the bounding spheres of the quadruples of vertices from the input geometry image. The center of each sphere is calculated as the center of the corresponding min-max AABB and stored as the pixel's RGB value; the alpha value stores the sphere radius. In this way each level represents the bounding spheres of the previous level, and the topmost level of the hierarchy is the bounding sphere of the entire object. Collisions between objects are detected by standard tree traversal, checking for overlaps among the bounding spheres; the exact collision is detected by a triangle-triangle test at the lowest level. The bounding sphere coding is implicit: a sphere at any level can be found efficiently simply by indexing the geometry image. Once objects are represented as geometry images, collision detection can be performed efficiently using this representation directly, with no need for any other. The results show that the method is efficient and works well even for large objects. We have compared GI-COLLIDE with the most important collision detection techniques.
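The traversal the abstract describes looks, in outline, like the generic sphere-tree test below (here with an explicit tree; GI-COLLIDE's spheres are instead found implicitly by indexing the geometry image):

```python
import numpy as np

# Generic sphere-hierarchy collision test. The tri_test callback stands
# in for the exact triangle-triangle test at the lowest level.
class SphereNode:
    def __init__(self, center, radius, children=(), triangles=()):
        self.center = np.asarray(center, dtype=float)
        self.radius = radius
        self.children = children      # child SphereNodes (empty at a leaf)
        self.triangles = triangles    # leaf geometry

def spheres_overlap(a, b):
    return np.linalg.norm(a.center - b.center) <= a.radius + b.radius

def collide(a, b, tri_test):
    if not spheres_overlap(a, b):
        return False
    if not a.children and not b.children:   # both leaves: exact test
        return any(tri_test(s, t) for s in a.triangles for t in b.triangles)
    # descend into the node with the larger sphere to balance the recursion
    if a.children and (not b.children or a.radius >= b.radius):
        return any(collide(c, b, tri_test) for c in a.children)
    return any(collide(a, c, tri_test) for c in b.children)
```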

18 citations


01 Jan 2005
TL;DR: This paper addresses the problem of intersection detection between pairs of bounding boxes in three-dimensional space and asks how close one can get to an ideal algorithm having O(n + k) running time and O(n) space demands.
Abstract: In this paper we address the problem of intersection detection between pairs of bounding boxes in three-dimensional space. Our motivation for investigating this problem comes from the need for an efficient collision detection framework applicable in the context of large-scale contact analysis. Setting aside the underlying physics of the mechanical contact problem, we focus on a purely geometrical setting. Moreover, we simplify the geometrical entities further by assuming that the objects we are dealing with are axis-aligned bounding boxes undergoing an arbitrary combination of simultaneous rigid motion, shrinking, and expansion along specific directions. This simplification, though it may seem restrictive, allows us not to rely on object representation details and at the same time gain insight into the combinatorial nature of the problem. This in turn helps to situate the collision detection pipeline among the other computational tasks in contact mechanics. Assuming we are dealing with n boxes, with k intersecting pairs among them, we try to address the question of how close we can get to an ideal algorithm having O(n + k) running time and O(n) space demands.
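The core primitive, and one standard way to approach the O(n + k) target, can be sketched as follows (a plain sweep-and-prune pass; it is not the paper's algorithm and its worst case is weaker than O(n + k)):

```python
# Axis-aligned box overlap plus a simple sweep along the x-axis.
def aabb_overlap(a, b):
    """a, b: (min, max) pairs of 3-tuples."""
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

def sweep_and_prune(boxes):
    """Report overlapping pairs, sorting boxes by their minimum x first."""
    order = sorted(range(len(boxes)), key=lambda i: boxes[i][0][0])
    active, pairs = [], []
    for i in order:
        # drop boxes whose x-interval ends before box i starts
        active = [j for j in active if boxes[j][1][0] >= boxes[i][0][0]]
        pairs += [(j, i) for j in active if aabb_overlap(boxes[j], boxes[i])]
        active.append(i)
    return pairs
```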

13 citations


DissertationDOI
01 Jan 2005
TL;DR: A new hierarchical spatial access method called the nDR-tree is proposed that preserves all spatial relationships between all objects in n-dimensional space and shows a polynomial worst-case or better running time for all algorithms.
Abstract: A new hierarchical spatial access method called the nDR-tree is proposed that preserves all spatial relationships between all objects in n-dimensional space. The nDR-tree fits the existing data by using nodes of the same dimensionality. The two-dimensional version, the 2DR-tree, is presented. The 2DR-tree uses two-dimensional nodes to index two-dimensional data. The minimum bounding rectangles in each node are organized according to a "validity rule" that preserves spatial relationships. This provides support both for binary searching that takes advantage of spatial relationships and for greedy searching that reduces the number of minimum bounding rectangles within a node that must be tested. The insertion and deletion strategies both use a binary partition of a node to insert an object or update a non-leaf minimum bounding rectangle. A validity test ensures that each node involved in an insertion or deletion preserves the spatial relationships among its objects. Any node invalidity is handled by employing different splitting strategies. The binary search strategy performs a recursive binary partition of a node to find minimum bounding rectangles that overlap a search region. Both an analysis and a performance evaluation are presented. The analysis shows a polynomial worst-case or better running time for all algorithms. Experimental results for insertion show that the 2DR-tree is ideal for larger object sets with respect to tree height, and the average numbers of disk accesses and splits per insert are reasonable. In addition, it is ideal for a dynamic skewed data set, which achieves lower coverage, overcoverage, and overlap than a dynamic, uniformly distributed data set. Experimental results for binary search show that the 2DR-tree is ideal for executing region queries where the search region is between 5–10% of the search space. In addition, region search performance improves with the number of objects in the tree.
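The kind of query the tree accelerates can be sketched generically (a plain recursive region search over minimum bounding rectangles; the 2DR-tree's validity rule and binary node partition are not reproduced here, and the Node layout is an assumption for illustration):

```python
from collections import namedtuple

# Assumed node layout: entries is a list of (mbr, payload) pairs, where
# payload is a child Node internally or a data object at a leaf.
Node = namedtuple("Node", ["is_leaf", "entries"])

def mbr_overlaps(r, s):
    """r, s: (xmin, ymin, xmax, ymax)."""
    return r[0] <= s[2] and s[0] <= r[2] and r[1] <= s[3] and s[1] <= r[3]

def region_query(node, region, hits):
    for mbr, payload in node.entries:
        if mbr_overlaps(mbr, region):
            if node.is_leaf:
                hits.append(payload)            # payload is a data object
            else:
                region_query(payload, region, hits)
```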

11 citations


Proceedings ArticleDOI
12 Dec 2005
TL;DR: In this paper, an a posteriori bound on the error in the estimated parameter is constructed that is structurally dependent on the particular data sequence; the bound is applicable to the situation of approximate modelling (S ∉ M) and to model structures that are nonlinear in the parameters.
Abstract: In prediction error identification, model uncertainty bounds are generally derived from the statistical properties of the parameter estimator. These statistical properties reflect the variability in the estimated model under repetition of experiments with different realizations of the measured signals. However, when the primary interest of the identification is in quantifying the uncertainty in an estimated parameter on the basis of one single experiment, this is not necessarily the best or only approach. In the alternative paradigm presented here, the model uncertainty is not bounded by the covariance of the estimator; instead, an a posteriori bound on the error in the estimated parameter is constructed that is structurally dependent on the particular data sequence. This allows simpler computations for probabilistic model uncertainty bounds, applicable also to the situation of approximate modelling (S ∉ M) and to model structures that are nonlinear in the parameters, such as Output Error (OE) models.

10 citations


Proceedings Article
01 Jan 2005
TL;DR: Traditional sensitivity assessment methods have limitations which motivate a new approach, the subject of a new project at ANU and the Universities of Adelaide and Melbourne, with the Murray-Darling Basin Commission and the South Australia Dept. of Water, Land and Biodiversity Conservation as partners.
Abstract: Traditional sensitivity assessment (SA) methods have limitations which motivate a new approach, the subject of a new project at ANU and the Universities of Adelaide and Melbourne, with the Murray-Darling Basin Commission and the South Australia Dept. of Water, Land and Biodiversity Conservation as partners. The limitations include a high computing load, restricted scope and validity of the results, an excessive volume of results, and a failure to distinguish SA from uncertainty assessment. The new approach has three main aims: (i) to investigate sensitivity of a wide range of model outcomes, not only the values of individual output variables; (ii) to examine sensitivity to changes which are not small; (iii) to efficiently find features such as critical or near-redundant parameter combinations. Requirements such as output ranges, credible behaviour, or a given rank order of scenario outcomes define an acceptable outcome set. SA then explores the feasible set of parameter values producing acceptable outcomes; this inverts the mapping by the model from parameters to outcomes, as sketched below.
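A minimal sketch of that inversion, under the assumption that sampling the parameter space is affordable (hypothetical helper names; the project's actual methods aim to do this far more efficiently):

```python
import numpy as np

# Monte Carlo filtering: sample parameter vectors, run the model, and
# keep those whose outcomes land in the acceptable set.
def feasible_parameters(model, sampler, acceptable, n_samples=10_000):
    """model: params -> outputs; acceptable: outputs -> bool."""
    kept = [theta for theta in (sampler() for _ in range(n_samples))
            if acceptable(model(theta))]
    return np.array(kept)   # an empirical picture of the feasible set
```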

7 citations


Journal ArticleDOI
TL;DR: In this paper, a new inequality concerning generalized characters of $p$ -groups is obtained and applications to bounding the number of irreducible characters in blocks of finite groups are given.
Abstract: A new inequality concerning generalized characters of $p$ -groups is obtained and applications to bounding the number of irreducible characters in blocks of finite groups are given.

Book ChapterDOI
01 Jan 2005
TL;DR: Although wrapping objects in bounding volumes simplifies the individual tests, the same number of pairwise tests is performed; arranging the bounding volumes into a tree hierarchy called a bounding volume hierarchy (BVH) reduces the number of tests to logarithmic in the number of objects.
Abstract: Wrapping objects in bounding volumes and performing tests on the bounding volumes before testing the object geometry itself can result in significant performance improvements. However, although the tests themselves have been simplified, the same number of pairwise tests is still being performed. The asymptotic time complexity remains the same, and the use of bounding volumes improves the situation only by a constant factor. By arranging the bounding volumes into a tree hierarchy called a bounding volume hierarchy (BVH), the number of tests performed can be reduced to logarithmic in the number of objects.
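A minimal top-down construction sketch (one common way to build such a hierarchy, with AABBs standing in for an arbitrary bounding volume type; boxes are computed from primitive centers for brevity, where a real build would union the primitives' own bounds):

```python
import numpy as np

# Top-down BVH build: bound the set, split at the median of the longest
# axis, and recurse until leaves hold few enough primitives.
class BVHNode:
    def __init__(self, lo, hi, left=None, right=None, prims=None):
        self.lo, self.hi = lo, hi           # node bounding box corners
        self.left, self.right = left, right
        self.prims = prims                  # primitives stored at a leaf

def build_bvh(centers, prims, leaf_size=4):
    lo, hi = centers.min(axis=0), centers.max(axis=0)
    if len(prims) <= leaf_size:
        return BVHNode(lo, hi, prims=prims)
    axis = int(np.argmax(hi - lo))          # split along the longest extent
    order = np.argsort(centers[:, axis])
    mid = len(order) // 2
    l, r = order[:mid], order[mid:]
    return BVHNode(lo, hi,
                   build_bvh(centers[l], [prims[i] for i in l], leaf_size),
                   build_bvh(centers[r], [prims[i] for i in r], leaf_size))
```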

Proceedings ArticleDOI
12 Dec 2005
TL;DR: New methods for adaptively bounding approximation accuracy with methods that involve localized forgetting are developed, which have utility for self-organizing approximators that could adjust the number of basis elements N by adding additional approximation resources in the regions where the approximation error bound is large.
Abstract: This article develops new methods for adaptively bounding approximation accuracy with methods that involve localized forgetting. The existing results use global forgetting. The importance of local versus global forgetting is motivated in the text. Such bounds have utility for self-organizing approximators that could adjust the number of basis elements N by adding additional approximation resources in the regions where the approximation error bound is large.

01 Jan 2005
TL;DR: This work establishes four specific demands that good-quality crowd simulation should satisfy (scalability, controllability, efficiency and convincingness) and proposes a novel two-level crowd simulation framework that satisfies all four at the same time.
Abstract: Crowd simulation is a difficult task, not only because simulating many characters requires extra computational time, but also because crowd behaviors are highly complex and hard to keep convincing. We establish four specific demands that good-quality crowd simulation should satisfy (scalability, controllability, efficiency and convincingness) and propose a novel two-level crowd simulation framework that satisfies these demands at the same time. At the high level, we adopt a distributed crowd control mechanism called a situation, in which environment-specific or social-relationship-specific information is automatically added to characters when they enter the situation. At the low level, depending on the application, either the probability scheme or the constrained motion synthesis is invoked to synthesize motions for each individual character. The probability scheme, which computes the probabilities of all available actions and selects the next action through random sampling, facilitates simulation of short-term aggregate behaviors. The constrained motion synthesis, which synthesizes motions that meet constraints, is useful when the behaviors need long-term planning. In addition, we address the issue of fast collision detection between motions and propose the MOBB (Motion-Oriented Bounding Box) tree representation, which simplifies a motion into a simple bounding box in a spatio-temporal domain. Given two MOBB trees, the intersection between two bounding boxes is tested hierarchically from the root node to the leaf nodes, which minimizes the number of tests and makes collision testing fast. To validate the satisfaction of the four demands, we perform a series of experiments.
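The spatio-temporal overlap test implied by the MOBB idea reduces to interval checks on three spatial axes plus time (a hedged sketch; the paper's boxes are motion-oriented rather than axis-aligned):

```python
# Two motions can only collide if their bounding boxes overlap in
# space and their time intervals overlap as well.
def mobb_overlap(a, b):
    """a, b: dicts with 'lo'/'hi' 3-vectors and 't0'/'t1' times."""
    space = all(a["lo"][i] <= b["hi"][i] and b["lo"][i] <= a["hi"][i]
                for i in range(3))
    time = a["t0"] <= b["t1"] and b["t0"] <= a["t1"]
    return space and time
```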

Proceedings Article
15 Sep 2005
TL;DR: Some examples of the performance of the bounders for unconstrained global optimization problems are given, beginning with various common toy problems of the community, and also including a rather challenging Lennard-Jones problem.
Abstract: Taylor models provide enclosures of functional dependencies by a polynomial and an interval remainder bound that scales with a high power of the domain width, allowing a far-reaching suppression of the dependency problem. For the application to range bounding, one observes that the resulting polynomials are better behaved than the original function; in fact, merely naively evaluating them in interval arithmetic leads to a quadratic range bounder that is frequently noticeably superior to other second-order methods. However, the particular polynomial form allows the use of other techniques. We review the linear dominated bounder (LDB) and the quadratic fast bounder (QFB). LDB often allows an exact bounding of the polynomial part if the function is monotonic; if it does not succeed in providing an optimal bound, it still often provides a reduction of the domain simultaneously in all variables. Near interior minimizers, where the quadratic part of the local Taylor model is positive semidefinite, QFB minimizes the quadratic contribution to the lower bound of the function, avoiding the infamous cluster effect for validated global optimization tasks. Some examples of the performance of the bounders for unconstrained global optimization problems are given, beginning with various common toy problems of the community and also including a rather challenging Lennard-Jones problem.
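The naive baseline bounder the abstract mentions is just interval evaluation of the polynomial part (a sketch; outward rounding, needed for a fully validated bound, is omitted):

```python
# Interval arithmetic on pairs (lo, hi), then Horner evaluation.
def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def interval_poly(coeffs, x):
    """Enclose sum_k coeffs[k] * t**k for t in the interval x."""
    acc = (coeffs[-1], coeffs[-1])
    for c in reversed(coeffs[:-1]):
        acc = interval_add((c, c), interval_mul(acc, x))
    return acc

# Example: p(t) = 1 - t + t^2 on [0, 1] has true range [0.75, 1.0];
# the naive evaluation returns the enclosure (0.0, 1.0).
print(interval_poly([1.0, -1.0, 1.0], (0.0, 1.0)))
```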


Proceedings ArticleDOI
15 Dec 2005
TL;DR: Comparing the performance of an idealized genetic algorithm that uses a fitness function based on the generalization error with that of an empirical genetic algorithm based on Rademacher penalization indicates that the empirical algorithm does almost as well as the idealized algorithm would.
Abstract: We propose an abstract self bounding genetic algorithm that can be applied to various problems of machine learning. The bound on the generalization error that is output by our algorithm is based on Rademacher penalization, a data driven penalization technique. We prove probabilistic oracle inequalities for the theoretical risk of the estimators based on this approach. This is done by comparing the performance of an idealized genetic algorithm that uses a fitness function based on the generalization error with that of an empirical genetic algorithm based on Rademacher penalization. The inequalities indicate that although we are not able to implement the idealized algorithm (because of the inability to compute the generalization error), the empirical algorithm does almost as well as the idealized algorithm would.
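The data-driven penalty at the heart of this approach can be estimated as below (a sketch for a finite set of candidate predictors, with hypothetical inputs; the paper wires such a penalty into the genetic algorithm's fitness function):

```python
import numpy as np

# Monte Carlo estimate of the empirical Rademacher average:
# E_sigma [ max_h (1/m) sum_i sigma_i * loss(h, z_i) ].
def empirical_rademacher(losses, n_rounds=200, seed=0):
    """losses: (n_hypotheses, n_samples) matrix of per-sample losses."""
    rng = np.random.default_rng(seed)
    m = losses.shape[1]
    total = 0.0
    for _ in range(n_rounds):
        sigma = rng.choice([-1.0, 1.0], size=m)   # random sign vector
        total += np.max(losses @ sigma) / m       # best sign correlation
    return total / n_rounds
```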

01 Jun 2005
TL;DR: A hierarchy of oriented rounded bounding volumes for fast proximity queries is presented, and a number of benchmarks are run to measure the performance of the new bounding volume and compare it to that of other bounding volumes.
Abstract: A collision query determines the intersection between given objects and is used in computer-aided design and manufacturing, animation and simulation systems, and physically-based modeling. Bounding volume hierarchies are among the simplest and most widely used data structures for performing collision detection on complex models. In this paper, we present a hierarchy of oriented rounded bounding volumes for fast proximity queries. In designing hierarchies of new bounding volumes, we combine multiple bounding volume types in a single hierarchy. The new bounding volume corresponds to a geometric shape composed of a core primitive shape grown outward by some offset, such as the Minkowski sum of a rectangular box and a sphere. In experiments with parallel close proximity, we run a number of benchmarks to measure the performance of the new bounding box and compare it to that of other bounding volumes.
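The offset construction makes the basic overlap test pleasantly uniform: two sphere-swept volumes intersect exactly when their core shapes come within the sum of their offsets. A capsule pair (segment cores) is the simplest instance (a sketch using the standard closest-point-between-segments computation; rounded boxes replace the segment core with a box):

```python
import numpy as np

# Closest distance between two segments p1-q1 and p2-q2 (assumes both
# segments are non-degenerate), then a capsule-capsule overlap test.
def segment_distance(p1, q1, p2, q2):
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e = d1 @ d1, d2 @ d2
    b, c, f = d1 @ d2, d1 @ r, d2 @ r
    denom = a * e - b * b
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    t = (b * s + f) / e
    if t < 0.0:                       # re-clamp t and recompute s
        t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
    elif t > 1.0:
        t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return np.linalg.norm((p1 + s * d1) - (p2 + t * d2))

def capsules_overlap(p1, q1, r1, p2, q2, r2):
    return segment_distance(p1, q1, p2, q2) <= r1 + r2
```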