
Showing papers on "Bounding overwatch published in 2013"


Posted Content
TL;DR: In this paper, a bounding argument was proposed to replace the coefficient movement heuristic, which is informative only if selection on observables is proportional to selection on unobservables.
Abstract: A common heuristic for evaluating robustness of results to omitted variable bias is to look at coefficient movements after inclusion of controls. This heuristic is informative only if selection on observables is proportional to selection on unobservables. I formalize this link, drawing on theory in Altonji, Elder and Taber (2005), and show how, with this assumption, coefficient movements, along with movements in R-squared values, can be used to calculate omitted variable bias. I discuss empirical implementation and describe a formal bounding argument to replace the coefficient movement heuristic. I show two validation exercises suggesting that this bounding argument would perform well empirically. I discuss application of this procedure to a large set of publications in economics, and use evidence from randomized studies to draw guidelines as to appropriate bounding values.

263 citations
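
The bias adjustment described above is often quoted in a simple linear form. Below is a minimal Python sketch of that approximation under the proportional-selection assumption (delta = 1 means equal selection on observables and unobservables); it is a sketch of the idea, not the paper's full estimator, and the variable names are ours.

```python
def bias_adjusted_beta(beta_u, r2_u, beta_c, r2_c, r2_max, delta=1.0):
    """Linear approximation of a bias-adjusted coefficient from coefficient
    and R-squared movements (a sketch in the spirit of the bounding
    argument, not the paper's exact estimator).

    beta_u, r2_u: coefficient and R-squared without controls.
    beta_c, r2_c: coefficient and R-squared after including controls.
    r2_max:       assumed R-squared of a hypothetical regression that also
                  includes the unobservables.
    delta:        degree of proportional selection (1.0 = equal selection).
    """
    return beta_c - delta * (beta_u - beta_c) * (r2_max - r2_c) / (r2_c - r2_u)

# Coefficient moves from 0.50 to 0.40 while R-squared rises from 0.10 to
# 0.30; bounding with r2_max = 0.50 and delta = 1 gives 0.30.
print(bias_adjusted_beta(0.50, 0.10, 0.40, 0.30, 0.50))
```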


Journal ArticleDOI
TL;DR: A general bounding result based on differential inequalities is established; it enables the effective use of such sets during the bounding procedure and is shown to provide significant advantages over alternative methods in terms of both efficiency and accuracy.

95 citations


Journal ArticleDOI
TL;DR: In this paper, a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on uncertain parameters is proposed, where the probability distributions of the uncertain parameters are assumed to belong to a given probability box (also known as a p-box).

42 citations


ReportDOI
TL;DR: In this paper, a bounding argument was proposed to replace the coefficient movement heuristic, which is informative only if selection on observables is proportional to selection on unobservables.
Abstract: A common heuristic for evaluating robustness of results to omitted variable bias is to look at coefficient movements after inclusion of controls. This heuristic is informative only if selection on observables is proportional to selection on unobservables. I formalize this link, drawing on theory in Altonji, Elder and Taber (2005), and show how, with this assumption, coefficient movements, along with movements in R-squared values, can be used to calculate omitted variable bias. I discuss empirical implementation and describe a formal bounding argument to replace the coefficient movement heuristic. I show two validation exercises suggesting that this bounding argument would perform well empirically. I discuss application of this procedure to a large set of publications in economics, and use evidence from randomized studies to draw guidelines as to appropriate bounding values.

39 citations


Journal ArticleDOI
TL;DR: In this article, the authors deal with the accuracy of guaranteed error bounds on outputs of interest computed from approximate methods such as the finite element method and introduce new bounding techniques based on Saint-Venant's principle.
Abstract: The paper deals with the accuracy of guaranteed error bounds on outputs of interest computed from approximate methods such as the finite element method. A considerable improvement is introduced for linear problems thanks to new bounding techniques based on Saint-Venant's principle. The main breakthrough of these optimized bounding techniques is the use of properties of homothetic domains, which makes it possible to derive guaranteed and accurate bounds on contributions to the global error estimate over a local region of the domain. The performance of these techniques is illustrated through several numerical experiments.

32 citations


Journal ArticleDOI
TL;DR: This work presents a method for deriving the dynamics of the bounding set of the estimation error and the state-estimate dynamic equations of the constrained Moving Horizon Estimator (MHE).

30 citations


Journal ArticleDOI
TL;DR: By finding linear relations among differences between two special means, this paper establishes some inequalities for bounding the Toader mean in terms of the arithmetic, harmonic, centroidal, and contraharmonic means.
Abstract: By finding linear relations among differences between two special means, the authors establish some inequalities for bounding the Toader mean in terms of the arithmetic, harmonic, centroidal, and contraharmonic means.

28 citations
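
For context, the block below records the standard definitions of the means named in this entry and the general shape of the inequalities this line of work proves; the best-possible constants are the paper's contribution and are not reproduced here.

```latex
% Standard definitions; a, b > 0.
\[
  T(a,b)=\frac{2}{\pi}\int_{0}^{\pi/2}\sqrt{a^{2}\cos^{2}\theta+b^{2}\sin^{2}\theta}\,\mathrm{d}\theta
  \quad\text{(Toader mean)}
\]
\[
  A(a,b)=\frac{a+b}{2},\qquad
  H(a,b)=\frac{2ab}{a+b},\qquad
  \bar{C}(a,b)=\frac{2(a^{2}+ab+b^{2})}{3(a+b)},\qquad
  C(a,b)=\frac{a^{2}+b^{2}}{a+b}
\]
% Results of this kind typically assert, for a != b and best-possible
% constants alpha and beta obtained from linear relations among
% differences of the means:
\[
  \alpha\,C(a,b)+(1-\alpha)\,H(a,b)\;<\;T(a,b)\;<\;\beta\,C(a,b)+(1-\beta)\,H(a,b)
\]
```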


Patent
Martin Stich1
22 May 2013
TL;DR: In this article, a method for constructing a bounding volume hierarchy is proposed, in which a parent node bounding volume enclosing a plurality of objects is partitioned into child node bounding volumes by comparing the cost of an object partition against the cost of a spatial partition.
Abstract: A system and method for constructing a bounding volume hierarchical structure are disclosed. The method includes defining a parent node for the bounding volume hierarchical structure, the parent node including a parent node bounding volume enclosing a plurality of objects. A first cost is computed for performing an object partition of the parent node bounding volume to produce a first plurality of child node bounding volumes, and a second cost is also computed for performing a spatial partitioning of the parent node bounding volume to produce a second plurality of child node bounding volumes. The bounding volume hierarchical structure is constructed employing the second plurality of child node bounding volumes produced from the spatial partitioning of the parent node bounding volume if the second cost is lower than the first cost.

15 citations
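
A minimal Python sketch of the cost comparison the abstract describes, using a surface-area-style heuristic; the patent's exact cost model and construction details are not reproduced, and all names are ours.

```python
from dataclasses import dataclass

@dataclass
class AABB:
    lo: tuple  # (x, y, z) minimum corner
    hi: tuple  # (x, y, z) maximum corner

    def surface_area(self):
        dx, dy, dz = (h - l for h, l in zip(self.hi, self.lo))
        return 2.0 * (dx * dy + dy * dz + dz * dx)

def partition_cost(child_boxes, child_counts, parent):
    """Surface-area-heuristic style cost: child area relative to the parent,
    weighted by the number of enclosed objects."""
    pa = parent.surface_area()
    return sum(b.surface_area() / pa * n
               for b, n in zip(child_boxes, child_counts))

def choose_partition(parent, object_split, spatial_split):
    """Mirror of the abstract's decision rule: compute a first cost for the
    object partition and a second cost for the spatial partition, and build
    with the spatial partition only if its cost is lower.
    Each split is a (child_boxes, child_counts) pair."""
    c_obj = partition_cost(*object_split, parent=parent)
    c_spa = partition_cost(*spatial_split, parent=parent)
    return ("spatial", spatial_split) if c_spa < c_obj else ("object", object_split)
```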


Posted Content
TL;DR: This work exploits qualitative probabilistic relationships among variables for computing bounds of conditional probability distributions of interest in Bayesian networks and obtains monotonically tightening bounds that converge to exact distributions.
Abstract: We exploit qualitative probabilistic relationships among variables for computing bounds of conditional probability distributions of interest in Bayesian networks. Using the signs of qualitative relationships, we can implement abstraction operations that are guaranteed to bound the distributions of interest in the desired direction. By evaluating incrementally improved approximate networks, our algorithm obtains monotonically tightening bounds that converge to exact distributions. For supermodular utility functions, the tightening bounds monotonically reduce the set of admissible decision alternatives as well.

14 citations


Proceedings ArticleDOI
17 Jul 2013
TL;DR: Various bounding techniques based on interval arithmetic, Taylor model arithmetic, and ellipsoidal calculus are compared; the ellipsoidal and Taylor model approaches reduce the number of iterations significantly compared to interval analysis, yet the overall computational time is only reduced at tight approximation levels due to the computational overhead.
Abstract: This paper is concerned with guaranteed parameter estimation in nonlinear dynamic systems in a context of bounded measurement error. The problem consists of finding-or approximating as closely as possible-the set of all possible parameter values such that the predicted outputs match the corresponding measurements within prescribed error bounds. An exhaustive search procedure is applied, whereby the parameter set is successively partitioned into smaller boxes and exclusion tests are performed to eliminate some of these boxes, until a prespecified threshold on the approximation level is met. Exclusion tests rely on the ability to bound the solution set of the dynamic system for a given parameter subset and the tightness of these bounds is therefore paramount. Equally important is the time required to compute the bounds, thereby defining a trade-off. It is the objective of this paper to investigate this trade-off by comparing various bounding techniques based on interval arithmetic, Taylor model arithmetic and ellipsoidal calculus. When applied to a simple case study, ellipsoidal and Taylor model approaches are found to reduce the number of iterations significantly compared to interval analysis, yet the overall computational time is only reduced for tight approximation levels due to the computational overhead.

12 citations
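
A minimal Python sketch of the exhaustive-search procedure: parameter boxes are bisected, and an exclusion test discards boxes whose predicted outputs cannot match all measurements within their error bounds. A toy exponential-decay model is assumed, whose monotonicity lets corner evaluations stand in for the interval, Taylor-model, or ellipsoidal bounding techniques the paper actually compares.

```python
import math

def model(p1, p2, t):
    return p1 * math.exp(-p2 * t)

def output_bounds(box, t):
    # For this model (p1 > 0, t > 0) the output is increasing in p1 and
    # decreasing in p2, so two corner evaluations bound it exactly; in
    # general this is where set-based arithmetic would be used.
    (p1_lo, p1_hi), (p2_lo, p2_hi) = box
    return model(p1_lo, p2_hi, t), model(p1_hi, p2_lo, t)

def estimate(box, data, tol):
    """Bisect parameter boxes; exclude those inconsistent with the data."""
    accepted, stack = [], [box]
    while stack:
        b = stack.pop()
        if any(f_hi < y_lo or f_lo > y_hi
               for t, (y_lo, y_hi) in data
               for f_lo, f_hi in [output_bounds(b, t)]):
            continue                                  # exclusion test fired
        widths = [hi - lo for lo, hi in b]
        k = widths.index(max(widths))
        if widths[k] <= tol:
            accepted.append(b)                        # cannot exclude: keep
        else:
            lo, hi = b[k]
            mid = 0.5 * (lo + hi)
            left, right = list(b), list(b)
            left[k], right[k] = (lo, mid), (mid, hi)
            stack += [tuple(left), tuple(right)]
    return accepted

# Measurements (time, (y_lo, y_hi)) with bounded error, e.g. from y = e^(-0.8 t)
data = [(0.5, (0.55, 0.75)), (1.0, (0.30, 0.50)), (2.0, (0.08, 0.28))]
print(len(estimate(((0.1, 2.0), (0.1, 3.0)), data, tol=0.05)), "boxes retained")
```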


Proceedings ArticleDOI
29 Sep 2013
TL;DR: This paper argues that a real-world setting does not include a reference document and that an information retrieval step is often required in order to locate documents containing candidate beginning and end times; it calls this task "Information Retrieval for Temporal Bounding".
Abstract: The temporal bounding problem is that of finding the beginning and ending times of a temporal interval during which an assertion holds. Existing approaches to temporal bounding have assumed the provision of a reference document from which to extract temporal bounds. We argue that a real-world setting does not include a reference document and that an information retrieval step is often required in order to locate documents containing candidate beginning and end times. We call this task "Information Retrieval for Temporal Bounding". This paper defines the task and discusses suitable evaluation metrics, as well as demonstrating the task's difficulty using a reference dataset.


Patent
14 Aug 2013
TL;DR: In this paper, a multi-pass tiling test is proposed in which a plurality of bounding boxes is combined into a coarse bounding box that is first tested against a cache tile of a render surface, and the individual bounding boxes are compared against the tile only when the coarse box intersects it.
Abstract: One embodiment of the present invention includes a method for performing a multi-pass tiling test. The method includes combining a plurality of bounding boxes to generate a coarse bounding box. The method further includes identifying a first cache tile associated with a render surface and determining that the coarse bounding box intersects the first cache tile. The method further includes comparing each bounding box included in the plurality of bounding boxes against the first cache tile to determine that a first set of one or more bounding boxes included in the plurality of bounding boxes intersects the first cache tile. Finally, the method includes, for each bounding box included in the first set of one or more bounding boxes, processing one or more graphics primitives associated with the bounding box. One advantage of the disclosed technique is that the number of intersection calculations performed for each cache tile is reduced.
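
A minimal Python sketch of the two-step test: a single comparison of the coarse bounding box against a cache tile can reject a whole batch of bounding boxes, and only on a hit are the individual boxes compared. The rectangle convention and names are ours.

```python
def intersects(a, b):
    # Axis-aligned overlap test; rectangles are (x0, y0, x1, y1)
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def union(boxes):
    return (min(b[0] for b in boxes), min(b[1] for b in boxes),
            max(b[2] for b in boxes), max(b[3] for b in boxes))

def tile_pass(boxes, prims, tile):
    """Multi-pass tiling test: prims[i] holds the graphics primitives whose
    bounding box is boxes[i]; returns the primitives to process for tile."""
    if not intersects(union(boxes), tile):   # coarse test rejects the batch
        return []
    return [p for b, ps in zip(boxes, prims)
            if intersects(b, tile) for p in ps]
```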

Book ChapterDOI
26 Jun 2013
TL;DR: Exploiting the connection between bounding skeletons and execution of typed λ-terms, this paper derives an upper bound on the length of linear head reduction for arbitrary simply-typed λ-terms and proves the asymptotic optimality of the upper bounds by providing matching lower bounds.
Abstract: Bounding skeletons were recently introduced as a tool to study the length of interactions in Hyland/Ong game semantics. In this paper, we investigate the precise connection between them and execution of typed λ-terms. Our analysis sheds light on a new condition on λ-terms, called local scope. We show that the reduction of locally scoped terms matches closely that of bounding skeletons. Exploiting this connection, we give upper bound to the length of linear head reduction for simply-typed locally scoped terms. General terms lose this connection to bounding skeletons. To compensate for that, we show that λ-lifting allows us to transform any λ-term into a locally scoped one. We deduce from that an upper bound to the length of linear head reduction for arbitrary simply-typed λ-terms. In both cases, we prove the asymptotical optimality of the upper bounds by providing matching lower bounds.


Patent
23 Apr 2013
TL;DR: In this paper, techniques for performing vector-based flood-fill operations on vector artwork are described; after planarization, a new bounding shape is created around the resulting area that includes the point of interest (POI).
Abstract: Techniques are disclosed for performing flood-fill operations on vector artwork. In one embodiment, a region under a point of interest (POI) of vector artwork is rasterized and flood-filled, and an initial bounding shape around that area is used as a first guess as to the area to be filled. In other cases, the initial bounding shape is created around some initial area that includes the POI (no rasterization). In any such case, vector objects having bounding shapes that intersect the initial bounding shape are identified and fed into a planar map. After map planarization, a new bounding shape is created around a new area resulting from the planarizing and that includes the POI. In response to that bounding shape not extending beyond the initial bounding shape, a vector-based flood-fill operation can be performed on that new area. The process repeats if a new bounding shape extends beyond previous bounding shape.

Journal ArticleDOI
TL;DR: It is demonstrated that a composite algorithm that strategically alternates between the two samplers' updates can be substantially faster than either individually, and the expected time until coalescence for the composite algorithm is theoretically bounded.
Abstract: A discrete data augmentation scheme together with two different parameterizations yields two Gibbs samplers for sampling from the posterior distribution of the hyperparameters of the Dirichlet-multinomial hierarchical model under a default prior distribution. The finite-state space nature of this data augmentation permits us to construct two perfect samplers using bounding chains that take advantage of monotonicity and anti-monotonicity in the target posterior distribution, but both are impractically slow. We demonstrate that a composite algorithm that strategically alternates between the two samplers' updates can be substantially faster than either individually. The speed gains come because the composite algorithm takes a divide-and-conquer approach in which one update quickly shrinks the bounding set for the augmented data, and the other update immediately coalesces on the parameter, once the augmented-data bounding set is a singleton. We theoretically bound the expected time until coalescence for the composite algorithm, and show via simulation that the theoretical bounds can be close to actual performance.

Journal ArticleDOI
TL;DR: This work proposes a memory-efficient implementation of an extension of the Gobien-Dotson bounding algorithm that allows us to use low-bandwidth high-capacity storage without increasing runtime.

Proceedings ArticleDOI
16 Apr 2013
TL;DR: An approach is presented for automatically devising object annotations in images, using a discriminative color model for initialization and an iterative algorithm that trains an SVM model on bag-of-visual-words histograms.
Abstract: We present an approach for automatically devising object annotations in images. Thus, given a set of images which are known to contain a common object, our goal is to find a bounding box for each image which tightly encloses the object. In contrast to regular object detection, we do not assume any previous manual annotations except for binary global image labels. We first use a discriminative color model for initializing our algorithm by very coarse bounding box estimations. We then narrow down these boxes using visual words computed from HOG features. Finally, we apply an iterative algorithm which trains a SVM model based on bag-of-visual-words histograms. During each iteration, the model is used to find better bounding boxes which can be done efficiently by branch and bound. The new bounding boxes are then used to retrain the model. We evaluate our approach for several different classes of publicly available datasets and show that we obtain promising results.
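
The inner search in this iteration, finding the box that maximizes a linear bag-of-visual-words score, is compact enough to sketch. The Python below does that search by brute force over an integral image, whereas the paper performs it efficiently with branch and bound; the toy data and names are ours.

```python
import numpy as np

def best_box(word_map, weights):
    """Find the axis-aligned box maximizing the summed per-pixel score,
    where each pixel contributes the SVM weight of its visual word.
    Brute force, O(h^2 w^2); branch and bound makes this fast in practice."""
    score = weights[word_map]                 # per-pixel contributions
    ii = np.pad(score.cumsum(0).cumsum(1), ((1, 0), (1, 0)))  # integral image
    h, w = score.shape
    best, box = -np.inf, None
    for y0 in range(h):
        for y1 in range(y0 + 1, h + 1):
            for x0 in range(w):
                for x1 in range(x0 + 1, w + 1):
                    s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
                    if s > best:
                        best, box = s, (x0, y0, x1, y1)
    return box, best

# Toy example: a 12x12 map of 5 visual words with per-word SVM weights
rng = np.random.default_rng(0)
word_map = rng.integers(0, 5, size=(12, 12))
weights = np.array([-0.5, 1.0, -0.2, 0.3, -1.0])
print(best_box(word_map, weights))
```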

Book ChapterDOI
16 Sep 2013
TL;DR: To tackle truncation errors, this work investigates the bounding semantics of continuous stochastic logic (CSL) for Markov chains, including a subset of nested CSL formulas, and proposes new algorithms to generate lower and upper bounds.
Abstract: Model checking aims to give exact answers to queries about a model’s execution but, in probabilistic model checking, ensuring exact answers might be difficult. Numerical iterative methods are heavily used in probabilistic model checking and errors caused by truncation may affect correctness. To tackle truncation errors, we investigate the bounding semantics of continuous stochastic logic for Markov chains. We first focus on analyzing truncation errors for model-checking the time-bounded or unbounded Until operator and propose new algorithms to generate lower and upper bounds. Then, we study the bounding semantics for a subset of nested CSL formulas. We demonstrate results on two models.

Journal ArticleDOI
TL;DR: The proposed statistical and bounding methods approximate model behavior, and can help analysts focus Integrated Planning Model (IPM) runs on input assumptions whose results are potentially interesting but uncertain, to reduce computation time and enable incorporation of such models into multimodel decision support systems.
Abstract: We present a general methodology for approximating the input–output behavior of complex energy market models. Outputs, such as costs, prices, and emissions, depend on policy, economic, and technological assumptions. The proposed statistical and bounding methods approximate model behavior, and can help analysts focus Integrated Planning Model (IPM) runs on input assumptions whose results are potentially interesting but uncertain. The statistical methods (multivariate adaptive regression splines) use past run data to make predictions of IPM outputs given new input data sets. Those methods are illustrated using results from the IPM, a large-scale linear program that is widely used by the US Environmental Protection Agency and industry to simulate the behavior of the US power market. Meanwhile, bounding methods use mathematical properties of linear programs, in addition to past run data, to bound outputs for new inputs. These methods can be used to approximate the outputs of any convex optimization model of power systems to reduce computation time and enable incorporation of such models into multimodel decision support systems. The bounding approach is demonstrated by an application to the COMPETES model of the northwest European power market.
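
The paper's bounding constructions are not reproduced here, but the kind of linear-programming property such bounds can exploit is easy to illustrate: for a minimization LP, a dual solution saved from a past run stays dual-feasible when only the right-hand side changes, so by weak duality it yields a valid lower bound on the new run's optimal cost. A sketch under that assumption:

```python
import numpy as np

def dual_lower_bounds(saved_duals, b_new):
    """For an LP  min c^T x  s.t.  A x >= b, x >= 0, any dual-feasible y
    (A^T y <= c, y >= 0) gives the lower bound b^T y, and dual feasibility
    does not depend on b.  Dual vectors saved from past runs therefore
    bound the optimal cost of a new run where only b (e.g. demands or
    emission caps) changed; the best saved dual gives the tightest bound."""
    return max(float(np.dot(b_new, y)) for y in saved_duals)

saved_duals = [np.array([1.0, 0.0, 2.0]), np.array([0.5, 0.5, 1.5])]
print(dual_lower_bounds(saved_duals, np.array([10.0, 4.0, 6.0])))  # 22.0
```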

Proceedings ArticleDOI
01 Nov 2013
TL;DR: This work analyzes the performance of a network under general traffic derived from traces and proves some stochastic monotonicity properties for the network elements, in order to derive bounds on performance measures such as delays and losses.
Abstract: We analyze the performance of a network under general traffic derived from traces. We apply stochastic comparisons in order to derive bounding histograms with a reduced size and complexity. We prove some stochastic monotonicity properties for the network elements, in order to derive bounds on the performance measures such as delays and losses. We show clearly that this approach provides an attractive solution as a trade-off between accuracy of the results and computation times. Moreover, we compare our results with an approximate method previously published, in order to show the accuracy of the bounds, and to highlight the benefits of our approach for network dimensioning.
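
A minimal Python sketch of the stochastic-ordering idea behind bounding histograms: moving probability mass upward onto a coarser support yields a smaller histogram that stochastically dominates the original, so worst-case delay and loss bounds computed from it remain valid. The naive reduction below is ours; the paper's strategy is more refined.

```python
def upper_bounding_histogram(support, probs, kept):
    """Reduce a histogram to the support points in `kept` (which must
    include max(support)): each mass moves up to the nearest kept point,
    so the result is stochastically larger than the original."""
    kept = sorted(kept)
    out = {k: 0.0 for k in kept}
    for x, p in zip(support, probs):
        target = next(k for k in kept if k >= x)  # smallest kept point >= x
        out[target] += p
    return out

# A 6-point delay histogram reduced to 3 bins
h = upper_bounding_histogram([1, 2, 3, 4, 5, 6],
                             [.1, .2, .3, .2, .1, .1], kept=[2, 4, 6])
print(h)  # {2: 0.3, 4: 0.5, 6: 0.2}
```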

Journal ArticleDOI
TL;DR: A subdivision method for computing the roots of a univariate polynomial equation is proposed, using novel bounding methods that provide improved robustness and performance compared to the existing convex hull-based method, e.g., the Projected Polyhedron algorithm.
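
A generic Python sketch of subdivision root finding driven by a bounding method: here the bound is plain interval Horner evaluation, which is coarser than the novel bounds the entry refers to, but the discard-or-bisect skeleton is the same.

```python
def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def horner_range(coeffs, x):
    """Interval Horner evaluation: an enclosure (not tight) of the
    polynomial over the interval x.  coeffs are highest degree first."""
    acc = (coeffs[0], coeffs[0])
    for c in coeffs[1:]:
        acc = iadd(imul(acc, x), (c, c))
    return acc

def isolate_roots(coeffs, lo, hi, eps=1e-6):
    """Discard subintervals whose enclosure excludes zero; bisect the rest."""
    out, stack = [], [(lo, hi)]
    while stack:
        a, b = stack.pop()
        r_lo, r_hi = horner_range(coeffs, (a, b))
        if r_lo > 0.0 or r_hi < 0.0:       # bound excludes zero: discard
            continue
        if b - a < eps:
            out.append((a, b))             # candidate root interval
        else:
            m = 0.5 * (a + b)
            stack += [(a, m), (m, b)]
    return out

# p(x) = x^2 - 2: candidate intervals cluster around +/- sqrt(2)
print(isolate_roots([1.0, 0.0, -2.0], -3.0, 3.0)[:2])
```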

Patent
14 Aug 2013
TL;DR: In this article, a method for generating accumulated bounding boxes for graphics primitives is presented, which includes generating a first bounding box associated with a first graphics primitive and then transmitting the first bounding box to a tiling unit via a crossbar.
Abstract: One embodiment of the present invention includes a method for generating accumulated bounding boxes for graphics primitives. The method includes generating a first bounding box associated with a first graphics primitive. The method further includes, for each graphics primitive included in a first set of one or more additional graphics primitives, determining that the graphics primitive is within a threshold distance of the first bounding box, and adding the graphics primitive to the first bounding box. The method further includes determining not to add a second graphics primitive to the first bounding box. The method further includes generating a second bounding box associated with the second graphics primitive. Finally, the method includes transmitting the first bounding box to a tiling unit via a crossbar. One advantage of the disclosed embodiments is that multiple bounding boxes are combined to generate an accumulated bounding box that is then transferred across the crossbar.
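
A minimal Python sketch of the accumulation policy the abstract describes: primitives join the current bounding box while they stay within a threshold distance of it, and each finished box (with its primitives) is what would be shipped across the crossbar. The distance metric and names are ours.

```python
def union(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def box_gap(a, b):
    """Separation between two axis-aligned boxes (0 if they overlap)."""
    dx = max(b[0] - a[2], a[0] - b[2], 0.0)
    dy = max(b[1] - a[3], a[1] - b[3], 0.0)
    return max(dx, dy)  # Chebyshev gap; any metric works for the sketch

def accumulate(prims, bbox_of, threshold):
    """Greedy accumulation: bbox_of(p) returns (x0, y0, x1, y1) for a
    primitive; returns (accumulated_box, primitives) groups."""
    groups, cur_box, cur_prims = [], None, []
    for p in prims:
        b = bbox_of(p)
        if cur_box is None or box_gap(cur_box, b) <= threshold:
            cur_box = b if cur_box is None else union(cur_box, b)
            cur_prims.append(p)
        else:
            groups.append((cur_box, cur_prims))  # one transfer per group
            cur_box, cur_prims = b, [p]
    if cur_prims:
        groups.append((cur_box, cur_prims))
    return groups
```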

Proceedings ArticleDOI
19 Aug 2013
TL;DR: A new method is introduced to upper bound integrity risk for sequential state estimators when the autocorrelation functions of measurement noise and disturbance inputs are subject to bounded uncertainties.
Abstract: A new method is introduced to upper bound integrity risk for sequential state estimators when the autocorrelation functions of measurement noise and disturbance inputs are subject to bounded uncertainties. Integrity risk is defined as the probability of the state estimate error exceeding predefined bounds of acceptability. In the first part of the paper, a new expression is derived that relates the measurement noise and disturbance input autocorrelation functions to the state estimate error vector. Using this relation, an efficient algorithm is developed in the second part of the paper to upper bound the estimation integrity risk when each input autocorrelation function is known to lie between upper and lower bounding functions. Numerical simulations for a one-dimensional position and velocity estimation problem are conducted to demonstrate the practical feasibility and effectiveness of this new bounding method.

Journal Article
TL;DR: A taxonomic description of polygon object features, which are composed of single polygons, multi-polygons, and multiline polygons with numerous node data, is presented, and a new category of fundamental operation called "collinear" is proposed.
Abstract: Overlay of polygon objects is a spatial overlay analysis using divergent data layers and their corresponding attributes within a specific region of interest. It can quantitatively investigate the scope and feature of interaction and function among different types of spatial objects. However, the traditional single-processor, single-thread overlay computing model can hardly fulfill the demand of real-time analysis and emergency decision-making upon massive and dynamic spatial data. For a simple data model, this paper presents a taxonomic description of polygon object features, which are composed of single polygons, multi-polygons, and multiline polygons with numerous node data. Data filtering and parallel computational strategies using multi-level bounding boxes are further established. Meanwhile, besides the two traditional fundamental operations, i.e., "point inclusion" and "line traversing (line intersection)", a new category of fundamental operation called "collinear" is proposed. Thereby, we can quickly calculate the relative positions of the intersection points and the nodes on a collinear edge when overlaying two polygons, and precisely judge whether the collinear edge lies on the boundary of their "intersection region". As an initial outcome, the efficacy of the methodology proposed here has been proved under the circumstance of serial computation.
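
A minimal Python sketch of the bounding-box filtering that precedes the exact point-inclusion, line-intersection, and collinear tests: only polygon pairs whose boxes overlap are passed on. The paper's multi-level boxes and parallel execution are omitted here.

```python
def bbox(poly):
    xs, ys = zip(*poly)                 # poly is a list of (x, y) vertices
    return min(xs), min(ys), max(xs), max(ys)

def boxes_overlap(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def candidate_pairs(layer_a, layer_b):
    """Yield index pairs (i, j) whose bounding boxes overlap; only these
    proceed to the expensive exact overlay tests."""
    boxes_b = [bbox(p) for p in layer_b]
    for i, pa in enumerate(layer_a):
        ba = bbox(pa)
        for j, bb in enumerate(boxes_b):
            if boxes_overlap(ba, bb):
                yield i, j
```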

Posted Content
TL;DR: This work introduces explicit formulas for bounding circles in the plane, and some generalizations to space, thereby providing readily applicable bounding sets for IFS fractals.
Abstract: The attractors of Iterated Function Systems in Euclidean space - IFS fractals - have been the subject of great interest for their ability to visually model a wide range of natural phenomena. Indeed, computer-generated plants are often modeled using 3D IFS fractals, and thus their extent in virtual space is a fundamental question, whether for collision detection or ray tracing. A great variety of algorithms exist in the literature for finding bounding circles, polygons, or rectangles for these sets, usually tackling the easier question in 2D first, as a basis for the 3D bounding problem. The existing algorithms for finding bounding circles are mostly approximate, with significant computational and methodological complexity. We hereby introduce explicit formulas for bounding circles in the plane, and some generalizations to space, thereby providing readily applicable bounding sets for IFS fractals.
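
One standard explicit construction in this vein, sketched in Python: if every map of the IFS is a contraction with ratio r_i, then any disc B(c, R) with R >= |f_i(c) - c| / (1 - r_i) for all i is mapped into itself by every f_i and therefore contains the attractor. The paper's formulas are sharper; this is the simple invariant-disc bound.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def bounding_radius(maps, center):
    """maps is a list of (f, r) pairs: f a contraction (as a callable on
    2D points) with ratio r < 1.  The returned R makes B(center, R) an
    invariant disc, hence a bounding circle for the attractor."""
    return max(dist(f(center), center) / (1.0 - r) for f, r in maps)

# Sierpinski-like IFS: three maps x -> (x + v) / 2, each with ratio 1/2
verts = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.9)]
maps = [(lambda x, v=v: ((x[0] + v[0]) / 2, (x[1] + v[1]) / 2), 0.5)
        for v in verts]
print(bounding_radius(maps, center=(0.5, 0.3)))  # 0.6 for this center
```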

Patent
30 Oct 2013
TL;DR: A curved-surface measurement system and method comprises constructing a maximum bounding box of a triangularly meshed surface, dividing it into sub bounding boxes, and associating every sub bounding box with the triangles that intersect it.
Abstract: A curve surface measurement system and method comprises structuring a maximum bounding box of a triangularly-meshed curve surface, dividing the maximum bounding box into sub bounding boxes and associating every sub bounding box with triangles intersected with the sub bounding box; when a measurement point has a search direction vector, taking the measurement point as a start and the search direction vector as the direction to structure a half-line; obtaining the sub bounding boxes intersected with the half-line according to an intersection sequence, sequentially obtaining the triangles associated with the sub bounding boxes and recording the coordinates of the intersection points of the half-line and the triangles and the normal vectors of the triangles; when the measurement point does not have a search direction vector, taking the measurement point as a center to structure a cubic area, obtaining the sub bounding boxes intersected with the cubic area and the triangles associated with every sub bounding box, calculating the distance between the measurement point and the median point of every triangle, selecting out the shortest distance and recording the median point and the normal vector of the triangle corresponding to the shortest distance. The curve surface measurement system and method can achieve the automatic collection of measurement points.
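
A minimal Python sketch of the no-search-direction branch: triangles are hashed into sub bounding boxes (grid cells), and only the cells in a cubic neighborhood of the measurement point are examined for the nearest triangle centroid. Cell size, the neighborhood reach, and all names are ours; the normal-vector bookkeeping is omitted.

```python
import math

def build_grid(tris, cell):
    """Hash each triangle (three 3D points) into every grid cell that its
    bounding box touches: the sub bounding boxes of the abstract."""
    grid = {}
    for t in tris:
        xs, ys, zs = zip(*t)
        lo = [math.floor(min(v) / cell) for v in (xs, ys, zs)]
        hi = [math.floor(max(v) / cell) for v in (xs, ys, zs)]
        for i in range(lo[0], hi[0] + 1):
            for j in range(lo[1], hi[1] + 1):
                for k in range(lo[2], hi[2] + 1):
                    grid.setdefault((i, j, k), []).append(t)
    return grid

def nearest_centroid(grid, cell, p, reach=1):
    """Examine only the cubic neighborhood of measurement point p and
    return the closest triangle centroid and its distance."""
    c = tuple(math.floor(v / cell) for v in p)
    best, best_d, seen = None, float("inf"), set()
    for i in range(c[0] - reach, c[0] + reach + 1):
        for j in range(c[1] - reach, c[1] + reach + 1):
            for k in range(c[2] - reach, c[2] + reach + 1):
                for t in grid.get((i, j, k), []):
                    if id(t) in seen:
                        continue          # a triangle can span several cells
                    seen.add(id(t))
                    m = tuple(sum(v) / 3.0 for v in zip(*t))
                    d = math.dist(m, p)
                    if d < best_d:
                        best, best_d = m, d
    return best, best_d
```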

Journal ArticleDOI
TL;DR: An on-the-fly incremental liveness verification method based on Bounding Depth Sub-Büchi Automata (BDSBAs) is proposed as an improvement of the Gaiser model checking algorithm and is shown to outperform the original.
Abstract: In this paper we propose an on-the-fly incremental liveness verification method based on Bounding Depth Sub-Büchi Automata (BDSBAs), which improves on the Gaiser model checking algorithm [1]. Given a bounding depth, the method first computes a bounding-depth state set on the fly, as the key component of a BDSBA, via a bounding-depth DFS procedure; it then performs a nested depth-first search, as in the Gaiser algorithm, to explore an accepting lasso of the BDSBA as a counterexample to the liveness property under verification. By incrementing the bounding depth, our method can verify a liveness property incrementally. The empirical results indicate that our new method outperforms the original one.

Proceedings ArticleDOI
07 Jul 2013
TL;DR: The “one-shot” lower bounding model is extended to many-user scenarios, and a two-step update of the one-shot models is proposed to incorporate the broadcast nature of wireless transmission.
Abstract: Motivated by the framework of network equivalence theory [1], [2], we present capacity lower bounding models for wireless networks by construction of noiseless networks which can be used to calculate an inner bound for the corresponding wireless network. We first extend the “one-shot” lower bounding model [6] to many-user scenarios, and then propose a two-step update of the one-shot models to incorporate the broadcast nature of wireless transmission. The main advantage of the proposed lower bounding method is its simplicity and the fact that it can be easily extended to larger networks. We demonstrate by examples that the resulting lower bounds can even approach the capacity in some setups.