
Showing papers on "Bounding overwatch published in 2002"


Posted Content
TL;DR: This work provides a summary and some new results concerning bounds among some important probability metrics/distances that are used by statisticians and probabilists and examples that show that rates of convergence can strongly depend on the metric chosen.
Abstract: When studying convergence of measures, an important issue is the choice of probability metric. In this review, we provide a summary and some new results concerning bounds among ten important probability metrics/distances that are used by statisticians and probabilists. We focus on these metrics because they are either well-known, commonly used, or admit practical bounding techniques. We summarize these relationships in a handy reference diagram, and also give examples to show how rates of convergence can depend on the metric chosen.

195 citations
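A typical example of the bounds this review catalogues is Pinsker's inequality, which controls total variation distance by KL divergence. A minimal sketch, not taken from the paper, with illustrative distributions:

```python
import math

def total_variation(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def kl_divergence(p, q):
    """Kullback-Leibler divergence KL(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

tv = total_variation(p, q)
pinsker = math.sqrt(kl_divergence(p, q) / 2)   # Pinsker: TV <= sqrt(KL / 2)
assert tv <= pinsker
```

Because the bound only goes one way, convergence in KL forces convergence in TV but not vice versa, which is exactly the kind of metric-dependent rate the review's examples exhibit.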


Proceedings ArticleDOI
Stuart K. Card, David A. Nation
22 May 2002
TL;DR: This paper proposes Degree-of-Interest trees, an instance of an emerging "attention-reactive" user interface whose components are designed to snap together in bounded spaces.
Abstract: This paper proposes Degree-of-Interest trees. These trees use degree-of-interest calculations and focus+context visualization methods, together with bounding constraints, to fit within pre-established bounds. The method is an instance of an emerging "attention-reactive" user interface whose components are designed to snap together in bounded spaces.

138 citations


Patent
Steven J. Simske1
09 Jul 2002
TL;DR: In this article, a region bounding and classifying system utilizes memory and logic to identify a plurality of regions of different data types within the image and to bound each of the plurality of identified regions via a bounding region.
Abstract: A region bounding and classifying system utilizes memory and logic. The memory stores a set of image data that defines a graphical image. The logic is configured to identify a plurality of regions of different data types within the image and to bound each of the plurality of identified regions via a bounding region. The logic is configured to perform a prioritization of the data types included in the bounding region according to a predefined hierarchy of the data types. The logic is further configured to classify the bounding region based on the prioritization performed by the logic.

40 citations
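The prioritization and classification step can be sketched as a lookup against a predefined hierarchy; the data types and ranks below are hypothetical, not taken from the patent:

```python
# Hypothetical data-type hierarchy: lower rank = higher priority.
PRIORITY = {"text": 0, "graphic": 1, "photo": 2}

def classify_bounding_region(data_types):
    """Classify a bounding region by the highest-priority data type it contains."""
    return min(data_types, key=PRIORITY.__getitem__)

# A region containing both photo and text data is classified as "text".
assert classify_bounding_region({"photo", "text"}) == "text"
```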


01 Jan 2002
TL;DR: This work investigates how to characterize the set L(A) of profiles turned into losers by a given profile set A, which becomes determinable on the basis of simple harmonic bounding via bounding minima alone, simplifying aspects of its computation.
Abstract: Evaluation in OT adjudicates competitions between linguistic structures, but it calculates only with the array of violations that the structures incur under each constraint: their violation profiles. Structures with the same profile are indistinguishable, and differences in structure only register to the extent that they correlate with differences in violation. General properties that govern relations between profiles in the space of all possible profiles will thus be inherited by any specific candidate set, even though actual structures may be distributed sparsely or asymmetrically in violation space. Shifting the focus of inquiry from candidates in the space of linguistic forms to violation profiles in violation spaces will provide tools useful in analysis, computation, and learning. Of particular significance are the principles determining which profiles can never be optimal under any ranking, given that certain other profiles are known to be realized in the candidate set. (These never-optimal profiles we will call losers; by winners we mean the complement set of profiles optimal under some ranking.) The value of knowing the loser-vs.-winner status of a structure is manifest in many applications, especially (at the risk of paradox!) prior to conducting a specific competition. Consider the procedures involved in constructing candidates to test for optimality: given the mere existence of a candidate with a certain profile, knowledge of what it excludes under any ranking will render unnecessary the labor of constructing and evaluating candidates that are always defeated by it. Identifying loser profiles will eliminate improper learning targets and help determine what abstract structure ought to be assigned to the observables; for example, we can avoid formally possible but perpetually suboptimal foot-parses for observed sequences of stressed and unstressed syllables, pruning subversive hypotheses (cf. Tesar 2000).
Excluding losers is also essential to the analyst, who must know whether the set of competitors under consideration — inevitably finite — mistakenly omits some potentially optimal structures. A precise characterization of the regions of profile space defeated by the identified competitors will address this danger. In Samek-Lodovici & Prince (1999) we show that every loser is harmonically bounded by some non-empty set of candidates. Here we shift perspective and investigate how to characterize the set L(A) of profiles turned into losers by a given profile set A. A priori, this set contains any profile bounded by any non-empty subset of A, including the potentially infinite set of losers collectively bounded by profiles ganging together so that each beats the loser on some of the possible constraint rankings while leaving none uncovered. We will show that any set of losers collectively bounded by a profile set A is equivalent to the set of losers bounded by a single designated minimal profile directly identifiable from our knowledge of A, which we will call the 'bounding minimum' of A. Consequently, L(A) itself becomes determinable on the basis of simple harmonic bounding via bounding minima alone, simplifying aspects of its computation. The result is fully general, and applies to any set of profiles A, independently of whether the members of A are all winners or some of them are themselves turned into losers by other fellow profiles.

28 citations
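The paper's bounding minimum is the coordinatewise minimum of a profile set. A minimal sketch, with violation profiles encoded as tuples of violation counts (the encoding is an assumption, not taken from the paper):

```python
def bounding_minimum(profiles):
    """Coordinatewise minimum of a set of violation profiles."""
    return tuple(min(vals) for vals in zip(*profiles))

def simply_bounds(b, x):
    """b harmonically bounds x iff b does no worse on every constraint
    and strictly better on at least one."""
    return (all(bi <= xi for bi, xi in zip(b, x))
            and any(bi < xi for bi, xi in zip(b, x)))

A = [(0, 2, 1), (1, 0, 2)]          # two violation profiles
m = bounding_minimum(A)             # coordinatewise minimum: (0, 0, 1)
# A profile collectively bounded by A is simply bounded by m alone:
assert simply_bounds(m, (1, 1, 2))
```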


Patent
05 Mar 2002
TL;DR: In this paper, a set of first values is calculated relative to coordinates of a reference point of the reference minimum bounding shape such that each of the first values comprises a first number of significant bits fewer than a second number of significant bits representing a second value associated with a corresponding one of absolute coordinates of the point.
Abstract: The apparatuses and methods described herein may operate to identify, from an index structure stored in memory, a reference minimum bounding shape that encloses at least one minimum bounding shape. Each of the at least one minimum bounding shape may correspond to a data object associated with a leaf node of the index structure. Coordinates of a point of the at least one minimum bounding shape may be associated with a set of first values to produce a relative representation of the at least one minimum bounding shape. The set of first values may be calculated relative to coordinates of a reference point of the reference minimum bounding shape such that each of the set of first values comprises a first number of significant bits fewer than a second number of significant bits representing a second value associated with a corresponding one of absolute coordinates of the point.

26 citations
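The space saving comes from storing each enclosed shape's coordinates as small offsets from a shared reference point rather than as absolute values. A sketch with illustrative integer coordinates:

```python
def relative_coords(point, reference):
    """Offsets of a point from the reference point of the enclosing
    reference minimum bounding shape."""
    return tuple(p - r for p, r in zip(point, reference))

reference = (100000, 200000)      # reference point, absolute coordinates
corner = (100012, 200007)         # corner of an enclosed bounding shape
offsets = relative_coords(corner, reference)
assert offsets == (12, 7)
# The offsets need far fewer significant bits than the absolute values:
assert max(v.bit_length() for v in offsets) < min(c.bit_length() for c in corner)
```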


Proceedings ArticleDOI
07 Aug 2002
TL;DR: This work presents the basic formalism of a Monte Carlo approach to sampling a functionally propagated general random set, as opposed to a random interval, and shows that bounding and convergence properties are achieved.
Abstract: We are interested in improving risk and reliability analysis of complex systems where our knowledge of system performance is provided by large simulation codes, and where moreover input parameters are known only imprecisely. Such imprecision lends itself to interval representations of parameter values, and thence to quantifying our uncertainty through Dempster-Shafer or Probability Bounds representations on the input space. In this context, the simulation code acts as a large "black box" function f, transforming one input Dempster-Shafer structure on the line into an output random interval f(A). Our quantification of output uncertainty is then based on this output random interval. If some properties of f are known, then some information about f(A) can be determined. But when f is a pure black box, we must resort to sampling approaches. We present the basic formalism of a Monte Carlo approach to sampling a functionally propagated general random set, as opposed to a random interval. We show that the results of straightforward formal definitions are mathematically coherent, in the sense that bounding and convergence properties are achieved.

14 citations
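The sampling idea can be illustrated on a single focal interval: propagate sampled points through the black-box f and take the observed min/max as an inner approximation of the image interval. This is a toy sketch, not the paper's random-set formalism:

```python
import random

def sample_output_interval(f, lo, hi, k=50, seed=0):
    """Inner approximation of f([lo, hi]) for a black-box f, obtained by
    evaluating f at the endpoints and k - 2 random interior points."""
    rng = random.Random(seed)
    xs = [lo, hi] + [rng.uniform(lo, hi) for _ in range(k - 2)]
    ys = [f(x) for x in xs]
    return min(ys), max(ys)

# Black-box function on the focal interval [0, 2]; its true image is [0, 1].
f = lambda x: (x - 1) ** 2
out_lo, out_hi = sample_output_interval(f, 0.0, 2.0)
assert 0.0 <= out_lo and out_hi <= 1.0   # sampled interval sits inside the image
```

The bounding property referred to in the abstract is the guarantee that such sampled output intervals relate coherently to the true image as the sample size grows.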


Proceedings ArticleDOI
TL;DR: A value-driven graph search technique that is capable of generating a rich variety of single and multiple vehicle behaviors and techniques for collapsing a multidimensional model space into a cost space and planning graph constraints are discussed.
Abstract: In this paper, we will describe a value-driven graph search technique that is capable of generating a rich variety of single and multiple vehicle behaviors. The generation of behaviors depends on cost and benefit computations that may involve terrain characteristics, line of sight to enemy positions, and cost, benefit, and risk of traveling on roads. Depending on mission priorities and cost values, real-time planners can autonomously build appropriate behaviors on the fly that include road following, cross-country movement, stealthy movement, formation keeping, and bounding overwatch. This system follows NIST's 4D/RCS architecture, and a discussion of the world model, value judgment, and behavior generation components is provided. In addition, techniques for collapsing a multidimensional model space into a cost space and planning graph constraints are discussed. The work described in this paper has been performed under the Army Research Laboratory's Robotics Demo III program.

12 citations
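The cost-and-risk trade-off behind such behavior generation can be sketched as a shortest-path search whose edge costs fold in a weighted risk term; the graph, risk values, and weight below are hypothetical, not from the paper:

```python
import heapq

def value_driven_search(graph, start, goal, risk, risk_weight=5.0):
    """Dijkstra search where entering a node adds a weighted risk penalty
    (e.g., exposure to enemy line of sight) to the base travel cost."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, base_cost in graph.get(u, []):
            nd = d + base_cost + risk_weight * risk.get(v, 0.0)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

# Two routes A -> D: short but exposed (via B), longer but covered (via C).
graph = {"A": [("B", 1.0), ("C", 2.0)], "B": [("D", 1.0)], "C": [("D", 2.0)]}
risk = {"B": 1.0}                    # node B is under enemy line of sight
path, cost = value_driven_search(graph, "A", "D", risk)
assert path == ["A", "C", "D"]       # the risk weighting diverts around B
```

Raising or lowering `risk_weight` is the knob that shifts the planner between fast, exposed movement and slower, covered movement.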


Proceedings Article
07 Aug 2002
TL;DR: This paper introduces a method for approximating the solution to inference and optimization tasks in uncertain and deterministic reasoning, and effectively maps such a dense problem to a sparser one which is in some sense "closest".
Abstract: In this paper, we introduce a method for approximating the solution to inference and optimization tasks in uncertain and deterministic reasoning. Such tasks are in general intractable for exact algorithms because of the large number of dependency relationships in their structure. Our method effectively maps such a dense problem to a sparser one which is in some sense "closest". Exact methods can be run on the sparser problem to derive bounds on the original answer, which can be quite sharp. On one large CPCS network, for example, we were able to calculate upper and lower bounds on the conditional probability of a variable, given evidence, that were almost identical in the average case.

11 citations


Journal ArticleDOI
TL;DR: The problem of determining the class-bounding sets has been studied in several papers whose results made it tempting to conjecture that a set S is class-bounding if and only if p ∉ S as discussed by the authors.
Abstract: Let S be a finite set of powers of p containing 1. It is known that for some choices of S, if P is a finite p-group whose set of character degrees is S, then the nilpotence class of P is bounded by some integer that depends on S, while for some other choices of S such an integer does not exist. The sets of the first type are called class-bounding sets. The problem of determining the class-bounding sets has been studied in several papers whose results made it tempting to conjecture that a set S is class-bounding if and only if p ∉ S. In this article we provide a new approach to this problem. Our main result shows the relevance of certain p-adic space groups in this problem. With its help, we are able to prove some results that provide new class-bounding sets. We also show that there exist non-class-bounding sets S such that p ∉ S.

9 citations


Journal Article
TL;DR: The Army's current attempt at digital command and control (C2) systems will allow better visualization of the battlefield than in the past, but a framework is needed to place its importance in perspective.
Abstract: DESPITE THE BEST EFFORTS of the staff, the plan was unraveling. The scouts reported the enemy moving forward into the security zone instead of staying where the situational template said they would defend from. This invalidated the projected direct and indirect fire plan. The task force commander would have to rely on his lead team commander to find the enemy, then develop and issue verbal orders at that point. He felt helpless and unable to provide any other guidance to his lead commander. He was unable to visualize the changes that needed to occur to influence the battle later. Battlefield visualization, a key component of battle command, is the process of visualizing the unit's current state and a future state (of mission success), formulating concepts of operations to get from one to the other at least cost, and articulating this sequence in intent and guidance.1 The Army's current attempt at digital command and control (C2) systems will allow better visualization of the battlefield than in the past. As commander of the 1-22 Infantry Battalion, 4th Infantry Division (ID) (Mechanized (M)), I had the opportunity to test and field Force XXI Battle Command Brigade and Below (FBCB2), which is a digital Battle Command Brigade and Below Control System. FBCB2 uses information-age technology to enable soldiers to receive, compare, and transmit situational awareness (SA) information more quickly than was previously possible and to send and receive C2 messages. FBCB2 transmits and receives data across the wireless Fixed Tactical Internet (FTI) via the Enhanced Position Location Reporting System (EPLRS) data radio and the Single Channel Ground and Airborne Radio System. Each FBCB2 derives its own location via the precision lightweight global positioning system receiver. Through these interfaces, the FBCB2 automatically updates and broadcasts its current location to all other FBCB2-equipped platforms.
These radios also transmit and receive C2 messages such as orders, overlays, and reports. The FBCB2 computer is the heart of the system and comes with a keyboard, touch-sensitive screen, and removable hard-disk drive. The system is located inside the vehicle next to the platform commander. To describe the power of visualization that FBCB2 brings to battalion- and company-level units, a framework is needed to place its importance in perspective. Combat power and its elements provide this framework. Combat Power and Visualization Combat power is a commonly used term that describes the resource that commanders use to accomplish the mission. Field Manual (FM) 101-5-1, Operational Terms and Graphics, defines combat power as "the total means of destructive and/or disruptive force that a military unit/formation can apply against the opponent at a given time-a combination of the effects of maneuver, firepower, protection, and leadership."2 Field Manual 3-0, Operations, adds information as an element of combat power.3 Maneuver. Field Manual 3-0 describes maneuver as "the employment of forces, through movement combined with fire or fire potential, to achieve a position of advantage with respect to the enemy to accomplish the mission. Maneuver is the means by which commanders concentrate combat power to achieve surprise, shock, momentum, and dominance."4 FBCB2 allows the commander to visualize the effects of terrain, to plan for distributed movement and maneuver, and to monitor execution. The value of FBCB2 is particularly apparent in two instances of maneuver: the transition from movement to maneuver and the rapid concentration of forces. Using the FBCB2 enemy situational template and the circular line-of-sight tool, leaders can visualize the enemy's maximum engagement line and determine the location of the phase line that triggers the change in movement techniques from traveling or traveling overwatch to bounding overwatch. 
The commander can monitor the progress and formation of subordinate elements and view the transition as units make the appropriate changes. …

5 citations


Journal ArticleDOI
TL;DR: The approach is to define "bounding" weighted round robin (WRR)-related policies for some GPS-related policies: under the same arrival process, each cell's departure instant under the bounding policy is later than or equal to its departure under the FQ policy.

01 Jan 2002
TL;DR: A method for solving nonlinear programming using a genetic algorithm is presented, in which the bounds of every variable in the solution are estimated beforehand according to the constraints.
Abstract: A method for solving nonlinear programming using a genetic algorithm is presented. To ensure that the new solutions produced by the crossover and mutation operations in each generation are all feasible, we present a method in which the bounds of every variable in the solution are estimated beforehand according to the constraints. For the mutation operation, we present two methods: cube bounding and variable bounding. The experimental results are given and analyzed; they show that the method is efficient and can obtain results in fewer generations.
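The variable-bounding idea for mutation can be sketched as sampling each gene only within its precomputed feasible interval, so offspring are feasible by construction; the bounds and mutation rate below are hypothetical:

```python
import random

def bounded_mutation(solution, bounds, rate=0.5, seed=42):
    """Mutate each variable only within its precomputed feasible bounds,
    so the offspring is feasible by construction."""
    rng = random.Random(seed)
    return [rng.uniform(lo, hi) if rng.random() < rate else x
            for x, (lo, hi) in zip(solution, bounds)]

# Bounds estimated beforehand from the constraints, e.g. 0 <= x1 <= 3, 1 <= x2 <= 4.
bounds = [(0.0, 3.0), (1.0, 4.0)]
child = bounded_mutation([1.5, 2.0], bounds)
assert all(lo <= x <= hi for x, (lo, hi) in zip(child, bounds))
```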

Journal Article
TL;DR: In this paper, the authors construct lower and upper bounding sequences that converge to the maximum absolute value of a given finite sequence of real numbers, prove asymptotic convergence theorems for the iterative algorithms, and illustrate applications to matrix analysis, measurement data processing, and Monte Carlo methods.
Abstract: For a given arbitrary sequence of real numbers {x_i}, i = 1, …, n, we construct several lower- and upper-bound converging sequences. Our goal is to localize the absolute value of the sequence maximum; we can also calculate the value of such numbers. Since the proposed algorithms are iterative, asymptotic convergence theorems are proved. The task may seem pointless at first glance, but we illustrate its importance for a set of applied problems: matrix analysis, measurement data processing, and Monte Carlo methods. According to the modern conception of fault-tolerant computation, also known as "interval analysis", these results can also be treated as a part of interval mathematics.
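One standard way to build converging two-sided bounds on the maximum absolute value, not necessarily the authors' construction, uses the p-norm inequalities ||x||_p / n^(1/p) <= max|x_i| <= ||x||_p, which tighten as p grows:

```python
def pnorm_bounds(xs, p):
    """Lower and upper bounds on max|x_i| from the p-norm inequalities."""
    n = len(xs)
    norm_p = sum(abs(x) ** p for x in xs) ** (1.0 / p)
    return norm_p / n ** (1.0 / p), norm_p

xs = [3.0, -7.0, 2.0]                # max|x_i| = 7
for p in (2, 4, 8, 16):
    lo, hi = pnorm_bounds(xs, p)
    assert lo <= 7.0 <= hi           # both bounds bracket the true maximum
lo2, hi2 = pnorm_bounds(xs, 2)
lo16, hi16 = pnorm_bounds(xs, 16)
assert hi16 - lo16 < hi2 - lo2       # the bracket shrinks as p grows
```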


Book ChapterDOI
02 Sep 2002
TL;DR: In this article, the problem of choosing these boolean bounding predicates is shown to be NP-complete and several heuristics for tuning the predicates on an index node are presented.
Abstract: Tree-based multidimensional indexes are integral to efficient querying in multimedia and GIS applications. These indexes frequently use shapes in internal tree nodes to describe the data stored in a subtree below. We show that the standard Minimum Bounding Rectangle descriptor can lead to significant inefficiency during tree traversal, due to false positives. We also observe that there is often space in internal nodes for richer, more accurate descriptors than rectangles. We propose exploiting this free space to form subtree predicates based on simple boolean combinations of standard descriptors such as rectangles. Since the problem of choosing these boolean bounding predicates is NP-complete, we implemented and tested several heuristics for tuning the bounding predicates on an index node, and several heuristics for deciding which nodes in the index to improve when available tuning time is limited. We present experiments over a variety of real and synthetic data sets, examining the performance benefit of the various tuning heuristics. Our experiments show that up to 50% of the unnecessary I/Os caused by imprecise subtree predicates can be eliminated using the boolean bounding predicates chosen by our algorithms.
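The false-positive reduction from an OR-of-rectangles predicate shows up already on two diagonal point clusters; a toy sketch, not the paper's tuning heuristics:

```python
def mbr(points):
    """Minimum bounding rectangle (x0, y0, x1, y1) of a point set."""
    xs, ys = zip(*points)
    return (min(xs), min(ys), max(xs), max(ys))

def contains(rect, pt):
    x0, y0, x1, y1 = rect
    return x0 <= pt[0] <= x1 and y0 <= pt[1] <= y1

# Two diagonal clusters: a single MBR also covers the two empty corners.
a, b = [(0, 0), (1, 1)], [(8, 8), (9, 9)]
single_mbr = mbr(a + b)
or_predicate = [mbr(a), mbr(b)]        # boolean OR of two rectangles

query = (1, 8)                          # point in an empty corner
assert contains(single_mbr, query)      # false positive under the single MBR
assert not any(contains(r, query) for r in or_predicate)  # pruned by the OR
```

A traversal guarded by the OR-predicate skips this subtree for the query point, which is exactly the unnecessary I/O the paper's boolean bounding predicates eliminate.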

01 Jan 2002
TL;DR: This thesis tests methods for applying Partial Order Bounding (POB), based on the theory of partially ordered sets and null window search, to the game Connect 4 and discusses the issues involved in partial order evaluation and partial order bounding in Connect 4.
Abstract: Game playing is of interest to artificial intelligence researchers because of the complexity involved in its exponential search. Standard exhaustive search algorithms available for graphs can be used for only the most trivial of games. The methods for dealing with this complexity are the use of heuristic evaluation functions and game tree pruning. Game tree pruning starts with the fundamental two-person game-playing algorithm called minimax and then applies pruning methods, such as Alpha-Beta pruning, to the resulting game tree. Researchers have devised new methods to enhance the minimax search algorithm. Two current approaches have emerged called Partial Order Bounding (POB), based on the theory of partially ordered sets and null window search, and game decomposition. POB compares partially ordered values in the leaves of a game tree, and backs up boolean values through the tree to the root. The advantage of POB compared to traditional weighted sum evaluation methods is that it avoids the problem of dealing with incomparable evaluations. So far, POB has only been applied, with some success, to the game of Go. In order to determine the general applicability of POB, this thesis tests methods for applying POB to the game Connect 4. The issues involved in partial order evaluation and partial order bounding in Connect 4 are discussed, and the performance of a Connect 4 game-playing program using POB is compared to one using traditional methods. Initial ideas on how to apply game decomposition to Connect 4 are also presented.
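The boolean backup at the heart of POB can be sketched on a toy game tree. Here the leaves are plain integers compared against a single bound, whereas POB proper works with partially ordered evaluations:

```python
def achieves_bound(node, bound, maximizing=True):
    """Back up the boolean 'leaf evaluation >= bound?' through the tree,
    instead of backing up numeric minimax values."""
    if isinstance(node, int):                      # leaf
        return node >= bound
    results = [achieves_bound(c, bound, not maximizing) for c in node]
    return any(results) if maximizing else all(results)

# Max node over two min nodes with numeric leaves.
tree = [[3, 5], [6, 9]]
assert achieves_bound(tree, 6)        # max can force an outcome >= 6
assert not achieves_bound(tree, 7)    # ... but not >= 7 (min answers with 6)
```

Backing up booleans sidesteps the need to compare every pair of leaf evaluations, which is what makes the method attractive when evaluations are only partially ordered.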

Journal ArticleDOI
TL;DR: A two-stage algorithm for finding non-dominated subsets of partially ordered sets is established and a connection is then made with dimension reduction in time-dependent dynamic programming via the notion of a bounding label, a function that bounds the state-transition cost functions.
Abstract: In this paper, a two-stage algorithm for finding non-dominated subsets of partially ordered sets is established. A connection is then made with dimension reduction in time-dependent dynamic programming via the notion of a bounding label, a function that bounds the state-transition cost functions. In this context, the computational burden is partitioned between a time-independent dynamic programming step carried out on the bounding label and a direct evaluation carried out on a subset of "real"-valued decisions. A computational application to time-dependent fuzzy dynamic programming is presented.
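The first stage, extracting the non-dominated subset, can be sketched with a naive pairwise filter (minimization of every objective assumed; not the paper's two-stage algorithm):

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Naive O(n^2) filter for the non-dominated subset of a partial order."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 4), (2, 2), (3, 3), (4, 1)]
# (3, 3) is dominated by (2, 2); the rest are mutually incomparable.
assert sorted(non_dominated(pts)) == [(1, 4), (2, 2), (4, 1)]
```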