
Showing papers on "Bounding overwatch published in 2016"


Journal ArticleDOI
TL;DR: A novel two-phase bounding and decomposition approach to compute optimal and near-optimal solutions to large-scale mixed-integer investment planning problems that must consider a large number of operating subproblems, each of which is a convex optimization problem.

81 citations


Journal ArticleDOI
TL;DR: In this article, the problem of finding a hyper-pyramid to bound the set of states that are reachable from the origin in Euclidean space is addressed, subject to inputs whose (1, 1)-norm or (∞, 1)-norm is bounded by a prescribed constant.

44 citations


Posted Content
TL;DR: A new class of dependence measures is introduced which retain key properties of mutual information while more effectively quantifying the exploration bias for heavy tailed distributions.
Abstract: We propose a framework to analyze and quantify the bias in adaptive data analysis. It generalizes that proposed by Russo and Zou'15, applying to measurements whose moment generating function exists, measurements with a finite $p$-norm, and measurements in general Orlicz spaces. We introduce a new class of dependence measures which retain key properties of mutual information while more effectively quantifying the exploration bias for heavy tailed distributions. We provide examples of cases where our bounds are nearly tight in situations where the original framework of Russo and Zou'15 does not apply.

28 citations


Journal ArticleDOI
TL;DR: Bounding techniques based on Taylor models with either interval or ellipsoidal bounds as their remainder terms, together with optimization-based domain reduction, can significantly reduce the computational burden of guaranteed parameter estimation, both in terms of iteration count and CPU time.
Abstract: This paper is concerned with guaranteed parameter estimation of nonlinear dynamic systems in a context of bounded measurement error. The problem consists of finding—or approximating as closely as possible—the set of all possible parameter values such that the predicted values of certain outputs match their corresponding measurements within prescribed error bounds. A set-inversion algorithm is applied, whereby the parameter set is successively partitioned into smaller boxes and exclusion tests are performed to eliminate some of these boxes, until a given threshold on the approximation level is met. Such exclusion tests rely on the ability to bound the solution set of the dynamic system for a finite parameter subset, and the tightness of these bounds is therefore paramount; equally important in practice is the time required to compute the bounds, thereby defining a trade-off. In this paper, we investigate such a tradeoff by comparing various bounding techniques based on Taylor models with either interval or ellipsoidal bounds as their remainder terms. We also investigate the use of optimization-based domain reduction techniques in order to enhance the convergence speed of the set-inversion algorithm, and we implement simple strategies that avoid recomputing Taylor models or reduce their expansion orders wherever possible. Case studies of various complexities are presented, which show that these improvements using Taylor-based bounding techniques can significantly reduce the computational burden, both in terms of iteration count and CPU time.

27 citations
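The exclusion test at the heart of the set-inversion algorithm above can be sketched in a few lines. The linear model, the bisection rule, and all thresholds below are illustrative assumptions, not the paper's Taylor-model machinery:

```python
def predict_range(box, t):
    # Interval evaluation of the toy model y(t) = a + b*t, with t >= 0 and
    # b >= 0 so both output bounds are monotone in the parameters.
    (a_lo, a_hi), (b_lo, b_hi) = box
    return a_lo + b_lo * t, a_hi + b_hi * t

def set_inversion(box, data, err, eps):
    """SIVIA-style paving: bisect parameter boxes and discard those whose
    predicted output interval misses some measurement band."""
    stack, keep = [box], []
    while stack:
        b = stack.pop()
        (a_lo, a_hi), (b_lo, b_hi) = b
        # exclusion test: the predicted interval must overlap every band
        excluded = any(predict_range(b, t)[1] < y - err or
                       predict_range(b, t)[0] > y + err
                       for t, y in data)
        if excluded:
            continue
        if max(a_hi - a_lo, b_hi - b_lo) < eps:
            keep.append(b)               # small enough: stop refining
        elif a_hi - a_lo >= b_hi - b_lo:
            m = 0.5 * (a_lo + a_hi)      # bisect the widest dimension
            stack += [((a_lo, m), (b_lo, b_hi)), ((m, a_hi), (b_lo, b_hi))]
        else:
            m = 0.5 * (b_lo + b_hi)
            stack += [((a_lo, a_hi), (b_lo, m)), ((a_lo, a_hi), (m, b_hi))]
    return keep
```

The boxes that survive form a paving guaranteed to cover the consistent parameter set; tighter bounding (such as the Taylor models compared in the paper) shrinks the undetermined region with fewer bisections.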


Patent
06 Jan 2016
TL;DR: In this article, a pose detector identifies coordinates in the image corresponding to locations on the person's body, such as the waist, head, hands, and feet; a convolutional neural network receives the portions of the input image defined by the bounding boxes and generates a feature vector for each image portion, and these vectors are input to one or more SVM classifiers, which generate an output representing the probability of a match with an item.
Abstract: For an input image of a person, a set of object proposals is generated in the form of bounding boxes. A pose detector identifies coordinates in the image corresponding to locations on the person's body, such as the waist, head, hands, and feet. A convolutional neural network receives the portions of the input image defined by the bounding boxes and generates a feature vector for each image portion. The feature vectors are input to one or more support vector machine classifiers, which generate an output representing a probability of a match with an item. The distance between the bounding box and a joint associated with the item is used to modify the probability. The modified probabilities from the support vector machines are then compared with a threshold and with each other to identify the item.

26 citations
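The distance-based modification of the classifier output can be illustrated as below. The patent does not specify a functional form, so the exponential decay and the `scale` parameter are purely hypothetical:

```python
import math

def modify_probability(p_match, box, joint, scale=100.0):
    """Down-weight an SVM match probability by the distance between the
    bounding-box centre and the body joint associated with the item.
    The exponential decay is an assumption for illustration only; the
    patent states merely that the distance modifies the probability."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    d = math.hypot(cx - joint[0], cy - joint[1])
    return p_match * math.exp(-d / scale)
```

A box centred on its associated joint keeps its raw probability, while boxes far from the joint are suppressed before thresholding.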


Book ChapterDOI
04 Apr 2016
TL;DR: This work shows that TFA, although known to be inferior to other methods for the overall delay analysis, can actually bring significant benefits in bounding the burstiness of cross-traffic, finally giving TFA's existence a purpose.
Abstract: Network calculus provides a mathematical framework for deterministically bounding backlog and delay in packet-switched networks. The analysis is compositional and proceeds in several steps. In the first step, a general feed-forward network is reduced to a tandem of servers lying on the path of the flow of interest. This requires deriving bounds on the cross-traffic for that flow. Tight bounds on cross-traffic are crucial for the overall analysis to obtain tight performance bounds. In this paper, we contribute an improvement on this first bounding step in a network calculus analysis. This improvement is based on the so-called total flow analysis (TFA), which so far saw little usage as it is known to be inferior to other methods for the overall delay analysis. Yet, in this work we show that TFA can actually bring significant benefits in bounding the burstiness of cross-traffic. We investigate analytically and numerically when these benefits occur and show that they can be considerable, with several flows' delays being improved by more than 40% compared to existing methods – thus finally giving TFA's existence a purpose.

25 citations


Journal ArticleDOI
TL;DR: Through sums-of-squares programming, formally verified estimates of the domain of attraction of stable fixed points are employed to realize stable speed transitions by switching among different bounding gaits in a sequential fashion.
Abstract: This paper studies quadrupedal bounding in the presence of flexible torso and compliant legs with non-trivial inertia, and it proposes a method for speed transitions by sequentially composing locally stable bounding motions corresponding to different speeds. First, periodic bounding motions are generated simply by positioning the legs during flight via suitable (virtual) holonomic constraints imposed on the leg angles; at this stage, no control effort is developed on the support legs, producing efficient, nearly passive, bounding gaits. The resulting motions are then stabilized by a hybrid control law which coordinates the movement of the torso and legs in continuous time, and updates the leg touchdown angles in an event-based fashion. Finally, through sums-of-squares programming, formally verified estimates of the domain of attraction of stable fixed points are employed to realize stable speed transitions by switching among different bounding gaits in a sequential fashion.

25 citations


Journal ArticleDOI
TL;DR: New delay-range-dependent conditions are derived ensuring that all state trajectories of the Markovian jump systems are mean square bounded; the conditions are expressed in terms of matrix inequalities which can be computationally solved to find the smallest possible mean-square bound.
Abstract: This paper concerns the problem of state bounding for a class of discrete-time Markovian jump systems with interval time-varying delay. By constructing a new Lyapunov–Krasovskii functional combined with the delay-decomposition technique and the reciprocally convex approach, new delay-range-dependent conditions are derived ensuring that all state trajectories of the system are mean square bounded. These conditions are derived in terms of matrix inequalities which can be computationally solved to find the smallest possible mean-square bound. A numerical example is provided to verify the effectiveness of the obtained result.

24 citations


Posted Content
TL;DR: This paper defines noninformative and informative bounding procedures in the sanitization of bounded data, depending on whether a bounding procedure itself leaks original information or not, and formalizes the differentially private truncated and boundary inflated truncated mechanisms that release bounded statistics.
Abstract: Protection of individual privacy is a common concern when releasing and sharing data and information. Differential privacy (DP) formalizes privacy in probabilistic terms without making assumptions about the background knowledge of data intruders, and thus provides a robust concept for privacy protection. Practical applications of DP involve development of differentially private mechanisms to generate sanitized results at a pre-specified privacy budget. In the sanitization of bounded statistics such as proportions and correlation coefficients, the bounding constraints will need to be incorporated in the differentially private mechanisms. There has been little work examining the consequences of incorporating bounding constraints on the accuracy of sanitized results from a differentially private mechanism. In this paper, we define noninformative and informative bounding procedures in the sanitization of bounded data, depending on whether a bounding procedure itself leaks original information or not. We formalize the differentially private truncated and boundary inflated truncated (BIT) mechanisms that release bounded statistics. The impacts of the noninformative truncated and BIT mechanisms on the statistical validity of sanitized statistics, including bias and consistency, in the framework of the Laplace mechanism are evaluated both theoretically and empirically via simulation studies.

11 citations
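A minimal sketch of the two release mechanisms for a bounded statistic such as a proportion. The parameter values and the inverse-CDF Laplace sampler are illustrative choices, not taken from the paper:

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of a Laplace(0, scale) variate
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def sanitize(value, sensitivity, epsilon, bounds, mechanism, rng):
    """Release a bounded statistic under epsilon-DP (sketch of the two
    mechanisms described above). 'truncated' resamples until the noisy
    value falls inside the bounds; 'bit' clamps out-of-range draws to
    the nearest boundary (boundary inflated truncation)."""
    lo, hi = bounds
    scale = sensitivity / epsilon
    if mechanism == "bit":
        return min(hi, max(lo, value + laplace_noise(scale, rng)))
    while True:  # truncated: reject and redraw until inside the bounds
        out = value + laplace_noise(scale, rng)
        if lo <= out <= hi:
            return out
```

Both variants satisfy the bounding constraint by construction; the paper's point is that they differ in the bias and consistency of the released statistics.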


Journal ArticleDOI
TL;DR: In this article, Wang et al. proposed a control method that drives a cheetah robot in a passive bounding gait; with the proposed control method, the robot can achieve stable bounding motion at different speeds.

10 citations


Proceedings ArticleDOI
16 May 2016
TL;DR: This work confirms that the complex gait behavior of the robot can be initiated by adopting the simple reduced-order model as the control target for behavioral guidance.
Abstract: We report on the development of a robot's dynamic bounding locomotion based on a sagittal plane model that fits the robot's natural dynamics. The proposed model, referred to as the Two-rolling-leg (TRL) model, is a low degree-of-freedom planar rigid-body model with two rolling and compliant legs. Numerical fixed point analyses of the model suggest that, with adequate touchdown conditions, passive dynamic and periodic bounding behaviors are achievable. The passive dynamic behaviors of the model then serve as guidance to initiate bounding on a quadruped. The experimental results show that although several un-modeled factors cause a behavioral discrepancy between the TRL model and the robot, the robot can still initiate its dynamic bounding behaviors by mapping the corresponding leg trajectories of the model onto the physical robot using a simple pre-set position control strategy without any state feedback. This work confirms that the complex gait behavior of the robot can be initiated by adopting the simple reduced-order model as the control target for behavioral guidance.

Posted Content
TL;DR: This paper defines noninformative and informative bounding procedures in sanitization of bounded data, depending on whether a bounding procedure itself leaks original information or not, and introduces differentially private truncated and boundary inflated truncated mechanisms with bounding constraints, and applies them in the framework of the Laplace mechanism.
Abstract: Protection of individual privacy is a common concern when releasing and sharing data and information. Differential privacy (DP) formalizes privacy in mathematical terms without making assumptions about the background knowledge of data intruders and thus provides a robust concept for privacy protection. Practical applications of DP involve development of differentially private mechanisms to generate sanitized results without compromising individual privacy at a pre-specified privacy budget. Differentially private mechanisms make the most sense for sanitizing bounded data in general from the data utility perspective. In this paper, we define noninformative and informative bounding procedures in sanitization of bounded data, depending on whether a bounding procedure itself leaks original information or not. We introduce differentially private truncated and boundary inflated truncated (BIT) mechanisms with bounding constraints, and apply them in the framework of the Laplace mechanism. The impacts of the two noninformative bounding procedures on the accuracy and statistical validity of sanitized results are evaluated both theoretically and empirically, in terms of bias and consistency relative to their original values and to the underlying true parameters when the statistics are estimators of some parameters.

Journal ArticleDOI
TL;DR: This work proposes an alternative method to interpolate bounding box annotations, based on cubic splines and the geometric properties of the elements involved, rather than image processing techniques, which can be integrated seamlessly with any annotation tool already developed.
Abstract: In video annotation, instead of annotating every frame of a trajectory, usually only a sparse set of annotations is provided by the user: typically its endpoints plus some key intermediate frames, with the remaining annotations interpolated between these key frames in order to reduce the cost of video labeling. A number of video annotation tools have been proposed, some of them freely available, but their bounding box interpolation is mainly based on image processing techniques whose performance is highly dependent on image quality, occlusions, etc. We propose an alternative method to interpolate bounding box annotations, based on cubic splines and the geometric properties of the elements involved, rather than image processing techniques. The algorithm proposed is compared with other bounding box interpolation methods described in the literature, using a set of selected videos modeling different types of object and camera motion. Experiments show that the accuracy of the interpolated bounding boxes is higher than that of the other evaluated methods, especially for rigid objects. The main contribution of this paper is the bounding box interpolation step, and we believe that our design can be integrated seamlessly with any annotation tool already developed.
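The geometric interpolation idea can be sketched per box coordinate. The Catmull-Rom tangent choice and the assumption of roughly uniform keyframe spacing are simplifications for illustration; the paper's exact spline construction is not reproduced here:

```python
def hermite(t, p0, p1, m0, m1):
    # cubic Hermite basis on [0, 1]
    t2, t3 = t * t, t * t * t
    return ((2*t3 - 3*t2 + 1) * p0 + (t3 - 2*t2 + t) * m0
            + (-2*t3 + 3*t2) * p1 + (t3 - t2) * m1)

def interpolate_box(keyframes, frame):
    """keyframes: sorted (frame_index, (x, y, w, h)) annotations, assumed
    roughly uniformly spaced for the tangent estimates below."""
    frames = [f for f, _ in keyframes]
    # locate the keyframe segment containing `frame`
    i = max(j for j in range(len(frames) - 1) if frames[j] <= frame)
    t = (frame - frames[i]) / (frames[i + 1] - frames[i])
    box = []
    for c in range(4):
        v = [b[c] for _, b in keyframes]
        # finite-difference (Catmull-Rom) tangents, one-sided at the ends
        m0 = (v[i + 1] - v[i - 1]) / 2 if i > 0 else v[i + 1] - v[i]
        m1 = (v[i + 2] - v[i]) / 2 if i + 2 < len(v) else v[i + 1] - v[i]
        box.append(hermite(t, v[i], v[i + 1], m0, m1))
    return tuple(box)
```

Because each coordinate is interpolated from the annotated geometry alone, the result is independent of image quality and occlusions, which is the paper's motivation for avoiding image processing techniques.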

Proceedings ArticleDOI
01 Nov 2016
TL;DR: This framework is built around an inference engine similar to the probability hypothesis density (PHD) filter, where the state space consists of stochastic bounding boxes with constant velocity dynamics.
Abstract: This paper is concerned with a system for detecting and tracking multiple 3D bounding boxes based on information from multiple sensors. Our framework is built around an inference engine similar to the probability hypothesis density (PHD) filter, where the state space consists of stochastic bounding boxes with constant velocity dynamics. We outline measurement equations for two modalities (vision and radar). The result is a flexible inference system suitable for use on autonomous vehicles.
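A single-track slice of the predict step implied by the constant-velocity bounding-box state can be sketched as below; the full PHD filter also propagates weights and full covariances, which this sketch reduces to one scalar variance for illustration:

```python
from dataclasses import dataclass

@dataclass
class BoxTrack:
    """Stochastic bounding-box state with constant-velocity dynamics
    (a minimal single-track sketch; field names are illustrative)."""
    cx: float
    cy: float
    w: float
    h: float
    vx: float
    vy: float
    var: float = 1.0  # scalar position variance stand-in

    def predict(self, dt, process_noise=0.1):
        # constant-velocity motion model between sensor updates
        self.cx += self.vx * dt
        self.cy += self.vy * dt
        self.var += process_noise * dt  # uncertainty grows over time
        return self
```

Vision and radar measurements would then update such states through their respective measurement equations, which this sketch omits.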

01 Jan 2016
TL;DR: This work provides a summary and some new results concerning bounds among some important probability metrics/distances that are used by statisticians and probabilists and shows that rates of convergence can strongly depend on the metric chosen.
Abstract: When studying convergence of measures, an important issue is the choice of probability metric. We provide a summary and some new results concerning bounds among some important probability metrics/distances that are used by statisticians and probabilists. Knowledge of such bounds provides a means of deriving a bound on one metric from a bound on another in an applied problem. Considering other metrics can also provide alternate insights. We also give examples showing that rates of convergence can depend strongly on the metric chosen. Careful consideration is necessary when choosing a metric.
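One of the classical bounds surveyed in such summaries, Le Cam's inequalities relating total variation and Hellinger distance, can be checked numerically; the discrete distributions in the usage below are arbitrary examples:

```python
import math

def total_variation(p, q):
    # TV(P, Q) = (1/2) * sum_i |p_i - q_i| for discrete distributions
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def hellinger(p, q):
    # H(P, Q)^2 = 1 - sum_i sqrt(p_i * q_i)  (Bhattacharyya form)
    return math.sqrt(1.0 - sum(math.sqrt(a * b) for a, b in zip(p, q)))

def check_bounds(p, q):
    """Le Cam's inequalities: H^2 <= TV <= H * sqrt(2 - H^2)."""
    tv, h = total_variation(p, q), hellinger(p, q)
    return h * h <= tv + 1e-12 and tv <= h * math.sqrt(2.0 - h * h) + 1e-12
```

Bounds of this kind are exactly what lets a convergence rate proved in one metric be transferred to another.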

Posted Content
TL;DR: In this article, the problem of computing the arithmetic sum over a specific directed acyclic network that is not a tree was considered, and it was shown that upper bounding the computation rate is quite nontrivial.
Abstract: For zero-error function computation over directed acyclic networks, existing upper and lower bounds on the computation capacity are known to be loose. In this work we consider the problem of computing the arithmetic sum over a specific directed acyclic network that is not a tree. We assume the sources to be i.i.d. Bernoulli with parameter 1/2. Even in this simple setting, we demonstrate that upper bounding the computation rate is quite nontrivial. In particular, it requires us to consider variable length network codes and relate the upper bound to equivalently lower bounding the entropy of descriptions observed by the terminal conditioned on the function value. This lower bound is obtained by further lower bounding the entropy of a so-called clumpy distribution. We also demonstrate an achievable scheme that uses variable length network codes and in-network compression.

Patent
Jinglun Gao1, Ying Chen1, Lei Wang1, Ning Bi1
04 Oct 2016
TL;DR: In this paper, a content-adaptive bounding box merge engine, which adapts its merging criteria to the objects typically present in a scene, was proposed to merge bounding boxes and their associated blobs in a video content analysis system.
Abstract: Provided are methods, apparatuses, and computer-readable medium for content-adaptive bounding box merging. A system using content-adaptive bounding box merging can adapt its merging criteria according to the objects typically present in a scene. When two bounding boxes overlap, the content-adaptive merge engine can consider the overlap ratio, and compare the merged bounding box against a minimum object size. The minimum object size can be adapted to the size of the blobs detected in the scene. When two bounding boxes do not overlap, the system can consider the horizontal and vertical distances between the bounding boxes. The system can further compare the distances against content-adaptive thresholds. Using a content-adaptive bounding box merge engine, a video content analysis system may be able to more accurately merge (or not merge) bounding boxes and their associated blobs.
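The merge decision logic can be sketched as follows; all threshold values are placeholders, since the patent adapts them to the blob sizes observed in the scene:

```python
def overlap_ratio(a, b):
    # boxes as (x0, y0, x1, y1); intersection area over the smaller box
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    area = min((a[2]-a[0]) * (a[3]-a[1]), (b[2]-b[0]) * (b[3]-b[1]))
    return (ix * iy) / area if area > 0 else 0.0

def maybe_merge(a, b, min_object_area, overlap_thresh=0.2, gap_thresh=10.0):
    """Merge two blob bounding boxes under content-adaptive criteria
    (threshold values here are illustrative stand-ins)."""
    merged = (min(a[0], b[0]), min(a[1], b[1]),
              max(a[2], b[2]), max(a[3], b[3]))
    if overlap_ratio(a, b) > 0.0:
        # overlapping boxes: check ratio and compare against minimum size
        area = (merged[2] - merged[0]) * (merged[3] - merged[1])
        ok = overlap_ratio(a, b) >= overlap_thresh and area >= min_object_area
    else:
        # disjoint boxes: compare horizontal/vertical gaps with thresholds
        hgap = max(0.0, max(a[0], b[0]) - min(a[2], b[2]))
        vgap = max(0.0, max(a[1], b[1]) - min(a[3], b[3]))
        ok = hgap <= gap_thresh and vgap <= gap_thresh
    return merged if ok else None
```

In the patented system, `min_object_area` and the gap thresholds would be driven by the sizes of blobs detected in the scene rather than fixed constants.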

Journal ArticleDOI
TL;DR: In this paper, a channel decoupling method is proposed to decompose wireless networks into decoupled multiple-access channels and broadcast channels, which can be extended easily to large networks with a complexity that grows linearly with the number of nodes.
Abstract: The framework of network equivalence theory developed by Koetter et al. introduces a notion of channel emulation to construct noiseless networks as upper (respectively, lower) bounding models, which can be used to calculate the outer (respectively, inner) bounds for the capacity region of the original noisy network. Based on the network equivalence framework, this paper presents scalable upper and lower bounding models for wireless networks with potentially many nodes. A channel decoupling method is proposed to decompose wireless networks into decoupled multiple-access channels and broadcast channels. The upper bounding model, consisting of only point-to-point bit pipes, is constructed by first extending the one-shot upper bounding models developed by Calmon et al. and then integrating them with network equivalence tools. The lower bounding model, consisting of both point-to-point and point-to-points bit pipes, is constructed based on a two-step update of the lower bounding models to incorporate the broadcast nature of wireless transmission. The main advantages of the proposed methods are their simplicity and the fact that they can be extended easily to large networks with a complexity that grows linearly with the number of nodes. It is demonstrated that the resulting upper and lower bounds can approach the capacity in some setups.

Journal ArticleDOI
23 Jun 2016
TL;DR: An effective lower bounding procedure based on solving a new integer programming model of the shift minimization personnel task scheduling problem outperforms those existing in the literature and consistently and rapidly yields high quality lower bounds that are necessary for the decision makers to assess the quality of the obtained schedules.
Abstract: This study considers the shift minimization personnel task scheduling problem, which is to assign a set of tasks with fixed start and finish times to a minimum number of workers from a heterogeneous workforce. An effective lower bounding procedure based on solving a new integer programming model of the problem is proposed. An extensive computational study on benchmark data sets reveals that the proposed procedure outperforms those existing in the literature and consistently and rapidly yields high-quality lower bounds, which the decision makers need in order to assess the quality of the obtained schedules.

01 Jan 2016
TL;DR: Velocities using Oriented Bounding Boxes, by David Eberly (Geometric Tools, Redmond WA 98052, https://www.geometrictools.com/).
Abstract: Velocities using Oriented Bounding Boxes. David Eberly, Geometric Tools, Redmond WA 98052, https://www.geometrictools.com/. This work is licensed under the Creative Commons Attribution 4.0 International License. Created: March 2, 1999. Last Modified: March 2, 2008.

Book ChapterDOI
29 May 2016
TL;DR: An approach to bounding the search space of the values vector and an algorithm for performing an exhaustive sampling of global products patterns using such bounds are proposed.
Abstract: In this paper we deal with a variant of the Multiple Stock Size Cutting Stock Problem (MSSCSP) arising from population harvesting, in which some sets of large pieces of raw material (of different shapes) must be cut following certain patterns to meet customer demands of certain product types. The main extra difficulty of this variant of the MSSCSP lies in the fact that the available patterns are not known a priori. Instead, a given complex algorithm maps a vector of continuous variables called a values vector into a vector of total amounts of products, which we call a global products pattern. Modeling and solving this MSSCSP is not straightforward since the number of value vectors is infinite and the mapping algorithm consumes a significant amount of time, which precludes complete pattern enumeration. For this reason a representative sample of global products patterns must be selected. We propose an approach to bounding the search space of the values vector and an algorithm for performing an exhaustive sampling using such bounds. Our approach has been evaluated with real data provided by an industry partner.

Proceedings ArticleDOI
10 Jul 2016
TL;DR: This work considers the problem of computing the arithmetic sum over a specific directed acyclic network that is not a tree and demonstrates an achievable scheme that uses variable length network codes and in-network compression.
Abstract: For zero-error function computation over directed acyclic networks, existing upper and lower bounds on the computation capacity are known to be loose. In this work we consider the problem of computing the arithmetic sum over a specific directed acyclic network that is not a tree. We assume the sources to be i.i.d. Bernoulli with parameter 1/2. Even in this simple setting, we demonstrate that upper bounding the computation rate is quite nontrivial. In particular, it requires us to consider variable length network codes and relate the upper bound to equivalently lower bounding the entropy of descriptions observed by the terminal conditioned on the function value. This lower bound is obtained by further lower bounding the entropy of a so-called clumpy distribution. We also demonstrate an achievable scheme that uses variable length network codes and in-network compression.


Journal ArticleDOI
TL;DR: In this article, the authors introduce new possibilities for bounding the stability constants that play a vital role in the reduced basis method, and show that by bounding the stability constants over a neighborhood it is possible to guarantee stability at more than a finite number of points, and to do so in the offline stage.

Proceedings ArticleDOI
11 Dec 2016
TL;DR: This method quantizes a model's output variables while allowing its internal variables to evolve by any suitable technique, and bounds the global error in proportion to the quantization threshold for simulations of networks of stable, linear systems.
Abstract: This article presents a method for bounding errors that arise from interactions between components in a variety of simulation contexts. The proposed method combines key elements of the quantized state technique for numerical integration and the generalized discrete event system specification. Specifically, this method quantizes a model's output variables while allowing its internal variables to evolve by any suitable technique. This approach bounds the global error in proportion to the quantization threshold for simulations of networks of stable, linear systems. The proposed technique is particularly suitable for combining existing simulation models into federated, multi-rate simulations.
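Output quantization of this kind can be sketched with a simple event filter: the component's internal state evolves however it likes, but neighbors only see a new value when it drifts by a full quantum. The threshold test below is an illustrative stand-in for the article's quantized-state machinery:

```python
class QuantizedOutput:
    """Emit an output event only when the internal value has moved at
    least one quantum away from the last reported value (sketch of
    output quantization; the internal integrator can be anything)."""
    def __init__(self, quantum):
        self.quantum = quantum
        self.last = None

    def update(self, value):
        # return the value when a new output event is due, else None
        if self.last is None or abs(value - self.last) >= self.quantum:
            self.last = value
            return value
        return None
```

Because inter-component communication only happens at quantum crossings, the error introduced at each coupling point is bounded by the quantization threshold, which is what lets the global error scale with it.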

Patent
03 Aug 2016
TL;DR: In this paper, a bounding table dynamic stress test analysis method based on finite element modeling of the net surface is proposed, comprising a static load experiment and a dynamic load experiment.
Abstract: The invention relates to a bounding table dynamic stress test analysis method based on net surface finite element modeling. The method comprises the following steps: performing a static load experiment; performing a dynamic load experiment; building a finite element model of the net surface of the bounding table; determining the load limit through simulation with the finite element model of the net surface; and obtaining the recommended jumping half-period for the jumping action of a bounding table athlete by screening with the finite element model of the net surface. The advantages of the method are that the analysis process is simpler and more convenient, the recommended jumping half-period of the jumping action can be obtained quickly, and the training of bounding table athletes can be better guided.

Proceedings ArticleDOI
01 Nov 2016
TL;DR: The algorithm proposed is an auto-adaptive version of TEC that plays with both extensions according to the outcomes of the upper and lower bounding phases, and shows a significant speed-up on several state-of-the-art instances.
Abstract: This paper presents a new interval-based operator for continuous constrained global optimization. It is built upon a new filtering operator, named TEC, which constructs a bounded subtree using a Branch and Contract process and returns the parallel-to-axes hull of the leaf domains/boxes. Two extensions of TEC use the information contained in the leaf boxes of the TEC subtree to improve the two bounds of the objective function value: (i) for the lower bounding, a polyhedral hull of the (projected) leaf boxes replaces the parallel-to-axes hull, (ii) for the upper bounding, a good feasible point is searched for inside a leaf box of the TEC subtree, following a look-ahead principle. The algorithm proposed is an auto-adaptive version of TEC that plays with both extensions according to the outcomes of the upper and lower bounding phases. Experimental results show a significant speed-up on several state-of-the-art instances.
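A toy stand-in for the lower-bounding phase can be written with plain interval arithmetic: bisect a box and take the minimum of interval evaluations of the objective. The depth, the example objective f(x) = x² − 2x on [0, 3] (true minimum −1 at x = 1), and the absence of any contraction step are simplifying assumptions, far from the TEC operator itself:

```python
def interval_mul(a, b):
    # enclosure of the product of two intervals (lo, hi)
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return min(products), max(products)

def f_interval(x):
    # natural interval extension of f(x) = x*x - 2*x
    sq = interval_mul(x, x)
    return sq[0] - 2 * x[1], sq[1] - 2 * x[0]

def lower_bound(f_iv, box, depth=12):
    """Bisect `box` repeatedly and return a certified lower bound of f
    over it (a toy version of the branch-based lower bounding phase)."""
    boxes = [box]
    for _ in range(depth):
        boxes = [half for lo, hi in boxes
                 for m in [0.5 * (lo + hi)]
                 for half in [(lo, m), (m, hi)]]
    return min(f_iv(b)[0] for b in boxes)
```

The bound is guaranteed to lie below the true minimum; finer bisection (or the polyhedral hull of leaf boxes used by TEC) tightens it.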

Proceedings ArticleDOI
19 Sep 2016
TL;DR: This work evaluates the performance of a cloud system using a hysteresis queueing system with phase-type and batch arrivals, and proposes to use stochastic bounds and the performance measures and compare the proposed bounding models with the exact one.
Abstract: We evaluate the performance of a cloud system using a hysteresis queueing system with phase-type and batch arrivals. To represent the dynamic allocation of resources, the hysteresis queue activates and deactivates the virtual machines according to threshold values of the queue length. We suppose a variable traffic intensity, as the client requests (or jobs) arrive in batches and follow a phase-type process. This system is represented by a complex Markov chain which is difficult to analyze, especially when the size of the state space increases and the batch arrival distribution is long. To address this problem, we propose to use stochastic bounds and define less complex bounding systems. We give some results for the performance measures and compare the proposed bounding models with the exact one. The relevance of our methodology is that it offers a trade-off between computational complexity and accuracy of the results, providing very interesting solutions for network dimensioning.
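The hysteresis activation rule can be sketched as a threshold controller on the queue length; the threshold vectors below are illustrative, and the paper bounds the resulting Markov chain analytically rather than simulating it:

```python
def hysteresis_controller(queue_lengths, up_thresholds, down_thresholds,
                          vms=1):
    """Track the number of active VMs as the queue length evolves.
    up_thresholds[k] / down_thresholds[k] gate the (k+1)-th extra VM;
    keeping down_thresholds[k] < up_thresholds[k] prevents the rapid
    on/off oscillation that a single threshold would cause."""
    history = []
    for q in queue_lengths:
        # activate the next VM when the queue crosses its up-threshold
        while vms < len(up_thresholds) + 1 and q >= up_thresholds[vms - 1]:
            vms += 1
        # deactivate when the queue falls to the matching down-threshold
        while vms > 1 and q <= down_thresholds[vms - 2]:
            vms -= 1
        history.append(vms)
    return history
```

Between a down-threshold and the corresponding up-threshold the VM count stays put, which is exactly the hysteresis effect the queueing model captures.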

Proceedings ArticleDOI
06 Jul 2016
TL;DR: By exploiting time-reversal symmetries possessed by the underlying vector fields, the corresponding Poincaré map can be factored in a way that allows the analytical formulation of conditions that are necessary for stability and sufficient for instability for bounding gaits in a sagittal-plane model of quadrupedal bounding.
Abstract: Reduced-order models with springy massless legs have been employed extensively in the study of legged locomotion. Due to their hybrid nonlinear dynamics, analysis of such systems is often carried out numerically. In contrast, this paper adopts an analytical approach to study conditions for stability in a sagittal-plane model of quadrupedal bounding. Exploiting time-reversal symmetries possessed by the underlying vector fields, the corresponding Poincaré map can be factored in a way that allows the analytical formulation of conditions that are necessary for stability (and sufficient for instability) for bounding gaits. The method is then applied to facilitate the design of a leg recirculation controller for bounding gaits.

Book ChapterDOI
01 Jan 2016
TL;DR: The basic theorems for cost minimization and for DPs with an absorbing set of states and the basic theorem using reachable states are presented and proved.
Abstract: We present the basic theorems for cost minimization and for DPs with an absorbing set of states. We also prove the basic theorem using reachable states. The important notion of a bounding function is introduced.