Topic

Bounding overwatch

About: Bounding overwatch is a research topic. Over the lifetime, 966 publications have been published within this topic, receiving 15,156 citations.


Papers
25 Jan 2020
TL;DR: A novel weakly supervised segmentation method based on several global constraints derived from box annotations is proposed, bringing a classical tightness prior into a deep learning setting by imposing a set of constraints on the network outputs.
Abstract: We propose a novel weakly supervised segmentation method based on several global constraints derived from box annotations. In particular, we bring a classical tightness prior into a deep learning setting by imposing a set of constraints on the network outputs. Such a powerful topological prior prevents solutions from excessive shrinking by enforcing that any horizontal or vertical line within the bounding box contains at least one pixel of the foreground region. Furthermore, we integrate our deep tightness prior with a global background emptiness constraint, guiding training with information outside the bounding box. We demonstrate experimentally that such a global constraint is much more powerful than standard cross-entropy for the background class. Our optimization problem is challenging, as it takes the form of a large set of inequality constraints on the outputs of deep networks. We solve it with a sequence of unconstrained losses based on a recent powerful extension of the log-barrier method, which is well known in the context of interior-point methods. This accommodates standard stochastic gradient descent (SGD) for training deep networks, while avoiding computationally expensive and unstable Lagrangian dual steps and projections. Extensive experiments on two different public data sets and applications (prostate and brain lesions) demonstrate that the synergy between our global tightness and emptiness priors yields very competitive performance, approaching full supervision and significantly outperforming DeepCut. Furthermore, our approach removes the need for computationally expensive proposal generation. Our code is shared anonymously.
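The constraints described in this abstract are easy to state in code. Below is a minimal, illustrative sketch (not the authors' released implementation) of how the box tightness prior and the background emptiness constraint could be turned into differentiable penalties with an extended log-barrier, assuming a PyTorch model that outputs per-pixel foreground probabilities; the helper names, barrier parameter, and tolerance value are assumptions made for illustration.

```python
# Illustrative sketch: box tightness prior + background emptiness constraint
# enforced through an extended log-barrier penalty, so plain SGD can be used.
import torch

def log_barrier(z, t=5.0):
    """Extended log-barrier for the constraint z <= 0: the smooth -log(-z)/t
    inside the feasible region, a linear extension near/beyond the boundary,
    so the penalty stays finite and gradient-friendly."""
    thresh = -1.0 / (t * t)
    inside = z <= thresh
    return torch.where(
        inside,
        -torch.log(-z.clamp(max=thresh)) / t,
        t * z - torch.log(torch.tensor(1.0 / (t * t))) / t + 1.0 / t,
    )

def tightness_and_emptiness_loss(fg_prob, box, t=5.0):
    """fg_prob: (H, W) foreground probabilities; box: (y0, y1, x0, x1).
    Tightness: every row/column crossing the box contains >= 1 foreground pixel.
    Emptiness: (soft) no foreground mass outside the box."""
    y0, y1, x0, x1 = box
    inside = fg_prob[y0:y1, x0:x1]

    # Constraints 1 - row_sum <= 0 and 1 - col_sum <= 0 for lines inside the box.
    row_sums = inside.sum(dim=1)
    col_sums = inside.sum(dim=0)
    tight = log_barrier(1.0 - row_sums, t).sum() + log_barrier(1.0 - col_sums, t).sum()

    # Background emptiness: total foreground mass outside the box should be ~0.
    mask_out = torch.ones_like(fg_prob)
    mask_out[y0:y1, x0:x1] = 0.0
    outside_mass = (fg_prob * mask_out).sum()
    empty = log_barrier(outside_mass - 1e-3, t)  # small tolerance, illustrative choice

    return tight + empty
```

In a training loop, this penalty would be added to the loss for each annotated box, and the barrier parameter t would typically be increased over epochs so the soft penalties approach hard constraints.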

56 citations

Proceedings ArticleDOI
01 Jan 1993
TL;DR: Better techniques to lower-bound the size of the minimum subgraphs are provided, yielding improved approximation factors for the two problems over existing algorithms.
Abstract: We consider the problems of finding minimum P-edge-connected and P-vertex-connected subgraphs in a given graph. These problems are NP-hard. We provide better techniques to lower-bound the size of the minimum subgraphs, which allows us to achieve improved approximation factors for the two problems, respectively, thereby improving on existing algorithms.
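As an illustration of the kind of counting argument such lower bounds build on (writing k for the connectivity parameter the abstract denotes by P), the sketch below uses only the classical fact that a k-edge-connected graph has minimum degree at least k and hence at least ⌈kn/2⌉ edges; the paper's improved bounding techniques are not reproduced here.

```python
# Classical counting lower bound for minimum k-edge-connected spanning subgraphs:
# every vertex must have degree >= k, so at least ceil(k*n/2) edges are needed.
from math import ceil

def degree_lower_bound(n_vertices: int, k: int) -> int:
    """Minimum number of edges any k-edge-connected spanning subgraph must have."""
    return ceil(k * n_vertices / 2)

def certified_ratio(subgraph_edges: int, n_vertices: int, k: int) -> float:
    """Size of a candidate subgraph divided by the lower bound: an upper bound
    on how far the candidate can be from optimal."""
    return subgraph_edges / degree_lower_bound(n_vertices, k)

# Example: a 2-edge-connected spanning subgraph with 130 edges on 100 vertices
# is at most 130/100 = 1.3 times larger than the optimum.
print(certified_ratio(130, 100, 2))  # 1.3
```

Sharper lower bounds of the kind the paper develops tighten the denominator, which is exactly what improves the provable approximation factor.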

55 citations

Proceedings ArticleDOI
Yoav Freund
24 Jul 1998
TL;DR: A self-bounding learning algorithm is an algorithm which, in addition to the hypothesis that it outputs, outputs a reliable upper bound on the generalization error of this hypothesis.
Abstract: Most of the work which attempts to give bounds on the generalization error of the hypothesis generated by a learning algorithm is based on methods from the theory of uniform convergence. These bounds are a priori bounds that hold for any distribution of examples and are calculated before any data is observed. In this paper we propose a different approach for bounding the generalization error after the data has been observed. A self-bounding learning algorithm is an algorithm which, in addition to the hypothesis that it outputs, outputs a reliable upper bound on the generalization error of this hypothesis. We first explore the idea in the statistical query learning framework of Kearns [10]. After that, we give an explicit self-bounding algorithm for learning algorithms that are based on local search.
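To make the interface concrete, the following sketch shows a learner that returns a hypothesis together with a data-dependent upper bound on its error. The bound here is a simple holdout Hoeffding bound chosen for illustration; it is not the paper's construction, which bounds the error of the hypothesis produced by the learning algorithm itself (e.g., via local search). All function names are hypothetical.

```python
# Illustrative self-bounding interface: return (hypothesis, error bound),
# where the bound is computed from held-out data via Hoeffding's inequality.
import math
import random

def train_majority_classifier(train):
    """Toy learner: always predict the majority label of the training set."""
    labels = [y for _, y in train]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority

def self_bounding_learn(data, delta=0.05, holdout_frac=0.2):
    """data: list of (x, y) pairs. Returns (hypothesis, bound) such that, with
    probability >= 1 - delta over the holdout sample, the true error of the
    hypothesis does not exceed the bound."""
    random.shuffle(data)
    split = int(len(data) * (1 - holdout_frac))
    train, holdout = data[:split], data[split:]

    h = train_majority_classifier(train)

    # Empirical holdout error plus a Hoeffding confidence term.
    m = len(holdout)
    emp_err = sum(h(x) != y for x, y in holdout) / m
    bound = emp_err + math.sqrt(math.log(1 / delta) / (2 * m))
    return h, bound
```

The point of the paper is that such bounds can be obtained without sacrificing data to a holdout set, by exploiting the structure of the learning algorithm; the holdout version above only illustrates the input/output contract.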

55 citations

Patent
19 Dec 1996
TL;DR: A system and method for assigning the vertices of an envelope to one or more elements of a skeleton in an animation model is presented, in which bounding volumes are defined for skeleton elements and positioned relative to them so as to encompass one or more envelope vertices.
Abstract: A system and method for assigning the vertices of an envelope to one or more elements of a skeleton in an animation model. Bounding volumes are defined for skeleton elements; they define effective volumes that are positioned relative to the skeleton elements to encompass one or more vertices of envelopes. The bounding volume geometry may be defined as desired, and a desired assignment operation type is selected for each bounding volume. When automated assignment is performed, vertices within the bounding volumes are assigned in accordance with the selected assignment operation for each bounding volume and with a selected maximum number of elements to which each vertex may be assigned. Bounding volumes may be overlapped to provide versatile automated assignment, and the bounding volumes and corresponding assignment operations are stored with the skeleton and are therefore independent of the envelopes employed. Thus, the bounding volumes and assignment operations can be used both for the assignment of pre-production-quality, low-resolution envelopes and for the subsequent assignment of production-quality, high-resolution envelopes, as desired.
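A rough sketch of the assignment step described in this abstract (not the patented implementation) might look as follows, assuming spherical bounding volumes and a per-vertex cap on the number of assigned skeleton elements; the class and function names are illustrative.

```python
# Illustrative sketch: assign envelope vertices to skeleton elements based on
# which per-element bounding volumes (spheres here, by assumption) contain them,
# capped at a maximum number of elements per vertex.
from dataclasses import dataclass

@dataclass
class BoundingSphere:
    element: str      # skeleton element (bone) this volume belongs to
    center: tuple     # (x, y, z) position relative to the skeleton
    radius: float

def assign_vertices(vertices, volumes, max_elements_per_vertex=2):
    """vertices: list of (x, y, z). Returns {vertex index: [element names]}."""
    assignments = {}
    for i, (vx, vy, vz) in enumerate(vertices):
        hits = []
        for vol in volumes:
            cx, cy, cz = vol.center
            d2 = (vx - cx) ** 2 + (vy - cy) ** 2 + (vz - cz) ** 2
            if d2 <= vol.radius ** 2:
                # Closer volumes win when the per-vertex cap is exceeded.
                hits.append((d2, vol.element))
        hits.sort()
        assignments[i] = [name for _, name in hits[:max_elements_per_vertex]]
    return assignments
```

Overlapping volumes naturally produce multi-element assignments, and because the volumes live with the skeleton rather than the envelope, the same setup can be reused for low-resolution and high-resolution envelopes, as the abstract notes.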

55 citations

Journal ArticleDOI
TL;DR: It is proved that all signals in the closed-loop system are semiglobally uniformly bounded and control errors converge to an adjustable neighborhood of the origin.

55 citations


Network Information
Related Topics (5)
Robustness (computer science): 94.7K papers, 1.6M citations, 85% related
Optimization problem: 96.4K papers, 2.1M citations, 85% related
Matrix (mathematics): 105.5K papers, 1.9M citations, 82% related
Nonlinear system: 208.1K papers, 4M citations, 81% related
Artificial neural network: 207K papers, 4.5M citations, 80% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    714
2022    1,629
2021    155
2020    75
2019    73
2018    50