SciSpace (formerly Typeset)
Topic

Bounding overwatch

About: Bounding overwatch is a research topic. Over its lifetime, 966 publications have been published within this topic, receiving 15,156 citations.


Papers
Report
TL;DR: In this paper, a bounding argument was proposed to replace the coefficient movement heuristic, which is informative only if selection on observables is proportional to selection on unobservables.
Abstract: A common heuristic for evaluating robustness of results to omitted variable bias is to look at coefficient movements after inclusion of controls. This heuristic is informative only if selection on observables is proportional to selection on unobservables. I formalize this link, drawing on theory in Altonji, Elder and Taber (2005) and show how, with this assumption, coefficient movements, along with movements in R-squared values, can be used to calculate omitted variable bias. I discuss empirical implementation and describe a formal bounding argument to replace the coefficient movement heuristic. I show two validation exercises suggesting that this bounding argument would perform well empirically. I discuss application of this procedure to a large set of publications in economics, and use evidence from randomized studies to draw guidelines as to appropriate bounding values.

39 citations
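The proportional-selection adjustment described in the abstract can be sketched as follows. This is an illustrative approximation of the coefficient-movement calculation, not the paper's exact estimator; the function name and the closed form used here (controlled coefficient minus delta times the coefficient movement, scaled by the R-squared gap) are assumptions for exposition.

```python
def bias_adjusted_beta(beta_uncontrolled, r2_uncontrolled,
                       beta_controlled, r2_controlled,
                       r2_max, delta=1.0):
    """Approximate bias-adjusted coefficient from coefficient and
    R-squared movements after adding controls.

    delta:  assumed ratio of selection on unobservables to selection
            on observables (delta=1 means equal selection).
    r2_max: hypothetical R-squared if all relevant controls,
            observed and unobserved, were included.
    """
    movement = beta_uncontrolled - beta_controlled
    scale = (r2_max - r2_controlled) / (r2_controlled - r2_uncontrolled)
    return beta_controlled - delta * movement * scale
```

If the adjusted coefficient stays away from zero as `r2_max` and `delta` are pushed to conservative values, the result is judged robust to omitted variable bias under the proportionality assumption.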

Journal Article
TL;DR: The authors suggest that multiple bounding models, and the most conservative bounding of species distribution models such as those presented here, should replace the unbounded or loosely bounded techniques currently in use.
Abstract: Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].

39 citations
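The core operation described in the abstract, bounding ("clamping") model extrapolations to the range of the primary environmental predictors seen during model development, can be sketched in a few lines. This is a generic illustration, not the paper's CART or Maxent pipeline; the function name and array layout (rows are locations, columns are predictors) are assumptions.

```python
import numpy as np

def clamp_predictors(X_new, X_train):
    """Bound each environmental predictor in X_new to the minimum and
    maximum values observed in the training data, so a fitted species
    distribution model is only evaluated within the environmental
    bounds it was calibrated on."""
    lo = X_train.min(axis=0)   # per-predictor training minimum
    hi = X_train.max(axis=0)   # per-predictor training maximum
    return np.clip(X_new, lo, hi)
```

Applying the fitted model to `clamp_predictors(X_new, X_train)` rather than raw `X_new` yields the conservatively bounded habitat map the abstract argues for.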

Journal Article
TL;DR: A new way to calculate interval analysis support functions for multiextremal univariate functions is presented. The support functions are built from the same kind of information already used in interval analysis global optimization algorithms, and they enable the development of more powerful bounding, selection, and rejection criteria.
Abstract: The performance of interval analysis branch-and-bound global optimization algorithms strongly depends on the efficiency of selection, bounding, elimination, division, and termination rules used in their implementation. All the information obtained during the search process has to be taken into account in order to increase algorithm efficiency, mainly when this information can be obtained and elaborated without additional cost (in comparison with traditional approaches). In this paper a new way to calculate interval analysis support functions for multiextremal univariate functions is presented. The new support functions are based on obtaining the same kind of information used in interval analysis global optimization algorithms. The new support functions enable us to develop more powerful bounding, selection, and rejection criteria and, as a consequence, to significantly accelerate the search. Numerical comparisons made on a wide set of multiextremal test functions have shown that on average the new algorithm works almost two times faster than a traditional interval analysis global optimization method.

38 citations
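To make the bounding/selection/rejection machinery concrete, here is a minimal branch-and-bound sketch for univariate global minimization. It uses a simple Lipschitz support function (midpoint value minus `L` times the half-width) as the lower bound on each subinterval; this is a stand-in for the interval-arithmetic bounds the paper actually studies, and the function name, tolerance, and iteration cap are assumptions.

```python
import heapq

def lipschitz_bnb_min(f, a, b, L, tol=1e-6, max_iter=10000):
    """Branch-and-bound minimization of f on [a, b].

    Each subinterval gets the lower bound f(mid) - L * width / 2,
    valid when L bounds |f'|. Intervals whose bound cannot beat the
    incumbent are rejected; the rest are bisected (selection by best
    bound via a min-heap)."""
    best = min(f(a), f(b))                       # incumbent upper bound
    m = 0.5 * (a + b)
    heap = [(f(m) - L * (b - a) / 2, a, b)]      # (lower_bound, lo, hi)
    for _ in range(max_iter):
        if not heap:
            break
        lb, lo, hi = heapq.heappop(heap)
        if lb >= best - tol:                     # rejection criterion
            continue
        mid = 0.5 * (lo + hi)
        best = min(best, f(mid))                 # improve incumbent
        for l, h in ((lo, mid), (mid, hi)):      # division (bisection)
            c = 0.5 * (l + h)
            bound = f(c) - L * (h - l) / 2
            if bound < best - tol:
                heapq.heappush(heap, (bound, l, h))
    return best
```

Tighter support functions, the paper's contribution, shrink these lower bounds, so more intervals are rejected earlier and the search accelerates.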

01 Jan 2011
TL;DR: In this article, the authors derive Bernstein-like concentration inequalities for self-bounding functions using the Herbst argument, which involves comparison results between solutions of differential inequalities that may be interesting in their own right.
Abstract: We prove some new concentration inequalities for self-bounding functions using the entropy method. As an application, we recover Talagrand's convex distance inequality. The new Bernstein-like inequalities for self-bounding functions are derived thanks to a careful analysis of the so-called Herbst argument. The latter involves comparison results between solutions of differential inequalities that may be interesting in their own right.

38 citations
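For context, Bernstein-like bounds for a self-bounding function typically take the following form (a standard statement in the Boucheron–Lugosi–Massart line of work; the exact constants in this paper's theorems may differ):

```latex
% Z = f(X_1,\dots,X_n) is self-bounding:
% 0 \le Z - Z_i \le 1 and \sum_{i=1}^n (Z - Z_i) \le Z,
% where Z_i drops the i-th coordinate. Then for all t > 0:
\mathbb{P}\{\, Z \ge \mathbb{E}Z + t \,\} \le
  \exp\!\left(-\frac{t^2}{2\,\mathbb{E}Z + 2t/3}\right),
\qquad
\mathbb{P}\{\, Z \le \mathbb{E}Z - t \,\} \le
  \exp\!\left(-\frac{t^2}{2\,\mathbb{E}Z}\right).
```

The variance proxy $\mathbb{E}Z$ in place of $n$ is what makes these inequalities Bernstein-like rather than Hoeffding-like.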

Proceedings Article
24 May 2019
TL;DR: It is shown that in general there is a “sweet spot” that depends on measurable properties of the dataset, but that there is also a concrete cost to privacy that cannot be avoided simply by collecting more data.
Abstract: Differentially private learning algorithms protect individual participants in the training dataset by guaranteeing that their presence does not significantly change the resulting model. In order to make this promise, such algorithms need to know the maximum contribution that can be made by a single user: the more data an individual can contribute, the more noise will need to be added to protect them. While most existing analyses assume that the maximum contribution is known and fixed in advance—indeed, it is often assumed that each user contributes only a single example— we argue that in practice there is a meaningful choice to be made. On the one hand, if we allow users to contribute large amounts of data, we may end up adding excessive noise to protect a few outliers, even when the majority contribute only modestly. On the other hand, limiting users to small contributions keeps noise levels low at the cost of potentially discarding significant amounts of excess data, thus introducing bias. Here, we characterize this trade-off for an empirical risk minimization setting, showing that in general there is a “sweet spot” that depends on measurable properties of the dataset, but that there is also a concrete cost to privacy that cannot be avoided simply by collecting more data.

37 citations
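The contribution-bounding mechanism the abstract discusses can be sketched as follows. This is a generic epsilon-DP sum with per-user truncation, not the paper's ERM analysis; the function name, the assumption that values lie in [0, 1], and the Laplace mechanism are illustrative choices.

```python
import random

def bounded_contribution_sum(user_data, k, epsilon, seed=None):
    """Differentially private sum with per-user contribution bounding.

    Each user's examples (values assumed in [0, 1]) are truncated to at
    most k, so removing one user changes the sum by at most k. Laplace
    noise with scale k / epsilon then gives epsilon-DP. Larger k keeps
    more data but forces more noise; smaller k keeps noise low but
    discards data: the trade-off characterized in the paper."""
    rng = random.Random(seed)
    total = sum(sum(vals[:k]) for vals in user_data)
    # Laplace(b) sampled as the difference of two Exp(1/b) draws
    b = k / epsilon
    noise = b * (rng.expovariate(1.0) - rng.expovariate(1.0))
    return total + noise
```

Sweeping `k` on a held-out statistic is one way to locate the "sweet spot" the paper identifies empirically.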


Network Information
Related Topics (5)
Robustness (computer science): 94.7K papers, 1.6M citations (85% related)
Optimization problem: 96.4K papers, 2.1M citations (85% related)
Matrix (mathematics): 105.5K papers, 1.9M citations (82% related)
Nonlinear system: 208.1K papers, 4M citations (81% related)
Artificial neural network: 207K papers, 4.5M citations (80% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    714
2022    1,629
2021    155
2020    75
2019    73
2018    50