Author

Anand Deo

Bio: Anand Deo is an academic researcher at the Tata Institute of Fundamental Research. The author has contributed to research on the topics of estimators and importance sampling. The author has an h-index of 3 and has co-authored 10 publications receiving 27 citations.

Papers
Journal ArticleDOI
TL;DR: In this paper, the authors highlight the usefulness of city-scale agent-based simulators in studying various non-pharmaceutical interventions to manage an evolving pandemic and demonstrate the power of the simulator via several exploratory case studies in two metropolises.
Abstract: We highlight the usefulness of city-scale agent-based simulators in studying various non-pharmaceutical interventions to manage an evolving pandemic. We ground our studies in the context of the COVID-19 pandemic and demonstrate the power of the simulator via several exploratory case studies in two metropolises, Bengaluru and Mumbai. Such tools may in time become a commonplace item in the tool kit of the administrative authorities of large cities.

20 citations

Journal ArticleDOI
TL;DR: In this article, the authors highlight the usefulness of city-scale agent-based simulators in studying various non-pharmaceutical interventions to manage an evolving pandemic and demonstrate the power of the simulator via several exploratory case studies in two metropolises.
Abstract: We highlight the usefulness of city-scale agent-based simulators in studying various non-pharmaceutical interventions to manage an evolving pandemic. We ground our studies in the context of the COVID-19 pandemic and demonstrate the power of the simulator via several exploratory case studies in two metropolises, Bengaluru and Mumbai. Such tools may become commonplace in any city administration's tool kit as we march towards digital health.

10 citations

Proceedings ArticleDOI
14 Dec 2020
TL;DR: In this article, a new formula for approximating CVaR-based optimization objectives and their gradients from limited samples is developed, which exploits the self-similarity of heavy-tailed distributions to extrapolate data from suitable lower quantiles.
Abstract: Motivated by the prominence of Conditional Value-at-Risk (CVaR) as a measure of tail risk in settings affected by uncertainty, we develop a new formula for approximating CVaR-based optimization objectives and their gradients from limited samples. Unlike the state-of-the-art sample average approximations, which require impractically large amounts of data in tail probability regions, the proposed approximation scheme exploits the self-similarity of heavy-tailed distributions to extrapolate data from suitable lower quantiles. The resulting approximations are shown to be statistically consistent and are amenable to optimization by means of conventional gradient descent. The approximation is guided by a systematic importance-sampling scheme whose asymptotic variance reduction properties are rigorously examined. Numerical experiments demonstrate the superiority of the proposed approximations, and the ease of implementation points to the versatility of settings to which the approximation scheme can be applied.
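The extrapolation idea described above can be conveyed by a short sketch. This is not the paper's exact construction: the Hill estimator, the anchor level p_anchor, and the Pareto-type scaling below are generic extreme-value-theory choices made for illustration, and all function names and parameter values are assumptions.

```python
import numpy as np

def hill_tail_index(x, k):
    """Hill estimator of the tail index alpha from the k largest observations."""
    xs = np.sort(x)[::-1]                       # descending order
    logs = np.log(xs[:k]) - np.log(xs[k])
    return 1.0 / np.mean(logs)

def cvar_extrapolated(x, p_target, p_anchor=0.05, k=200):
    """Extrapolate CVaR at the extreme level p_target from a moderate anchor
    quantile, using Pareto-type tail scaling:
      q_{1-p} ~ q_{1-p_anchor} * (p_anchor / p)^(1/alpha)
      CVaR_{1-p} ~ q_{1-p} * alpha / (alpha - 1),   alpha > 1."""
    alpha = hill_tail_index(x, k)
    q_anchor = np.quantile(x, 1.0 - p_anchor)
    q_target = q_anchor * (p_anchor / p_target) ** (1.0 / alpha)
    return q_target * alpha / (alpha - 1.0)

def cvar_naive(x, p_target):
    """Plain sample-average CVaR: mean of the worst p_target fraction."""
    q = np.quantile(x, 1.0 - p_target)
    return x[x >= q].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.pareto(2.5, size=20_000) + 1.0      # heavy-tailed losses, tail index 2.5
    print("naive       :", cvar_naive(x, 1e-3))
    print("extrapolated:", cvar_extrapolated(x, 1e-3))
```

With 20,000 samples and a target level of 10^-3, the naive estimate averages only about 20 tail observations, whereas the extrapolated estimate is built from the far better-populated 5% tail.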

5 citations

Posted Content
TL;DR: A new formula for approximating CVaR-based optimization objectives and their gradients from limited samples is developed; it is guided by a systematic importance-sampling scheme whose asymptotic variance reduction properties are rigorously examined.
Abstract: Motivated by the prominence of Conditional Value-at-Risk (CVaR) as a measure of tail risk in settings affected by uncertainty, we develop a new formula for approximating CVaR-based optimization objectives and their gradients from limited samples. A key difficulty that limits the widespread practical use of these optimization formulations is the large amount of data required by state-of-the-art sample average approximation schemes to approximate the CVaR objective with high fidelity. Unlike these sample average approximations, which require impractically large amounts of data in tail probability regions, the proposed approximation scheme exploits the self-similarity of heavy-tailed distributions to extrapolate data from suitable lower quantiles. The resulting approximations are shown to be statistically consistent and are amenable to optimization by means of conventional gradient descent. The approximation is guided by a systematic importance-sampling scheme whose asymptotic variance reduction properties are rigorously examined. Numerical experiments demonstrate the superiority of the proposed approximations, and the ease of implementation points to the versatility of settings to which the approximation scheme can be applied.
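The importance-sampling component mentioned in the abstract can likewise be sketched generically. The change of measure below (sampling from a heavier-tailed Pareto and reweighting by the likelihood ratio) is a textbook device for rare-event estimation with heavy tails, not the authors' specific scheme; the shape parameters and threshold are illustrative.

```python
import numpy as np

def pareto_pdf(x, a):
    """Density of a Pareto(x_m = 1, shape a) random variable."""
    return a * x ** (-(a + 1.0))

def tail_prob_naive(alpha, u, n, rng):
    """Crude Monte Carlo estimate of P(X > u) for X ~ Pareto(alpha)."""
    x = rng.pareto(alpha, size=n) + 1.0
    return (x > u).mean()

def tail_prob_is(alpha, u, n, rng, beta=1.2):
    """Importance-sampling estimate: draw from a heavier-tailed Pareto(beta),
    beta < alpha, and reweight by the likelihood ratio f/g."""
    y = rng.pareto(beta, size=n) + 1.0
    w = pareto_pdf(y, alpha) / pareto_pdf(y, beta)
    return np.mean((y > u) * w)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    alpha, u, n = 2.5, 50.0, 10_000
    print("exact :", u ** (-alpha))            # P(X > u) for Pareto(1, alpha)
    print("naive :", tail_prob_naive(alpha, u, n, rng))
    print("IS    :", tail_prob_is(alpha, u, n, rng))
```

With these parameters the crude estimator expects fewer than one exceedance per 10,000 draws, while the heavier-tailed proposal produces on the order of a hundred, which is where the variance reduction comes from.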

5 citations

Posted Content
TL;DR: In this article, the authors consider discrete default-intensity-based and logit-type reduced-form models for conditional default probabilities of corporate loans, and develop simple closed-form approximations to the maximum likelihood estimator (MLE) when the underlying covariates follow a stationary Gaussian process.
Abstract: We consider discrete default-intensity-based and logit-type reduced-form models for conditional default probabilities of corporate loans, for which we develop simple closed-form approximations to the maximum likelihood estimator (MLE) when the underlying covariates follow a stationary Gaussian process. In a practically reasonable asymptotic regime where the default probabilities are small, say 1-3% annually, and the number of firms and the time period of available data are reasonably large, we rigorously show that the proposed estimator behaves similarly to, or slightly worse than, the MLE when the underlying model is correctly specified. In the more realistic case of model misspecification, both estimators are seen to be equally good, or equally bad. Further, beyond a point, both are more or less insensitive to increases in data. These conclusions are validated on empirical and simulated data. The proposed approximations should also have applications outside finance, where logit-type models are used and the probabilities of interest are small.
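A minimal sketch of the setup this abstract describes is given below: rare defaults driven by a stationary AR(1) Gaussian covariate, with the logit MLE computed numerically via Newton-Raphson as the benchmark. The paper's closed-form approximation itself is not reproduced here; every parameter value and function name is an illustrative assumption.

```python
import numpy as np

def simulate_panel(n_firms=5000, n_years=10, a=-4.5, b=0.8, rho=0.6, rng=None):
    """Simulate rare defaults driven by a stationary AR(1) Gaussian covariate.
    The intercept a = -4.5 gives annual default probabilities of a few percent."""
    if rng is None:
        rng = np.random.default_rng(0)
    z = np.zeros(n_years)
    for t in range(1, n_years):
        z[t] = rho * z[t - 1] + np.sqrt(1 - rho ** 2) * rng.standard_normal()
    Z = np.repeat(z, n_firms)                   # one macro covariate per year
    p = 1.0 / (1.0 + np.exp(-(a + b * Z)))
    y = (rng.random(Z.size) < p).astype(float)
    return Z, y

def logit_mle(Z, y, n_iter=25):
    """Numerically computed logit MLE via Newton-Raphson: the benchmark the
    abstract compares its closed-form approximation against."""
    X = np.column_stack([np.ones_like(Z), Z])
    theta = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ theta))
        grad = X.T @ (y - p)                    # score of the log-likelihood
        hess = -(X * (p * (1 - p))[:, None]).T @ X
        theta -= np.linalg.solve(hess, grad)    # Newton step
    return theta

if __name__ == "__main__":
    Z, y = simulate_panel()
    print("observed default rate :", y.mean())
    print("MLE (intercept, slope):", logit_mle(Z, y))
```

The observed default rate printed at the end typically lands in the low single-digit percent range the abstract targets, since the whole point of the regime studied is that defaults are rare relative to the size of the panel.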

2 citations


Cited by
Book ChapterDOI
01 Jan 2011
TL;DR: Weak convergence methods in metric spaces were studied in this article, with applications sufficient to show their power and utility, and the results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables.
Abstract: The author's preface gives an outline: "This book is about weak convergence methods in metric spaces, with applications sufficient to show their power and utility. The Introduction motivates the definitions and indicates how the theory will yield solutions to problems arising outside it. Chapter 1 sets out the basic general theorems, which are then specialized in Chapter 2 to the space C[0, 1] of continuous functions on the unit interval and in Chapter 3 to the space D[0, 1] of functions with discontinuities of the first kind. The results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables." The book develops and expands on Donsker's 1951 and 1952 papers on the invariance principle and empirical distributions. The basic random variables remain real-valued although, of course, measures on C[0, 1] and D[0, 1] are vitally used. Within this framework, there are various possibilities for a different and apparently better treatment of the material. More of the general theory of weak convergence of probabilities on separable metric spaces would be useful. Metrizability of the convergence is not brought up until late in the Appendix. The close relation of the Prokhorov metric and a metric for convergence in probability is (hence) not mentioned (see V. Strassen, Ann. Math. Statist. 36 (1965), 423-439; the reviewer, ibid. 39 (1968), 1563-1572). This relation would illuminate and organize such results as Theorems 4.1, 4.2 and 4.4, which give isolated, ad hoc connections between weak convergence of measures and nearness in probability. In the middle of p. 16, it should be noted that C*(S) consists of signed measures which need only be finitely additive if S is not compact. On p. 239, where the author twice speaks of separable subsets having nonmeasurable cardinal, he means "discrete" rather than "separable." Theorem 1.4 is Ulam's theorem that a Borel probability on a complete separable metric space is tight. Theorem 1 of Appendix 3 weakens completeness to topological completeness. After mentioning that probabilities on the rationals are tight, the author says it is an …

3,554 citations

Journal ArticleDOI
29 Jan 2021
TL;DR: In this article, the authors focus on the Indian city of Pune in the western state of Maharashtra and use the digital twin to simulate various what-if scenarios of interest to predict the spread of the virus; understand the effectiveness of candidate interventions; and predict the consequences of introduction of interventions possibly leading to trade-offs between public health, citizen comfort and economy.
Abstract: The COVID-19 epidemic created, at the time of writing the paper, highly unusual and uncertain socio-economic conditions. The world economy was severely impacted and business-as-usual activities severely disrupted. The situation presented the necessity to make a trade-off between individual health and safety on one hand and socio-economic progress on the other. Based on the current understanding of the epidemiological characteristics of COVID-19, a broad set of control measures has emerged along dimensions such as restricting people's movements, high-volume testing, contact tracing, use of face masks, and enforcement of social distancing. However, these interventions have their own limitations and varying levels of efficacy depending on factors such as the population density and the socio-economic characteristics of the area. To help tailor the interventions, we develop a configurable, fine-grained agent-based simulation model that serves as a virtual representation, i.e., a digital twin, of a diverse and heterogeneous area such as a city. In this paper, to illustrate our techniques, we focus our attention on the Indian city of Pune in the western state of Maharashtra. We use the digital twin to simulate various what-if scenarios of interest to (1) predict the spread of the virus; (2) understand the effectiveness of candidate interventions; and (3) predict the consequences of introducing interventions, possibly leading to trade-offs between public health, citizen comfort, and the economy. Our model is configured for the specific city of interest and used as an in-silico experimentation aid to predict the trajectory of active infections, the mortality rate, and the load on hospitals and quarantine facility centers for the candidate interventions. The key contributions of this paper are: (1) a novel agent-based model that seamlessly captures people, place, and movement characteristics of the city, COVID-19 virus characteristics, and a primitive set of candidate interventions, and (2) a simulation-driven approach to determine the exact intervention that needs to be applied under a given set of circumstances. Although the analysis presented in the paper is highly specific to COVID-19, our tools are generic enough to serve as a template for modeling the impact of future pandemics and formulating bespoke intervention strategies.
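The what-if comparison pattern described in the abstract can be conveyed by a toy agent-based model. The sketch below is far simpler than the city-scale digital twin (it has no households, workplaces, geography, testing, or mortality), and every parameter is made up; it only shows how an intervention knob changes the simulated epidemic trajectory.

```python
import numpy as np

def simulate(n_agents=10_000, n_days=120, beta=0.03, gamma=0.1,
             contacts_per_day=8, compliance=0.0, seed=0):
    """Minimal agent-based SIR sketch: each day, every infectious agent meets a
    few random agents and transmits with probability beta per contact;
    `compliance` scales contacts down to mimic a lockdown-style intervention."""
    rng = np.random.default_rng(seed)
    # 0 = susceptible, 1 = infectious, 2 = recovered
    state = np.zeros(n_agents, dtype=int)
    state[rng.choice(n_agents, size=20, replace=False)] = 1   # seed infections
    history = []
    for _ in range(n_days):
        infectious = np.flatnonzero(state == 1)
        n_contacts = int(contacts_per_day * (1.0 - compliance))
        for i in infectious:
            met = rng.integers(0, n_agents, size=n_contacts)
            hit = met[(state[met] == 0) & (rng.random(n_contacts) < beta)]
            state[hit] = 1
        recover = infectious[rng.random(infectious.size) < gamma]
        state[recover] = 2
        history.append(int((state == 1).sum()))
    return history

if __name__ == "__main__":
    no_npi = simulate(compliance=0.0)
    lockdown = simulate(compliance=0.5)
    print("peak active infections, no intervention   :", max(no_npi))
    print("peak active infections, 50% fewer contacts:", max(lockdown))
```

Running the two configurations side by side mirrors the exploratory use of such simulators: the same model, with a single intervention knob changed between scenarios.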

14 citations

Proceedings ArticleDOI
01 Jan 2022
TL;DR: In this paper, the authors introduce a neural network architecture for robustly learning the distribution of traversability costs and show that this approach reliably learns the expected tail risk given a desired probability risk threshold between 0 and 1, producing a traversability costmap which is more robust to outliers, more accurately captures tail risks, and is more computationally efficient.
Abstract: One of the main challenges in autonomous robotic exploration and navigation in unknown and unstructured environments is determining where the robot can or cannot safely move. A significant source of difficulty in this determination arises from stochasticity and uncertainty, coming from localization error, sensor sparsity and noise, difficult-to-model robot-ground interactions, and disturbances to the motion of the vehicle. Classical approaches to this problem rely on geometric analysis of the surrounding terrain, which can be prone to modeling errors and can be computationally expensive. Moreover, modeling the distribution of uncertain traversability costs is a difficult task, compounded by the various error sources mentioned above. In this work, we take a principled learning approach to this problem. We introduce a neural network architecture for robustly learning the distribution of traversability costs. Because we are motivated by preserving the life of the robot, we tackle this learning problem from the perspective of learning tail-risks, i.e. the conditional value-at-risk (CVaR). We show that this approach reliably learns the expected tail risk given a desired probability risk threshold between 0 and 1, producing a traversability costmap which is more robust to outliers, more accurately captures tail risks, and is more computationally efficient, when compared against baselines. We validate our method on data collected by a legged robot navigating challenging, unstructured environments including an abandoned subway, limestone caves, and lava tube caves.
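One common way to learn conditional tail risk of the kind described above is to pair a quantile (pinball) loss for the value-at-risk with a regression onto the Rockafellar-Uryasev CVaR target. The sketch below follows that generic recipe; it is not the authors' architecture, and the network sizes, loss weighting, and synthetic data are assumptions.

```python
import torch
import torch.nn as nn

class TailRiskNet(nn.Module):
    """Two-headed network: one head predicts the alpha-quantile (VaR) of the
    cost given features, the other predicts the CVaR at the same level."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.var_head = nn.Linear(hidden, 1)
        self.cvar_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.var_head(h), self.cvar_head(h)

def tail_risk_loss(var_pred, cvar_pred, cost, alpha=0.9):
    """Pinball loss trains the VaR head; the CVaR head regresses onto the
    Rockafellar-Uryasev target  VaR + (cost - VaR)_+ / (1 - alpha)."""
    err = cost - var_pred
    pinball = torch.mean(torch.maximum(alpha * err, (alpha - 1.0) * err))
    cvar_target = (var_pred + torch.relu(err) / (1.0 - alpha)).detach()
    cvar_fit = torch.mean((cvar_pred - cvar_target) ** 2)
    return pinball + cvar_fit

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.rand(4096, 3)                      # e.g. synthetic terrain features
    cost = x.sum(dim=1, keepdim=True) + 0.3 * torch.randn(4096, 1).abs() ** 2
    model = TailRiskNet(3)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(500):
        var_pred, cvar_pred = model(x)
        loss = tail_risk_loss(var_pred, cvar_pred, cost, alpha=0.9)
        opt.zero_grad(); loss.backward(); opt.step()
    print("final loss:", float(loss))
```

Detaching the CVaR target keeps the two heads decoupled: the quantile head is trained only by the pinball loss, while the CVaR head simply regresses onto the implied tail average at the chosen risk threshold.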

13 citations