Author

Swaroop Darbha

Bio: Swaroop Darbha is an academic researcher from Texas A&M University. The author has contributed to research in topics: Travelling salesman problem & Approximation algorithm. The author has an h-index of 28, co-authored 162 publications receiving 3767 citations. Previous affiliations of Swaroop Darbha include Air Force Research Laboratory & University of California, Berkeley.


Papers
Proceedings ArticleDOI
01 Dec 2006
TL;DR: Stochastic dynamic programming is used to show that there is a threshold delay for each object: it is optimal to revisit the object if the operator delay is smaller than the threshold and not to revisit otherwise.
Abstract: In this paper, we consider a problem of sequential resource allocation. Such a problem arises in a simplified intelligence, surveillance and reconnaissance (ISR) scenario where a micro air vehicle (MAV) is tasked with search and classification in an environment with false targets. The MAV visits the objects of interest in a specified sequence for classification. A human operator aids classification of objects based on the images sent to him from the MAV, and the operator may request that an object be revisited if he requires further information. Such a request is made at most once by the operator for each object. The information gained by the operator when any object is revisited is the same. There is a random delay in communicating his findings to the MAV, and the probability density function of the delay is assumed known. The MAV has a finite fuel reserve and, upon receiving the feedback from the operator, it must decide whether to revisit the object or continue to the next object in the sequence. On every revisit, fuel is expended from the reserve, equal to twice the delay plus a fixed fuel cost. The objective is to maximize the number of revisits so as to maximize the information gained about the objects, which enables them to be classified as targets or false targets. Using stochastic dynamic programming, we show that there is a threshold delay for each object: it is optimal to revisit the object if the operator delay is smaller than the threshold and not to revisit otherwise.
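The backward induction behind the threshold result can be sketched in a few lines. The following is a minimal illustration under assumed numbers (delay grid, uniform delay pmf, fuel discretization); it is not the paper's implementation, only the standard stochastic-DP recursion in which revisiting after an observed delay d costs 2d plus a fixed amount of fuel and earns one revisit.

```python
# Minimal sketch of the threshold structure via backward induction.
# All names and numbers below are illustrative assumptions.
import numpy as np

N = 5                  # objects remaining in the sequence (assumed)
FUEL = 60              # discretized fuel reserve (assumed units)
C_FIX = 2              # fixed fuel cost per revisit (assumed)
delays = np.arange(1, 11)                      # possible operator delays (assumed grid)
p_delay = np.ones_like(delays) / len(delays)   # known delay pmf (assumed uniform)

# V[k, f] = expected number of future revisits with k objects left and fuel f
V = np.zeros((N + 1, FUEL + 1))
thresholds = np.zeros(N)

for k in range(1, N + 1):
    for f in range(FUEL + 1):
        value = 0.0
        best_d = 0
        for d, p in zip(delays, p_delay):
            cost = 2 * d + C_FIX
            skip = V[k - 1, f]                                   # continue without revisiting
            revisit = (1 + V[k - 1, f - cost]) if f >= cost else -np.inf
            if revisit >= skip:
                best_d = d                                       # largest delay still worth a revisit
            value += p * max(skip, revisit)
        V[k, f] = value
        if f == FUEL:
            thresholds[k - 1] = best_d

print("expected number of revisits with full fuel:", V[N, FUEL])
print("illustrative delay thresholds (1..N objects remaining, full fuel):", thresholds)
```

Because the value of revisiting decreases with the observed delay while the value of skipping does not, the optimal action switches from revisit to skip at a delay threshold, which is the structural result stated in the abstract.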

11 citations

Proceedings ArticleDOI
25 Aug 2001
TL;DR: In this article, the authors present a methodology for assessing the benefits of coordination on the safety of a platoon during emergency braking, where every following vehicle brakes at its maximum possible deceleration in response to hard braking by the lead vehicle in the platoon, referred to as uncoordinated braking.
Abstract: We present a methodology for assessing the benefits of coordination on the safety of a platoon during emergency braking. When every following vehicle brakes at its maximum possible deceleration in response to hard braking by the lead vehicle in the platoon, it is referred to as uncoordinated braking. A braking strategy B is said to be more beneficial than a strategy A if strategy B leads to a larger reduction in the probability of a collision, the expected number of collisions, and the expected relative velocity at impact, as compared to strategy A. The sequence of maximum decelerations of vehicles in the platoon is assumed to be a sequence of independent and identically distributed random variables; this distribution is assumed to be discrete and known. Due to coordination, the "effective" deceleration of a following vehicle may not necessarily be its maximum value. Assessing the benefits of coordination therefore begins with determining the probability distribution of the "effective" deceleration of following vehicles during emergency braking. Intuitively, the smaller the variance of this distribution, the greater the safety benefits. We present a coordination strategy that offers distinct safety benefits in the asymptotic cases.
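As a rough illustration of the kind of comparison the methodology supports, the Monte Carlo sketch below simulates a platoon under two rules: uncoordinated braking, where each follower applies its own maximum deceleration, and a simple surrogate coordination rule in which every vehicle applies the platoon-wide minimum of the maximum decelerations (a zero-variance choice, not necessarily the authors' strategy). The platoon parameters, the discrete deceleration distribution, and the per-vehicle reaction delay are all assumptions.

```python
# Monte Carlo sketch comparing uncoordinated braking with a surrogate
# zero-variance coordination rule. All parameters are assumed.
import numpy as np

rng = np.random.default_rng(0)
N_VEH, V0, GAP, TAU, DT = 6, 25.0, 10.0, 0.3, 0.01    # platoon size, speed, gap, delay, step
DECELS = np.array([6.0, 7.0, 8.0])                     # max-decel support (m/s^2), assumed
PROBS = np.array([0.3, 0.4, 0.3])                      # known discrete pmf, assumed

def simulate(a_eff):
    """Count collisions and record relative speeds at first impact for one run."""
    x = -np.arange(N_VEH) * GAP          # initial positions, lead vehicle first
    v = np.full(N_VEH, V0)
    t_brake = np.arange(N_VEH) * TAU     # assumed per-vehicle reaction delay
    collided = np.zeros(N_VEH, dtype=bool)
    impacts = []
    for step in range(int(20.0 / DT)):
        t = step * DT
        v = np.where(t >= t_brake, np.maximum(0.0, v - a_eff * DT), v)
        x = x + v * DT
        for i in range(1, N_VEH):
            if not collided[i] and x[i] >= x[i - 1]:   # follower reaches its predecessor
                collided[i] = True
                impacts.append(v[i] - v[i - 1])
    return collided.sum(), impacts

def run(trials, coordinated):
    n_coll, rel_v = [], []
    for _ in range(trials):
        a_max = rng.choice(DECELS, size=N_VEH, p=PROBS)
        a_eff = np.full(N_VEH, a_max.min()) if coordinated else a_max
        c, imp = simulate(a_eff)
        n_coll.append(c)
        rel_v.extend(imp)
    return np.mean(n_coll), (np.mean(rel_v) if rel_v else 0.0)

for coord in (False, True):
    ec, rv = run(300, coord)
    print(f"coordinated={coord}: E[#collisions]={ec:.2f}, "
          f"E[relative speed at impact]={rv:.2f} m/s")
```

In this toy setting the coordinated rule removes the deceleration mismatch between consecutive vehicles, so collisions arise only from reaction delays, illustrating why a smaller variance of the effective deceleration improves the three safety measures compared in the paper.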

11 citations

Proceedings ArticleDOI
10 Aug 2009
TL;DR: This paper addresses a base perimeter patrol scenario where alerts are generated from a set of stations at random intervals and a stochastic control optimization problem is developed to determine the optimal loiter time.
Abstract: This paper addresses a base perimeter patrol scenario where alerts are generated from a set of stations at random intervals. An Unmanned Aerial Vehicle patrols the perimeter and responds to alerts. After arriving at an alert site, the vehicle loiters for a time to enable the operator to determine whether the alert is a nuisance trip or an actual threat. The false alarms are modeled as a Poisson process. A stochastic control optimization problem is developed to determine the optimal loiter time. The optimal length of time that a vehicle can dwell at an alert site while minimizing the expected service time is a function of the size of the alert queue and the alert rate. Results from a flight test of the algorithm as part of a base defense scenario are presented.
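A toy version of the dwell-time trade-off can be written down directly. In the sketch below, the chance that the operator resolves an alert within dwell time d is modeled as 1 - exp(-mu*d), and dwelling longer delays the n alerts already queued as well as those arriving at Poisson rate lam. The model, cost weights, and all numbers are assumptions made for illustration rather than the paper's formulation, but the resulting optimizer shrinks as the queue and the alert rate grow, in line with the stated dependence.

```python
# Illustrative dwell-time trade-off (assumed model, not the paper's).
import numpy as np

def best_dwell(n_queued, lam, mu=0.8, reward=10.0, wait_cost=1.0):
    d_grid = np.linspace(0.0, 10.0, 1001)
    # expected cost: missed-classification penalty plus the waiting cost of
    # the current queue and of alerts arriving during the dwell period
    cost = -reward * (1.0 - np.exp(-mu * d_grid)) \
           + wait_cost * (n_queued + 0.5 * lam * d_grid) * d_grid
    return d_grid[np.argmin(cost)]

for n in (0, 2, 5):
    for lam in (0.1, 0.5):
        print(f"queue={n}, alert rate={lam}: dwell time ~ {best_dwell(n, lam):.2f}")
```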

10 citations

Posted Content
TL;DR: In this paper, the authors make a clear distinction between traffic flow stability and string stability, a distinction that has not been recognized in the literature thus far; string stability is analyzed without adding vehicles to or removing vehicles from the traffic, whereas traffic flow stability concerns the response to the addition or removal of vehicles.
Abstract: In analogy to the flow of fluids, it is expected that the aggregate density and the velocity of vehicles in a section of a freeway adequately describe the traffic flow dynamics. The conservation of mass equation together with the aggregation of the vehicle following dynamics of controlled vehicles describes the evolution of the traffic density and the aggregate speed of a traffic flow. There are two kinds of stability associated with traffic flow problems - string stability (or car-following stability) and traffic flow stability. We make a clear distinction between traffic flow stability and string stability, and such a distinction has not been recognized in the literature thus far. String stability is stability with respect to intervehicular spacing; intuitively, it ensures the knowledge of the position and velocity of every vehicle in the traffic, within reasonable bounds of error, from the knowledge of the position and velocity of a vehicle in the traffic. String stability is analyzed without adding vehicles to or removing vehicles from the traffic. On the other hand, traffic flow stability deals with the evolution of traffic velocity and density in response to the addition and/or removal of vehicles from the flow. Traffic flow stability can be guaranteed only if the velocity and density solutions of the coupled set of equations are stable, i.e., only if stability with respect to automatic vehicle following and stability with respect to density evolution are guaranteed. Therefore, the flow stability and critical capacity of any section of a highway depend not only on the vehicle following control laws and the information used in their synthesis, but also on the spacing policy employed by the control system. Such a dependence has practical consequences for the choice of a spacing policy for adaptive cruise control laws and for the stability of the traffic flow consisting of vehicles equipped with adaptive cruise control features on existing and future highways. This critical dependence is the subject of investigation in this paper. The problem is analyzed in two steps: the first step is to understand the effect of the spacing policy employed by Intelligent Cruise Control (ICC) systems on traffic flow stability; the second step is to understand how the dynamics of the ICC system affect traffic flow stability. Using such an analysis, it is shown that cruise control systems that employ a constant time headway policy lead to unacceptable characteristics for traffic flows. Key Words: Intelligent Cruise Control Systems, Traffic Flow Stability, String Stability, Advanced Vehicle Control Systems, Advanced Traffic Management Systems.
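For concreteness, the steady-state consequence of a constant time headway spacing policy can be illustrated numerically. Assuming a desired spacing of L + h*v (L an effective vehicle length, h the time headway, both values below illustrative), the steady-state density is k = 1/(L + h*v) and the flow is q = k*v = (1 - L*k)/h, so flow falls linearly with density and the capacity 1/h is approached only as density vanishes. This is a sketch of the flow-level dependence on the spacing policy discussed above, not a reproduction of the paper's stability analysis.

```python
# Steady-state flow-density relation implied by a constant time headway (CTH)
# spacing policy: spacing = L + h*v, so k = 1/(L + h*v) and q = k*v = (1 - L*k)/h.
# Parameter values are assumed for illustration.
import numpy as np

L = 5.0      # vehicle length plus standstill gap (m), assumed
h = 1.0      # time headway (s), assumed

v = np.linspace(0.0, 30.0, 7)          # steady-state speeds (m/s)
k = 1.0 / (L + h * v)                  # vehicles per metre
q = k * v                              # vehicles per second

for vi, ki, qi in zip(v, k, q):
    print(f"v={vi:5.1f} m/s  k={ki*1000:6.1f} veh/km  q={qi*3600:7.1f} veh/h")

# dq/dk = -L/h < 0 for all admissible densities: flow is maximal only in the
# limit of vanishing density, one way of seeing the flow-level consequences
# of the spacing policy discussed in the abstract.
print("dq/dk =", -L / h, "m/s (constant, negative)")
```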

10 citations

Proceedings ArticleDOI
18 Aug 2011
TL;DR: The reduced order DP is shown analytically to give exactly the same solution that one would obtain by performing DP on the original full-state-space Markov chain.
Abstract: A reduced order Dynamic Programming (DP) method that efficiently computes the optimal policy and value function for a class of controlled Markov chains is developed. We assume that the Markov chains exhibit the property that a subset of the states have a single (default) control action associated with them. Furthermore, we assume that the transition probabilities between the remaining (decision) states can be derived from the original Markov chain specification. Under these assumptions, the suggested reduced order DP method yields significant savings in computation time and also leads to faster convergence to the optimal solution. Most importantly, the reduced order DP is shown analytically to give exactly the same solution that one would obtain by performing DP on the original full-state-space Markov chain. The method is illustrated via a multi-UAV perimeter patrol stochastic optimal control problem.
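The flavor of the reduction can be illustrated with a small discounted example. Because a default state has a single action, its Bellman equation is linear; given the values at the decision states, the default-state values follow from one linear solve, and the minimization only ever runs over the decision states. The sketch below uses a random chain with assumed sizes, discount factor, and partition into decision and default states, and it is a simplification rather than the paper's exact construction; it checks numerically that this reduced iteration matches full value iteration on the decision states.

```python
# Simplified illustration of exploiting single-action (default) states:
# their Bellman equations are linear, so they can be eliminated by a solve.
import numpy as np

rng = np.random.default_rng(1)
nS, nA, gamma = 8, 3, 0.9
decision = np.array([0, 1, 2])                 # decision states (assumed)
default = np.array([3, 4, 5, 6, 7])            # single-action states (assumed)

P = rng.dirichlet(np.ones(nS), size=(nA, nS))  # P[a, s, s'], rows sum to 1
C = rng.uniform(0.0, 1.0, size=(nA, nS))       # stage costs C[a, s]
DEFAULT_ACTION = 0                             # the one action available in default states

# --- full value iteration over all states -----------------------------------
V_full = np.zeros(nS)
for _ in range(2000):
    Q = C + gamma * P @ V_full                 # Q[a, s]
    V_new = Q.min(axis=0)
    V_new[default] = Q[DEFAULT_ACTION, default]
    V_full = V_new

# --- reduced iteration: optimize on decision states only --------------------
Pd = P[DEFAULT_ACTION]
A = np.eye(len(default)) - gamma * Pd[np.ix_(default, default)]
V_dec = np.zeros(len(decision))
for _ in range(2000):
    # default-state values follow from a linear solve given the decision values
    b = C[DEFAULT_ACTION, default] + gamma * Pd[np.ix_(default, decision)] @ V_dec
    V_def = np.linalg.solve(A, b)
    V_all = np.zeros(nS)
    V_all[decision], V_all[default] = V_dec, V_def
    Q_dec = C[:, decision] + gamma * P[:, decision, :] @ V_all
    V_dec = Q_dec.min(axis=0)

print("max |difference| on decision states:",
      np.abs(V_full[decision] - V_dec).max())
```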

10 citations


Cited by
Journal ArticleDOI
TL;DR: A binary neutron star coalescence candidate (later designated GW170817) with merger time 12:41:04 UTC was observed through gravitational waves by the Advanced LIGO and Advanced Virgo detectors.
Abstract: On 2017 August 17 a binary neutron star coalescence candidate (later designated GW170817) with merger time 12:41:04 UTC was observed through gravitational waves by the Advanced LIGO and Advanced Virgo detectors. The Fermi Gamma-ray Burst Monitor independently detected a gamma-ray burst (GRB 170817A) with a time delay of $\sim 1.7$ s with respect to the merger time. From the gravitational-wave signal, the source was initially localized to a sky region of 31 deg$^2$ at a luminosity distance of $40_{-8}^{+8}$ Mpc and with component masses consistent with neutron stars. The component masses were later measured to be in the range 0.86 to 2.26 $M_{\odot}$. An extensive observing campaign was launched across the electromagnetic spectrum leading to the discovery of a bright optical transient (SSS17a, now with the IAU identification of AT 2017gfo) in NGC 4993 (at $\sim 40$ Mpc) less than 11 hours after the merger by the One-Meter, Two Hemisphere (1M2H) team using the 1 m Swope Telescope. The optical transient was independently detected by multiple teams within an hour. Subsequent observations targeted the object and its environment. Early ultraviolet observations revealed a blue transient that faded within 48 hours. Optical and infrared observations showed a redward evolution over ~10 days. Following early non-detections, X-ray and radio emission were discovered at the transient's position $\sim 9$ and $\sim 16$ days, respectively, after the merger. Both the X-ray and radio emission likely arise from a physical process that is distinct from the one that generates the UV/optical/near-infrared emission. No ultra-high-energy gamma-rays and no neutrino candidates consistent with the source were found in follow-up searches. These observations support the hypothesis that GW170817 was produced by the merger of two neutron stars in NGC 4993 followed by a short gamma-ray burst (GRB 170817A) and a kilonova/macronova powered by the radioactive decay of r-process nuclei synthesized in the ejecta.

2,746 citations

BookDOI
26 Jul 2009
TL;DR: This self-contained introduction to the distributed control of robotic networks offers a broad set of tools for understanding coordination algorithms, determining their correctness, and assessing their complexity; and it analyzes various cooperative strategies for tasks such as consensus, rendezvous, connectivity maintenance, deployment, and boundary estimation.
Abstract: This self-contained introduction to the distributed control of robotic networks offers a distinctive blend of computer science and control theory. The book presents a broad set of tools for understanding coordination algorithms, determining their correctness, and assessing their complexity; and it analyzes various cooperative strategies for tasks such as consensus, rendezvous, connectivity maintenance, deployment, and boundary estimation. The unifying theme is a formal model for robotic networks that explicitly incorporates their communication, sensing, control, and processing capabilities, a model that in turn leads to a common formal language to describe and analyze coordination algorithms. Written for first- and second-year graduate students in control and robotics, the book will also be useful to researchers in control theory, robotics, distributed algorithms, and automata theory. The book provides explanations of the basic concepts and main results, as well as numerous examples and exercises. It offers a self-contained exposition of graph-theoretic concepts, distributed algorithms, and complexity measures for processor networks with fixed interconnection topology and for robotic networks with position-dependent interconnection topology; a detailed treatment of averaging and consensus algorithms interpreted as linear iterations on synchronous networks; an introduction of geometric notions such as partitions, proximity graphs, and multicenter functions; and a detailed treatment of motion coordination algorithms for deployment, rendezvous, connectivity maintenance, and boundary estimation.
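As a small taste of the material on averaging, the sketch below runs the classic linear consensus iteration x(k+1) = W x(k) on an assumed ring network with Metropolis-style weights; the graph, weights, and initial values are illustrative choices rather than anything specific to the book.

```python
# Linear averaging (consensus) iteration on an assumed ring graph.
import numpy as np

n = 8
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # ring graph (assumed)

# Metropolis-Hastings weights give a doubly stochastic W, so on a connected
# graph the iteration converges to the average of the initial values.
W = np.zeros((n, n))
for i in range(n):
    for j in neighbours[i]:
        W[i, j] = 1.0 / (1 + max(len(neighbours[i]), len(neighbours[j])))
    W[i, i] = 1.0 - W[i].sum()

x = np.arange(n, dtype=float)          # initial values held by the agents
for k in range(200):
    x = W @ x                          # synchronous update: each agent averages neighbours

print("consensus values:", x)          # all entries approach the mean
print("true average    :", np.arange(n).mean())
```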

1,166 citations

Journal ArticleDOI
TL;DR: In this paper, the authors summarize the current knowledge of neutron-star masses and radii and show that the distribution of neutron star masses is much wider than previously thought, with three known pulsars now firmly in the 1.9-2.0-M⊙ mass range.
Abstract: We summarize our current knowledge of neutron-star masses and radii. Recent instrumentation and computational advances have resulted in a rapid increase in the discovery rate and precise timing of radio pulsars in binaries in the past few years, leading to a large number of mass measurements. These discoveries show that the neutron-star mass distribution is much wider than previously thought, with three known pulsars now firmly in the 1.9–2.0-M⊙ mass range. For radii, large, high-quality data sets from X-ray satellites as well as significant progress in theoretical modeling led to considerable progress in the measurements, placing them in the 10–11.5-km range and shrinking their uncertainties, owing to a better understanding of the sources of systematic errors. The combination of the massive-neutron-star discoveries, the tighter radius measurements, and improved laboratory constraints of the properties of dense matter has already made a substantial impact on our understanding of the composition and bulk p...

1,082 citations

Journal ArticleDOI
Edo Berger1
TL;DR: A review of nearly a decade of short gamma-ray bursts and their afterglow and host-galaxy observations is presented in this article, where the authors use this information to shed light on the nature and properties of their progenitors, the energy scale and collimation of the relativistic outflow, and the properties of the circumburst environments.
Abstract: Gamma-ray bursts (GRBs) display a bimodal duration distribution with a separation between the short- and long-duration bursts at about 2 s. The progenitors of long GRBs have been identified as massive stars based on their association with Type Ic core-collapse supernovae (SNe), their exclusive location in star-forming galaxies, and their strong correlation with bright UV regions within their host galaxies. Short GRBs have long been suspected on theoretical grounds to arise from compact object binary mergers (neutron star–neutron star or neutron star–black hole). The discovery of short GRB afterglows in 2005 provided the first insight into their energy scale and environments, as well as established a cosmological origin, a mix of host-galaxy types, and an absence of associated SNe. In this review, I summarize nearly a decade of short GRB afterglow and host-galaxy observations and use this information to shed light on the nature and properties of their progenitors, the energy scale and collimation of the relativistic outflow, and the properties of the circumburst environments. The preponderance of the evidence points to compact object binary progenitors, although some open questions remain. On the basis of this association, observations of short GRBs and their afterglows can shed light on the on- and off-axis electromagnetic counterparts of gravitational wave sources from the Advanced LIGO/Virgo experiments.

1,061 citations

Journal ArticleDOI
02 Nov 2017-Nature
TL;DR: The ejected mass and a merger rate inferred from GW170817 imply that such mergers are a dominant mode of r-process production in the Universe.
Abstract: Modelling the electromagnetic emission of kilonovae enables the mass, velocity and composition (with some heavy elements) of the ejecta from a neutron-star merger to be derived from the observations. Merging neutron stars are potential sources of gravitational waves and have long been predicted to produce jets of material as part of a low-luminosity transient known as a 'kilonova'. There is growing evidence that neutron-star mergers also give rise to short, hard gamma-ray bursts. A group of papers in this issue report observations of a transient associated with the gravitational-wave event GW170817, a signature of two neutron stars merging and a gamma-ray flash, that was detected in August 2017. The observed gamma-ray, X-ray, optical and infrared radiation signatures support the predictions of an outflow of matter from double neutron-star mergers and present a clear origin for gamma-ray bursts. Previous predictions differ over whether the jet material would combine to form light or heavy elements. These papers now show that the early part of the outflow was associated with lighter elements whereas the later observations can be explained by heavier elements, the origins of which have been uncertain. However, one paper (by Stephen Smartt and colleagues) argues that only light elements are needed for the entire event. Additionally, Eleonora Troja and colleagues report X-ray observations and radio emissions that suggest that the 'kilonova' jet was observed off-axis, which could explain why gamma-ray-burst detections are seen as dim. The cosmic origin of elements heavier than iron has long been uncertain. Theoretical modelling [1-7] shows that the matter that is expelled in the violent merger of two neutron stars can assemble into heavy elements such as gold and platinum in a process known as rapid neutron capture (r-process) nucleosynthesis. The radioactive decay of isotopes of the heavy elements is predicted [8-12] to power a distinctive thermal glow (a 'kilonova'). The discovery of an electromagnetic counterpart to the gravitational-wave source [13] GW170817 represents the first opportunity to detect and scrutinize a sample of freshly synthesized r-process elements [14-18]. Here we report models that predict the electromagnetic emission of kilonovae in detail and enable the mass, velocity and composition of ejecta to be derived from observations. We compare the models to the optical and infrared radiation associated with the GW170817 event to argue that the observed source is a kilonova. We infer the presence of two distinct components of ejecta, one composed primarily of light (atomic mass number less than 140) and one of heavy (atomic mass number greater than 140) r-process elements. The ejected mass and a merger rate inferred from GW170817 imply that such mergers are a dominant mode of r-process production in the Universe.

932 citations