Author

Samir R. Das

Bio: Samir R. Das is an academic researcher at Stony Brook University. He has contributed to research on topics including wireless networks and physics, has an h-index of 58, and has co-authored 186 publications receiving 29,007 citations. Previous affiliations of Samir R. Das include the University of Texas at San Antonio and the University of Cincinnati.


Papers
Proceedings ArticleDOI
04 Oct 2017
TL;DR: This work presents a novel network architecture based on steerable links and sufficiently many robust short-range links, circumventing the key challenge to picocell backhaul: reliable operation of outdoor FSO links despite outdoor effects.
Abstract: The expected increase in cellular demand has pushed recent interest in picocell networks, which have reduced cell sizes (100-200m or less). For ease of deployment of such networks, a wireless backhaul network is highly desired. Since RF-based technologies are unlikely to provide the desired multi-gigabit data rates, we motivate and explore the use of free space optics (FSO) for picocell backhaul. In particular, we present a novel network architecture based on steerable links and sufficiently many robust short-range links, to help circumvent the key challenge of outdoor effects in reliable operation of outdoor FSO links. Our architecture is motivated by the fact that, due to the high density of picocells, many short-range links will occur naturally in a picocell backhaul. Moreover, the use of steerable FSO links facilitates networks with sufficient redundancy while using only a small number of interfaces per node. We address the key problems that arise in the context of such a backhaul architecture, viz., an FSO link design with the desired characteristics, and the related network design and management problems. We develop and evaluate a robust 100m FSO link prototype, and simulate the proposed architecture in many metro US cities, showing its viability via evaluation of key performance metrics.

27 citations

Proceedings ArticleDOI
05 Nov 2014
TL;DR: The first systematic measurement study to shed light on the emerging phenomenon of mobile virtual network operators (MVNOs), which operate on top of existing cellular infrastructures in the US, is presented.
Abstract: Recent industry trends suggest a new phenomenon in the mobile market: mobile virtual network operators or MVNOs that operate on top of existing cellular infrastructures. While MVNOs have shown significant growth in the US and elsewhere in the past two years and have been successful in attracting customers, there is anecdotal evidence that users are concerned about cellular performance when choosing MVNOs over traditional cellular operators. In this paper, we present the first systematic measurement study to shed light on this emerging phenomenon. We study the performance of 3 key applications: web access, video streaming and voice, in 2 popular MVNO families (a total of 8 carriers) in the US, where each MVNO family consists of a major base carrier and 3 MVNOs running on top of it. We observe that some MVNOs do indeed exhibit significant performance degradation and that there are key differences between the two MVNO families.

26 citations

Journal ArticleDOI
TL;DR: Results indicate that, contrary to common belief, memory usage by Time Warp can be controlled within reasonable limits without any significant loss of performance.

Abstract: The performance of the Time Warp mechanism is experimentally evaluated when only a limited amount of memory is available to the parallel computation. An implementation of the cancelback protocol is used for memory management on a shared-memory architecture, viz., the KSR, to evaluate the performance vs. memory tradeoff. The implementation of the cancelback protocol supports canceling back more than one memory object when memory has been exhausted (the precise number is referred to as the salvage parameter) and incorporates a non-work-conserving processor scheduling technique to prevent starvation. Several synthetic and benchmark programs are used that provide interesting stress cases for evaluating the limited-memory behavior. The experiments are extensively monitored to determine the extent to which various factors may affect performance. Several observations are made by analyzing the behavior of Time Warp under limited memory: (1) Depending on the available memory and asymmetry in the workload, canceling back several memory objects at one time (i.e., a salvage parameter value of more than one) improves performance significantly by reducing certain overheads. However, performance is relatively insensitive to the salvage parameter except at extreme values. (2) The speedup vs. memory curve for Time Warp programs has a well-defined knee, before which speedup increases very rapidly with memory and beyond which there is little performance gain with increased memory. (3) Performance nearly equivalent to that with large amounts of memory can be achieved with only a modest amount of additional memory beyond that required for sequential execution, if memory management overheads are small compared to the event granularity. These results indicate that, contrary to common belief, memory usage by Time Warp can be controlled within reasonable limits without any significant loss of performance.
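The cancelback idea in the abstract above — reclaiming one or more buffers (the salvage parameter) when memory is exhausted — can be illustrated with a toy memory pool. This is a sketch, not the paper's implementation: the policy of canceling the events farthest ahead in virtual time follows the usual cancelback description, and all capacities and timestamps are invented.

```python
# Toy sketch of cancelback: when the buffer pool is exhausted, reclaim the
# `salvage` buffers holding events farthest ahead in virtual time; those
# events are "cancelled back" to their senders and re-sent later.

import heapq

class CancelbackPool:
    def __init__(self, capacity, salvage=1):
        self.capacity = capacity      # total buffers available
        self.salvage = salvage        # buffers reclaimed per cancelback
        self.buffers = []             # max-heap by timestamp (stored negated)
        self.cancelled = []           # timestamps of cancelled-back events

    def allocate(self, timestamp):
        """Allocate a buffer for an event; cancel back if the pool is full."""
        while len(self.buffers) >= self.capacity:
            for _ in range(min(self.salvage, len(self.buffers))):
                ts = -heapq.heappop(self.buffers)  # farthest-future event
                self.cancelled.append(ts)
        heapq.heappush(self.buffers, -timestamp)

pool = CancelbackPool(capacity=3, salvage=2)
for ts in [10, 40, 20, 30]:
    pool.allocate(ts)
print(sorted(-t for t in pool.buffers))  # events kept: [10, 30]
print(pool.cancelled)                    # events cancelled back: [40, 20]
```

With a salvage parameter of 2, a single exhaustion event reclaims the two farthest-future buffers at once, which is the batching effect observation (1) above credits with reducing overheads.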

25 citations

Proceedings ArticleDOI
01 Jun 1992
TL;DR: A discrete state, continuous time Markov chain model for Time Warp augmented with the cancelback protocol is developed for a shared memory system with n homogeneous processors and homogeneous workload and it is observed that Time Warp with only a few additional message buffers per processor over that required in the corresponding sequential execution can achieve the same or even greater performance.
Abstract: The behavior of n interacting processes synchronized by the “Time Warp” rollback mechanism is analyzed under the constraint that the total amount of memory to execute the program is limited. In Time Warp, a protocol called “cancelback” has been proposed to reclaim storage when the system runs out of memory. A discrete state, continuous time Markov chain model for Time Warp augmented with the cancelback protocol is developed for a shared memory system with n homogeneous processors and homogeneous workload. The model allows one to predict speedup as the amount of available memory is varied. To our knowledge, this is the first model to achieve this result. The performance predicted by the model is validated through direct performance measurements on an operational Time Warp system executing on a shared-memory multiprocessor using a workload similar to that in the model. It is observed that Time Warp with only a few additional message buffers per processor over that required in the corresponding sequential execution can achieve approximately the same or even greater performance than Time Warp with unlimited memory, if GVT computation and fossil collection can be efficiently implemented.
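The modeling style described above — a discrete-state, continuous-time Markov chain whose steady state yields performance predictions — can be sketched in a few lines. The 3-state chain and its rates below are invented for illustration and are not the paper's model; only the solution technique (solve pi Q = 0 with the normalization sum(pi) = 1) is generic.

```python
# Minimal sketch: steady-state distribution of a small CTMC from its
# generator matrix Q, by solving pi Q = 0 subject to sum(pi) = 1.

import numpy as np

def steady_state(Q):
    """Solve pi Q = 0 with sum(pi) = 1 for an irreducible CTMC."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])      # append the normalization row
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Illustrative 3-state chain: buffers plentiful -> low -> exhausted,
# with cancelback returning the system from "exhausted" to "low".
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  2.0, -2.0]])
pi = steady_state(Q)
print(np.round(pi, 3))  # [0.25 0.5  0.25]
```

Given such a steady state, the fraction of time spent in each memory regime can be translated into a predicted speedup-vs-memory curve, which is the kind of output the paper validates against measurements.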

21 citations

Proceedings ArticleDOI
16 Apr 2018
TL;DR: This work uses a crowdsourcing approach for RF spectrum patrolling, where heterogeneous, low-cost spectrum sensors are deployed widely and are tasked with detecting unauthorized transmissions in a collaborative fashion while consuming only a limited amount of resources.
Abstract: We use a crowdsourcing approach for RF spectrum patrolling, where heterogeneous, low-cost spectrum sensors are deployed widely and are tasked with detecting unauthorized transmissions in a collaborative fashion while consuming only a limited amount of resources. We pose this as a collaborative signal detection problem in which an individual sensor's detection performance may vary widely based on its hardware or software configuration but is hard to model using traditional approaches. Still, an optimal subset of sensors and their configurations must be chosen to maximize the overall detection performance subject to given resource (cost) limitations. We present the challenges of this problem in crowdsourced settings and a set of methods to address them. The proposed methods use data-driven approaches to model individual sensors and develop mechanisms for sensor selection and fusion while accounting for their correlated nature. We present performance results using examples of commodity-based spectrum sensors and show significant improvements relative to baseline approaches.
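The selection problem above — choose a subset of sensors and configurations to maximize detection performance under a cost budget — can be illustrated with a simple greedy heuristic. Unlike the paper's methods, this sketch assumes independent sensor miss probabilities rather than modeling correlation, and all sensor names and numbers are made up.

```python
# Budgeted sensor selection sketch: greedily pick the sensor with the best
# detection-gain-per-cost until the budget is spent, assuming independent
# miss probabilities (overall detection = 1 - prod(1 - p_i)).

def greedy_select(sensors, budget):
    """sensors: list of (name, detect_prob, cost). Returns (names, p_detect)."""
    chosen, miss, spent = [], 1.0, 0.0
    remaining = list(sensors)
    while True:
        best, best_gain = None, 0.0
        for name, p, cost in remaining:
            if spent + cost > budget:
                continue                           # unaffordable
            gain = (miss - miss * (1 - p)) / cost  # detection gain per cost
            if gain > best_gain:
                best, best_gain = (name, p, cost), gain
        if best is None:
            break
        chosen.append(best[0])
        miss *= (1 - best[1])
        spent += best[2]
        remaining.remove(best)
    return chosen, 1 - miss

# Hypothetical commodity sensors: (name, detection probability, cost).
sensors = [("rtl-sdr-A", 0.6, 1.0), ("rtl-sdr-B", 0.5, 1.0), ("usrp-C", 0.9, 3.0)]
picked, p_detect = greedy_select(sensors, budget=2.0)
print(picked, round(p_detect, 2))  # ['rtl-sdr-A', 'rtl-sdr-B'] 0.8
```

Here the expensive high-quality sensor is skipped because two cheap sensors fit the budget and jointly detect better per unit cost; the paper's data-driven, correlation-aware formulation refines exactly this kind of trade-off.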

20 citations


Cited by
Journal ArticleDOI
TL;DR: This work develops and analyzes low-energy adaptive clustering hierarchy (LEACH), a protocol architecture for microsensor networks that combines the ideas of energy-efficient cluster-based routing and media access together with application-specific data aggregation to achieve good performance in terms of system lifetime, latency, and application-perceived quality.
Abstract: Networking together hundreds or thousands of cheap microsensor nodes allows users to accurately monitor a remote environment by intelligently combining the data from the individual nodes. These networks require robust wireless communication protocols that are energy efficient and provide low latency. We develop and analyze low-energy adaptive clustering hierarchy (LEACH), a protocol architecture for microsensor networks that combines the ideas of energy-efficient cluster-based routing and media access together with application-specific data aggregation to achieve good performance in terms of system lifetime, latency, and application-perceived quality. LEACH includes a new, distributed cluster formation technique that enables self-organization of large numbers of nodes, algorithms for adapting clusters and rotating cluster head positions to evenly distribute the energy load among all the nodes, and techniques to enable distributed signal processing to save communication resources. Our results show that LEACH can improve system lifetime by an order of magnitude compared with general-purpose multihop approaches.
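LEACH's distributed cluster formation can be sketched with the commonly cited threshold rule: in round r, each node that has not recently served as cluster head volunteers with probability T(n) = P / (1 - P * (r mod 1/P)), so every node serves roughly once per 1/P rounds and the energy load rotates. The node count and parameters below are illustrative.

```python
# Sketch of LEACH's rotating cluster-head election: each node independently
# draws a random number against a threshold that rises as a round group
# progresses, guaranteeing each node becomes head about once per 1/P rounds.

import random

def leach_threshold(P, r, was_head_recently):
    """T(n) = P / (1 - P * (r mod 1/P)) for eligible nodes, else 0."""
    if was_head_recently:
        return 0.0
    return P / (1 - P * (r % round(1 / P)))

def elect_heads(n_nodes, P, r, recent_heads, rng):
    """Each node self-elects by comparing a random draw to its threshold."""
    return [node for node in range(n_nodes)
            if rng.random() < leach_threshold(P, r, node in recent_heads)]

rng = random.Random(7)
heads = elect_heads(n_nodes=100, P=0.05, r=0, recent_heads=set(), rng=rng)
print(len(heads))  # roughly P * n_nodes heads expected per round
```

Because the threshold grows within each group of 1/P rounds (e.g. T = 0.05 at r = 0 but 0.1 at r = 10 for P = 0.05), nodes that have not yet served become increasingly likely to volunteer, which is what evens out the energy load.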

10,296 citations

Journal ArticleDOI

6,278 citations

Proceedings ArticleDOI
01 Aug 2000
TL;DR: This paper explores and evaluates the use of directed diffusion for a simple remote-surveillance sensor network and its implications for sensing, communication and computation.
Abstract: Advances in processor, memory and radio technology will enable small and cheap nodes capable of sensing, communication and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed diffusion paradigm for such coordination. Directed diffusion is datacentric in that all communication is for named data. All nodes in a directed diffusion-based network are application-aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network. We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network.
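The interest/gradient mechanism of directed diffusion can be sketched as a flood of a named interest from the sink that leaves per-node gradients, which sensed data then follows back. This is a minimal sketch: real diffusion reinforces empirically good paths among several gradients, and the topology and node names below are invented.

```python
# Sketch of directed diffusion: the sink floods an interest; each node
# records the neighbor it first heard the interest from (its gradient),
# and data from a source follows gradients hop by hop back to the sink.

from collections import deque

def set_gradients(adj, sink):
    """Flood an interest from `sink`; gradient[n] = next hop toward the sink."""
    gradient = {sink: None}
    queue = deque([sink])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in gradient:      # first copy of the interest wins
                gradient[nbr] = node     # gradient points back at the sender
                queue.append(nbr)
    return gradient

def route_data(gradient, source):
    """Follow gradients from a sensing node to the sink."""
    path = [source]
    while gradient[path[-1]] is not None:
        path.append(gradient[path[-1]])
    return path

adj = {"sink": ["a", "b"], "a": ["sink", "c"], "b": ["sink", "c"],
       "c": ["a", "b", "src"], "src": ["c"]}
g = set_gradients(adj, "sink")
print(route_data(g, "src"))  # ['src', 'c', 'a', 'sink']
```

Note that all communication is for named data: nodes never address the sink by identity, only follow the gradient state left by the interest, which is what enables in-network caching and aggregation along the path.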

6,061 citations

Amin Vahdat
01 Jan 2000
TL;DR: This work introduces Epidemic Routing, where random pair-wise exchanges of messages among mobile hosts ensure eventual message delivery and achieves eventual delivery of 100% of messages with reasonable aggregate resource consumption in a number of interesting scenarios.
Abstract: Mobile ad hoc routing protocols allow nodes with wireless adaptors to communicate with one another without any pre-existing network infrastructure. Existing ad hoc routing protocols, while robust to rapidly changing network topology, assume the presence of a connected path from source to destination. Given power limitations, the advent of short-range wireless networks, and the wide physical conditions over which ad hoc networks must be deployed, in some scenarios it is likely that this assumption is invalid. In this work, we develop techniques to deliver messages in the case where there is never a connected path from source to destination or when a network partition exists at the time a message is originated. To this end, we introduce Epidemic Routing, where random pair-wise exchanges of messages among mobile hosts ensure eventual message delivery. The goals of Epidemic Routing are to: i) maximize message delivery rate, ii) minimize message latency, and iii) minimize the total resources consumed in message delivery. Through an implementation in the Monarch simulator, we show that Epidemic Routing achieves eventual delivery of 100% of messages with reasonable aggregate resource consumption in a number of interesting scenarios.
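The pair-wise exchange at the heart of Epidemic Routing can be sketched as an anti-entropy merge of message buffers whenever two hosts meet. The contact sequence and message IDs below are invented to show eventual delivery even though source and destination are never directly connected.

```python
# Sketch of Epidemic Routing's anti-entropy step: when two hosts come into
# contact, each copies the messages the other lacks, so messages spread
# epidemically and eventually reach the destination via intermediate carriers.

def exchange(a, b):
    """Merge the message buffers of two hosts that are in contact."""
    union = a | b
    a |= union
    b |= union

buffers = {"src": {"m1"}, "relay": set(), "dst": set()}

# src and dst are never in contact; the mobile relay carries m1 between them.
contacts = [("src", "relay"), ("relay", "dst")]
for x, y in contacts:
    exchange(buffers[x], buffers[y])

print(sorted(buffers["dst"]))  # m1 delivered despite no connected src-dst path
```

A full implementation would exchange compact summary vectors first and only transfer missing messages, plus enforce hop counts and buffer limits to bound the resource consumption the abstract mentions.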

4,355 citations

Journal ArticleDOI
TL;DR: This paper presents a detailed study of recent advances and open research issues in wireless mesh networks (WMNs), then discusses the critical factors influencing protocol design and explores the state-of-the-art protocols for WMNs.

4,205 citations