Author

Samir R. Das

Bio: Samir R. Das is an academic researcher from Stony Brook University. The author has contributed to research topics including wireless networks and physics. He has an h-index of 58 and has co-authored 186 publications receiving 29,007 citations. Previous affiliations of Samir R. Das include the University of Texas at San Antonio and the University of Cincinnati.


Papers
Journal ArticleDOI
TL;DR: A research agenda for developing protocols and algorithms for densely populated RFID-based systems covering a wide geographic area, which will need multiple readers collaborating to read RFID tag data; the article also shows how multiple antennas in a reader can be used to improve accuracy and access rates by exploiting antenna diversity.
Abstract: In this article, we outline a research agenda for developing protocols and algorithms for densely populated RFID-based systems covering a wide geographic area. This will need multiple readers collaborating to read RFID tag data. We consider cases where the tag data is used for identification, or for sensing environmental parameters. We address performance issues related to 'accuracy' and 'efficiency' in such systems by exploiting 'diversity' and 'redundancy'. We discuss how tag multiplicity can be used to improve accuracy. In a similar fashion, we explore how reader diversity, achieved by using multiple readers with potentially partially overlapping coverage areas, can be exploited to improve accuracy and efficiency. Finally, we show how multiple antennas in a reader can be used to improve accuracy and access rates by utilizing antenna diversity. RFID tag/sensor data can be highly redundant for the purpose of answering a higher level query. For example, often the higher level query needs to compute a statistic or a function on the sensory data obtained by the RFID sensors, and does not need all the individual sensor readings. We outline the need for efficient tag-to-reader communication, and reader-to-reader coordination to effectively compute such functions with low overhead.

50 citations
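The reader-diversity idea above can be illustrated with a toy fusion routine (a hypothetical sketch, not the article's protocol; the function name and data layout are invented for illustration): each reader reports the tag values it decoded, and a tag's value is accepted only when a strict majority of the readers that saw the tag agree, masking individual read errors through redundancy.

```python
from collections import Counter

def fuse_reads(reads_per_reader):
    """Fuse tag reads from multiple RFID readers with partially
    overlapping coverage: accept a tag value only if a strict
    majority of the readers that decoded that tag agree on it."""
    observations = {}
    # Collect every (tag -> value) observation across all readers.
    for reads in reads_per_reader:
        for tag, value in reads.items():
            observations.setdefault(tag, []).append(value)
    fused = {}
    for tag, values in observations.items():
        value, count = Counter(values).most_common(1)[0]
        if count > len(values) // 2:   # strict majority masks read errors
            fused[tag] = value
    return fused
```

With three readers reporting `[{'t1': 7, 't2': 3}, {'t1': 7}, {'t1': 8, 't2': 3}]`, tag `t1` is fused to 7 (two of three readers agree) and `t2` to 3, while a 1-vs-1 disagreement yields no fused value.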

Proceedings ArticleDOI
01 May 1994
TL;DR: In this paper, an adaptive mechanism based on the Cancelback memory management protocol is proposed to dynamically control the amount of memory used in the simulation in order to maximize performance, which is based on a model that characterizes the behavior of Time Warp programs in terms of the flow of memory buffers among different buffer pools.
Abstract: It is widely believed that Time Warp is prone to two potential problems: an excessive amount of wasted, rolled back computation resulting from “rollback thrashing” behaviors, and inefficient use of memory, leading to poor performance of virtual memory and/or multiprocessor cache systems. An adaptive mechanism is proposed based on the Cancelback memory management protocol that dynamically controls the amount of memory used in the simulation in order to maximize performance. The proposed mechanism is adaptive in the sense that it monitors the execution of the Time Warp program, automatically adjusts the amount of memory used to reduce Time Warp overheads (fossil collection, Cancelback, the amount of rolled back computation, etc.) to a manageable level. The mechanism is based on a model that characterizes the behavior of Time Warp programs in terms of the flow of memory buffers among different buffer pools. We demonstrate that an implementation of the adaptive mechanism on a Kendall Square Research KSR-1 multiprocessor is effective in automatically maximizing performance while minimizing memory utilization of Time Warp programs, even for dynamically changing simulation models.

48 citations
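The monitor-and-adjust loop described above can be caricatured in a few lines (a toy sketch with invented parameter names, not the paper's actual Cancelback controller or its buffer-flow model): the runtime observes rollback and Cancelback overheads and nudges the memory limit up when overheads are high, down when they are comfortably low.

```python
def adjust_memory_limit(limit, rollback_rate, cancelback_rate,
                        target=0.1, step=64, floor=128):
    """Toy feedback step in the spirit of adaptive Cancelback:
    grow the Time Warp buffer limit when rollback/Cancelback
    overheads exceed a target, shrink it (down to a floor) when
    the simulation is running cheaply."""
    overhead = rollback_rate + cancelback_rate
    if overhead > target:
        return limit + step          # give Time Warp more buffers
    return max(floor, limit - step)  # reclaim memory when overheads are low
```

Invoked once per monitoring interval, such a rule converges toward the smallest memory allocation that keeps rollback thrashing in check.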

Proceedings ArticleDOI
12 Dec 2005
TL;DR: This paper considers the problem of choosing a minimum subset of sensors such that they maintain a required degree of coverage and also form a connected network with a required degree of fault tolerance, and proposes a distributed and localized Voronoi-based algorithm to address this problem.
Abstract: Sensor networks are often deployed in a redundant fashion. In order to prolong the network lifetime, it is desired to choose only a subset of sensors to keep active and put the rest to sleep. In order to provide fault tolerance, this small subset of active sensors should also provide some degree of redundancy. In this paper, we consider the problem of choosing a minimum subset of sensors such that they maintain a required degree of coverage and also form a connected network with a required degree of fault tolerance. In addition, we consider a more general, variable radii sensor model, wherein every sensor can adjust both its sensing and transmission ranges to minimize overall energy consumption in the network. We call this the variable radii k1-Connected, k2-Cover problem. To address this problem, we propose a distributed and localized Voronoi-based algorithm. The approach extends the relative neighborhood graph (RNG) structure to preserve k-connectivity in a graph, and designs a distributed technique to inactivate desirable nodes while preserving k-connectivity of the remaining active nodes. We show through extensive simulations that our proposed techniques result in overall energy savings in random sensor networks over a wide range of experimental parameters.

48 citations
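The coverage-degree requirement underlying the k1-Connected, k2-Cover formulation can be sketched with a naive centralized check (illustrative only; the paper's algorithm is distributed, localized, and Voronoi-based, and also handles adjustable radii):

```python
import math

def coverage_degree(point, sensors):
    """Count how many sensors cover `point`; each sensor is a
    (x, y, sensing_radius) triple."""
    px, py = point
    return sum(1 for (sx, sy, r) in sensors
               if math.hypot(px - sx, py - sy) <= r)

def is_k_covered(points, sensors, k):
    """A deployment provides k-coverage if every point of interest
    is within sensing range of at least k active sensors."""
    return all(coverage_degree(p, sensors) >= k for p in points)
```

A sleep-scheduling algorithm then deactivates sensors only while `is_k_covered` (and a corresponding k-connectivity test on the active nodes) remains true.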

Journal ArticleDOI
TL;DR: The dead reckoning-based location service mechanism is evaluated against three known location dissemination service protocols: simple, distance routing effect algorithm for mobility (DREAM) and geographic region summary service (GRSS) and it is observed that dead reckoning significantly outperforms the other protocols in terms of packet delivery fraction.
Abstract: A predictive model-based mobility tracking method, called dead reckoning, is developed for mobile ad hoc networks. It disseminates both location and movement models of mobile nodes in the network so that every node is able to predict or track the movement of every other node with a very low overhead. The basic technique is optimized to use the 'distance effect', where distant nodes maintain less accurate tracking information to save overheads. The dead reckoning-based location service mechanism is evaluated against three known location dissemination service protocols: simple, distance routing effect algorithm for mobility (DREAM), and geographic region summary service (GRSS). The evaluation is done with geographic routing as an application. It is observed that dead reckoning significantly outperforms the other protocols in terms of packet delivery fraction. It also maintains low control overhead. Its packet delivery performance is only marginally impacted by increasing speed or noise in the mobility model that affects its predictive ability. Copyright © 2004 John Wiley & Sons, Ltd.

48 citations
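The core of dead reckoning is easy to sketch (a simplified linear-motion model; the function and threshold names here are illustrative, not the paper's): each node extrapolates a peer's position from the peer's last disseminated movement model, and a node re-broadcasts its own model only when its actual position drifts too far from what others would predict.

```python
def predict_position(last_fix, now):
    """Dead reckoning: extrapolate a node's position from its last
    disseminated (position, velocity, timestamp) movement model,
    instead of flooding a location update for every move."""
    (x, y), (vx, vy), t0 = last_fix
    dt = now - t0
    return (x + vx * dt, y + vy * dt)

def needs_update(actual, predicted, threshold):
    """Re-disseminate the movement model only when the prediction
    error exceeds a threshold -- this is what keeps overhead low."""
    ax, ay = actual
    px, py = predicted
    return ((ax - px) ** 2 + (ay - py) ** 2) ** 0.5 > threshold
```

The 'distance effect' optimization mentioned in the abstract would additionally let far-away nodes tolerate a larger error threshold than nearby ones.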

Book ChapterDOI
01 Jan 2005
TL;DR: Various proposed approaches for routing in mobile ad hoc networks such as flooding, proactive, on-demand and geographic routing are surveyed, and representative protocols from each of these categories are reviewed.
Abstract: Efficient, dynamic routing is one of the key challenges in mobile ad hoc networks. In the recent past, this problem was addressed by many research efforts, resulting in a large body of literature. We survey various proposed approaches for routing in mobile ad hoc networks such as flooding, proactive, on-demand and geographic routing, and review representative protocols from each of these categories. We further conduct qualitative comparisons across various approaches. We also point out future research issues in the context of individual routing approaches as well as from the overall system perspective.

45 citations


Cited by
Journal ArticleDOI
TL;DR: This work develops and analyzes low-energy adaptive clustering hierarchy (LEACH), a protocol architecture for microsensor networks that combines the ideas of energy-efficient cluster-based routing and media access together with application-specific data aggregation to achieve good performance in terms of system lifetime, latency, and application-perceived quality.
Abstract: Networking together hundreds or thousands of cheap microsensor nodes allows users to accurately monitor a remote environment by intelligently combining the data from the individual nodes. These networks require robust wireless communication protocols that are energy efficient and provide low latency. We develop and analyze low-energy adaptive clustering hierarchy (LEACH), a protocol architecture for microsensor networks that combines the ideas of energy-efficient cluster-based routing and media access together with application-specific data aggregation to achieve good performance in terms of system lifetime, latency, and application-perceived quality. LEACH includes a new, distributed cluster formation technique that enables self-organization of large numbers of nodes, algorithms for adapting clusters and rotating cluster head positions to evenly distribute the energy load among all the nodes, and techniques to enable distributed signal processing to save communication resources. Our results show that LEACH can improve system lifetime by an order of magnitude compared with general-purpose multihop approaches.

10,296 citations
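LEACH's rotating cluster-head election can be sketched as follows, using the threshold T(n) = P / (1 − P·(r mod 1/P)) from the paper, where P is the desired fraction of cluster heads and r the current round; nodes that have already served as head this epoch sit out until the epoch (1/P rounds) resets. The helper names are illustrative.

```python
import random

def leach_threshold(p, r):
    """LEACH election threshold for a node that has not yet been a
    cluster head in the current epoch: T(n) rises as the epoch
    progresses so that every node serves exactly once per epoch,
    spreading the energy-hungry cluster-head role evenly."""
    return p / (1 - p * (r % int(round(1 / p))))

def elect_self(p, r, was_head_this_epoch, rng=random.random):
    """A node elects itself cluster head in round r with
    probability T(n), unless it already served this epoch."""
    if was_head_this_epoch:
        return False
    return rng() < leach_threshold(p, r)
```

With P = 0.1, the threshold starts at 0.1 in round 0 and climbs to 1.0 by round 9, guaranteeing the last remaining non-head node takes its turn.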

Journal ArticleDOI

6,278 citations

Proceedings ArticleDOI
01 Aug 2000
TL;DR: This paper explores and evaluates the use of directed diffusion for a simple remote-surveillance sensor network and its implications for sensing, communication and computation.
Abstract: Advances in processor, memory and radio technology will enable small and cheap nodes capable of sensing, communication and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed diffusion paradigm for such coordination. Directed diffusion is datacentric in that all communication is for named data. All nodes in a directed diffusion-based network are application-aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network. We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network.

6,061 citations
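The data-centric coordination in directed diffusion can be sketched in miniature (a centralized caricature with invented names; real diffusion is fully distributed and adds gradient reinforcement and in-network caching): the sink floods a named-data interest, each node records the neighbor the interest arrived from as a gradient, and matching data later follows gradients back toward the sink.

```python
def flood_interest(graph, sink):
    """Flood an interest outward from the sink over an adjacency-list
    graph; each node stores a gradient: the neighbor from which the
    interest first arrived (i.e., the next hop back to the sink)."""
    gradients = {}
    frontier = [sink]
    while frontier:
        nxt = []
        for node in frontier:
            for nbr in graph.get(node, []):
                if nbr not in gradients and nbr != sink:
                    gradients[nbr] = node   # gradient points toward the sink
                    nxt.append(nbr)
        frontier = nxt
    return gradients

def route_data(gradients, source, sink):
    """Sensed data follows the stored gradients hop by hop."""
    path = [source]
    while path[-1] != sink:
        path.append(gradients[path[-1]])
    return path
```

Because gradients are set up by the interest flood itself, no node needs global topology knowledge to return data to the sink.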

Amin Vahdat
01 Jan 2000
TL;DR: This work introduces Epidemic Routing, where random pair-wise exchanges of messages among mobile hosts ensure eventual message delivery and achieves eventual delivery of 100% of messages with reasonable aggregate resource consumption in a number of interesting scenarios.
Abstract: Mobile ad hoc routing protocols allow nodes with wireless adaptors to communicate with one another without any pre-existing network infrastructure. Existing ad hoc routing protocols, while robust to rapidly changing network topology, assume the presence of a connected path from source to destination. Given power limitations, the advent of short-range wireless networks, and the wide physical conditions over which ad hoc networks must be deployed, in some scenarios it is likely that this assumption is invalid. In this work, we develop techniques to deliver messages in the case where there is never a connected path from source to destination or when a network partition exists at the time a message is originated. To this end, we introduce Epidemic Routing, where random pair-wise exchanges of messages among mobile hosts ensure eventual message delivery. The goals of Epidemic Routing are to: i) maximize message delivery rate, ii) minimize message latency, and iii) minimize the total resources consumed in message delivery. Through an implementation in the Monarch simulator, we show that Epidemic Routing achieves eventual delivery of 100% of messages with reasonable aggregate resource consumption in a number of interesting scenarios.

4,355 citations
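The pairwise exchange at the heart of Epidemic Routing can be sketched as an anti-entropy session (function and variable names are illustrative): when two nodes meet, each learns the other's summary vector of carried message IDs and pulls only the messages it lacks, so every message eventually spreads to every node even without a connected end-to-end path.

```python
def anti_entropy(buffer_a, buffer_b):
    """One Epidemic Routing encounter between two nodes whose
    message buffers map message-ID -> payload. Each side compares
    summary vectors (the ID sets) and copies over only the
    messages it is missing."""
    summary_a, summary_b = set(buffer_a), set(buffer_b)
    for mid in summary_b - summary_a:
        buffer_a[mid] = buffer_b[mid]   # A pulls what only B carries
    for mid in summary_a - summary_b:
        buffer_b[mid] = buffer_a[mid]   # B pulls what only A carries
```

After the exchange both buffers hold the union of the two message sets; repeated random encounters drive the network toward eventual delivery of every message.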

Journal ArticleDOI
TL;DR: This paper presents a detailed study of recent advances and open research issues in WMNs, followed by a discussion of the critical factors influencing protocol design and an exploration of the state-of-the-art protocols for WMNs.

4,205 citations