Author

Sanjeev Setia

Bio: Sanjeev Setia is an academic researcher from George Mason University. The author has contributed to research on topics including wireless sensor networks and key distribution in wireless sensor networks. The author has an h-index of 33 and has co-authored 66 publications receiving 5,600 citations. Previous affiliations of Sanjeev Setia include the University of Maryland, College Park.


Papers
Journal ArticleDOI
TL;DR: This article determines the average response time for disk read requests under the periodic update write policy and derives design criteria for choosing among competing cache write policies.
Abstract: A disk cache is typically used in file systems to reduce average access time for data storage and retrieval. The 'periodic update' write policy, widely used in existing computer systems, is one in which dirty cache blocks are written to a disk on a periodic basis. The average response time for disk read requests when the periodic update write policy is used is determined. Read and write load, cache-hit ratio, and the disk scheduler's ability to reduce service time under load are incorporated in the analysis, leading to design criteria that can be used to decide among competing cache write policies. The main conclusion is that the bulk arrivals generated by the periodic update policy cause a traffic jam effect which results in severely degraded service. Effective use of the disk cache and disk scheduling can alleviate this problem, but only under a narrow range of operating conditions. Based on this conclusion, alternate write policies that retain the periodic update policy's advantages and provide uniformly better service are proposed.

48 citations
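
The periodic update policy analyzed above lends itself to a small illustration. The sketch below is not the paper's analytic model; it is a toy event loop with assumed parameters (a 30-second sync interval, request rate, hit ratio, and disk service time) that shows how deferring dirty-block writes to a periodic flush produces the bulk arrivals at the disk that the paper identifies as a traffic-jam effect.

```python
import random

# Toy simulation of the "periodic update" write policy (assumed parameters,
# not the paper's analytic model): dirty blocks accumulate in the cache and
# are flushed to disk in bulk every SYNC_INTERVAL seconds, delaying read misses.

random.seed(1)

SYNC_INTERVAL = 30.0     # seconds between periodic flushes (assumed)
DISK_SERVICE = 0.012     # seconds per disk I/O (assumed)
REQ_RATE = 40.0          # file-system requests per second (assumed)
WRITE_FRACTION = 0.3     # fraction of requests that are writes (assumed)
HIT_RATIO = 0.8          # read cache-hit ratio (assumed)
SIM_TIME = 600.0

disk_free_at = 0.0       # time at which the disk becomes idle again
dirty_blocks = 0
next_sync = SYNC_INTERVAL
read_delays = []

t = 0.0
while t < SIM_TIME:
    t += random.expovariate(REQ_RATE)      # next request arrival
    # Periodic flush: queue every dirty block as one bulk arrival at the disk.
    while next_sync <= t:
        disk_free_at = max(disk_free_at, next_sync) + dirty_blocks * DISK_SERVICE
        dirty_blocks = 0
        next_sync += SYNC_INTERVAL
    if random.random() < WRITE_FRACTION:
        dirty_blocks += 1                   # write absorbed by the cache
    elif random.random() < HIT_RATIO:
        read_delays.append(0.0)             # read hit: served from cache
    else:
        # Read miss: wait behind whatever the disk is currently doing.
        start = max(disk_free_at, t)
        disk_free_at = start + DISK_SERVICE
        read_delays.append(disk_free_at - t)

print(f"mean read response time:  {sum(read_delays)/len(read_delays)*1000:.2f} ms")
print(f"worst read response time: {max(read_delays)*1000:.2f} ms")
```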

Proceedings ArticleDOI
03 Aug 1999
TL;DR: The design and implementation of Dodo is presented, an efficient user-level system for harvesting idle memory in off-the-shelf clusters of workstations that requires no modifications to the operating system and/or processor firmware and is hence portable to multiple platforms.
Abstract: In this paper, we present the design and implementation of Dodo, an efficient user-level system for harvesting idle memory in off-the-shelf clusters of workstations. Dodo enables data-intensive applications to use remote memory in a cluster as an intermediate cache between local memory and disk. It requires no modifications to the operating system and/or processor firmware and is hence portable to multiple platforms. Further, the memory recruitment policy used by Dodo is designed to minimize any delays experienced by the owners of the desktop machines whose memory is harvested by Dodo. Our implementation of Dodo is operational and currently runs on Linux 2.0.35. For communication, Dodo can use either UDP/IP or U-Net, the low-latency user-level network architecture developed by von Eicken et al. (1995). We evaluated the performance improvements that can be achieved by using Dodo for two real applications and three synthetic benchmarks. Our results show that speedups obtained for an application are highly dependent on its I/O access pattern and data set sizes. Significant speedups (between 2 and 3) were obtained for applications whose working sets are larger than the local memory on a workstation but smaller than the aggregate memory available on the cluster, and for applications that can benefit from the zero-seek nature of remote memory.

48 citations
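
The lookup path that a remote-memory cache such as Dodo provides can be sketched as follows. The class and method names below are invented for illustration and are not Dodo's actual interface; the sketch only shows the local-memory, then remote-memory, then disk ordering described in the abstract.

```python
# Hypothetical sketch of a remote-memory cache lookup path (names and
# interfaces invented for illustration; this is not Dodo's API): blocks are
# looked up in local memory first, then in memory harvested from idle cluster
# nodes, and only then read from disk.

class RemoteMemoryCache:
    def __init__(self, local_capacity, remote_nodes):
        self.local = {}                    # block_id -> data (local memory)
        self.local_capacity = local_capacity
        self.remote = remote_nodes         # node_name -> {block_id: data}

    def read(self, block_id, read_from_disk):
        if block_id in self.local:         # fastest: local memory hit
            return self.local[block_id]
        for node, store in self.remote.items():
            if block_id in store:          # remote hit: a network round trip,
                data = store[block_id]     # but no disk seek
                self._cache_locally(block_id, data)
                return data
        data = read_from_disk(block_id)    # slowest path: go to disk
        self._cache_locally(block_id, data)
        return data

    def _cache_locally(self, block_id, data):
        if len(self.local) >= self.local_capacity:
            # Evict an arbitrary block; a real system would push the victim
            # out to remote memory instead of dropping it.
            self.local.pop(next(iter(self.local)))
        self.local[block_id] = data

# Example: one remote node already holds block 7.
cache = RemoteMemoryCache(local_capacity=2,
                          remote_nodes={"nodeA": {7: b"remote data"}})
print(cache.read(7, read_from_disk=lambda b: b"disk data"))   # remote hit
print(cache.read(9, read_from_disk=lambda b: b"disk data"))   # disk read
```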

Journal ArticleDOI
01 Sep 2002
TL;DR: This paper presents a new scalable and reliable key distribution protocol for group key management schemes that use logical key hierarchies (LKH) for scalable group rekeying and shows that for most network loss scenarios, the bandwidth used by WKA-BKR is lower than that of the other protocols.
Abstract: In this paper, we present a new scalable and reliable key distribution protocol for group key management schemes that use logical key hierarchies (LKH) for scalable group rekeying. Our protocol, called WKA-BKR, is based upon two ideas, weighted key assignment and batched key retransmission, both of which exploit the special properties of LKH and the group rekey transport payload to reduce the bandwidth overhead of the reliable key delivery protocol. Using both analytic modeling and simulation, we compare the performance of WKA-BKR with that of other rekey transport protocols, including a recently proposed protocol based on proactive FEC. Our results show that for most network loss scenarios, the bandwidth used by WKA-BKR is lower than that of the other protocols.

47 citations
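
The intuition behind weighted key assignment can be shown with a toy binary logical key hierarchy. In the sketch below, the heap-style node numbering, loss rate, and replication heuristic are all assumptions for illustration; it is not the WKA-BKR protocol itself, only a demonstration that keys nearer the root of an LKH tree are needed by more group members and therefore merit more redundancy in the rekey multicast.

```python
import math

# Toy binary LKH tree: leaves are members, internal nodes are keys. When a
# member leaves, every key on its leaf-to-root path must be replaced. Keys
# higher in the tree are needed by more members, so this sketch replicates
# them more heavily (heuristic and parameters are assumptions, not WKA-BKR).

def path_to_root(leaf, num_leaves):
    """Return the internal node indices (1 = root) on the path from a leaf up."""
    node = leaf + num_leaves           # leaves occupy indices n .. 2n-1
    path = []
    while node > 1:
        node //= 2                     # parent in heap-style numbering
        path.append(node)
    return path

def rekey_on_leave(leaving_leaf, num_leaves, loss_rate=0.1):
    """List the keys that must change and a weight-based replication count."""
    plan = []
    for node in path_to_root(leaving_leaf, num_leaves):
        depth = int(math.log2(node))
        members_needing_key = num_leaves >> depth     # size of node's subtree
        # Heuristic: send enough copies that the expected number of members
        # missing the key (weight * loss_rate^copies) stays below one.
        copies = max(1, math.ceil(math.log(members_needing_key) /
                                  math.log(1.0 / loss_rate)))
        plan.append((f"k{node}", members_needing_key, copies))
    return plan

for key, weight, copies in rekey_on_leave(leaving_leaf=5, num_leaves=16):
    print(f"{key}: needed by {weight:2d} members -> send {copies} copies")
```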

01 Jul 1990
TL;DR: This article determines the average response time for disk read requests under the periodic update write policy and derives design criteria for choosing among competing cache write policies.
Abstract: A disk cache is typically used in file systems to reduce average access time for data storage and retrieval. The 'periodic update' write policy, widely used in existing computer systems, is one in which dirty cache blocks are written to a disk on a periodic basis. The average response time for disk read requests when the periodic update write policy is used is determined. Read and write load, cache-hit ratio, and the disk scheduler's ability to reduce service time under load are incorporated in the analysis, leading to design criteria that can be used to decide among competing cache write policies. The main conclusion is that the bulk arrivals generated by the periodic update policy cause a traffic jam effect which results in severely degraded service. Effective use of the disk cache and disk scheduling can alleviate this problem, but only under a narrow range of operating conditions. Based on this conclusion, alternate write policies that retain the periodic update policy's advantages and provide uniformly better service are proposed.

46 citations

Journal ArticleDOI
TL;DR: A new algorithm for replica control is presented that can be tailored (through a design parameter) to achieve the desired balance between low message overhead and high data availability and can achieve asymptotically high resiliency.
Abstract: We examine the tradeoff between message overhead and data availability that arises in the design of fault-tolerant algorithms for replicated data management in distributed systems. We propose a property called asymptotically high resiliency which is useful for evaluating the fault-tolerance of replica control algorithms and distributed mutual exclusion algorithms. We present a new algorithm for replica control that can be tailored (through a design parameter) to achieve the desired balance between low message overhead and high data availability. Further, we show that for a message overhead of O(√(N log N)), our algorithm can achieve asymptotically high resiliency.

46 citations
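
The overhead/availability tradeoff the abstract describes can be illustrated with a generic quorum-consensus calculation; this is not the paper's algorithm, and the replica count and per-replica availability below are assumed values. The write-quorum size plays the role of the design parameter: larger write quorums cost more messages per write but allow smaller read quorums.

```python
from math import comb

# Generic quorum-consensus sketch (not the paper's algorithm) of the
# message-overhead vs. availability tradeoff: the write quorum size q is the
# tunable parameter; the read quorum N - q + 1 must overlap every write quorum.

def availability(n, q, p):
    """Probability that at least q of n replicas are up, each up w.p. p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(q, n + 1))

N = 15          # number of replicas (assumed)
P_UP = 0.9      # independent per-replica availability (assumed)

print(" q  write-overhead  write-avail  read-quorum  read-avail")
for q in range(N // 2 + 1, N + 1):      # q > N/2 keeps write quorums intersecting
    r = N - q + 1                       # read quorum overlapping every write quorum
    print(f"{q:2d}  {q:14d}  {availability(N, q, P_UP):11.4f}"
          f"  {r:11d}  {availability(N, r, P_UP):10.4f}")
```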


Cited by
Journal ArticleDOI
TL;DR: In this article, the authors present a cloud-centric vision for the worldwide implementation of the Internet of Things (IoT), describe a Cloud implementation using Aneka, which is based on the interaction of private and public Clouds, and conclude their IoT vision by expanding on the need for convergence of WSNs, the Internet, and distributed computing, directed at the technological research community.

9,593 citations

Journal ArticleDOI
01 May 1975
TL;DR: The Fundamentals of Queueing Theory, Fourth Edition as discussed by the authors provides a comprehensive overview of simple and more advanced queuing models, with a self-contained presentation of key concepts and formulae.
Abstract: Praise for the Third Edition: "This is one of the best books available. Its excellent organizational structure allows quick reference to specific models and its clear presentation . . . solidifies the understanding of the concepts being presented." (IIE Transactions on Operations Engineering) Thoroughly revised and expanded to reflect the latest developments in the field, Fundamentals of Queueing Theory, Fourth Edition continues to present the basic statistical principles that are necessary to analyze the probabilistic nature of queues. Rather than presenting a narrow focus on the subject, this update illustrates the wide-reaching, fundamental concepts in queueing theory and its applications to diverse areas such as computer science, engineering, business, and operations research. This update takes a numerical approach to understanding and making probable estimations relating to queues, with a comprehensive outline of simple and more advanced queueing models. Newly featured topics of the Fourth Edition include: retrial queues; approximations for queueing networks; numerical inversion of transforms; and determining the appropriate number of servers to balance quality and cost of service. Each chapter provides a self-contained presentation of key concepts and formulae, allowing readers to work with each section independently, while a summary table at the end of the book outlines the types of queues that have been discussed and their results. In addition, two new appendices have been added, discussing transforms and generating functions as well as the fundamentals of differential and difference equations. New examples are now included along with problems that incorporate QtsPlus software, which is freely available via the book's related Web site. With its accessible style and wealth of real-world examples, Fundamentals of Queueing Theory, Fourth Edition is an ideal book for courses on queueing theory at the upper-undergraduate and graduate levels. It is also a valuable resource for researchers and practitioners who analyze congestion in the fields of telecommunications, transportation, aviation, and management science.

2,562 citations
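
One of the topics listed above, determining the appropriate number of servers, has a standard worked form via the Erlang C formula for an M/M/c queue. The arrival rate, service rate, and wait target in the sketch below are assumed values chosen only to show the calculation.

```python
from math import factorial

# Choosing the number of servers c in an M/M/c queue so the expected wait in
# queue stays below a target (all rates and the target below are assumed).

def erlang_c(c, a):
    """Probability an arriving customer must wait (Erlang C); a = lam/mu."""
    denom = sum(a**k / factorial(k) for k in range(c)) * (1 - a / c) \
            + a**c / factorial(c)
    return (a**c / factorial(c)) / denom

def mean_wait(lam, mu, c):
    """Expected time in queue Wq for an M/M/c queue (requires lam < c*mu)."""
    a = lam / mu
    return erlang_c(c, a) / (c * mu - lam)

lam, mu, target = 20.0, 3.0, 0.05   # arrivals/hr, services/hr per server, target wait (hr)
c = int(lam / mu) + 1               # smallest c that keeps the queue stable
while mean_wait(lam, mu, c) > target:
    c += 1
print(f"servers needed: {c}, mean wait: {mean_wait(lam, mu, c):.4f} hours")
```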

Journal ArticleDOI
TL;DR: The fast progress of research on energy efficiency, networking, data management, and security in wireless sensor networks, and the need to compare these results with the solutions adopted in the standards, motivate the need for a survey of this field.

1,708 citations

Proceedings ArticleDOI
27 Oct 2003
TL;DR: The Localized Encryption and Authentication Protocol (LEAP) as discussed by the authors is a key management protocol for sensor networks that is designed to support in-network processing, while at the same time restricting the security impact of a node compromise to the immediate network neighborhood of the compromised node.
Abstract: In this paper, we describe LEAP (Localized Encryption and Authentication Protocol), a key management protocol for sensor networks that is designed to support in-network processing, while at the same time restricting the security impact of a node compromise to the immediate network neighborhood of the compromised node. The design of the protocol is motivated by the observation that different types of messages exchanged between sensor nodes have different security requirements, and that a single keying mechanism is not suitable for meeting these different security requirements. LEAP supports the establishment of four types of keys for each sensor node: an individual key shared with the base station, a pairwise key shared with another sensor node, a cluster key shared with multiple neighboring nodes, and a group key that is shared by all the nodes in the network. The protocol used for establishing and updating these keys is communication- and energy-efficient, and minimizes the involvement of the base station. LEAP also includes an efficient protocol for inter-node traffic authentication based on the use of one-way key chains. A salient feature of the authentication protocol is that it supports source authentication without precluding in-network processing and passive participation. We analyze the performance and the security of our scheme under various attack models and show that our schemes are very efficient in defending against many attacks.

1,097 citations
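
The one-way key chain that underlies LEAP's inter-node authentication follows a standard hash-chain construction, sketched below. The hash function, key length, chain length, and verification bound are assumptions for illustration, not LEAP's exact parameters.

```python
import hashlib
import os

# Minimal one-way key chain sketch (illustrative only; parameters are assumed,
# not LEAP's exact construction). The last key is random, each earlier key is
# the hash of its successor, and only the first key K_0 is published; keys are
# then disclosed in order and verified by hashing back to the commitment.

H = lambda b: hashlib.sha256(b).digest()

def generate_chain(length):
    """Build [K_0, K_1, ..., K_n] with K_i = H(K_{i+1}) and K_n random."""
    chain = [os.urandom(32)]           # K_n, kept secret initially
    for _ in range(length):
        chain.append(H(chain[-1]))
    chain.reverse()                    # chain[0] = K_0 is the public commitment
    return chain

def verify(disclosed_key, commitment, max_steps=1000):
    """Accept a disclosed key if repeated hashing reaches the commitment."""
    k = disclosed_key
    for _ in range(max_steps):
        k = H(k)
        if k == commitment:
            return True
    return False

chain = generate_chain(10)
commitment = chain[0]                        # distributed once, e.g. at deployment
print(verify(chain[3], commitment))          # True: K_3 hashes back to K_0
print(verify(os.urandom(32), commitment))    # False: a forged key is rejected
```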

Journal ArticleDOI
TL;DR: This paper analyzes the cross-layer heterogeneous integration issues and security issues of IoT in detail, discusses the security issues of IoT as a whole, compares security issues between IoT and traditional networks, and discusses open security issues of IoT.
Abstract: The Internet of Things (IoT) has been playing an increasingly important role since its emergence, covering everything from traditional equipment to general household objects and building on technologies such as WSNs and RFID. With the great potential of IoT come all kinds of challenges. This paper focuses on the security problems among these challenges. Because IoT is built on the basis of the Internet, security problems of the Internet also appear in IoT. As IoT comprises three layers, the perception layer, the transportation layer, and the application layer, this paper analyzes the security problems of each layer separately and tries to find new problems and solutions. It also analyzes the cross-layer heterogeneous integration issues and security issues in detail, discusses the security issues of IoT as a whole, and tries to find solutions to them. In the end, the paper compares security issues between IoT and traditional networks and discusses open security issues of IoT.

1,060 citations