Author

Ravi R. Mazumdar

Bio: Ravi R. Mazumdar is an academic researcher from the University of Waterloo. The author has contributed to research on topics including wireless networks and scheduling (computing). The author has an h-index of 32 and has co-authored 178 publications receiving 5871 citations. Previous affiliations of Ravi R. Mazumdar include the National Aerospace Laboratory and the Université du Québec.


Papers
Journal ArticleDOI
TL;DR: A game-theoretic framework for bandwidth allocation for elastic services in high-speed networks, based on the Nash bargaining solution from cooperative game theory, that characterizes a rate allocation and a pricing policy accounting for users' budgets in a fair way.
Abstract: In this paper, we present a game-theoretic framework for bandwidth allocation for elastic services in high-speed networks. The framework is based on the idea of the Nash bargaining solution from cooperative game theory, which provides rate settings for users that are not only Pareto optimal from the point of view of the whole system but also consistent with the fairness axioms of game theory. We first consider the centralized problem and then show that this procedure can be decentralized so that greedy optimization by users yields the system-optimal bandwidth allocations. We propose a distributed algorithm for implementing the optimal and fair bandwidth allocation and provide conditions for its convergence. The paper concludes with the pricing of elastic connections based on users' bandwidth requirements and users' budgets. We show that the above bargaining framework can be used to characterize a rate allocation and a pricing policy that take users' budgets into account in a fair way while maximizing total network revenue.
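
To make the bargaining framework above concrete, here is a minimal sketch (ours, not the paper's implementation) of the Nash bargaining allocation on a single bottleneck link. The NBS maximizes the product of surpluses prod_i (x_i - m_i) subject to sum_i x_i <= C, where m_i is user i's minimum required rate; on a single link this reduces to an equal split of the leftover capacity. The function name and example values are illustrative assumptions.

# Minimal sketch: Nash bargaining solution (NBS) rate allocation on one link.
# Maximizing prod_i (x_i - m_i) s.t. sum_i x_i <= C, x_i >= m_i is equivalent
# to maximizing sum_i log(x_i - m_i), whose single-link solution is
# x_i = m_i + (C - sum_j m_j) / n, i.e. an equal split of the surplus.

def nbs_single_link(capacity: float, min_rates: list[float]) -> list[float]:
    surplus = capacity - sum(min_rates)
    if surplus < 0:
        raise ValueError("link cannot satisfy all minimum rates")
    share = surplus / len(min_rates)
    return [m + share for m in min_rates]

# Example: a 10 Mb/s link and three users with minimum rates 1, 2 and 3 Mb/s.
print(nbs_single_link(10.0, [1.0, 2.0, 3.0]))  # [2.33..., 3.33..., 4.33...]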

728 citations

Journal ArticleDOI
TL;DR: Considers a heterogeneous sensor network whose nodes are deployed over a unit area for surveillance, finding the optimum node intensities and node energies that guarantee a lifetime of at least T units while ensuring connectivity and coverage of the surveillance area with high probability.
Abstract: We consider a heterogeneous sensor network in which nodes are to be deployed over a unit area for the purpose of surveillance. An aircraft visits the area periodically and gathers data about the activity in the area from the sensor nodes. There are two types of nodes, distributed over the area using two-dimensional homogeneous Poisson point processes: type 0 nodes with intensity (average number per unit area) λ₀ and battery energy E₀, and type 1 nodes with intensity λ₁ and battery energy E₁. Type 0 nodes do the sensing, while type 1 nodes act as cluster heads in addition to sensing. Nodes use multihopping to communicate with their closest cluster heads. We determine the optimum node intensities (λ₀, λ₁) and node energies (E₀, E₁) that guarantee a lifetime of at least T units, while ensuring connectivity and coverage of the surveillance area with high probability. We minimize the overall cost of the network under these constraints. Lifetime is defined as the number of successful data gathering trips (or cycles) that are possible until connectivity and/or coverage are lost. Conditions for a sharp cutoff are also taken into account, i.e., we ensure that almost all the nodes run out of energy at about the same time so that very little energy is wasted as residual energy. We compare the results for random deployment with those of a grid deployment in which nodes are placed deterministically along grid points. We observe that in both cases λ₁ scales approximately as √λ₀. Our results can be directly extended to take into account unreliable nodes.
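
As a rough illustration of the random-deployment model (our sketch; the intensities, sensing radius and grid resolution below are illustrative values, not the paper's), the following Monte Carlo estimates the probability that a Poisson deployment covers the unit square:

# Minimal sketch: Monte Carlo coverage check for a Poisson deployment on the
# unit square. Both node types sense, so coverage depends on the combined
# intensity lam0 + lam1.
import numpy as np
from scipy.spatial import cKDTree

def coverage_probability(lam0, lam1, r, trials=200, grid=50, seed=0):
    # Fraction of trials in which every grid point lies within distance r
    # of some sensor node.
    rng = np.random.default_rng(seed)
    xs = (np.arange(grid) + 0.5) / grid
    pts = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
    covered = 0
    for _ in range(trials):
        n = rng.poisson(lam0 + lam1)       # total node count is Poisson
        nodes = rng.random((n, 2))         # uniform positions given the count
        if n and cKDTree(nodes).query(pts)[0].max() <= r:
            covered += 1
    return covered / trials

print(coverage_probability(lam0=200, lam1=15, r=0.15))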

486 citations

Proceedings ArticleDOI
29 Sep 2006
TL;DR: It is shown that under a setting with single-hop traffic and no rate control, the maximal scheduling policy can achieve a constant fraction of the capacity region for networks whose connectivity graph can be represented using one of the above classes of graphs.
Abstract: We consider the problem of throughput-optimal scheduling in wireless networks subject to interference constraints. We model the interference using a family of K-hop interference models. We define a K-hop interference model as one in which no two links within K hops can successfully transmit at the same time (note that IEEE 802.11 DCF corresponds to a 2-hop interference model). For a given K, a throughput-optimal scheduler needs to solve a maximum weighted matching problem subject to the K-hop interference constraints. For K = 1, the resulting problem is the classical maximum weighted matching problem, which can be solved in polynomial time. However, we show that for K > 1, the resulting problems are NP-hard and cannot be approximated within a factor that grows polynomially with the number of nodes. Interestingly, we show that for specific kinds of graphs, which can be used to model the underlying connectivity graph of a wide range of wireless networks, the resulting problems admit polynomial-time approximation schemes. We also show that a simple greedy matching algorithm provides a constant-factor approximation to the scheduling problem for all K in this case. We then show that under a setting with single-hop traffic and no rate control, the maximal scheduling policy considered in recent related works can achieve a constant fraction of the capacity region for networks whose connectivity graph can be represented using one of the above classes of graphs. These results are encouraging as they suggest that one can develop distributed algorithms to achieve near-optimal throughput for a wide range of wireless networks.
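
A minimal sketch of the greedy scheduling idea (our reconstruction, not the paper's code; in particular, the conflict convention below — endpoints less than K hops apart conflict — is our reading of "within K hops"):

# Greedy weighted link scheduling under a K-hop interference model: pick
# links in decreasing weight order, keeping a link only if it does not
# conflict with any already-scheduled link. For K = 1 this is the classical
# greedy matching, a 1/2-approximation to maximum weighted matching.
from collections import deque

def hop_dist(adj, src, dst, cap):
    # BFS distance from src to dst in the connectivity graph,
    # or cap + 1 if dst is more than cap hops away.
    seen, q = {src}, deque([(src, 0)])
    while q:
        u, d = q.popleft()
        if u == dst:
            return d
        if d < cap:
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append((v, d + 1))
    return cap + 1

def conflicts(adj, l1, l2, K):
    # Assumed convention: links conflict when some pair of their endpoints
    # is within K - 1 hops; for K = 1 this reduces to the ordinary matching
    # constraint (links sharing a node conflict).
    return min(hop_dist(adj, a, b, K) for a in l1 for b in l2) < K

def greedy_k_hop_schedule(adj, link_weights, K):
    chosen = []
    for link, _ in sorted(link_weights.items(), key=lambda t: -t[1]):
        if not any(conflicts(adj, link, c, K) for c in chosen):
            chosen.append(link)
    return chosen

# Example: 4-node path a-b-c-d with K = 1 (ordinary matching).
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(greedy_k_hop_schedule(adj, {("a", "b"): 3.0, ("b", "c"): 5.0, ("c", "d"): 2.0}, K=1))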

398 citations

Journal ArticleDOI
TL;DR: The nature of the delay-capacity trade-off is related to the nature of node motion, thereby providing a better understanding of the delay-capacity relationship in ad hoc networks in comparison to earlier works.
Abstract: Since the original work of Grossglauser and Tse, which showed that mobility can increase the capacity of an ad hoc network, there has been a lot of interest in characterizing the delay-capacity relationship in ad hoc networks. Various mobility models have been studied in the literature, and the delay-capacity relationships under those models have been characterized. The results indicate that there are trade-offs between the delay and capacity, and that the nature of these trade-offs is strongly influenced by the choice of the mobility model. Some questions that arise are: (i) How representative are these mobility models studied in the literature? (ii) Can the delay-capacity relationship be significantly different under some other "reasonable" mobility model? (iii) What sort of delay-capacity trade-off are we likely to see in a real world scenario? In this paper, we take the first step toward answering some of these questions. In particular, we analyze, among others, the mobility models studied in recent related works, under a unified framework. We relate the nature of delay-capacity trade-off to the nature of node motion, thereby providing a better understanding of the delay-capacity relationship in ad hoc networks in comparison to earlier works.

344 citations

Journal ArticleDOI
TL;DR: This paper investigates the problem of distributively allocating transmission data rates to users in the Internet and shows that a pricing-based mechanism that solves the dual formulation can be developed based on the theory of subdifferentials, with the property that the prices "self-regulate" the users to access the resources based on the net utility.
Abstract: In this paper, we investigate the problem of distributively allocating transmission data rates to users in the Internet. We allow users to have concave as well as sigmoidal utility functions as appropriate for different applications. In the literature, for simplicity, most works have dealt only with the concave utility function. However, we show that applying rate control algorithms developed for concave utility functions in a more realistic setting (with both concave and sigmoidal types of utility functions) could lead to instability and high network congestion. We show that a pricing-based mechanism that solves the dual formulation can be developed based on the theory of subdifferentials with the property that the prices "self-regulate" the users to access the resources based on the net utility. We discuss convergence issues and show that an algorithm can be developed that is efficient in the sense of achieving the global optimum when there are many users.
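
A toy sketch of such a price-based ("dual") mechanism (ours; the utility functions, capacity and step size are illustrative assumptions): a single link posts a price, each user responds with the rate maximizing its net utility, and the price is adjusted by the excess demand. The one-dimensional grid search handles the nonconvex sigmoidal case, where a user's demand drops to zero once the price is too high — the "self-regulating" behavior described above.

import numpy as np

C = 10.0                               # link capacity (illustrative)
grid = np.linspace(0.0, C, 2001)       # candidate rates for the 1-D search

def sigmoidal_utility(x):
    return 1.0 / (1.0 + np.exp(-2.0 * (x - 3.0)))

def concave_utility(x):
    return np.log(1.0 + x)

def best_response(utility, p):
    # Rate maximizing the net utility U(x) - p*x.
    return grid[int(np.argmax(utility(grid) - p * grid))]

users = [sigmoidal_utility, sigmoidal_utility, concave_utility]
p = 0.1
for _ in range(2000):                  # dual (price) iteration
    demand = sum(best_response(u, p) for u in users)
    p = max(0.0, p + 0.005 * (demand - C))
    # Note: with few sigmoidal users the price can oscillate around their
    # drop-out threshold -- the kind of instability the paper discusses;
    # the paper's mechanism addresses this in the many-user regime.

print(f"price {p:.3f}, total demand {demand:.3f}")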

286 citations


Cited by
Journal ArticleDOI
TL;DR: Billingsley's classic monograph on the weak convergence of probability measures on metric spaces.
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex: Wiley, 1968. xii, 253 pp. 9¼ in. 117s.

5,689 citations

Journal ArticleDOI
TL;DR: This paper analyses the stability and fairness of two classes of rate control algorithm for communication networks, which provide natural generalisations to large-scale networks of simple additive increase/multiplicative decrease schemes, and are shown to be stable about a system optimum characterised by a proportional fairness criterion.
Abstract: This paper analyses the stability and fairness of two classes of rate control algorithm for communication networks. The algorithms provide natural generalisations to large-scale networks of simple additive increase/multiplicative decrease schemes, and are shown to be stable about a system optimum characterised by a proportional fairness criterion. Stability is established by showing that, with an appropriate formulation of the overall optimisation problem, the network's implicit objective function provides a Lyapunov function for the dynamical system defined by the rate control algorithm. The network's optimisation problem may be cast in primal or dual form: this leads naturally to two classes of algorithm, which may be interpreted in terms of either congestion indication feedback signals or explicit rates based on shadow prices. Both classes of algorithm may be generalised to include routing control, and provide natural implementations of proportionally fair pricing.
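
For concreteness, a small numerical sketch of the primal class of algorithm (ours; the penalty function and all constants are illustrative, not the paper's): each route r adjusts its rate as dx_r/dt = kappa * (w_r - x_r * sum of congestion signals along its route), and the rates settle near the proportionally fair allocation.

import numpy as np

routes = [(0,), (1,), (0, 1)]          # three routes over two links
w = np.array([1.0, 1.0, 1.0])          # willingness-to-pay per route
cap = np.array([1.0, 1.0])             # link capacities
x = np.full(len(routes), 0.1)          # initial rates
kappa, dt = 1.0, 0.01

def link_price(load, c):
    # Convex penalty standing in for the congestion signal p_j(y);
    # this exact form is our assumption.
    return np.maximum(0.0, load - c) * 20.0 + 0.05 * load

for _ in range(20000):
    load = np.zeros_like(cap)
    for r, links in enumerate(routes):
        load[list(links)] += x[r]
    mu = link_price(load, cap)
    for r, links in enumerate(routes):
        x[r] = max(1e-6, x[r] + dt * kappa * (w[r] - x[r] * mu[list(links)].sum()))

print(np.round(x, 2))  # the two-link route ends up with the smallest rate,
                       # roughly the proportionally fair split (~2/3, 2/3, 1/3)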

5,566 citations

Journal ArticleDOI
TL;DR: A survey of current continuous nonlinear multi-objective optimization concepts and methods, finding that no single approach is superior; rather, the choice of method depends on the type of information provided in the problem, the user's preferences, the solution requirements, and the availability of software.
Abstract: A survey of current continuous nonlinear multi-objective optimization (MOO) concepts and methods is presented. It consolidates and relates seemingly different terminology and methods. The methods are divided into three major categories: methods with a priori articulation of preferences, methods with a posteriori articulation of preferences, and methods with no articulation of preferences. Genetic algorithms are surveyed as well. Commentary is provided on three fronts, concerning the advantages and pitfalls of individual methods, the different classes of methods, and the field of MOO as a whole. The characteristics of the most significant methods are summarized. Conclusions are drawn that reflect often-neglected ideas and applicability to engineering problems. It is found that no single approach is superior. Rather, the selection of a specific method depends on the type of information that is provided in the problem, the user's preferences, the solution requirements, and the availability of software.
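
As a tiny illustration of the a priori category (weighted-sum scalarization; the two objectives and the weights below are toy assumptions of ours, not from the survey):

from scipy.optimize import minimize

def f1(x):  # first objective (toy)
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def f2(x):  # second objective (toy)
    return x[0] ** 2 + (x[1] - 1.0) ** 2

def pareto_point(w1, w2):
    # Weighted-sum scalarization: minimize w1*f1 + w2*f2; sweeping the
    # weights traces out (the convex part of) the Pareto front.
    return minimize(lambda x: w1 * f1(x) + w2 * f2(x), x0=[0.0, 0.0]).x

for w in (0.1, 0.5, 0.9):
    print(w, pareto_point(w, 1.0 - w))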

4,263 citations

Book ChapterDOI
01 Jan 2011
TL;DR: Weak convergence methods in metric spaces are studied in this book, with applications sufficient to show their power and utility; the results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables.
Abstract: The author's preface gives an outline: "This book is about weak convergence methods in metric spaces, with applications sufficient to show their power and utility. The Introduction motivates the definitions and indicates how the theory will yield solutions to problems arising outside it. Chapter 1 sets out the basic general theorems, which are then specialized in Chapter 2 to the space C[0, 1] of continuous functions on the unit interval and in Chapter 3 to the space D[0, 1] of functions with discontinuities of the first kind. The results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables." The book develops and expands on Donsker's 1951 and 1952 papers on the invariance principle and empirical distributions. The basic random variables remain real-valued although, of course, measures on C[0, 1] and D[0, 1] are vitally used. Within this framework, there are various possibilities for a different and apparently better treatment of the material. More of the general theory of weak convergence of probabilities on separable metric spaces would be useful. Metrizability of the convergence is not brought up until late in the Appendix. The close relation of the Prokhorov metric and a metric for convergence in probability is (hence) not mentioned (see V. Strassen, Ann. Math. Statist. 36 (1965), 423-439; the reviewer, ibid. 39 (1968), 1563-1572). This relation would illuminate and organize such results as Theorems 4.1, 4.2 and 4.4, which give isolated, ad hoc connections between weak convergence of measures and nearness in probability. In the middle of p. 16, it should be noted that C*(S) consists of signed measures which need only be finitely additive if S is not compact. On p. 239, where the author twice speaks of separable subsets having nonmeasurable cardinal, he means "discrete" rather than "separable." Theorem 1.4 is Ulam's theorem that a Borel probability on a complete separable metric space is tight. Theorem 1 of Appendix 3 weakens completeness to topological completeness. After mentioning that probabilities on the rationals are tight, the author says it is an
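
For reference, the two standard definitions the review leans on (standard textbook material, not specific to this review):

% Weak convergence of probability measures on a metric space S:
% P_n => P iff expectations of all bounded continuous functions converge.
P_n \Rightarrow P
\quad\Longleftrightarrow\quad
\int_S f \, dP_n \to \int_S f \, dP
\quad\text{for every bounded continuous } f : S \to \mathbb{R}.

% Prokhorov metric (A^\varepsilon is the open \varepsilon-neighbourhood of A):
\pi(P, Q) = \inf\{\varepsilon > 0 : P(A) \le Q(A^{\varepsilon}) + \varepsilon
\ \text{and}\ Q(A) \le P(A^{\varepsilon}) + \varepsilon
\ \text{for all Borel } A\}.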

3,554 citations

Proceedings Article
01 Jan 1991
TL;DR: It is concluded that properly augmented and power-controlled multiple-cell CDMA (code division multiple access) promises a quantum increase in current cellular capacity.
Abstract: It is shown that, particularly for terrestrial cellular telephony, the interference-suppression feature of CDMA (code division multiple access) can result in a many-fold increase in capacity over analog and even over competing digital techniques. A single-cell system, such as a hubbed satellite network, is addressed, and the basic expression for capacity is developed. The corresponding expressions for a multiple-cell system are derived, and the distribution of the number of users supportable per cell is determined. It is concluded that properly augmented and power-controlled multiple-cell CDMA promises a quantum increase in current cellular capacity.
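
For orientation, a standard back-of-the-envelope form of the CDMA uplink capacity per cell consistent with the description above (our summary of the commonly cited form; the paper's exact expression carries additional terms):

% N: users per cell; W/R: processing gain; E_b/I_0: required bit-energy to
% interference-density ratio; \nu: voice activity factor; f: fraction of
% interference arriving from other cells; G_s: sectorization gain.
N \;\approx\; \frac{W/R}{E_b/I_0} \cdot \frac{1}{\nu} \cdot \frac{1}{1+f} \cdot G_s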

2,951 citations