
Showing papers on "Scheduling (computing)" published in 2005


Journal ArticleDOI
TL;DR: The fundamental approaches for scheduling under uncertainty are reviewed: reactive scheduling, stochastic project scheduling, fuzzy project scheduling, robust (proactive) scheduling, and sensitivity analysis.

881 citations


Proceedings ArticleDOI
28 Aug 2005
TL;DR: This paper provides necessary conditions to verify the feasibility of rate vectors in next generation fixed wireless broadband networks, and uses them to derive upper bounds on the capacity in terms of achievable throughput, using a fast primal-dual algorithm.
Abstract: Next generation fixed wireless broadband networks are being increasingly deployed as mesh networks in order to provide and extend access to the internet. These networks are characterized by the use of multiple orthogonal channels and nodes with the ability to simultaneously communicate with many neighbors using multiple radios (interfaces) over orthogonal channels. Networks based on the IEEE 802.11a/b/g and 802.16 standards are examples of these systems. However, due to the limited number of available orthogonal channels, interference is still a factor in such networks. In this paper, we propose a network model that captures the key practical aspects of such systems and characterize the constraints binding their behavior. We provide necessary conditions to verify the feasibility of rate vectors in these networks, and use them to derive upper bounds on the capacity in terms of achievable throughput, using a fast primal-dual algorithm. We then develop two link channel assignment schemes, one static and the other dynamic, in order to derive lower bounds on the achievable throughput. We demonstrate through simulations that the dynamic link channel assignment scheme performs close to optimal on the average, while the static link channel assignment algorithm also performs very well. The methods proposed in this paper can be a valuable tool for network designers in planning network deployment and for optimizing different performance objectives.

825 citations


Journal ArticleDOI
TL;DR: The results obtained from the computational study have shown that the proposed algorithm is a viable and effective approach for the multi-objective FJSP, especially for problems on a large scale.

639 citations


Journal ArticleDOI
TL;DR: This paper presents a hybrid genetic algorithm, based on random keys, for the job shop scheduling problem; the algorithm is tested on a set of standard instances taken from the literature and compared with other approaches.

577 citations


Journal ArticleDOI
25 Jul 2005
TL;DR: The paper gives an overview of current research interests in the SymTA/S project; the SymTA/S tool determines system-level performance data such as end-to-end latencies, bus and processor utilisation, and worst-case scheduling scenarios.
Abstract: SymTA/S is a system-level performance and timing analysis approach based on formal scheduling analysis techniques and symbolic simulation. The tool supports heterogeneous architectures, complex task dependencies and context aware analysis. It determines system-level performance data such as end-to-end latencies, bus and processor utilisation, and worst-case scheduling scenarios. SymTA/S furthermore combines optimisation algorithms with system sensitivity analysis for rapid design space exploration. The paper gives an overview of current research interests in the SymTA/S project.

533 citations


Proceedings ArticleDOI
05 Dec 2005
TL;DR: This paper proposes a cost-based workflow scheduling algorithm that minimizes execution cost while meeting the deadline for delivering results, and attempts to optimally solve the task scheduling problem in branches with several sequential tasks by modeling the branch as a Markov decision process and using the value iteration method.
Abstract: Over the last few years, grid technologies have progressed towards a service-oriented paradigm that enables a new way of service provisioning based on utility computing models. Users consume these services based on their QoS (quality of service) requirements. In such "pay-per-use" grids, workflow execution cost must be considered during scheduling based on users' QoS constraints. In this paper, we propose a cost-based workflow scheduling algorithm that minimizes execution cost while meeting the deadline for delivering results. It can also adapt to the delays of service executions by rescheduling unexecuted tasks. We also attempt to optimally solve the task scheduling problem in branches with several sequential tasks by modeling the branch as a Markov decision process and using the value iteration method.
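
For intuition, the branch optimization reduces to a small dynamic program. The sketch below runs finite-horizon value iteration (backward induction) over one branch of sequential tasks, with state (task index, remaining time budget) and a choice of candidate services per task; the services, costs, duration distributions, and deadline are invented for illustration and are not taken from the paper.

```python
from functools import lru_cache

INF = float("inf")

# For each task in the branch: candidate services as (cost, ((duration, prob), ...)).
# All numbers below are hypothetical.
TASKS = (
    ((10, ((2, 0.8), (4, 0.2))), (4, ((5, 0.7), (7, 0.3)))),
    ((8, ((1, 0.9), (3, 0.1))), (3, ((4, 0.6), (6, 0.4)))),
)
DEADLINE = 10

@lru_cache(maxsize=None)
def value(task, budget):
    """Minimum expected cost to finish TASKS[task:] within 'budget' time units."""
    if budget < 0:
        return INF                       # this path misses the deadline
    if task == len(TASKS):
        return 0.0                       # branch completed in time
    return min(cost + sum(p * value(task + 1, budget - d) for d, p in durations)
               for cost, durations in TASKS[task])

print(value(0, DEADLINE))                # expected cost of optimal service choices
```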

469 citations


Proceedings ArticleDOI
13 Mar 2005
TL;DR: This paper studies how the performance of cross-layer rate control can be impacted if the network can only use an imperfect scheduling component that is easier to implement, and designs a fully distributed cross-layered rate control and scheduling algorithm for a restrictive interference model.
Abstract: In this paper, we study cross-layer design for rate control in multihop wireless networks. In our previous work, we have developed an optimal cross-layered rate control scheme that jointly computes both the rate allocation and the stabilizing schedule that controls the resources at the underlying layers. However, the scheduling component in this optimal cross-layered rate control scheme has to solve a complex global optimization problem at each time, and hence is too computationally expensive for online implementation. In this paper, we study how the performance of cross-layer rate control can be impacted if the network can only use an imperfect (and potentially distributed) scheduling component that is easier to implement. We study both the case when the number of users in the system is fixed and the case with dynamic arrivals and departures of the users, and we establish desirable results on the performance bounds of cross-layered rate control with imperfect scheduling. Compared with a layered approach that does not design rate control and scheduling together, our cross-layered approach has provably better performance bounds, and substantially outperforms the layered approach. The insights drawn from our analyses also enable us to design a fully distributed cross-layered rate control and scheduling algorithm for a restrictive interference model.

454 citations


Proceedings ArticleDOI
13 Mar 2005
TL;DR: This paper shows that a combination of queue-length-based scheduling at the base station and congestion control implemented either at the base station or at the end users can lead to fair resource allocation and queue-length stability.
Abstract: We consider the problem of allocating resources (time slots, frequency, power, etc.) at a base station to many competing flows, where each flow is intended for a different receiver. The channel conditions may be time-varying and different for different receivers. It is well-known that appropriately chosen queue-length based policies are throughput-optimal while other policies based on the estimation of channel statistics can be used to allocate resources fairly (such as proportional fairness) among competing users. In this paper, we show that a combination of queue-length-based scheduling at the base station and congestion control implemented either at the base station or at the end users can lead to fair resource allocation and queue-length stability.
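
The queue-length-based rule underlying results of this kind is the max-weight policy: in each slot, serve the flow maximizing queue length times achievable rate. Below is a toy simulation pairing it with a crude congestion controller that injects less traffic into longer queues; the channel statistics and controller gain are invented, not the paper's model.

```python
import random

NUM_FLOWS, SLOTS, K = 3, 10000, 50.0
queues = [0.0] * NUM_FLOWS

for _ in range(SLOTS):
    # Congestion control: sources inject less into longer queues (capped rate).
    for i in range(NUM_FLOWS):
        queues[i] += min(5.0, K / (queues[i] + 1.0))
    # Random channel state: achievable service rate per flow this slot.
    rates = [random.choice((0.0, 1.0, 2.0, 4.0)) for _ in range(NUM_FLOWS)]
    # Queue-length-based scheduling: serve the flow maximizing q_i * r_i.
    best = max(range(NUM_FLOWS), key=lambda i: queues[i] * rates[i])
    queues[best] = max(0.0, queues[best] - rates[best])

print("final queue lengths:", [round(q, 1) for q in queues])
```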

417 citations


Journal ArticleDOI
TL;DR: It is proved that, for any mean arrival rate that lies in the capacity region, the queues will be stable under the policy, and it is shown that it is easy to incorporate imperfect queue length information and other approximations that can simplify the implementation of the policy.
Abstract: We study the problem of stable scheduling for a class of wireless networks. The goal is to stabilize the queues holding information to be transmitted over a fading channel. Few assumptions are made on the arrival process statistics other than the assumption that their mean values lie within the capacity region and that they satisfy a version of the law of large numbers. We prove that, for any mean arrival rate that lies in the capacity region, the queues will be stable under our policy. Moreover, we show that it is easy to incorporate imperfect queue length information and other approximations that can simplify the implementation of our policy.

395 citations


Proceedings ArticleDOI
09 May 2005
TL;DR: This work identifies two families of resource allocation algorithms: task-based algorithms that greedily allocate tasks to resources, and workflow-based algorithms that search for an efficient allocation for the entire workflow.
Abstract: Grid applications require allocating a large number of heterogeneous tasks to distributed resources. A good allocation is critical for efficient execution. However, many existing grid toolkits use matchmaking strategies that do not consider overall efficiency for the set of tasks to be run. We identify two families of resource allocation algorithms: task-based algorithms, which greedily allocate tasks to resources, and workflow-based algorithms, which search for an efficient allocation for the entire workflow. We compare the behavior of workflow-based algorithms and task-based algorithms, using simulations of workflows drawn from a real application and with varying ratios of computation cost to data transfer cost. We observe that workflow-based approaches have a potential to work better for data-intensive applications even when estimates about future tasks are inaccurate.
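
A task-based (greedy) allocator in this sense fits in a few lines: each ready task picks the resource that minimizes its own completion time, with no look-ahead at the rest of the workflow. The resources and runtimes below are hypothetical.

```python
# Toy task-based (greedy) allocation: assign each ready task to the resource
# that minimizes its completion time, ignoring downstream workflow structure.

runtime = {            # runtime[task][resource], in arbitrary time units
    "t1": {"r1": 4.0, "r2": 6.0},
    "t2": {"r1": 5.0, "r2": 3.0},
    "t3": {"r1": 2.0, "r2": 8.0},
}
free_at = {"r1": 0.0, "r2": 0.0}   # earliest time each resource is available

for task in runtime:               # tasks become ready in this order
    res = min(free_at, key=lambda r: free_at[r] + runtime[task][r])
    finish = free_at[res] + runtime[task][res]
    free_at[res] = finish
    print(f"{task} -> {res}, finishes at {finish}")
```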

382 citations


Journal ArticleDOI
TL;DR: In this article, the performance at the flow level in a dynamic setting with random finite-size service demands is evaluated by means of a multiclass Processor-Sharing model where the total service rate varies with the total number of users; the model provides explicit formulas for the distribution of the number of active users of the various classes, the mean response times, the blocking probabilities, and the throughput.
Abstract: Channel-aware scheduling strategies, such as the Proportional Fair algorithm for the CDMA 1xEV-DO system, provide an effective mechanism for improving throughput performance in wireless data networks by exploiting channel fluctuations. The performance of channel-aware scheduling algorithms has mostly been explored at the packet level for a static user population, often assuming infinite backlogs. In the present paper, we focus on the performance at the flow level in a dynamic setting with random finite-size service demands. We show that in certain cases the user-level performance may be evaluated by means of a multiclass Processor-Sharing model where the total service rate varies with the total number of users. The latter model provides explicit formulas for the distribution of the number of active users of the various classes, the mean response times, the blocking probabilities, and the throughput. In addition, we show that, in the presence of channel variations, greedy, myopic strategies that maximize throughput in a static scenario may result in sub-optimal throughput performance for a dynamic user configuration and cause potential instability effects.

Journal ArticleDOI
TL;DR: In this paper, an integrated scheduling model of production and distribution operations is proposed for the computer and food catering service industries, where a set of jobs are first processed in a processing facility (e.g., manufacturing plant or service center) and then delivered to the customers directly without intermediate inventory.
Abstract: Motivated by applications in the computer and food catering service industries, we study an integrated scheduling model of production and distribution operations. In this model, a set of jobs (i.e., customer orders) are first processed in a processing facility (e.g., manufacturing plant or service center) and then delivered to the customers directly without intermediate inventory. The problem is to find a joint schedule of production and distribution such that an objective function that takes into account both customer service level and total distribution cost is optimized. Customer service level is measured by a function of the times when the jobs are delivered to the customers. The distribution cost of a delivery shipment consists of a fixed charge and a variable cost proportional to the total distance of the route taken by the shipment. We study two classes of problems under this integrated scheduling model. In the first class of problems, customer service is measured by the average time when the jobs are delivered to the customers; in the second class, customer service is measured by the maximum time when the jobs are delivered to the customers. Two machine configurations in the processing facility--single machine and parallel machine--are considered. For each of the problems studied, we provide an efficient exact algorithm, or a proof of intractability accompanied by a heuristic algorithm with worst-case and asymptotic performance analysis. Computational experiments demonstrate that the heuristics developed are capable of generating near-optimal solutions. We also investigate the possible benefit of using the proposed integrated model relative to a sequential model where production and distribution operations are scheduled sequentially and separately. Computational tests show that in many cases a significant benefit can be achieved by integration.

Proceedings ArticleDOI
24 Apr 2005
TL;DR: A protocol for node sleep scheduling is proposed that guarantees bounded-delay sensing coverage while maximizing network lifetime; the framework is optimized for rare event detection and allows favorable compromises between event detection delay and lifetime without sacrificing (eventual) coverage.
Abstract: Lifetime maximization is one key element in the design of sensor-network-based surveillance applications. We propose a protocol for node sleep scheduling that guarantees a bounded-delay sensing coverage while maximizing network lifetime. Our sleep scheduling ensures that coverage rotates such that each point in the environment is sensed within some finite interval of time, called the detection delay. The framework is optimized for rare event detection and allows favorable compromises to be achieved between event detection delay and lifetime without sacrificing (eventual) coverage for each point. We compare different sleep scheduling policies in terms of average detection delay, and show that ours is closest to the detection delay lower bound for stationary event surveillance. We also explain the inherent relationship between detection delay, which applies to persistent events, and detection probability, which applies to temporary events. Finally, a connectivity maintenance protocol is proposed to minimize the delay of multi-hop delivery to a base-station. The resulting sleep schedule achieves the lowest overall target surveillance delay given constraints on energy consumption.

Journal ArticleDOI
TL;DR: A simple scheme is presented and shown to be superior to other schemes for multicarrier transmission, e.g., extended round robin and the PF scheduling scheme of the HDR system.
Abstract: This letter extends the proportional fair (PF) scheduling proposed for the high data rate (HDR) system to multicarrier transmission systems. It is known that the PF allocation (F. P. Kelly et al. (1998)) results in the maximization of the sum of logarithmic average user rates. We propose a PF scheduler that assigns users to each carrier while maximizing the sum of logarithmic average user rates.
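
On each carrier, the PF rule serves the user with the largest ratio of instantaneous rate to smoothed average rate and then updates the averages, which (approximately) maximizes the sum of logarithmic average rates. A toy sketch under invented fading statistics, not the letter's exact formulation:

```python
import random

# Toy multicarrier PF: on each carrier, pick the user with the largest
# instantaneous-rate / average-rate ratio, then update per-user averages
# with an exponentially weighted moving average over TC slots.
USERS, CARRIERS, SLOTS, TC = 4, 8, 5000, 100.0
avg = [1e-3] * USERS                       # smoothed throughput per user

for _ in range(SLOTS):
    served = [0.0] * USERS
    for _c in range(CARRIERS):
        inst = [random.expovariate(1.0) for _ in range(USERS)]  # fading rates
        k = max(range(USERS), key=lambda u: inst[u] / avg[u])   # PF metric
        served[k] += inst[k]
    for u in range(USERS):
        avg[u] = (1 - 1 / TC) * avg[u] + (1 / TC) * served[u]

print("long-run average rates:", [round(a, 3) for a in avg])
```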

Journal ArticleDOI
01 Sep 2005
TL;DR: This paper evaluates three algorithms, namely genetic, HEFT, and a simple "myopic" algorithm, compares incremental workflow partitioning against the full-graph scheduling strategy, and demonstrates that full-graph scheduling with the HEFT algorithm performs best.
Abstract: Scheduling is a key concern for the execution of performance-driven Grid applications. In this paper we comparatively examine different existing approaches for scheduling of scientific workflow applications in a Grid environment. We evaluate three algorithms, namely genetic, HEFT, and a simple "myopic" algorithm, and compare incremental workflow partitioning against the full-graph scheduling strategy. We present experiments using real-world scientific applications covering both balanced (symmetric) and unbalanced (asymmetric) workflows. Our results demonstrate that full-graph scheduling with the HEFT algorithm performs best compared to the other strategies examined in this paper.
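
For reference, HEFT's priority stage computes an "upward rank" for each task: its mean computation cost plus the maximum over successors of edge communication cost plus the successor's rank; tasks are then scheduled in decreasing rank order. A minimal sketch on an invented four-task DAG:

```python
from functools import lru_cache

# rank(n) = w(n) + max over successors s of (comm(n, s) + rank(s)).
w = {"a": 5.0, "b": 3.0, "c": 4.0, "d": 2.0}        # mean computation cost
succ = {"a": {"b": 1.0, "c": 2.0},                   # successors with comm. cost
        "b": {"d": 3.0}, "c": {"d": 1.0}, "d": {}}

@lru_cache(maxsize=None)
def rank(n):
    return w[n] + max((cost + rank(s) for s, cost in succ[n].items()),
                      default=0.0)

order = sorted(w, key=rank, reverse=True)
print(order)   # scheduling priority: a first, d last
```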

Proceedings ArticleDOI
12 Nov 2005
TL;DR: This work uses DVS (Dynamic Voltage Scaling) technology now available in high-performance microprocessors to reduce power consumption during parallel application runs when peak CPU performance is not necessary due to load imbalance, communication delays, etc.
Abstract: Left unchecked, the fundamental drive to increase peak performance using tens of thousands of power hungry components will lead to intolerable operating costs and failure rates. High-performance, power-aware distributed computing reduces power and energy consumption of distributed applications and systems without sacrificing performance. Generally, we use DVS (Dynamic Voltage Scaling) technology now available in high-performance microprocessors to reduce power consumption during parallel application runs when peak CPU performance is not necessary due to load imbalance, communication delays, etc. We propose distributed performance-directed DVS scheduling strategies for use in scalable power-aware HPC clusters. By varying scheduling granularity we can obtain significant energy savings without increasing execution time (36% for FT from NAS PB). We created a software framework to implement and evaluate our various techniques and show performance-directed scheduling consistently saves more energy (nearly 25% for several codes) than comparable approaches with less impact on execution time (< 5%). Additionally, we illustrate the use of energy-delay products to automatically select distributed DVS schedules that meet users’ needs.
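
The energy-delay product (EDP) mentioned at the end is a scalar figure of merit: energy times execution time, optionally with the time term raised to a weight that emphasizes performance. A sketch of EDP-based selection among hypothetical DVS operating points (all numbers invented):

```python
# Choosing among candidate DVS schedules by energy-delay product: dynamic
# power scales roughly with V^2 * f, and runtime stretches as frequency drops.
candidates = [   # (label, relative energy, relative execution time)
    ("2.0 GHz / 1.5 V", 1.00, 1.00),
    ("1.4 GHz / 1.2 V", 0.62, 1.08),   # big energy cut, small slowdown
    ("0.8 GHz / 1.0 V", 0.45, 1.55),   # cheaper still, but much slower
]

def edp(energy, time, weight=1):
    """Energy * time**weight; a larger weight favors performance."""
    return energy * time ** weight

best = min(candidates, key=lambda c: edp(c[1], c[2]))
print("EDP-selected schedule:", best[0])   # the modest-slowdown point wins
```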

Proceedings ArticleDOI
05 Dec 2005
TL;DR: This work adopts an interference-aware cross-layer design to increase the throughput of the wireless mesh network and creates a tree-based routing framework, which along with scheduling is interference aware and results in a much higher spectral efficiency.
Abstract: The IEEE 802.16 WiMax standard provides a mechanism for creating a multi-hop mesh, which can be deployed as a high-speed wide-area wireless network. To realize the full potential of such high-speed IEEE 802.16 mesh networks, two efficient wireless radio resource allocation extensions were developed. The objective of this paper is to propose an efficient approach for increasing the utilization of WiMax mesh through appropriate design of multi-hop routing and scheduling. As multiple-access interference is a major limiting factor for wireless communication systems, we adopt an interference-aware cross-layer design to increase the throughput of the wireless mesh network. In particular, our scheme creates a tree-based routing framework, which along with scheduling is interference aware and results in a much higher spectral efficiency. Performance evaluation results show that the proposed interference-aware scheme achieves significant throughput enhancement over the basic IEEE 802.16 mesh network.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed algorithm is able to schedule transmissions such that the bandwidth allocated to different flows is proportional to their weights.
Abstract: Fairness is an important issue when accessing a shared wireless channel. With fair scheduling, it is possible to allocate bandwidth in proportion to weights of the packet flows sharing the channel. This paper presents a fully distributed algorithm for fair scheduling in a wireless LAN. The algorithm can be implemented without using a centralized coordinator to arbitrate medium access. The proposed protocol is derived from the Distributed Coordination Function in the IEEE 802.11 standard. Simulation results show that the proposed algorithm is able to schedule transmissions such that the bandwidth allocated to different flows is proportional to their weights. An attractive feature of the proposed approach is that it can be implemented with simple modifications to the IEEE 802.11 standard.
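
The gist of distributed fair scheduling is to make a station's contention backoff shrink with its flow's weight, so wins of the channel occur in proportion to the weights without a central coordinator. The sketch below substitutes exponentially distributed backoffs, which make the win probabilities exactly proportional to the weights; the actual protocol derives backoffs from packet length over weight plus randomization, so this is a simplification with invented weights.

```python
import random

SLOTS = 20000
flows = {"A": 1.0, "B": 2.0, "C": 4.0}          # flow -> weight
wins = dict.fromkeys(flows, 0)

for _ in range(SLOTS):
    # Each flow draws a randomized backoff that shrinks with its weight;
    # the station with the shortest backoff transmits in this contention.
    backoff = {f: random.expovariate(w) for f, w in flows.items()}
    wins[min(backoff, key=backoff.get)] += 1

total = sum(wins.values())
print({f: round(wins[f] / total, 3) for f in flows})  # ~ {A: 1/7, B: 2/7, C: 4/7}
```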

Journal ArticleDOI
TL;DR: The results show that DSA is superior to DBA when controlled properly, having better or competitive solution quality and significantly lower communication cost than DBA, and is the algorithm of choice for distributed scheduling problems and other distributed problems of similar properties.

Journal ArticleDOI
TL;DR: This paper reviews advances in mixed-integer linear programming (MILP) based approaches for the scheduling of chemical processing systems, focusing on the short-term scheduling of general network-represented processes.
Abstract: This paper reviews advances in mixed-integer linear programming (MILP) based approaches for the scheduling of chemical processing systems. We focus on the short-term scheduling of general network-represented processes. First, the various mathematical models that have been proposed in the literature are classified mainly based on the time representation. Discrete-time and continuous-time models are presented along with their strengths and limitations. Several classes of approaches for improving the computational efficiency in the solution of MILP problems are discussed. Furthermore, a summary of computational experiences and applications is provided. The paper concludes with perspectives on future research directions for MILP-based process scheduling technologies.
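
To make the discrete-time class concrete, here is a minimal formulation in that spirit: binary start variables x[i, t] on a uniform grid, a single unit-capacity resource, and a makespan objective. It uses the PuLP modeling package; the tasks, durations, and horizon are invented, and the model is far simpler than the network-represented processes the review covers.

```python
import pulp

tasks = {"T1": 2, "T2": 3, "T3": 1}          # task -> duration (periods)
H = 8                                         # horizon: periods 0..H-1

prob = pulp.LpProblem("short_term_scheduling", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(i, t) for i in tasks for t in range(H)],
                          cat="Binary")       # x[i, t] = 1 if i starts at t
end = pulp.LpVariable("makespan", lowBound=0)

for i, d in tasks.items():
    prob += pulp.lpSum(x[i, t] for t in range(H - d + 1)) == 1  # start once
    for t in range(H - d + 1, H):
        prob += x[i, t] == 0                  # must finish within the horizon
    prob += end >= pulp.lpSum((t + d) * x[i, t] for t in range(H))

for t in range(H):                            # unit capacity: no overlap at t
    prob += pulp.lpSum(x[i, s] for i, d in tasks.items()
                       for s in range(max(0, t - d + 1), t + 1)) <= 1

prob += end                                   # objective: minimize makespan
prob.solve(pulp.PULP_CBC_CMD(msg=False))
for i in tasks:
    start = next(t for t in range(H) if pulp.value(x[i, t]) > 0.5)
    print(i, "starts at period", start)
```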

Journal ArticleDOI
TL;DR: Asymptotic optimality of the gradient scheduling algorithm (which generalizes the well-known proportional fair algorithm) is proved for this model, which allows for simultaneous service of multiple users and for discrete sets of scheduling decisions.
Abstract: We consider the model where N queues (users) are served in discrete time by a generalized switch. The switch state is random, and it determines the set of possible service rate choices (scheduling decisions) in each time slot. This model is primarily motivated by the problem of scheduling transmissions of N data users in a shared time-varying wireless environment, but also includes other applications such as input-queued cross-bar switches and parallel flexible server systems. The objective is to find a scheduling strategy maximizing a concave utility function H(u_1, ..., u_N), where the u_n are long-term average service rates (data throughputs) of the users, assuming users always have data to be served. We prove asymptotic optimality of the gradient scheduling algorithm (which generalizes the well-known proportional fair algorithm) for our model, which, in particular, allows for simultaneous service of multiple users and for discrete sets of scheduling decisions. Analysis of the transient dynamics of user throughputs is the key part of this work.
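
Below is a sketch of the gradient rule for the special case H(u) = sum_n log(u_n), where each slot picks the decision maximizing the gradient inner product sum_n r_n / u_n, recovering proportional fairness. The discrete decision set here (serve one user at full rate or two users at half rate) and the channel statistics are invented to illustrate simultaneous service.

```python
import random
from math import log

USERS, SLOTS, BETA = 3, 20000, 0.001
u = [1e-3] * USERS                       # running average throughputs

for _ in range(SLOTS):
    peak = [random.choice((1.0, 2.0, 4.0)) for _ in range(USERS)]
    # Discrete decision set: serve one user at full rate, or two at half rate.
    decisions = [[peak[i] if i == j else 0.0 for i in range(USERS)]
                 for j in range(USERS)]
    decisions += [[peak[i] / 2 if i in (j, k) else 0.0 for i in range(USERS)]
                  for j in range(USERS) for k in range(j + 1, USERS)]
    # Gradient rule: maximize sum_n r_n / u_n over the decision set.
    r = max(decisions, key=lambda d: sum(d[i] / u[i] for i in range(USERS)))
    u = [(1 - BETA) * u[i] + BETA * r[i] for i in range(USERS)]

print("throughputs:", [round(x, 2) for x in u],
      "utility:", round(sum(log(x) for x in u), 2))
```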

Proceedings ArticleDOI
13 Mar 2005
TL;DR: This work considers the joint optimal design of the physical, medium access control (MAC), and routing layers to maximize the lifetime of energy-constrained wireless sensor networks and proposes an iterative algorithm that alternates between adaptive link scheduling and computation of optimal link rates and transmission powers for a fixed link schedule.
Abstract: We consider the joint optimal design of physical, medium access control (MAC), and routing layers to maximize the lifetime of energy-constrained wireless sensor networks. The problem of computing a lifetime-optimal routing flow, link schedule, and link transmission powers is formulated as a non-linear optimization problem. We first restrict the link schedules to the class of interference-free time division multiple access (TDMA) schedules. In this special case we formulate the optimization problem as a mixed integer-convex program, which can be solved using standard techniques. For general non-orthogonal link schedules, we propose an iterative algorithm that alternates between adaptive link scheduling and computation of optimal link rates and transmission powers for a fixed link schedule. The performance of this algorithm is compared to other design approaches for several network topologies. The results illustrate the advantages of load balancing, multihop routing, frequency reuse, and interference mitigation in increasing the lifetime of energy-constrained networks. We also describe a partially distributed algorithm to compute optimal rates and transmission powers for a given link schedule.

Journal ArticleDOI
Guocong Song, Ye Li
TL;DR: A cross-layer resource management framework leveraged by utility optimization is presented that includes utility-based resource management and QoS architecture, resource allocation algorithms, rate-based and delay-based multichannel scheduling, and theoretical exploration of the fundamental mechanisms in wireless resource management.
Abstract: This article discusses downlink resource allocation and scheduling for OFDM-based broadband wireless networks. We present a cross-layer resource management framework leveraged by utility optimization. It includes utility-based resource management and QoS architecture, resource allocation algorithms, rate-based and delay-based multichannel scheduling that exploits wireless channel and queue information, and theoretical exploration of the fundamental mechanisms in wireless resource management, such as capacity, fairness, and stability. We also provide a solution that can efficiently allocate resources for heterogeneous traffic with diverse QoS requirements.

Journal ArticleDOI
W. C. Ng
TL;DR: A dynamic programming-based heuristic to solve the scheduling problem and an algorithm to find lower bounds for benchmarking the schedules found by the heuristic are developed.

Proceedings ArticleDOI
05 Dec 2005
TL;DR: A generalized proportional fair (GPF) scheduling algorithm is presented, which allows tweaking the trade-off between fairness and throughput performance for best effort traffic in a cellular downlink scenario.
Abstract: In this paper, a generalized proportional fair (GPF) scheduling algorithm is presented, which allows tweaking the trade-off between fairness and throughput performance for best effort traffic in a cellular downlink scenario. The GPF is extended to frequency scheduling in an OFDMA system by performing dynamic channel allocation on a subband basis, including link adaptation by adaptive modulation and coding. In this way, multiuser diversity can be utilized in the time domain - as for CDMA - and in the frequency domain. Compared to a system without frequency scheduling, this increases the system throughput and yields an improved fairness with respect to the allocated resources and with respect to the achieved data-rate per user. OFDMA system level simulations are carried out in order to analyze short/long-term fairness, multiuser diversity gain and system throughput of various GPF configurations with and without applying frequency scheduling.
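
GPF metrics of this kind are commonly written as (instantaneous rate)^alpha / (average rate)^beta, with (alpha, beta) steering the throughput/fairness trade-off; (1, 1) recovers plain PF and beta = 0 gives pure max-rate. The paper's exact parameterization may differ; a minimal helper with invented inputs:

```python
def gpf_pick(inst, avg, alpha=1.0, beta=1.0):
    """Return the index of the user to serve on this subband."""
    scores = [r ** alpha / a ** beta for r, a in zip(inst, avg)]
    return max(range(len(inst)), key=scores.__getitem__)

inst = [2.0, 3.5, 1.0]       # achievable rates this TTI on one subband
avg  = [1.0, 4.0, 0.4]       # smoothed past throughputs
print(gpf_pick(inst, avg, alpha=1.0, beta=0.0))  # max-rate: user 1
print(gpf_pick(inst, avg, alpha=1.0, beta=1.0))  # PF: user 2, lagging in rate
```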

Proceedings ArticleDOI
09 May 2005
TL;DR: This paper presents the design choices and the evaluation of OAR, a batch scheduler for large clusters based upon an original design that emphasizes low software complexity through the use of high-level tools.
Abstract: In this article we present the design choices and the evaluation of a batch scheduler for large clusters, named OAR. This batch scheduler is based upon an original design that emphasizes low software complexity through the use of high-level tools. The global architecture is built upon the scripting language Perl and the relational database engine MySQL. The goal of the OAR project is to prove that it is possible today to build a complex system for resource management using such tools without sacrificing efficiency and scalability. Currently, our system offers most of the important features implemented by other batch schedulers, such as priority scheduling (by queues), reservations, backfilling and some global computing support. Despite the use of high-level tools, our experiments show that our system has performance close to that of other systems. Furthermore, OAR is currently exploited for the management of 700 nodes (a metropolitan grid) and has shown good efficiency and robustness.

Proceedings ArticleDOI
16 May 2005
TL;DR: Simulation results show the proposed architecture can meet the QoS requirement in terms of bandwidth and fairness for all types of traffic.
Abstract: A fair and efficient service flow management architecture for IEEE 802.16 broadband wireless access (BWA) systems is proposed for TDD mode. Compared with the traditional fixed bandwidth allocation, the proposed architecture adjusts uplink and downlink bandwidth dynamically to achieve higher throughput for unbalanced traffic. A deficit fair priority queue scheduling algorithm is deployed to serve different types of service flows in both uplink and downlink, which provides more fairness to the system. Simulation results show the proposed architecture can meet the QoS requirement in terms of bandwidth and fairness for all types of traffic.
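
Deficit-based schedulers of this kind descend from deficit round robin: each backlogged flow banks a per-round quantum of credit and sends head-of-line packets while the credit covers them. A sketch with invented quanta and packet sizes (the paper's scheduler additionally layers priorities across service-flow types):

```python
from collections import deque

flows = {                      # flow -> (quantum in bytes per round, packet queue)
    "voice": (300, deque([120, 120, 120])),
    "data":  (900, deque([1500, 800, 400])),
}
deficit = dict.fromkeys(flows, 0)

for rnd in range(3):
    for name, (quantum, q) in flows.items():
        if not q:
            deficit[name] = 0          # idle flows keep no credit
            continue
        deficit[name] += quantum       # bank this round's quantum
        while q and q[0] <= deficit[name]:
            pkt = q.popleft()          # send while credit covers the packet
            deficit[name] -= pkt
            print(f"round {rnd}: {name} sends {pkt}B")
```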

Journal ArticleDOI
TL;DR: Under a mild assumption on network structure, it is proved that a network operating under a maximum pressure policy achieves the maximum throughput predicted by LPs, and a class of networks is identified for which the nonpreemptive, non-processor-splitting version of a maximum pressure policy is still throughput optimal.
Abstract: Complex systems like semiconductor wafer fabrication facilities (fabs), networks of data switches, and large-scale call centers all demand efficient resource allocation. Deterministic models like linear programs (LP) have been used for capacity planning at both the design and expansion stages of such systems. LP-based planning is critical in setting a medium-range or long-term goal for many systems, but it does not translate into a day-to-day operational policy that must deal with the discreteness of jobs and the randomness of the processing environment. A stochastic processing network, advanced by J. Michael Harrison (2000, 2002, 2003), is a system that takes inputs of materials of various kinds and uses various processing resources to produce outputs of materials of various kinds. Such a network provides a powerful abstraction of a wide range of real-world systems. It provides high-fidelity stochastic models in diverse economic sectors including manufacturing, service, and information technology. We propose a family of maximum pressure service policies for dynamically allocating service capacities in a stochastic processing network. Under a mild assumption on network structure, we prove that a network operating under a maximum pressure policy achieves maximum throughput predicted by LPs. These policies are semilocal in the sense that each server makes its decision based on the buffer content in its serviceable buffers and their immediately downstream buffers. In particular, their implementation does not use arrival rate information, which is difficult to collect in many applications. We also identify a class of networks for which the nonpreemptive, non-processor-splitting version of a maximum pressure policy is still throughput optimal. Applications to queueing networks with alternate routes and networks of data switches are presented.
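
In its simplest instance, the maximum pressure rule has a server choose the activity whose pressure, the content of the buffer it drains minus the content of the buffer it feeds, is largest and positive. A toy single-server, two-buffer tandem line with an invented arrival rate:

```python
import random

q = {"b1": 0, "b2": 0}             # buffer contents
ROUTES = {"a1": ("b1", "b2"),      # activity a1 moves a job from b1 to b2
          "a2": ("b2", None)}      # activity a2 completes a job from b2

def pressure(act):
    src, dst = ROUTES[act]
    return q[src] - (q[dst] if dst else 0)

for _ in range(100000):
    if random.random() < 0.4:      # Bernoulli arrivals to the first buffer
        q["b1"] += 1
    act = max(ROUTES, key=pressure)
    src, dst = ROUTES[act]
    if pressure(act) > 0 and q[src] > 0:
        q[src] -= 1                # serve one job this slot
        if dst:
            q[dst] += 1

print(q)  # queues remain bounded: the policy is throughput optimal here
```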

Proceedings ArticleDOI
06 Jul 2005
TL;DR: This paper introduces a new schedulability test that significantly improves the percentage of accepted task sets, especially when considering task sets containing heavy tasks, and shows the effectiveness of the proposed test through an extensive set of experiments.
Abstract: Multiprocessor hardware platforms are now being considered for embedded systems, due to their high computational power and little additional cost when compared to single processor systems. When scheduling real-time applications on multiprocessor platforms, a possibility is to use global scheduling, where a scheduling algorithm dynamically assigns tasks to processors, and tasks can migrate from one processor to another during their execution. In this paper, we tackle the problem of schedulability analysis of sporadic tasks in global scheduling systems, where the scheduler is the earliest deadline first (EDF) algorithm. We provide two main contributions. First, we show that two recently proposed tests perform poorly when the task set contains heavy tasks (i.e. tasks with high utilization). We also show that neither test dominates the other. As a second contribution, we introduce a new schedulability test that significantly improves the percentage of accepted task sets, especially when considering task sets containing heavy tasks. We show the effectiveness of the proposed test through an extensive set of experiments.
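
For context, a classic sufficient test in this setting is the Goossens-Funk-Baruah utilization bound for implicit-deadline global EDF: the set is schedulable on m processors if total utilization is at most m - (m - 1) * u_max. This is not the paper's new test, but it illustrates how a single heavy task (large u_max) collapses bounds of this style:

```python
def gfb_edf_schedulable(utils, m):
    """utils: per-task utilizations (C_i / T_i); m: number of processors."""
    u_total, u_max = sum(utils), max(utils)
    return u_total <= m - (m - 1) * u_max

print(gfb_edf_schedulable([0.3, 0.3, 0.3, 0.3], 2))   # True: light tasks
print(gfb_edf_schedulable([0.9, 0.3, 0.3], 2))        # False: one heavy task
```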

Proceedings ArticleDOI
06 Jun 2005
TL;DR: Analytical performance evaluation models and distributed algorithms for routing and scheduling are developed that incorporate fairness, energy, and dilation (path-length) requirements and provide a unified framework for utilizing the network close to its maximum throughput capacity.
Abstract: This paper considers two inter-related questions: (i) Given a wireless ad-hoc network and a collection of source-destination pairs {(s_i, t_i)}, what is the maximum throughput capacity of the network, i.e. the rate at which data from the sources to their corresponding destinations can be transferred in the network? (ii) Can network protocols be designed that jointly route the packets and schedule transmissions at rates close to the maximum throughput capacity? Much of the earlier work focused on random instances and proved analytical lower and upper bounds on the maximum throughput capacity. Here, in contrast, we consider arbitrary wireless networks. Further, we study the algorithmic aspects of the above questions: the goal is to design provably good algorithms for arbitrary instances. We develop analytical performance evaluation models and distributed algorithms for routing and scheduling which incorporate fairness, energy and dilation (path-length) requirements and provide a unified framework for utilizing the network close to its maximum throughput capacity. Motivated by certain popular wireless protocols used in practice, we also explore "shortest-path like" path selection strategies which maximize the network throughput. The theoretical results naturally suggest an interesting class of congestion aware link metrics which can be directly plugged into several existing routing protocols such as AODV, DSR, etc. We complement the theoretical analysis with extensive simulations. The results indicate that routes obtained using our congestion aware link metrics consistently yield higher throughput than hop-count based shortest path metrics.