Showing papers on "Scheduling (computing)" published in 2005


Journal ArticleDOI
TL;DR: Provides the history and philosophy of the Condor project and describes how it has interacted with other projects and evolved along with the field of distributed computing.
Abstract: Since 1984, the Condor project has enabled ordinary users to do extraordinary computing. Today, the project continues to explore the social and technical problems of cooperative computing on scales ranging from the desktop to the world-wide computational Grid. In this paper, we provide the history and philosophy of the Condor project and describe how it has interacted with other projects and evolved along with the field of distributed computing. We outline the core components of the Condor system and describe how the technology of computing must correspond to social structures. Throughout, we reflect on the lessons of experience and chart the course travelled by research ideas as they grow into production systems. Copyright © 2005 John Wiley & Sons, Ltd.

1,969 citations


Journal ArticleDOI
TL;DR: This work proposes a scheme that constructs M random beams and that transmits information to the users with the highest signal-to-noise-plus-interference ratios (SINRs), which can be made available to the transmitter with very little feedback.
Abstract: In multiple-antenna broadcast channels, unlike point-to-point multiple-antenna channels, the multiuser capacity depends heavily on whether the transmitter knows the channel coefficients to each user. For instance, in a Gaussian broadcast channel with M transmit antennas and n single-antenna users, the sum rate capacity scales like M log log n for large n if perfect channel state information (CSI) is available at the transmitter, yet only logarithmically with M if it is not. In systems with large n, obtaining full CSI from all users may not be feasible. Since lack of CSI does not lead to multiuser gains, it is therefore of interest to investigate transmission schemes that employ only partial CSI. We propose a scheme that constructs M random beams and that transmits information to the users with the highest signal-to-noise-plus-interference ratios (SINRs), which can be made available to the transmitter with very little feedback. For fixed M and n increasing, the throughput of our scheme scales as M log log nN, where N is the number of receive antennas of each user. This is precisely the same scaling obtained with perfect CSI using dirty paper coding. We furthermore show that a linear increase in throughput with M can be obtained provided that M does not grow faster than log n. We also study the fairness of our scheduling in a heterogeneous network and show that, when M is large enough, the system becomes interference dominated and the probability of transmitting to any user converges to 1/n, irrespective of its path loss. In fact, using M = α log n transmit antennas emerges as a desirable operating point, both in terms of providing linear scaling of the throughput with M as well as in guaranteeing fairness.
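
To make the beam-selection rule concrete, here is a minimal NumPy sketch of the random-beamforming idea, not the authors' implementation; the beam construction, the equal power split across beams, and the user and antenna counts are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: M random orthonormal beams, each serving the user that
# reports the highest SINR on that beam (single receive antenna per user).
rng = np.random.default_rng(0)
M, n, snr = 4, 200, 10.0                       # beams, users, nominal SNR (assumed values)

# random orthonormal beams: columns of a random unitary matrix
beams, _ = np.linalg.qr(rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M)))
# i.i.d. Rayleigh channel vectors, one row per user
H = (rng.standard_normal((n, M)) + 1j * rng.standard_normal((n, M))) / np.sqrt(2)

gains = np.abs(H @ beams) ** 2                 # channel gain of user k on beam m
selected = []
for m in range(M):
    signal = (snr / M) * gains[:, m]
    interference = (snr / M) * (gains.sum(axis=1) - gains[:, m])
    sinr = signal / (1.0 + interference)       # each user only needs to feed back its best SINR
    selected.append(int(np.argmax(sinr)))      # serve the strongest user on beam m
print("user served on each beam:", selected)
```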

1,450 citations


Journal ArticleDOI
TL;DR: The fundamental approaches for scheduling under uncertainty are reviewed: reactive scheduling, stochastic project scheduling, fuzzy project scheduling, robust (proactive) scheduling, and sensitivity analysis.

881 citations


Proceedings ArticleDOI
28 Aug 2005
TL;DR: This paper provides necessary conditions to verify the feasibility of rate vectors in next generation fixed wireless broadband networks, and uses them to derive upper bounds on the capacity in terms of achievable throughput, using a fast primal-dual algorithm.
Abstract: Next generation fixed wireless broadband networks are being increasingly deployed as mesh networks in order to provide and extend access to the internet. These networks are characterized by the use of multiple orthogonal channels and nodes with the ability to simultaneously communicate with many neighbors using multiple radios (interfaces) over orthogonal channels. Networks based on the IEEE 802.11a/b/g and 802.16 standards are examples of these systems. However, due to the limited number of available orthogonal channels, interference is still a factor in such networks. In this paper, we propose a network model that captures the key practical aspects of such systems and characterize the constraints binding their behavior. We provide necessary conditions to verify the feasibility of rate vectors in these networks, and use them to derive upper bounds on the capacity in terms of achievable throughput, using a fast primal-dual algorithm. We then develop two link channel assignment schemes, one static and the other dynamic, in order to derive lower bounds on the achievable throughput. We demonstrate through simulations that the dynamic link channel assignment scheme performs close to optimal on the average, while the static link channel assignment algorithm also performs very well. The methods proposed in this paper can be a valuable tool for network designers in planning network deployment and for optimizing different performance objectives.

825 citations


Journal ArticleDOI
TL;DR: The results obtained from the computational study have shown that the proposed algorithm is a viable and effective approach for the multi-objective FJSP, especially for problems on a large scale.

639 citations


Journal ArticleDOI
TL;DR: The design and analysis of novel protocols that can dynamically configure a network to achieve guaranteed degrees of coverage and connectivity are presented, and the capability of these protocols to provide such guaranteed configurations is demonstrated through both geometric analysis and extensive simulations.
Abstract: An effective approach for energy conservation in wireless sensor networks is scheduling sleep intervals for extraneous nodes while the remaining nodes stay active to provide continuous service. For the sensor network to operate successfully, the active nodes must maintain both sensing coverage and network connectivity. Furthermore, the network must be able to configure itself to any feasible degree of coverage and connectivity in order to support different applications and environments with diverse requirements. This article presents the design and analysis of novel protocols that can dynamically configure a network to achieve guaranteed degrees of coverage and connectivity. This work differs from existing connectivity or coverage maintenance protocols in several key ways. (1) We present a Coverage Configuration Protocol (CCP) that can provide different degrees of coverage requested by applications. This flexibility allows the network to self-configure for a wide range of applications and (possibly dynamic) environments. (2) We provide a geometric analysis of the relationship between coverage and connectivity. This analysis yields key insights for treating coverage and connectivity within a unified framework; in sharp contrast to several existing approaches that address the two problems in isolation. (3) We integrate CCP with SPAN to provide both coverage and connectivity guarantees. (4) We propose a probabilistic coverage model and extend CCP to provide probabilistic coverage guarantees. We demonstrate the capability of our protocols to provide guaranteed coverage and connectivity configurations through both geometric analysis and extensive simulations.
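
As a small illustration of what a requested "degree of coverage" means (this is not CCP's eligibility rule), the sketch below checks K-coverage of a square field on a discretized grid; node positions, sensing range, field size, and grid resolution are made-up values.

```python
import numpy as np

# Hypothetical check: every grid point of the field should be within sensing
# range of at least K active nodes (K-coverage).
rng = np.random.default_rng(1)
field, K, sensing_range = 100.0, 2, 15.0       # field size (m), required degree, range (m)
nodes = rng.uniform(0, field, size=(60, 2))    # active node positions (assumed)

xs = np.linspace(0, field, 101)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
# distance from every grid point to every active node
dists = np.linalg.norm(grid[:, None, :] - nodes[None, :, :], axis=-1)
cover_count = (dists <= sensing_range).sum(axis=1)
print("fraction of points that are K-covered:", (cover_count >= K).mean())
```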

600 citations


Journal ArticleDOI
TL;DR: This paper presents a hybrid genetic algorithm for the job shop scheduling problem, based on random keys, which is tested on a set of standard instances taken from the literature and compared with other approaches.

577 citations


Journal ArticleDOI
TL;DR: The capacity-achieving algorithm is a modified version of the Grossglauser-Tse two-hop relay algorithm and provides O(N) delay, and it is shown that redundancy cannot increase capacity, but can significantly improve delay.
Abstract: We consider the throughput/delay tradeoffs for scheduling data transmissions in a mobile ad hoc network. To reduce delays in the network, each user sends redundant packets along multiple paths to the destination. Assuming the network has a cell partitioned structure and users move according to a simplified independent and identically distributed (i.i.d.) mobility model, we compute the exact network capacity and the exact end-to-end queueing delay when no redundancy is used. The capacity-achieving algorithm is a modified version of the Grossglauser-Tse two-hop relay algorithm and provides O(N) delay (where N is the number of users). We then show that redundancy cannot increase capacity, but can significantly improve delay. The following necessary tradeoff is established: delay/rate ≥ O(N). Two protocols that use redundancy and operate near the boundary of this curve are developed, with delays of O(√N) and O(log(N)), respectively. Networks with non-i.i.d. mobility are also considered and shown through simulation to closely match the performance of i.i.d. systems in the O(√N) delay regime.

568 citations


Journal ArticleDOI
25 Jul 2005
TL;DR: The paper gives an overview of current research interests in the SymTA/S project, whose analysis tool determines system-level performance data such as end-to-end latencies, bus and processor utilisation, and worst-case scheduling scenarios.
Abstract: SymTA/S is a system-level performance and timing analysis approach based on formal scheduling analysis techniques and symbolic simulation. The tool supports heterogeneous architectures, complex task dependencies and context aware analysis. It determines system-level performance data such as end-to-end latencies, bus and processor utilisation, and worst-case scheduling scenarios. SymTA/S furthermore combines optimisation algorithms with system sensitivity analysis for rapid design space exploration. The paper gives an overview of current research interests in the SymTA/S project.

533 citations


Proceedings ArticleDOI
05 Dec 2005
TL;DR: This paper proposes a cost-based workflow scheduling algorithm that minimizes execution cost while meeting the deadline for delivering results and attempts to optimally solve the task scheduling problem in branches with several sequential tasks by modeling the branch as a Markov decision process and using the value iteration method.
Abstract: Over the last few years, grid technologies have progressed towards a service-oriented paradigm that enables a new way of service provisioning based on utility computing models. Users consume these services based on their QoS (quality of service) requirements. In such "pay-per-use" grids, workflow execution cost must be considered during scheduling based on users' QoS constraints. In this paper, we propose a cost-based workflow scheduling algorithm that minimizes execution cost while meeting the deadline for delivering results. It can also adapt to the delays of service executions by rescheduling unexecuted tasks. We also attempt to optimally solve the task scheduling problem in branches with several sequential tasks by modeling the branch as a Markov decision process and using the value iteration method.
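
The value iteration method mentioned above is standard; a generic sketch for a small finite Markov decision process with cost minimization is shown below. The transition matrices, costs, and discount factor are toy values and do not model the paper's workflow branches.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """P[a]: state-transition matrix under action a; R[a]: per-state immediate cost."""
    V = np.zeros(P[0].shape[0])
    while True:
        # Bellman backup: expected discounted cost of each action, then take the cheapest
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.min(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=0)      # optimal values and a greedy policy
        V = V_new

# toy 3-state, 2-action example (made-up numbers)
P = [np.array([[0.8, 0.2, 0.0], [0.0, 0.9, 0.1], [0.0, 0.0, 1.0]]),
     np.array([[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]])]
R = [np.array([2.0, 1.0, 0.0]), np.array([4.0, 0.5, 0.0])]
values, policy = value_iteration(P, R)
print(values, policy)
```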

469 citations


Proceedings ArticleDOI
13 Mar 2005
TL;DR: This paper studies how the performance of cross-layer rate control can be impacted if the network can only use an imperfect scheduling component that is easier to implement, and designs a fully distributed cross-layered rate control and scheduling algorithm for a restrictive interference model.
Abstract: In this paper, we study cross-layer design for rate control in multihop wireless networks. In our previous work, we have developed an optimal cross-layered rate control scheme that jointly computes both the rate allocation and the stabilizing schedule that controls the resources at the underlying layers. However, the scheduling component in this optimal cross-layered rate control scheme has to solve a complex global optimization problem at each time, and hence is too computationally expensive for online implementation. In this paper, we study how the performance of cross-layer rate control can be impacted if the network can only use an imperfect (and potentially distributed) scheduling component that is easier to implement. We study both the case when the number of users in the system is fixed and the case with dynamic arrivals and departures of the users, and we establish desirable results on the performance bounds of cross-layered rate control with imperfect scheduling. Compared with a layered approach that does not design rate control and scheduling together, our cross-layered approach has provably better performance bounds, and substantially outperforms the layered approach. The insights drawn from our analyses also enable us to design a fully distributed cross-layered rate control and scheduling algorithm for a restrictive interference model.

Proceedings ArticleDOI
13 Mar 2005
TL;DR: This paper shows that a combination of queue-length-based scheduling at the base station and congestion control implemented either at the base station or at the end users can lead to fair resource allocation and queue-length stability.
Abstract: We consider the problem of allocating resources (time slots, frequency, power, etc.) at a base station to many competing flows, where each flow is intended for a different receiver. The channel conditions may be time-varying and different for different receivers. It is well-known that appropriately chosen queue-length based policies are throughput-optimal while other policies based on the estimation of channel statistics can be used to allocate resources fairly (such as proportional fairness) among competing users. In this paper, we show that a combination of queue-length-based scheduling at the base station and congestion control implemented either at the base station or at the end users can lead to fair resource allocation and queue-length stability.

Journal ArticleDOI
TL;DR: It is proved that, for any mean arrival rate that lies in the capacity region, the queues will be stable under the policy and it is shown that it is easy to incorporate imperfect queue length information and other approximations that can simplify the implementation of the policy.
Abstract: We study the problem of stable scheduling for a class of wireless networks. The goal is to stabilize the queues holding information to be transmitted over a fading channel. Few assumptions are made on the arrival process statistics other than the assumption that their mean values lie within the capacity region and that they satisfy a version of the law of large numbers. We prove that, for any mean arrival rate that lies in the capacity region, the queues will be stable under our policy. Moreover, we show that it is easy to incorporate imperfect queue length information and other approximations that can simplify the implementation of our policy.
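
A minimal sketch of a queue-length-based policy of the kind analyzed above, in the spirit of MaxWeight scheduling, is given below; the arrival rates, channel model, and horizon are illustrative assumptions rather than the paper's setting.

```python
import numpy as np

# Each slot: observe queue lengths and achievable rates, then serve the user
# maximizing queue_length * rate (a MaxWeight-style rule).
rng = np.random.default_rng(2)
n_users, T = 4, 10_000
arrival_rate = np.array([0.3, 0.4, 0.2, 0.5])  # mean packets per slot (assumed)
queues = np.zeros(n_users)

for t in range(T):
    rates = rng.exponential(1.0, n_users)      # toy fading-channel service rates
    k = int(np.argmax(queues * rates))
    queues[k] = max(queues[k] - rates[k], 0.0)
    queues = queues + rng.poisson(arrival_rate)
print("final queue lengths:", np.round(queues, 1))
```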

Proceedings ArticleDOI
09 May 2005
TL;DR: This work identifies two families of resource allocation algorithms: task-based algorithms that greedily allocate tasks to resources, and workflow-based algorithms that search for an efficient allocation for the entire workflow.
Abstract: Grid applications require allocating a large number of heterogeneous tasks to distributed resources. A good allocation is critical for efficient execution. However, many existing grid toolkits use matchmaking strategies that do not consider overall efficiency for the set of tasks to be run. We identify two families of resource allocation algorithms: task-based algorithms, which greedily allocate tasks to resources, and workflow-based algorithms, which search for an efficient allocation for the entire workflow. We compare the behavior of workflow-based algorithms and task-based algorithms, using simulations of workflows drawn from a real application and with varying ratios of computation cost to data transfer cost. We observe that workflow-based approaches have a potential to work better for data-intensive applications even when estimates about future tasks are inaccurate.

Journal ArticleDOI
TL;DR: In this article, the performance at the flow level in a dynamic setting with random finite-size service demands is evaluated by means of a multiclass Processor-Sharing model in which the total service rate varies with the total number of users; the model provides explicit formulas for the distribution of the number of active users of the various classes, the mean response times, the blocking probabilities, and the throughput.
Abstract: Channel-aware scheduling strategies, such as the Proportional Fair algorithm for the CDMA 1xEV-DO system, provide an effective mechanism for improving throughput performance in wireless data networks by exploiting channel fluctuations. The performance of channel-aware scheduling algorithms has mostly been explored at the packet level for a static user population, often assuming infinite backlogs. In the present paper, we focus on the performance at the flow level in a dynamic setting with random finite-size service demands. We show that in certain cases the user-level performance may be evaluated by means of a multiclass Processor-Sharing model where the total service rate varies with the total number of users. The latter model provides explicit formulas for the distribution of the number of active users of the various classes, the mean response times, the blocking probabilities, and the throughput. In addition we show that, in the presence of channel variations, greedy, myopic strategies which maximize throughput in a static scenario, may result in sub-optimal throughput performance for a dynamic user configuration and cause potential instability effects.
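
For reference, the packet-level Proportional Fair rule discussed above can be sketched as follows: in each slot, serve the user with the largest ratio of instantaneous feasible rate to exponentially averaged throughput. The user count, channel model, and averaging window below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_users, T, t_c = 8, 20_000, 1000.0            # t_c: EWMA averaging window (assumed)
avg = np.full(n_users, 1e-3)                   # average throughputs R_k

for t in range(T):
    rates = rng.rayleigh(1.0, n_users) ** 2    # toy time-varying feasible rates r_k
    k = int(np.argmax(rates / avg))            # PF metric: r_k / R_k
    served = np.zeros(n_users)
    served[k] = rates[k]
    avg = (1 - 1 / t_c) * avg + served / t_c   # EWMA throughput update for all users
print("long-run user throughputs:", np.round(avg, 3))
```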

Journal ArticleDOI
TL;DR: In this paper, an integrated scheduling model of production and distribution operations is proposed for the computer and food catering service industries, where a set of jobs are first processed in a processing facility (e.g., manufacturing plant or service center) and then delivered to the customers directly without intermediate inventory.
Abstract: Motivated by applications in the computer and food catering service industries, we study an integrated scheduling model of production and distribution operations. In this model, a set of jobs (i.e., customer orders) are first processed in a processing facility (e.g., manufacturing plant or service center) and then delivered to the customers directly without intermediate inventory. The problem is to find a joint schedule of production and distribution such that an objective function that takes into account both customer service level and total distribution cost is optimized. Customer service level is measured by a function of the times when the jobs are delivered to the customers. The distribution cost of a delivery shipment consists of a fixed charge and a variable cost proportional to the total distance of the route taken by the shipment. We study two classes of problems under this integrated scheduling model. In the first class of problems, customer service is measured by the average time when the jobs are delivered to the customers; in the second class, customer service is measured by the maximum time when the jobs are delivered to the customers. Two machine configurations in the processing facility--single machine and parallel machine--are considered. For each of the problems studied, we provide an efficient exact algorithm, or a proof of intractability accompanied by a heuristic algorithm with worst-case and asymptotic performance analysis. Computational experiments demonstrate that the heuristics developed are capable of generating near-optimal solutions. We also investigate the possible benefit of using the proposed integrated model relative to a sequential model where production and distribution operations are scheduled sequentially and separately. Computational tests show that in many cases a significant benefit can be achieved by integration.

Proceedings ArticleDOI
24 Apr 2005
TL;DR: Proposes a protocol for node sleep scheduling that guarantees bounded-delay sensing coverage while maximizing network lifetime; the framework is optimized for rare event detection and allows favorable compromises between event detection delay and lifetime without sacrificing (eventual) coverage.
Abstract: Lifetime maximization is one key element in the design of sensor-network-based surveillance applications. We propose a protocol for node sleep scheduling that guarantees a bounded-delay sensing coverage while maximizing network lifetime. Our sleep scheduling ensures that coverage rotates such that each point in the environment is sensed within some finite interval of time, called the detection delay. The framework is optimized for rare event detection and allows favorable compromises to be achieved between event detection delay and lifetime without sacrificing (eventual) coverage for each point. We compare different sleep scheduling policies in terms of average detection delay, and show that ours is closest to the detection delay lower bound for stationary event surveillance. We also explain the inherent relationship between detection delay, which applies to persistent events, and detection probability, which applies to temporary events. Finally, a connectivity maintenance protocol is proposed to minimize the delay of multi-hop delivery to a base-station. The resulting sleep schedule achieves the lowest overall target surveillance delay given constraints on energy consumption.

Journal ArticleDOI
TL;DR: A simple scheme is presented and shown to be superior to other multicarrier transmission schedulers, e.g. extended round robin and the PF scheduling scheme for the HDR system.
Abstract: This letter extends the proportional fair (PF) scheduling proposed in the high data rate (HDR) system to multicarrier transmission systems. It is known that the PF allocation (F. P. Kelly et al. (1998)) results in the maximization of the sum of logarithmic average user rates. We propose a PF scheduling scheme that assigns users to each carrier while maximizing the sum of logarithmic average user rates.

Journal ArticleDOI
01 Sep 2005
TL;DR: This paper evaluates three algorithms, namely genetic, HEFT, and a simple "myopic" one, compares incremental workflow partitioning against the full-graph scheduling strategy, and demonstrates that full-graph scheduling with the HEFT algorithm performs best.
Abstract: Scheduling is a key concern for the execution of performance-driven Grid applications. In this paper we comparatively examine different existing approaches for scheduling of scientific workflow applications in a Grid environment. We evaluate three algorithms, namely genetic, HEFT, and a simple "myopic" one, and compare incremental workflow partitioning against the full-graph scheduling strategy. We demonstrate experiments using real-world scientific applications covering both balanced (symmetric) and unbalanced (asymmetric) workflows. Our results demonstrate that full-graph scheduling with the HEFT algorithm performs best compared to the other strategies examined in this paper.
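
Since HEFT is the reference algorithm in the comparison above, a compact sketch of a simplified HEFT variant is included below; it omits the insertion-based placement of the full algorithm, and the task graph, cost tables, and helper names are hypothetical rather than the authors' implementation.

```python
# HEFT sketch: rank tasks by upward rank, then greedily map each task to the
# processor giving the earliest finish time (simplified: no insertion policy).
def heft(tasks, succ, comp, comm, procs):
    # comp[t][p]: runtime of task t on processor p; comm[(u, v)]: transfer time u -> v
    avg_comp = {t: sum(comp[t][p] for p in procs) / len(procs) for t in tasks}
    rank = {}

    def upward(t):                              # critical-path length from t to the exit
        if t not in rank:
            rank[t] = avg_comp[t] + max(
                (comm.get((t, s), 0) + upward(s) for s in succ[t]), default=0)
        return rank[t]

    for t in tasks:
        upward(t)

    free_at = {p: 0.0 for p in procs}           # time at which each processor becomes free
    finish, placed = {}, {}
    for t in sorted(tasks, key=lambda t: -rank[t]):
        best = None
        for p in procs:
            est = free_at[p]
            for u in tasks:                     # predecessors of t (already scheduled)
                if t in succ[u]:
                    arrive = finish[u] + (0 if placed[u] == p else comm.get((u, t), 0))
                    est = max(est, arrive)
            eft = est + comp[t][p]
            if best is None or eft < best[0]:
                best = (eft, p)
        finish[t], placed[t] = best
        free_at[best[1]] = best[0]
    return placed, finish

# toy diamond-shaped workflow on two processors (made-up numbers)
tasks = ["a", "b", "c", "d"]
succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
comp = {"a": {0: 2, 1: 3}, "b": {0: 3, 1: 2}, "c": {0: 4, 1: 3}, "d": {0: 2, 1: 1}}
comm = {("a", "b"): 1, ("a", "c"): 1, ("b", "d"): 2, ("c", "d"): 1}
print(heft(tasks, succ, comp, comm, procs=[0, 1]))
```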

Proceedings ArticleDOI
12 Nov 2005
TL;DR: This work uses DVS (Dynamic Voltage Scaling) technology now available in high-performance microprocessors to reduce power consumption during parallel application runs when peak CPU performance is not necessary due to load imbalance, communication delays, etc.
Abstract: Left unchecked, the fundamental drive to increase peak performance using tens of thousands of power hungry components will lead to intolerable operating costs and failure rates. High-performance, power-aware distributed computing reduces power and energy consumption of distributed applications and systems without sacrificing performance. Generally, we use DVS (Dynamic Voltage Scaling) technology now available in high-performance microprocessors to reduce power consumption during parallel application runs when peak CPU performance is not necessary due to load imbalance, communication delays, etc. We propose distributed performance-directed DVS scheduling strategies for use in scalable power-aware HPC clusters. By varying scheduling granularity we can obtain significant energy savings without increasing execution time (36% for FT from NAS PB). We created a software framework to implement and evaluate our various techniques and show performance-directed scheduling consistently saves more energy (nearly 25% for several codes) than comparable approaches with less impact on execution time (< 5%). Additionally, we illustrate the use of energy-delay products to automatically select distributed DVS schedules that meet users’ needs.
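
The energy-delay-product criterion mentioned above can be illustrated with a toy gear-selection sketch; the gear table, the power model, and the workload are invented numbers, and the actual framework selects distributed DVS schedules from measured phases of parallel runs rather than from a single closed-form loop.

```python
# Toy illustration of picking a DVS "gear" by energy-delay product (EDP).
gears = [            # (frequency GHz, voltage V) pairs (assumed)
    (2.0, 1.30),
    (1.6, 1.10),
    (1.2, 0.95),
    (0.8, 0.85),
]
cycles = 3.0e9       # work to execute, assumed fully CPU-bound for simplicity

def edp(freq_ghz, volt):
    time_s = cycles / (freq_ghz * 1e9)
    power_w = 20.0 * (volt ** 2) * freq_ghz    # ~ C * V^2 * f with an arbitrary constant
    energy_j = power_w * time_s
    return energy_j * time_s                   # energy-delay product

best = min(gears, key=lambda g: edp(*g))
print("gear with minimum EDP:", best)
```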

Proceedings ArticleDOI
05 Dec 2005
TL;DR: This work adopts an interference-aware cross-layer design to increase the throughput of the wireless mesh network and creates a tree-based routing framework, which along with scheduling is interference aware and results in a much higher spectral efficiency.
Abstract: The IEEE 802.16 WiMax standard provides a mechanism for creating a multi-hop mesh, which can be deployed as a high-speed wide-area wireless network. To realize the full potential of such high-speed IEEE 802.16 mesh networks, two efficient wireless radio resource allocation extensions were developed. The objective of this paper is to propose an efficient approach for increasing the utilization of WiMax mesh through appropriate design of multi-hop routing and scheduling. As multiple-access interference is a major limiting factor for wireless communication systems, we adopt here an interference-aware cross-layer design to increase the throughput of the wireless mesh network. In particular, our scheme creates a tree-based routing framework, which along with scheduling is interference aware and results in a much higher spectral efficiency. Performance evaluation results show that the proposed interference-aware scheme achieves significant throughput enhancement over the basic IEEE 802.16 mesh network.

Proceedings ArticleDOI
04 Apr 2005
TL;DR: Analytical and experimental results show that in comparison to SMAC, PMAC achieves more power savings under light loads, and higher throughput under heavier traffic loads.
Abstract: We propose a novel adaptive MAC protocol for wireless sensor networks. In existing protocols such as SMAC, the sensor nodes are put to sleep periodically to save energy. As the duty cycle is fixed in such protocols, the network throughput can degrade under heavy traffic, while under light loads, unwanted energy consumption can occur. In the proposed pattern-MAC (PMAC) protocol, instead of having fixed sleep-wakeups, the sleep-wakeup schedules of the sensor nodes are adaptively determined. The schedules are decided based on a node's own traffic and that of its neighbors. Our analytical and experimental results show that in comparison to SMAC, PMAC achieves more power savings under light loads, and higher throughput under heavier traffic loads. Furthermore, unlike SMAC, only the sensor nodes involved in communication wake up frequently in PMAC and hence energy is conserved in other sensor nodes.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed algorithm is able to schedule transmissions such that the bandwidth allocated to different flows is proportional to their weights.
Abstract: Fairness is an important issue when accessing a shared wireless channel. With fair scheduling, it is possible to allocate bandwidth in proportion to weights of the packet flows sharing the channel. This paper presents a fully distributed algorithm for fair scheduling in a wireless LAN. The algorithm can be implemented without using a centralized coordinator to arbitrate medium access. The proposed protocol is derived from the Distributed Coordination Function in the IEEE 802.11 standard. Simulation results show that the proposed algorithm is able to schedule transmissions such that the bandwidth allocated to different flows is proportional to their weights. An attractive feature of the proposed approach is that it can be implemented with simple modifications to the IEEE 802.11 standard.
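
The core idea, that a flow's contention backoff should shrink with its weight and grow with its packet length, can be sketched as below; the constants, the randomization interval, and the collision handling are illustrative and differ from the exact rules in the paper.

```python
import random

# Sketch: backoff proportional to packet_length / weight, so channel access
# ends up roughly proportional to the weights over time.
SCALING_FACTOR = 0.02

def new_backoff(packet_len_bits, weight):
    rho = random.uniform(0.9, 1.1)             # small randomization against synchronized collisions
    return max(1, int(SCALING_FACTOR * rho * packet_len_bits / weight))

weights = {"flow_a": 1.0, "flow_b": 2.0, "flow_c": 4.0}   # assumed flow weights
backoff = {f: new_backoff(8000, w) for f, w in weights.items()}
sent = {f: 0 for f in weights}

for _ in range(30_000):                        # condensed idle-slot countdown, CSMA-style
    winner = min(backoff, key=backoff.get)
    elapsed = backoff[winner]
    for f in backoff:                          # losers keep their residual backoff
        backoff[f] -= elapsed
    sent[winner] += 1
    backoff[winner] = new_backoff(8000, weights[winner])
print(sent)   # transmission counts should be roughly proportional to the weights
```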

Journal ArticleDOI
TL;DR: The results show that DSA is superior to DBA when controlled properly, having better or competitive solution quality and significantly lower communication cost than DBA, and is the algorithm of choice for distributed scheduling problems and other distributed problems with similar properties.

Journal ArticleDOI
TL;DR: This paper reviews the advances of mixed-integer linear programming (MILP) based approaches for the scheduling of chemical processing systems and focuses on the short-term scheduling of general network represented processes.
Abstract: This paper reviews the advances of mixed-integer linear programming (MILP) based approaches for the scheduling of chemical processing systems. We focus on the short-term scheduling of general network represented processes. First, the various mathematical models that have been proposed in the literature are classified mainly based on the time representation. Discrete-time and continuous-time models are presented along with their strengths and limitations. Several classes of approaches for improving the computational efficiency in the solution of MILP problems are discussed. Furthermore, a summary of computational experiences and applications is provided. The paper concludes with perspectives on future research directions for MILP based process scheduling technologies.
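
As a toy instance of the discrete-time MILP formulations surveyed here (far simpler than general network-represented chemical processes), the sketch below schedules three tasks on a single unit to minimize makespan using the PuLP modeling library; the task data and horizon are invented.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

# Discrete-time toy model: x[i, t] = 1 if task i starts in slot t on a single unit.
durations = {"T1": 2, "T2": 3, "T3": 1}            # processing times in slots (assumed)
H = range(8)                                       # scheduling horizon (assumed)

prob = LpProblem("toy_discrete_time_scheduling", LpMinimize)
x = {(i, t): LpVariable(f"x_{i}_{t}", cat=LpBinary)
     for i, p in durations.items() for t in H if t + p <= len(H)}
makespan = LpVariable("makespan", lowBound=0)
prob += 1 * makespan                               # objective: minimize latest completion time

for i, p in durations.items():
    starts = [t for t in H if (i, t) in x]
    prob += lpSum(x[i, t] for t in starts) == 1    # every task starts exactly once
    prob += makespan >= lpSum((t + p) * x[i, t] for t in starts)

for t in H:                                        # unit capacity: at most one task active per slot
    prob += lpSum(x[i, s] for i, p in durations.items()
                  for s in H if (i, s) in x and s <= t < s + p) <= 1

prob.solve()
print({i: next(t for t in H if (i, t) in x and x[i, t].value() == 1) for i in durations})
```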

Journal ArticleDOI
TL;DR: Asymptotic optimality of the gradient scheduling algorithm (which generalizes the well-known proportional fair algorithm) is proved for this model, which allows for simultaneous service of multiple users and for discrete sets of scheduling decisions.
Abstract: We consider the model where N queues (users) are served in discrete time by a generalized switch. The switch state is random, and it determines the set of possible service rate choices (scheduling decisions) in each time slot. This model is primarily motivated by the problem of scheduling transmissions of N data users in a shared time-varying wireless environment, but also includes other applications such as input-queued cross-bar switches and parallel flexible server systems. The objective is to find a scheduling strategy maximizing a concave utility function H(u_1, ..., u_N), where the u_n are long-term average service rates (data throughputs) of the users, assuming users always have data to be served. We prove asymptotic optimality of the gradient scheduling algorithm (which generalizes the well-known proportional fair algorithm) for our model, which, in particular, allows for simultaneous service of multiple users and for discrete sets of scheduling decisions. Analysis of the transient dynamics of user throughputs is the key part of this work.
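
Concretely, the gradient scheduling rule analyzed above selects, in each time slot, the feasible scheduling decision whose rate vector has the largest inner product with the utility gradient evaluated at the current average throughputs; with the logarithmic utility it reduces to the proportional fair rule. In the statement below, u_n-bar(t) denotes user n's current average throughput and r_n^k(t) the rate offered to user n by decision k in the current switch state; this notation is ours, not the paper's.

```latex
k^*(t) \in \arg\max_{k} \sum_{n=1}^{N} \frac{\partial H}{\partial u_n}\big(\bar{u}(t)\big)\, r_n^{k}(t),
\qquad \text{and for } H(u) = \sum_{n} \log u_n:\quad
k^*(t) \in \arg\max_{k} \sum_{n=1}^{N} \frac{r_n^{k}(t)}{\bar{u}_n(t)}.
```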

Proceedings ArticleDOI
13 Mar 2005
TL;DR: This work considers the joint optimal design of the physical, medium access control (MAC), and routing layers to maximize the lifetime of energy-constrained wireless sensor networks and proposes an iterative algorithm that alternates between adaptive link scheduling and computation of optimal link rates and transmission powers for a fixed link schedule.
Abstract: We consider the joint optimal design of physical, medium access control (MAC), and routing layers to maximize the lifetime of energy-constrained wireless sensor networks. The problem of computing a lifetime-optimal routing flow, link schedule, and link transmission powers is formulated as a non-linear optimization problem. We first restrict the link schedules to the class of interference-free time division multiple access (TDMA) schedules. In this special case we formulate the optimization problem as a mixed integer-convex program, which can be solved using standard techniques. For general non-orthogonal link schedules, we propose an iterative algorithm that alternates between adaptive link scheduling and computation of optimal link rates and transmission powers for a fixed link schedule. The performance of this algorithm is compared to other design approaches for several network topologies. The results illustrate the advantages of load balancing, multihop routing, frequency reuse, and interference mitigation in increasing the lifetime of energy-constrained networks. We also describe a partially distributed algorithm to compute optimal rates and transmission powers for a given link schedule.

Journal ArticleDOI
Guocong Song, Ye Li
TL;DR: A cross-layer resource management framework leveraged by utility optimization is presented that includes utility-based resource management and QoS architecture, resource allocation algorithms, rate-based and delay-based multichannel scheduling, and theoretical exploration of the fundamental mechanisms in wireless resource management.
Abstract: This article discusses downlink resource allocation and scheduling for OFDM-based broadband wireless networks. We present a cross-layer resource management framework leveraged by utility optimization. It includes utility-based resource management and QoS architecture, resource allocation algorithms, rate-based and delay-based multichannel scheduling that exploits wireless channel and queue information, and theoretical exploration of the fundamental mechanisms in wireless resource management, such as capacity, fairness, and stability. We also provide a solution that can efficiently allocate resources for heterogeneous traffic with diverse QoS requirements.

Journal ArticleDOI
W. C. Ng
TL;DR: A dynamic programming-based heuristic to solve the scheduling problem and an algorithm to find lower bounds for benchmarking the schedules found by the heuristic are developed.

Proceedings ArticleDOI
05 Dec 2005
TL;DR: A generalized proportional fair (GPF) scheduling algorithm is presented, which allows tweaking the trade-off between fairness and throughput performance for best effort traffic in a cellular downlink scenario.
Abstract: In this paper, a generalized proportional fair (GPF) scheduling algorithm is presented, which allows tweaking the trade-off between fairness and throughput performance for best effort traffic in a cellular downlink scenario. The GPF is extended to frequency scheduling in an OFDMA system by performing dynamic channel allocation on a subband basis including link adaptation by adaptive modulation and coding. In this way, multiuser diversity can be utilized in time domain - as for CDMA - and in frequency domain. Compared to a system without frequency scheduling, this increases the system throughput and yields an improved fairness with respect to the allocated resources and with respect to the achieved data-rate per user. OFDMA system level simulations are carried out in order to analyze short/long-term fairness, multiuser diversity gain and system throughput of various GPF configurations with and without applying frequency scheduling.