
Showing papers on "Scheduling (computing)" published in 2000


Proceedings ArticleDOI
14 May 2000
TL;DR: The proposed Nimrod/G grid-enabled resource management and scheduling system builds on the earlier work on Nimrod and follows a modular and component-based architecture enabling extensibility, portability, ease of development, and interoperability of independently developed components.
Abstract: The availability of powerful microprocessors and high-speed networks as commodity components has enabled high-performance computing on distributed systems (wide-area cluster computing). In this environment, as the resources are usually distributed geographically at various levels (department, enterprise or worldwide), there is a great challenge in integrating, coordinating and presenting them as a single resource to the user, thus forming a computational grid. Another challenge comes from the distributed ownership of resources, with each resource having its own access policy, cost and mechanism. The proposed Nimrod/G grid-enabled resource management and scheduling system builds on our earlier work on Nimrod (D. Abramson et al., 1994, 1995, 1997, 2000) and follows a modular and component-based architecture enabling extensibility, portability, ease of development, and interoperability of independently developed components. It uses the GUSTO (GlobUS TOolkit) services and can be easily extended to operate with any other emerging grid middleware services. It focuses on the management and scheduling of computations over dynamic resources scattered geographically across the Internet at department, enterprise or global levels, with particular emphasis on developing scheduling schemes based on the concept of computational economy for a real testbed, namely the Globus testbed (GUSTO).

965 citations


Journal ArticleDOI
TL;DR: The basic concept of OBS is described and a general architecture of optical core routers and electronic edge routers in the OBS network is presented and a nonperiodic time-interval burst assembly mechanism is described.
Abstract: Optical burst switching (OBS) is a promising solution for building terabit optical routers and realizing IP over WDM. In this paper, we describe the basic concept of OBS and present a general architecture of optical core routers and electronic edge routers in the OBS network. The key design issues related to the OBS are also discussed, namely, burst assembly (burstification), channel scheduling, burst offset-time management, and some dimensioning rules. A nonperiodic time-interval burst assembly mechanism is described. A class of data channel scheduling algorithms with void filling is proposed for optical routers using a fiber delay line buffer. The LAUC-VF (latest available unused channel with void filling) channel scheduling algorithm is studied in detail. Initial results on the burst traffic characteristics and on the performance of optical routers in the OBS network with self-similar traffic as inputs are reported in the paper.
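As a rough illustration of the channel-scheduling idea behind LAUC (the void-filling part of LAUC-VF is omitted, and the `lauc` function and its interface are my simplification, not the paper's algorithm): among the data channels that become free before a burst arrives, choosing the latest-available one leaves the smallest unused gap ("void") in front of the burst.

```python
# Simplified LAUC sketch (assumption: no fiber-delay-line buffering, no void
# reuse). horizons[i] is the time channel i becomes free.

def lauc(horizons, arrival, duration):
    """Pick the latest available unused channel for a burst.

    Returns the chosen channel index, or None if no channel is free by the
    burst's arrival time (real LAUC-VF would then search voids before dropping).
    """
    candidates = [i for i, h in enumerate(horizons) if h <= arrival]
    if not candidates:
        return None
    # Latest available channel -> minimal void before the burst.
    ch = max(candidates, key=lambda i: horizons[i])
    horizons[ch] = arrival + duration
    return ch
```

With channels free at times 0, 3 and 6, a burst arriving at t=4 goes on the channel freed at t=3, wasting only one time unit instead of four.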

961 citations


Journal ArticleDOI
TL;DR: This paper reviews the literature on scheduling with batching, giving details of the basic algorithms, and referencing other significant results about efficient dynamic programming algorithms for solving batching problems.

904 citations


Proceedings ArticleDOI
01 May 2000
TL;DR: This study proposes an adaptive scheduling algorithm for parameter sweep applications on the grid, modify standard heuristics for task/host assignment in perfectly predictable environments, and proposes an extension of Sufferage called XSufferage.
Abstract: The computational grid provides a promising platform for the efficient execution of parameter sweep applications over very large parameter spaces. Scheduling such applications is challenging because target resources are heterogeneous, because their load and availability varies dynamically, and because independent tasks may share common data files. We propose an adaptive scheduling algorithm for parameter sweep applications on the grid. We modify standard heuristics for task/host assignment in perfectly predictable environments (max-min, min-min, Sufferage), and we propose an extension of Sufferage called XSufferage. Using simulation, we demonstrate that XSufferage can take advantage of file sharing to achieve better performance than the other heuristics. We also study the impact of inaccurate performance prediction on scheduling. Our study shows that: different heuristics behave differently when predictions are inaccurate; and an increased adaptivity leads to better performance.
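The min-min heuristic that the paper adapts can be sketched roughly as follows (the `min_min` function, its estimated-time-to-compute matrix `etc`, and the tie-breaking are my assumptions, not the authors' code): repeatedly assign the task whose best achievable completion time across all hosts is smallest.

```python
def min_min(etc):
    """Min-min task/host assignment sketch.

    etc: dict task -> dict host -> estimated run time on that host.
    Returns (assignment, makespan).
    """
    ready = {h: 0.0 for h in next(iter(etc.values()))}  # host ready times
    schedule = {}
    unassigned = set(etc)
    while unassigned:
        # For each task, find its minimum completion time and the best host.
        best = {}
        for t in unassigned:
            h = min(etc[t], key=lambda x: ready[x] + etc[t][x])
            best[t] = (ready[h] + etc[t][h], h)
        # Min-min: commit the task whose minimum completion time is smallest.
        t = min(best, key=lambda x: best[x][0])
        ct, h = best[t]
        schedule[t] = h
        ready[h] = ct
        unassigned.remove(t)
    return schedule, max(ready.values())
```

Max-min differs only in the commit step (largest minimum completion time first), and Sufferage ranks tasks by the gap between their best and second-best completion times.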

625 citations


Journal ArticleDOI
12 Nov 2000
TL;DR: It is demonstrated that performance on a hardware multithreaded processor is sensitive to the set of jobs that are coscheduled by the operating system job scheduler, and that a small sample of the possible schedules is sufficient to identify a good schedule quickly.
Abstract: Simultaneous Multithreading machines fetch and execute instructions from multiple instruction streams to increase system utilization and speed up the execution of jobs. When there are more jobs in the system than there is hardware to support simultaneous execution, the operating system scheduler must choose the set of jobs to coschedule. This paper demonstrates that performance on a hardware multithreaded processor is sensitive to the set of jobs that are coscheduled by the operating system job scheduler. Thus, the full benefits of SMT hardware can only be achieved if the scheduler is aware of thread interactions. Here, a mechanism is presented that allows the scheduler to significantly raise the performance of SMT architectures. This is done without any advance knowledge of a workload's characteristics, using sampling to identify jobs which run well together. We demonstrate an SMT job scheduler called SOS. SOS combines an overhead-free sample phase which collects information about various possible schedules, and a symbiosis phase which uses that information to predict which schedule will provide the best performance. We show that a small sample of the possible schedules is sufficient to identify a good schedule quickly. On a system with random job arrivals and departures, response time is improved as much as 17% over a schedule which does not incorporate symbiosis.

619 citations


Journal ArticleDOI
TL;DR: The performance of an on-line scheduler in best-effort real-time scheduling can be significantly improved if the system is designed in such a way that the laxity of every job is proportional to its length.
Abstract: We introduce resource augmentation as a method for analyzing on-line scheduling problems. In resource augmentation analysis, the on-line scheduler is given more resources, say faster processors or more processors, than the adversary. We apply this analysis to two well-known on-line scheduling problems: the classic uniprocessor CPU scheduling problem 1 | r_i, pmtn | Σ F_i, and the best-effort firm real-time scheduling problem 1 | r_i, pmtn | Σ w_i(1 - U_i). It is known that there are no constant-competitive nonclairvoyant on-line algorithms for these problems. We show that there are simple on-line scheduling algorithms for these problems that are constant competitive if the on-line scheduler is equipped with a slightly faster processor than the adversary. Thus, a moderate increase in processor speed effectively gives the on-line scheduler the power of clairvoyance. Furthermore, the on-line scheduler can be constant competitive on all inputs that are not closely correlated with processor speed. We also show that the performance of an on-line scheduler in best-effort real-time scheduling can be significantly improved if the system is designed in such a way that the laxity of every job is proportional to its length.
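The resource-augmentation intuition can be illustrated with a toy simulator of equal-sharing (round-robin) execution, a nonclairvoyant policy of the kind this literature studies; the simulator below is my own stand-in, not the paper's analysis. Running the same policy on a faster processor reduces total flow time, which is the effect the speed-augmentation bounds quantify.

```python
def total_flow_time_rr(jobs, speed, dt=0.01):
    """Total flow time under processor sharing at a given speed.

    jobs: list of (release_time, size) pairs; the processor is split equally
    among all released, unfinished jobs. Discretized with step dt.
    """
    rem = [size for _, size in jobs]
    done = [None] * len(jobs)
    t = 0.0
    while any(d is None for d in done):
        active = [i for i, (r, _) in enumerate(jobs)
                  if r <= t and done[i] is None]
        if active:
            share = speed * dt / len(active)  # equal split of the processor
            for i in active:
                rem[i] -= share
                if rem[i] <= 0:
                    done[i] = t + dt
        t += dt
    return sum(done[i] - jobs[i][0] for i in range(len(jobs)))
```

For two unit jobs released at time 0, a speed-2 processor roughly halves the total flow time relative to speed 1, mirroring the claim that moderate augmentation buys large performance gains.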

477 citations


Journal ArticleDOI
TL;DR: Results on deterministic scheduling problems where machines are not continuously available for processing are reviewed, covering intractability results, polynomial-time optimization algorithms, and approximation algorithms.

473 citations


Proceedings ArticleDOI
26 Mar 2000
TL;DR: A simple packet dropping scheme, called CHOKe, that discriminates against the flows which submit more packets per second than is allowed by their fair share, which aims to approximate the fair queueing policy.
Abstract: We investigate the problem of providing a fair bandwidth allocation to each of n flows that share the outgoing link of a congested router. The buffer at the outgoing link is a simple FIFO, shared by packets belonging to the n flows. We devise a simple packet dropping scheme, called CHOKe, that discriminates against the flows which submit more packets per second than is allowed by their fair share. By doing this, the scheme aims to approximate the fair queueing policy. Since it is stateless and easy to implement, CHOKe controls unresponsive or misbehaving flows with a minimum overhead.
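The core of CHOKe is simple enough to sketch; note this is a simplification of my own (the real scheme operates between RED-style queue thresholds and may compare against several random packets), shown only to convey the matching-drop idea: on arrival during congestion, compare the packet with one drawn at random from the FIFO, and drop both if they belong to the same flow.

```python
import random

def choke_enqueue(queue, packet, congested, rng=random):
    """Simplified CHOKe admission for one arriving packet.

    queue: list of (flow_id, payload) in FIFO order.
    packet: (flow_id, payload) arriving now.
    Returns True if the packet was enqueued, False if dropped.
    """
    if congested and queue:
        victim_idx = rng.randrange(len(queue))
        if queue[victim_idx][0] == packet[0]:
            # Same flow: a heavy flow is statistically likely to match itself,
            # so drop both the queued victim and the arrival.
            queue.pop(victim_idx)
            return False
    queue.append(packet)
    return True
```

The scheme needs no per-flow state: flows sending faster than their fair share occupy more buffer slots, so their packets match (and get dropped) proportionally more often.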

452 citations


Journal ArticleDOI
TL;DR: Simulations are reported which demonstrate that the model is capable of organised sequential action selection in a complex naturalistic domain and it is demonstrated that, after lesioning, the model exhibits behaviour qualitatively equivalent to that observed by Schwartz et al., in their action disorganisation syndrome patients.
Abstract: The control of routine action is a complex process subject both to minor lapses in normals and to more severe breakdown following certain forms of neurological damage. A number of recent empirical studies (e.g. Humphreys & Ford, 1998; Schwartz et al., 1991, 1995, 1998) have examined the details of breakdown in certain classes of patient, and attempted to relate the findings to existing psychological theory. This paper complements those studies by presenting a computational model of the selection of routine actions based on competitive activation within a hierarchically organised network of action schemas (cf. Norman & Shallice, 1980, 1986). Simulations are reported which demonstrate that the model is capable of organised sequential action selection in a complex naturalistic domain. It is further demonstrated that, after lesioning, the model exhibits behaviour qualitatively equivalent to that observed by Schwartz et al., in their action disorganisation syndrome patients.

448 citations


Book
01 Jan 2000
TL;DR: The journal Real-Time Systems publishes papers, short papers and correspondence articles that concentrate on real-time computing principles and applications, including requirements engineering, specification and verification techniques, design methods and tools, programming languages, operating systems, scheduling algorithms, architecture, hardware and interfacing.
Abstract: From the Publisher: Real-Time Systems is both a valuable reference for professionals and an advanced text for Computer Science and Computer Engineering students. Highlights:
- Real-world real-time applications based on research and practice
- State-of-the-art algorithms and methods for validation
- Methods for end-to-end scheduling and resource management
- More than 100 illustrations to enhance understanding
- Comprehensive treatment of the technology known as RMA (rate-monotonic analysis) methods
- A supplemental Companion Website: www.prenhall.com/liu

435 citations


Proceedings ArticleDOI
01 Nov 2000
TL;DR: This paper describes a user-level Grid middleware project, the AppLeS Parameter Sweep Template (APST), that uses application-level scheduling techniques and various Grid technologies to allow the efficient deployment of parameter sweep applications over the Grid.
Abstract: The Computational Grid is a promising platform for the efficient execution of parameter sweep applications over large parameter spaces. To achieve performance on the Grid, such applications must be scheduled so that shared data files are strategically placed to maximize reuse, and so that the application execution can adapt to the deliverable performance potential of target heterogeneous, distributed and shared resources. Parameter sweep applications are an important class of applications and would greatly benefit from the development of Grid middleware that embeds a scheduler for performance and targets Grid resources transparently. In this paper we describe a user-level Grid middleware project, the AppLeS Parameter Sweep Template (APST), that uses application-level scheduling techniques [1] and various Grid technologies to allow the efficient deployment of parameter sweep applications over the Grid. We discuss several possible scheduling algorithms and detail our software design. We then describe our current implementation of APST using systems like Globus [2], NetSolve [3] and the Network Weather Service [4], and present experimental results.

Proceedings ArticleDOI
17 Sep 2000
TL;DR: This work evaluates the energy usage of each thread and throttles the system activity so that the scheduling goal is achieved, and shows that the correlation of events and energy values provides the necessary information for energy-aware scheduling policies.
Abstract: A prerequisite of energy-aware scheduling is precise knowledge of any activity inside the computer system. Embedded hardware monitors (e.g., processor performance counters) have proved to offer valuable information in the field of performance analysis. The same approach can be applied to investigate the energy usage patterns of individual threads. We use information about active hardware units (e.g., integer/floating-point unit, cache/memory interface) gathered by event counters to establish a thread-specific energy accounting. The evaluation shows that the correlation of events and energy values provides the necessary information for energy-aware scheduling policies. Our approach to OS-directed power management adds the energy usage pattern to the runtime context of a thread. Depending on the field of application, we present two scenarios that benefit from applying energy usage patterns: workstations with passive cooling on the one hand, and battery-powered mobile systems on the other. Energy-aware scheduling evaluates the energy usage of each thread and throttles the system activity so that the scheduling goal is achieved. In workstations we throttle the system if the average energy use exceeds a predefined power-dissipation capacity. This makes a compact, noiseless and affordable system design possible that meets sporadic yet high demands in computing power. Nowadays, more and more mobile systems offer the features of reducible clock speed and dynamic voltage scaling. Energy-aware scheduling can employ these features to yield a longer battery life by slowing down low-priority threads while preserving a certain quality of service.

Proceedings Article
01 Jan 2000
TL;DR: This paper attempts to address the scheduling of jobs to geographically distributed computing resources, with a brief description of three nature-inspired heuristics, namely the Genetic Algorithm, Simulated Annealing and Tabu Search.
Abstract: Computational Grid (Grid Computing) is a new paradigm that will drive the computing arena in the new millennium. Unification of globally remote and diverse resources, coupled with the increasing computational needs of Grand Challenge Applications (GCA) and the accelerated growth of the Internet and communication technology, will further fuel the development of global computational power grids. In this paper, we attempt to address the scheduling of jobs to geographically distributed computing resources. Conventional wisdom in the field of scheduling is that scheduling problems exhibit such richness and variety that no single scheduling method is sufficient. Heuristics derived from nature have demonstrated a surprising degree of effectiveness and generality for handling combinatorial optimization problems. This paper begins with an introduction to computational grids, followed by a brief description of three nature-inspired heuristics, namely the Genetic Algorithm (GA), Simulated Annealing (SA) and Tabu Search (TS). Experimental results using the GA are included. We further demonstrate the hybridized usage of the above algorithms as applied in a computational grid environment for job scheduling.

Journal ArticleDOI
TL;DR: The model, solution method, and system developed and implemented for hot-rolling production scheduling at the Shanghai Baoshan Iron & Steel Complex shows a 20% improvement over the previous manual system.

Proceedings ArticleDOI
22 Oct 2000
TL;DR: This paper investigates clock scaling algorithms on the Itsy, an experimental pocket computer that runs a complete, functional multitasking operating system and concludes that currently proposed algorithms consistently fail to achieve their goal of saving power while not causing user applications to change their interactive behavior.
Abstract: Pocket computers are beginning to emerge that provide sufficient processing capability and memory capacity to run traditional desktop applications and operating systems on them. The increasing demand placed on these systems by software is competing against the continuing trend in the design of low-power microprocessors towards increasing the amount of computation per unit of energy. Consequently, in spite of advances in low-power circuit design, the microprocessor is likely to continue to account for a significant portion of the overall power consumption of pocket computers. This paper investigates clock scaling algorithms on the Itsy, an experimental pocket computer that runs a complete, functional multitasking operating system (a version of Linux 2.0.30). We implemented a number of clock scaling algorithms that are used to adjust the processor speed to reduce the power used by the processor. After testing these algorithms, we conclude that currently proposed algorithms consistently fail to achieve their goal of saving power while not causing user applications to change their interactive behavior.

Journal ArticleDOI
TL;DR: A volume-dependent piecewise linear processing time function is used to model the learning effects and it is shown that the problem is NP-hard in the strong sense and two special cases which are polynomially solvable are identified.
Abstract: In this paper we study a single machine scheduling problem in which the job processing times will decrease as a result of learning. A volume-dependent piecewise linear processing time function is used to model the learning effects. The objective is to minimize the maximum lateness. We first show that the problem is NP-hard in the strong sense and then identify two special cases which are polynomially solvable. We also propose two heuristics and analyse their worst-case performance.
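For context (background only, not the paper's contribution): without learning effects, maximum lateness on a single machine is minimized by the earliest-due-date (EDD) order; the paper's point is that a volume-dependent learning effect makes the same objective strongly NP-hard.

```python
def max_lateness_edd(jobs):
    """Maximum lateness under the EDD sequence.

    jobs: list of (processing_time, due_date) with fixed processing times.
    """
    t = 0
    worst = float("-inf")
    # EDD: sequence jobs in nondecreasing due-date order.
    for p, d in sorted(jobs, key=lambda j: j[1]):
        t += p                     # completion time of this job
        worst = max(worst, t - d)  # lateness may be negative (early)
    return worst
```

Once processing times shrink as cumulative volume grows, the sequence changes the effective processing times themselves, which is what breaks this simple exchange argument.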

Patent
29 Feb 2000
TL;DR: In this article, a system for scheduling a conference between callers includes a database that stores scheduling information indicating at least a start time, a duration, and a maximum number of callers for one or more scheduled conferences, the scheduling information reflecting available conferencing resources.
Abstract: A system for scheduling a conference between callers includes a database that stores scheduling information indicating at least a start time, a duration, and a maximum number of callers for one or more scheduled conferences, the scheduling information reflecting available conferencing resources. A server complex coupled to the database communicates, to a requesting Internet Protocol (IP) user, at least one page including one or more scheduling input fields. The server complex receives scheduling input from the requesting IP user for a requested conference according to the scheduling input fields. The server complex accesses the database to determine, according to the scheduling input, whether sufficient resources are available for the requested conference. If so, the server complex allocates at least some available resources to the requested conference and generates confirmations of the requested conference for communication to the callers.

Proceedings ArticleDOI
01 May 2000
TL;DR: In this paper, the authors propose and evaluate several algorithms for supporting advanced reservation of resources in supercomputing scheduling systems, which allow users to request resources from scheduling systems at specific times.
Abstract: Some computational grid applications have very large resource requirements and need simultaneous access to resources from more than one parallel computer. Current scheduling systems do not provide mechanisms to gain such simultaneous access without the help of human administrators of the computer systems. In this work, we propose and evaluate several algorithms for supporting advanced reservation of resources in supercomputing scheduling systems. These advanced reservations allow users to request resources from scheduling systems at specific times. We find that the wait times of applications submitted to the queue increases when reservations are supported and the increase depends on how reservations are supported. Further, we find that the best performance is achieved when we assume that applications can be terminated and restarted, backfilling is performed, and relatively accurate run-time predictions are used.
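The interaction between backfilling and reservations that the study evaluates can be sketched with a hedged simplification in the spirit of EASY backfilling (the `can_backfill` function and its parameters are my own, not the paper's algorithms): a waiting job may jump ahead in the queue only if it cannot delay the pending reservation.

```python
def can_backfill(now, free_nodes, job_nodes, job_estimate,
                 reservation_start, reservation_nodes, total_nodes):
    """May this queued job start now without disturbing the reservation?

    Uses the job's (user-supplied) runtime estimate, as backfilling does.
    """
    if job_nodes > free_nodes:
        return False
    # Case 1: the job finishes before the reservation begins.
    if now + job_estimate <= reservation_start:
        return True
    # Case 2: it runs into the reservation window, so it must fit on the
    # nodes left over once the reserved nodes are set aside.
    return job_nodes <= total_nodes - reservation_nodes
```

This is also why the paper's finding about runtime predictions matters: with inaccurate estimates, case 1 admits jobs that actually collide with the reservation.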

Book ChapterDOI
17 Dec 2000
TL;DR: In this paper, the authors discuss typical scheduling structures that occur in computational grids and the selection strategies applicable to these structures are introduced and classified, and simulations are used to evaluate these aspects considering combinations of different job and machine models.
Abstract: In this paper, we discuss typical scheduling structures that occur in computational grids. Scheduling algorithms and selection strategies applicable to these structures are introduced and classified. Simulations were used to evaluate these aspects, considering combinations of different job and machine models. Some of the results are presented in this paper and discussed in a qualitative and quantitative way. For hierarchical scheduling, a common scheduling structure, the simulation results confirmed the benefit of Backfill. Unexpectedly, FCFS proved to perform better than Backfill when using a central job pool.

Patent
09 Jun 2000
TL;DR: In this paper, a technique for allocating bandwidth in a digital broadband delivery system (DBDS) using a bandwidth allocation manager to dynamically assign a content delivery mode to a plurality of digital transmission channels based on an allocation criteria received from a subscriber is disclosed.
Abstract: A technique for allocating bandwidth in a digital broadband delivery system (DBDS) using a bandwidth allocation manager to dynamically assign a content delivery mode to a plurality of digital transmission channels based on an allocation criteria received from a subscriber is disclosed herein. The bandwidth allocation manager determines a bandwidth allocation schedule for a predetermined bandwidth based on allocation criteria comprising a criteria received from a subscriber. The allocation criteria received from the subscriber may comprise a subscriber reservation request which is processed by the bandwidth allocation manager to determine the bandwidth allocation schedule.

Journal ArticleDOI
TL;DR: A fast and simple priority dispatch method is described and shown to produce acceptable schedules most of the time and a look ahead algorithm is introduced that outperforms the dispatcher by about 12% with only a small increase in run time.
Abstract: This paper describes three approaches to assigning tasks to Earth-observing satellites (EOS). A fast and simple priority-dispatch method is described and shown to produce acceptable schedules most of the time. A look-ahead algorithm is then introduced that outperforms the dispatcher by about 12% with only a small increase in run time. These algorithms set the stage for the introduction of a genetic algorithm that uses job permutations as the population. The genetic approach presented here is novel in that it uses two additional binary variables, one to allow the dispatcher to occasionally skip a job in the queue and another to allow the dispatcher to occasionally allocate the worst position to the job. These variables are included in the recombination step in a natural way. The resulting schedules improve on the look-ahead by as much as 15% at times and 3% on average. We define and use the "window-constrained packing" problem to model the bare bones of the EOS scheduling problem.

Book ChapterDOI
TL;DR: In this article, the authors derived an upper bound on the queuing delay as a function of priority traffic utilization and the maximum hop count of any flow, and the shaping parameters at the network ingress.
Abstract: A large number of products implementing aggregate buffering and scheduling mechanisms have been developed and deployed, and still more are under development. With the rapid increase in the demand for reliable end-to-end QoS solutions, it becomes increasingly important to understand the implications of aggregate scheduling on the resulting QoS capabilities. This paper studies the bounds on the worst-case delay in a network implementing aggregate scheduling. We derive an upper bound on the queuing delay as a function of priority traffic utilization, the maximum hop count of any flow, and the shaping parameters at the network ingress. Our bound explodes at a certain utilization level which is a function of the hop count. We show that for a general network configuration and larger utilization, an upper bound on delay, if it exists, must be a function of the number of nodes and/or the number of flows in the network.

Journal ArticleDOI
TL;DR: An intelligent agent-based dynamic scheduling system that selects the most appropriate priority rule according to shop conditions in real time, while a simulated environment performs scheduling activities using the rule selected by the agent.

Patent
25 Feb 2000
TL;DR: In this article, a method and apparatus for the maintenance of mechanized equipment such as an automobile is disclosed, which includes a notification system such as email system, for notifying of, scheduling, and/or paying for services.
Abstract: A method and apparatus for the maintenance of mechanized equipment such as an automobile is disclosed. Various sensors located within the automobile provide information to an on-board computing device, a personal digital assistant, or a local computing device which are networkable to a network such as the Internet. The information may be transferred across the network, and service obtained appropriately. Information located in various remote servers relating to the performance and service of the vehicle may be downloaded across the network and easily used in servicing and maintaining the vehicle. Optionally, the apparatus includes a notification system, such as an email system, for notifying of, scheduling, and/or paying for services.

Journal ArticleDOI
TL;DR: Several scheduling policies under machine breakdowns in a classical job shop system are tested, and a partial scheduling scheme is investigated under both deterministic and stochastic environments for several system configurations.

Proceedings ArticleDOI
19 Jun 2000
TL;DR: Average job response times may be much lower under ERfair scheduling than under Pfair scheduling, particularly in lightly loaded systems, and run-time costs are lower underERfair scheduling.
Abstract: Presents a variant of Pfair scheduling (S. Baruah et al., 1995, 1996), which we call early-release fair (ERfair) scheduling. Like conventional Pfair scheduling, ERfair scheduling algorithms can be applied to optimally schedule periodic tasks on a multiprocessor system in polynomial time. However, ERfair scheduling differs from Pfair scheduling in that it is work-conserving. As a result, average job response times may be much lower under ERfair scheduling than under Pfair scheduling, particularly in lightly loaded systems. In addition, run-time costs are lower under ERfair scheduling.
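The subtask windows that Pfair scheduling (and hence ERfair) is built on follow the standard formulas from the Pfair literature: subtask i (i = 1, 2, ...) of a task with weight w = e/p has pseudo-release floor((i-1)/w) and pseudo-deadline ceil(i/w), and must execute in a slot within that window. ERfair keeps the pseudo-deadlines but drops the pseudo-release constraint, which is what makes it work-conserving.

```python
from fractions import Fraction
from math import ceil, floor

def pfair_window(i, e, p):
    """(pseudo-release, pseudo-deadline) of subtask i for a task with
    execution requirement e and period p (weight w = e/p <= 1).

    Under Pfair, subtask i must run in a slot t with release <= t < deadline;
    under ERfair it may also run earlier than the pseudo-release.
    """
    w = Fraction(e, p)                 # exact arithmetic avoids float error
    release = floor((i - 1) / w)
    deadline = ceil(i / w)
    return release, deadline
```

For a task of weight 3/10, the first three subtask windows are [0, 4), [3, 7) and [6, 10): each quantum is spread nearly evenly across the period, which is the "proportionate fairness" being enforced.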

Journal ArticleDOI
TL;DR: This paper surveys cyclic scheduling problems in robotic flowshops, models for such problems, and the complexity of solving these problems, thereby bringing together several streams of research that have by and large ignored one another and describing and establishing links with other scheduling problems and combinatorial topics.
Abstract: Fully automated production cells consisting of flexible machines and a material handling robot have become commonplace in contemporary manufacturing systems. Much research on scheduling problems arising in such cells, in particular in flowshop-like production cells, has been reported recently. Although there are many differences between the models, they all explicitly incorporate the interaction between the materials handling and the classical job processing decisions, since this interaction determines the efficiency of the cell. This paper surveys cyclic scheduling problems in robotic flowshops, models for such problems, and the complexity of solving these problems, thereby bringing together several streams of research that have by and large ignored one another, and describing and establishing links with other scheduling problems and combinatorial topics.

Proceedings ArticleDOI
01 Aug 2000
TL;DR: Dynamic Voltage Scaling (DVS) as discussed by the authors allows a device to reduce energy consumption by lowering its processor speed at run-time, allowing a corresponding reduction in processor voltage and energy.
Abstract: Microprocessors represent a significant portion of the energy consumed in portable electronic devices. Dynamic Voltage Scaling (DVS) allows a device to reduce energy consumption by lowering its processor speed at run-time, allowing a corresponding reduction in processor voltage and energy. A voltage scheduler determines the appropriate operating voltage by analyzing application constraints and requirements. A complete software implementation, including both applications and the underlying operating system, shows that DVS is effective at reducing the energy consumed without requiring extensive software modification.
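A voltage scheduler of the kind described can be sketched minimally (the `pick_frequency` helper and its interface are hypothetical illustrations, not the paper's implementation): choose the lowest clock rate that still finishes the pending work by its deadline, since a lower frequency permits a lower supply voltage and energy falls roughly with the square of voltage.

```python
def pick_frequency(cycles_pending, time_to_deadline, freqs):
    """Lowest clock rate (cycles/second) that meets the deadline.

    cycles_pending: work remaining, in processor cycles.
    time_to_deadline: seconds until the work must be done.
    freqs: the discrete frequencies the hardware supports, any order.
    Falls back to the maximum frequency if no setting can meet the deadline.
    """
    required = cycles_pending / time_to_deadline
    feasible = [f for f in freqs if f >= required]
    return min(feasible) if feasible else max(freqs)
```

For example, 10^6 pending cycles with 10 ms of slack needs only 100 MHz out of {50, 100, 200} MHz; running at 200 MHz and idling would spend the same cycles at a higher voltage.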

Journal ArticleDOI
TL;DR: A mathematical model, based on the just-in-time (JIT) idea, for solving machine conflicts in steelmaking-continuous casting production scheduling in the computer integrated manufacturing system (CIMS) environment is presented.