
Showing papers on "Scheduling (computing) published in 2001"


Journal ArticleDOI
TL;DR: The main features and the tuning of the algorithms for the direct solution of sparse linear systems on distributed memory computers developed in the context of a long term European research project are analyzed and discussed.
Abstract: In this paper, we analyze the main features and discuss the tuning of the algorithms for the direct solution of sparse linear systems on distributed memory computers developed in the context of a long term European research project. The algorithms use a multifrontal approach and are especially designed to cover a large class of problems. The problems can be symmetric positive definite, general symmetric, or unsymmetric matrices, both possibly rank deficient, and they can be provided by the user in several formats. The algorithms achieve high performance by exploiting parallelism coming from the sparsity in the problem and that available for dense matrices. The algorithms use a dynamic distributed task scheduling technique to accommodate numerical pivoting and to allow the migration of computational tasks to lightly loaded processors. Large computational tasks are divided into subtasks to enhance parallelism. Asynchronous communication is used throughout the solution process to efficiently overlap communication with computation. We illustrate our design choices by experimental results obtained on an SGI Origin 2000 and an IBM SP2 for test matrices provided by industrial partners in the PARASOL project.

2,066 citations


Book
06 Jul 2001
TL;DR: The application of Network Calculus to the Internet is studied, together with basic Min-plus and Max-plus Calculus, Optimal Multimedia Smoothing, and Adaptive and Packet Scale Rate Guarantees.
Abstract: Network Calculus.- Application of Network Calculus to the Internet.- Basic Min-plus and Max-plus Calculus.- Min-plus and Max-plus System Theory.- Optimal Multimedia Smoothing.- FIFO Systems and Aggregate Scheduling.- Adaptive and Packet Scale Rate Guarantees.- Time Varying Shapers.- Systems with Losses.

1,666 citations
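The min-plus convolution at the heart of network calculus is easy to illustrate. Below is a minimal sketch (not from the book; the sampled curves and numbers are illustrative): with curves sampled at integer times, (f ⊗ g)(t) = inf over 0 ≤ s ≤ t of f(s) + g(t−s), and the service-curve guarantee states that cumulative departures satisfy D ≥ A ⊗ β for cumulative arrivals A.

```python
def min_plus_conv(f, g):
    """(f (*) g)(t) = min over 0 <= s <= t of f(s) + g(t - s),
    computed on curves sampled at integer time steps."""
    n = min(len(f), len(g))
    return [min(f[s] + g[t - s] for s in range(t + 1)) for t in range(n)]

# Cumulative arrivals of a greedy token-bucket source (burst b, rate r,
# with A(0) = 0) and a rate-latency service curve beta(t) = R*max(0, t-T).
b, r, R, T = 5, 1, 2, 3
A = [0] + [b + r * t for t in range(1, 10)]
beta = [R * max(0, t - T) for t in range(10)]
print(min_plus_conv(A, beta))  # pointwise lower bound on departures D(t)
```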


Patent
04 Apr 2001
TL;DR: In this article, a software scheduling agent forms part of a probabilistic modeling system in which the scheduler performs constrained random variation with selection; this process determines which items of digital content should be offered for presentation to which users.
Abstract: A method and apparatus wherein a software scheduling agent resides on a communication network and/or client device, such as location-aware wireless communication appliances, television set top boxes, or other end user client devices is disclosed. The software scheduling agent is part of a probabilistic modeling system in which the scheduler operates to perform constrained random variation with selection. Digital content is generated, organized, and stored on the communication network and/or the client devices. An electronic digital content wrapper, which holds information in the form of data and metadata related to the digital content, is associated with each item of digital content. Contextual profiles for each user and each item of digital content are established by the users and the network and maintained by a service provider on the communication network. The software scheduling agent compares the contextual digital content profile for each item of digital content to the contextual user profile for each user to determine which digital content should be offered for presentation to each user. The comparison and determination of which items of digital content should be offered for presentation to which users is performed by a process of constrained random variation. After the software scheduling agent determines which items of digital content would most likely be relevant or interesting to the user, the digital content is transmitted, either in whole or in part, at predetermined times over the communication network to the appropriate client devices. The digital content is then stored, either in whole or in part, in cache memory on the client device until an appropriate time when the digital content is digitally packaged and presented to particular users over those users' client devices.

1,279 citations


Journal ArticleDOI
TL;DR: It is shown how scheduling algorithms that exploit asynchronous variations of channel quality can be used to maximize the channel capacity, i.e., the number of users that can be supported with the desired QoS.
Abstract: We propose an efficient way to support quality of service of multiple real-time data users sharing a wireless channel. We show how scheduling algorithms exploiting asynchronous variations of channel quality can be used to maximize the channel capacity (i.e., maximize the number of users that can be supported with the desired QoS).

1,272 citations


Journal ArticleDOI
TL;DR: A unified tabu search heuristic is presented for the vehicle routing problem with time windows and for two important generalizations: the periodic and the multi-depot vehicle routing problems with time windows.
Abstract: This paper presents a unified tabu search heuristic for the vehicle routing problem with time windows and for two important generalizations: the periodic and the multi-depot vehicle routing problems with time windows. The major benefits of the approach are its speed, simplicity and flexibility. The performance of the heuristic is assessed by comparing it to alternative methods on benchmark instances of the vehicle routing problem with time windows. Computational experiments are also reported on new randomly generated instances for each of the two generalizations.

857 citations


Journal ArticleDOI
TL;DR: In this article, it is demonstrated how dispensing with queues and dynamically scheduling control traffic improves closed-loop performance.
Abstract: The defining characteristic of a networked control system (NCS) is having one or more control loops closed via a serial communication channel. Typically, when the words networking and control are used together, the focus is on the control of networks, but in this article our intent is nearly inverse: not control of networks but control through networks. NCS design objectives revolve around the performance and stability of a target physical device rather than of the network. The problem of stabilizing queue lengths, for example, is of secondary importance. Integrating computer networks into control systems to replace the traditional point-to-point wiring has enormous advantages, including lower cost, reduced weight and power, simpler installation and maintenance, and higher reliability. In this article, in addition to introducing networked control systems, we demonstrate how dispensing with queues and dynamically scheduling control traffic improves closed-loop performance.

813 citations


Journal ArticleDOI
TL;DR: The sensitivity of backfilling to the accuracy of the runtime estimates provided by the users is studied, and a very surprising result is found: backfilling actually works better when users overestimate the runtime by a substantial factor.
Abstract: Scheduling jobs on the IBM SP2 system and many other distributed-memory MPPs is usually done by giving each job a partition of the machine for its exclusive use. Allocating such partitions in the order in which the jobs arrive (FCFS scheduling) is fair and predictable, but suffers from severe fragmentation, leading to low utilization. This situation led to the development of the EASY scheduler which uses aggressive backfilling: Small jobs are moved ahead to fill in holes in the schedule, provided they do not delay the first job in the queue. We compare this approach with a more conservative approach in which small jobs move ahead only if they do not delay any job in the queue and show that the relative performance of the two schemes depends on the workload. For workloads typical on SP2 systems, the aggressive approach is indeed better, but, for other workloads, both algorithms are similar. In addition, we study the sensitivity of backfilling to the accuracy of the runtime estimates provided by the users and find a very surprising result. Backfilling actually works better when users overestimate the runtime by a substantial factor.

707 citations
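The difference between EASY's aggressive backfilling and the conservative variant compared above comes down to the admission test applied to each waiting job. A minimal sketch of the aggressive test (the function and parameter names, and the single-reservation simplification, are ours, not the paper's):

```python
def can_backfill(job_nodes, job_est_runtime, now, free_nodes,
                 head_reserved_start, extra_nodes_at_start):
    """EASY-style (aggressive) backfilling admission test: a waiting job may
    jump the queue iff it fits on the currently idle nodes AND it cannot
    delay the first queued job -- either it finishes before that job's
    reservation begins, or it only needs nodes that remain free even once
    the reservation starts."""
    if job_nodes > free_nodes:
        return False
    ends_before_reservation = now + job_est_runtime <= head_reserved_start
    fits_beside_reservation = job_nodes <= extra_nodes_at_start
    return ends_before_reservation or fits_beside_reservation

# A 2-node job fits beside the head job's reservation; a 4-node, 10-unit
# job neither finishes in time nor fits beside it.
print(can_backfill(2, 10, 0, 8, 5, 2), can_backfill(4, 10, 0, 8, 5, 2))
```

A conservative scheduler applies the same test against the reservations of every queued job, not just the head of the queue, which is exactly the distinction whose workload sensitivity the paper measures.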


Journal ArticleDOI
TL;DR: It is demonstrated via simulation results that the opportunistic transmission scheduling scheme is robust to estimation errors and also works well for nonstationary scenarios, resulting in performance improvements of 20%-150% compared with a scheduling scheme that does not take into account channel conditions.
Abstract: We present an "opportunistic" transmission scheduling policy that exploits time-varying channel conditions and maximizes the system performance stochastically under a certain resource allocation constraint. We establish the optimality of the scheduling scheme and also that every user experiences a performance improvement over any nonopportunistic scheduling policy when users have independent performance values. We demonstrate via simulation results that the scheme is robust to estimation errors and also works well for nonstationary scenarios, resulting in performance improvements of 20%-150% compared with a scheduling scheme that does not take into account channel conditions. Last, we discuss an extension of our opportunistic scheduling scheme to improve "short-term" performance.

652 citations
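The flavor of such an opportunistic policy under a resource-sharing constraint can be sketched as follows. This is a toy construction: the offset-adjusted selection and the stochastic update rule below are illustrative assumptions, not the paper's exact algorithm.

```python
import random

def opportunistic_pick(perf, offsets):
    """Transmit to the user with the largest offset-adjusted performance value."""
    return max(range(len(perf)), key=lambda i: perf[i] + offsets[i])

def update_offsets(offsets, picked, targets, step=0.01):
    """Stochastic-approximation nudge so that user i ends up served a fraction
    targets[i] of the slots (illustrative update, not the paper's exact rule)."""
    for i in range(len(offsets)):
        served = 1.0 if i == picked else 0.0
        offsets[i] += step * (targets[i] - served)

# Two users with i.i.d. channel quality and target time shares 70% / 30%.
rng = random.Random(42)
targets, offsets, served = [0.7, 0.3], [0.0, 0.0], [0, 0]
slots = 50_000
for _ in range(slots):
    perf = [rng.random(), rng.random()]
    i = opportunistic_pick(perf, offsets)
    served[i] += 1
    update_offsets(offsets, i, targets)
shares = [s / slots for s in served]
print(shares)
```

Within each user's granted time share, the slot still goes to the user whose current channel makes it most valuable, which is where the opportunistic gain comes from.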


Journal ArticleDOI
TL;DR: Performance evaluation results demonstrate that the analytically tuned FCS algorithms provide robust transient and steady state performance guarantees for periodic and aperiodic tasks even when the task execution times vary by as much as 100% from the initial estimate.
Abstract: We develop Feedback Control real-time Scheduling (FCS) as a unified framework to provide Quality of Service (QoS) guarantees in unpredictable environments (such as e-business servers on the Internet). FCS includes four major components. First, novel scheduling architectures provide performance control to a new category of QoS critical systems that cannot be addressed by traditional open loop scheduling paradigms. Second, we derive dynamic models for computing systems for the purpose of performance control. These models provide a theoretical foundation for adaptive performance control. Third, we apply established control methodology to design scheduling algorithms with proven performance guarantees, which is in contrast with existing heuristics-based solutions relying on laborious design/tuning/testing iterations. Fourth, a set of control-based performance specifications characterizes the efficiency, accuracy, and robustness of QoS guarantees. The generality and strength of FCS are demonstrated by its instantiations in three important applications with significantly different characteristics. First, we develop real-time CPU scheduling algorithms that guarantee low deadline miss ratios in systems where task execution times may deviate from estimations at run-time. We solve the saturation problems of real-time CPU scheduling systems with a novel integrated control structure. Second, we develop an adaptive web server architecture to provide relative and absolute delay guarantees to different service classes with unpredictable workloads. The adaptive architecture has been implemented by modifying an Apache web server. Evaluation experiments on a testbed of networked Linux PCs demonstrate that our server provides robust relative/absolute delay guarantees despite instantaneous changes in the user population. Third, we develop a data migration executor for networked storage systems that migrates data on-line while guaranteeing specified I/O throughput for concurrent applications.

642 citations


Book ChapterDOI
TL;DR: The main conclusion is that intelligent scheduling algorithms in conjunction with token based rate control provide an efficient framework for supporting a mixture of real-time and non-real-time data applications in a single carrier.
Abstract: High Data Rate (HDR) technology has recently been proposed as an overlay to CDMA as a means of providing packet data service to mobile users. In this paper, we study various scheduling algorithms for a mixture of real-time and non-real-time data over HDR/CDMA and compare their performance. We study the performance with respect to packet delays and also average throughput, where we use a token based mechanism to give minimum throughput guarantees. We find that a rule which we call the exponential rule performs well with regard to both these criteria. (In a companion paper, we show that this rule is throughput-optimal, i.e., it makes the queues stable if it is feasible to do so with any other scheduling rule). Our main conclusion is that intelligent scheduling algorithms in conjunction with token based rate control provide an efficient framework for supporting a mixture of real-time and non-real-time data applications in a single carrier.

443 citations
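The exponential rule referred to above serves, in each slot, the user with the largest feasible rate weighted by an exponential of its head-of-line delay, so that long-delayed queues eventually dominate any rate advantage. A minimal sketch (the normalization follows the exponential-rule literature, but the parameter names and exact form here are our assumptions):

```python
import math

def exp_rule_pick(rates, delays, a, gamma):
    """Serve the user maximizing gamma_i * r_i * exp(a_i * W_i / (1 + sqrt(Wbar))),
    where r_i is the feasible rate, W_i the head-of-line delay, and Wbar the
    mean of a_i * W_i across users."""
    wbar = sum(ai * wi for ai, wi in zip(a, delays)) / len(delays)
    denom = 1.0 + math.sqrt(wbar)
    return max(range(len(rates)),
               key=lambda i: gamma[i] * rates[i] * math.exp(a[i] * delays[i] / denom))

# With equal delays the rule reduces to max-rate scheduling; a large
# queueing delay overrides a rate advantage.
print(exp_rule_pick([1.0, 2.0], [0.0, 0.0], [1.0, 1.0], [1.0, 1.0]),
      exp_rule_pick([1.0, 2.0], [10.0, 0.0], [1.0, 1.0], [1.0, 1.0]))
```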


Journal ArticleDOI
TL;DR: A user-level thread scheduler for shared-memory multiprocessors is presented; it achieves linear speedup whenever P is small relative to the parallelism T1/T∞.
Abstract: We present a user-level thread scheduler for shared-memory multiprocessors, and we analyze its performance under multiprogramming. We model multiprogramming with two scheduling levels: our scheduler runs at user level and schedules threads onto a fixed collection of processes, while below this level, the operating system kernel schedules processes onto a fixed collection of processors. We consider the kernel to be an adversary, and our goal is to schedule threads onto processes such that we make efficient use of whatever processor resources are provided by the kernel. Our thread scheduler is a non-blocking implementation of the work-stealing algorithm. For any multithreaded computation with work T1 and critical-path length T∞, and for any number P of processes, our scheduler executes the computation in expected time O(T1/PA + T∞P/PA), where PA is the average number of processors allocated to the computation by the kernel. This time bound is optimal to within a constant factor, and achieves linear speedup whenever P is small relative to the parallelism T1/T∞.
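The work-stealing discipline itself is easy to sketch: each process treats its own deque as a stack, and an idle process steals from the opposite end of a randomly chosen victim. The sequential toy below shows only the discipline; the paper's contribution is a non-blocking *concurrent* implementation of these deques, which this sketch does not attempt.

```python
import random
from collections import deque

def work_steal_run(tasks_per_proc, seed=0):
    """Round-robin simulation of work stealing: a process with work pops
    from the bottom (right end) of its own deque; a process with none
    steals from the top (left end) of a random non-empty victim."""
    rng = random.Random(seed)
    deques = [deque(ts) for ts in tasks_per_proc]
    done = []
    while any(deques):
        for dq in deques:
            if dq:
                done.append(dq.pop())                 # owner works bottom-up
            else:
                victims = [v for v in deques if v]
                if victims:
                    done.append(rng.choice(victims).popleft())  # steal from top
    return done

print(work_steal_run([[1, 2, 3], [], [4]]))
```

Owner and thieves operating on opposite ends is what keeps contention rare in the real concurrent version: they only conflict when a deque is nearly empty.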

Proceedings ArticleDOI
16 Jul 2001
TL;DR: Three types of collision-free channel access protocols for ad hoc networks are presented, derived from a novel approach to contention resolution that allows each node to elect deterministically one or multiple winners for channel access in a given contention context.
Abstract: Three types of collision-free channel access protocols for ad hoc networks are presented. These protocols are derived from a novel approach to contention resolution that allows each node to elect deterministically one or multiple winners for channel access in a given contention context (e.g., a time slot), given the identifiers of its neighbors one and two hops away. The new protocols are shown to be fair and capable of achieving maximum utilization of the channel bandwidth. The delay and throughput characteristics of the contention resolution algorithms are analyzed, and the performance of the three types of channel access protocols is studied by simulations.
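The deterministic election these protocols build on can be sketched as follows: each node computes a pseudo-random priority per (node, slot) pair and claims the slot only if it beats every known node within two hops. The hash function and tie-breaking details below are illustrative assumptions, not the paper's exact construction.

```python
import hashlib

def priority(node_id, slot):
    """Deterministic pseudo-random priority for a (node, slot) pair; the
    node id breaks ties so priorities are totally ordered."""
    h = hashlib.sha256(f"{node_id}:{slot}".encode()).digest()
    return (int.from_bytes(h[:8], "big"), node_id)

def wins_slot(node_id, slot, two_hop_neighbors):
    """A node elects itself winner of a slot iff its priority beats every
    known one- and two-hop neighbor -- the contention context in which a
    win guarantees collision-free channel access."""
    me = priority(node_id, slot)
    return all(me > priority(j, slot) for j in two_hop_neighbors)

# In a fully connected 5-node neighborhood, exactly one node wins each slot.
nodes = list(range(1, 6))
winners = [n for n in nodes if wins_slot(n, 7, [m for m in nodes if m != n])]
print(winners)
```

Because every node evaluates the same deterministic function over the same two-hop neighbor set, the winner is agreed upon without any message exchange in the slot itself.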


Book ChapterDOI
16 Jun 2001
TL;DR: Three areas of Maui scheduling are examined, specifically backfill, job prioritization, and fairshare; the goals of each component, the issues and corresponding design decisions, and the algorithms enabling the Maui policies are briefly discussed.
Abstract: The Maui scheduler has received wide acceptance in the HPC community as a highly configurable and effective batch scheduler. It is currently in use on hundreds of SP, O2K, and Linux cluster systems throughout the world, including a high percentage of the largest and most cutting-edge research sites. While the algorithms used within Maui have proven themselves effective, nothing has been published to date documenting these algorithms or the configurable aspects they support. This paper focuses on three areas of Maui scheduling, specifically, backfill, job prioritization, and fairshare. It briefly discusses the goals of each component, the issues and corresponding design decisions, and the algorithms enabling the Maui policies. It also covers the configurable aspects of each algorithm and the impact of various parameter selections.

Journal ArticleDOI
TL;DR: A new single channel, time division multiple access (TDMA)-based broadcast scheduling protocol, termed the Five-Phase Reservation Protocol (FPRP), is presented for mobile ad hoc networks and shows that the protocol works very well in all three aspects.
Abstract: A new single channel, time division multiple access (TDMA)-based broadcast scheduling protocol, termed the Five-Phase Reservation Protocol (FPRP), is presented for mobile ad hoc networks. The protocol jointly and simultaneously performs the tasks of channel access and node broadcast scheduling. The protocol allows nodes to make reservations within TDMA broadcast schedules. It employs a contention-based mechanism with which nodes compete with each other to acquire TDMA slots. The FPRP is free of the "hidden terminal" problem, and is designed such that reservations can be made quickly and efficiently with negligible probability of conflict. It is fully-distributed and parallel (a reservation is made through a localized conversation between nodes in a 2-hop neighborhood), and is thus scalable. A "multihop ALOHA" policy is developed to support the FPRP. This policy uses a multihop, pseudo-Bayesian algorithm to calculate contention probabilities and enable faster convergence of the reservation procedure. The performance of the protocol, measured in terms of scheduling quality, scheduling overhead and robustness in the presence of nodal mobility, has been studied via simulations. The results showed that the protocol works very well in all three aspects. Some future work and applications are also discussed.

Journal ArticleDOI
TL;DR: This work investigates how a genetic algorithm can be employed to solve the dynamic load-balancing problem whereby optimal or near-optimal task allocations can "evolve" during the operation of the parallel computing system.
Abstract: Load-balancing problems arise in many applications, but, most importantly, they play a special role in the operation of parallel and distributed computing systems. Load-balancing deals with partitioning a program into smaller tasks that can be executed concurrently and mapping each of these tasks to a computational resource such as a processor (e.g., in a multiprocessor system) or a computer (e.g., in a computer network). Developing strategies that map these tasks to processors in a way that balances out the load reduces the total processing time and improves processor utilization. Most of the research on load-balancing has focused on static scenarios that, in most of the cases, employ heuristic methods. However, genetic algorithms have gained immense popularity over the last few years as a robust and easily adaptable search technique. The work proposed here investigates how a genetic algorithm can be employed to solve the dynamic load-balancing problem. A dynamic load-balancing algorithm is developed whereby optimal or near-optimal task allocations can "evolve" during the operation of the parallel computing system. The algorithm considers other load-balancing issues such as threshold policies, information exchange criteria, and interprocessor communication. The effects of these and other issues on the success of the genetic-based load-balancing algorithm as compared with the first-fit heuristic are outlined.
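As a toy illustration of the genetic approach to load balancing (the operators and parameters below are our assumptions, not the paper's algorithm), a chromosome can encode a task-to-processor mapping and fitness can be the makespan of the resulting allocation:

```python
import random

def makespan(assign, cost, n_procs):
    """Load of the most loaded processor under a task-to-processor mapping."""
    loads = [0.0] * n_procs
    for task, proc in enumerate(assign):
        loads[proc] += cost[task]
    return max(loads)

def ga_balance(cost, n_procs, pop=30, gens=200, seed=1):
    """Toy GA: elitist survival of the better half, one-point crossover,
    occasional point mutation, minimizing makespan."""
    rng = random.Random(seed)
    n = len(cost)
    popn = [[rng.randrange(n_procs) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda a: makespan(a, cost, n_procs))
        survivors = popn[:pop // 2]
        children = []
        while len(children) < pop - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                    # point mutation
                child[rng.randrange(n)] = rng.randrange(n_procs)
            children.append(child)
        popn = survivors + children
    best = min(popn, key=lambda a: makespan(a, cost, n_procs))
    return best, makespan(best, cost, n_procs)

best, ms = ga_balance([5, 5, 5, 5], n_procs=2)
print(ms)
```

The dynamic version in the paper goes further, letting allocations keep evolving while the system runs and folding in thresholds and communication costs.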

Proceedings ArticleDOI
22 Apr 2001
TL;DR: A lazy online algorithm that varies transmission times according to backlog is devised and shown to be more energy efficient than a deterministic schedule that guarantees stability for the same range of arrival rates.
Abstract: The paper considers the problem of minimizing the energy used to transmit packets over a wireless link via lazy schedules that judiciously vary packet transmission times. The problem is motivated by the following key observation: in many channel coding schemes, the energy required to transmit a packet can be significantly reduced by lowering the transmission power and transmitting the packet over a longer period of time. However, information is often time-critical or delay-sensitive and transmission times cannot be made arbitrarily long. We therefore consider packet transmission schedules that minimize energy subject to a deadline or a delay constraint. Specifically, we obtain an optimal offline schedule for a node operating under a deadline constraint. An inspection of the form of this schedule naturally leads us to an online schedule which is shown, through simulations, to be energy-efficient. Finally, we relax the deadline constraint and provide an exact probabilistic analysis of our offline scheduling algorithm. We then devise a lazy online algorithm that varies transmission times according to backlog and show that it is more energy efficient than a deterministic schedule that guarantees stability for the same range of arrival rates.
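The key observation is easy to verify numerically: because transmission energy is convex and decreasing in transmission duration, equalized ("lazy") durations beat bursty ones for packets that must all be sent by the deadline. The energy model below is a standard convexity example for sending k bits over a noisy channel, not the paper's exact expression:

```python
def total_energy(durations, energy_per_packet):
    """Total energy of a schedule given per-packet transmission durations."""
    return sum(energy_per_packet(d) for d in durations)

def e(d, k=8.0):
    # Energy to send k bits in time d: d * (2**(k/d) - 1) -- convex and
    # decreasing in d (illustrative model, not the paper's).
    return d * (2 ** (k / d) - 1)

T, B = 10.0, 5
lazy = [T / B] * B        # equalized durations that just fill the deadline
bursty = [0.5] * B        # fast transmissions that finish long before T
print(total_energy(lazy, e), total_energy(bursty, e))
```

By Jensen's inequality, for any convex per-packet energy the equal split of the available time minimizes the total, which is why the offline optimum "stretches" transmissions as much as the deadline allows.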

Journal ArticleDOI
01 Jan 2001
TL;DR: This paper provides a comprehensive and in-depth survey on recent research in wireless scheduling and examines representative algorithms that support quality of service differentiation and guarantees for wireless data networks.
Abstract: Scheduling algorithms that support quality of service (QoS) differentiation and guarantees for wireless data networks are crucial to the development of broadband wireless networks. Wireless communication poses special problems that do not exist in wireline networks, such as time-varying channel capacity and location-dependent errors. Although many mature scheduling algorithms are available for wireline networks, they are not directly applicable in wireless networks because of these special problems. This paper provides a comprehensive and in-depth survey of recent research in wireless scheduling. The problems and difficulties in wireless scheduling are discussed. Various representative algorithms are examined, and their underlying ideas, pros, and cons are compared and analyzed. At the end of the paper, some open questions and future research directions are addressed.

Journal ArticleDOI
TL;DR: It is shown in several examples that although the optimal schedule may be very different from that of the classical version of the problem and the computational effort becomes significantly greater, polynomial-time solutions still exist.

Proceedings ArticleDOI
21 Oct 2001
TL;DR: The anticipatory disk scheduling framework is proposed as a simple, general, and transparent solution based on the non-work-conserving scheduling discipline; it is observed to yield large benefits on a range of microbenchmarks and real workloads.
Abstract: Disk schedulers in current operating systems are generally work-conserving, i.e., they schedule a request as soon as the previous request has finished. Such schedulers often require multiple outstanding requests from each process to meet system-level goals of performance and quality of service. Unfortunately, many common applications issue disk read requests in a synchronous manner, interspersing successive requests with short periods of computation. The scheduler chooses the next request too early; this induces deceptive idleness, a condition where the scheduler incorrectly assumes that the last request issuing process has no further requests, and becomes forced to switch to a request from another process. We propose the anticipatory disk scheduling framework to solve this problem in a simple, general and transparent way, based on the non-work-conserving scheduling discipline. Our FreeBSD implementation is observed to yield large benefits on a range of microbenchmarks and real workloads. The Apache webserver delivers between 29% and 71% more throughput on a disk-intensive workload. The Andrew filesystem benchmark runs faster by 8%, due to a speedup of 54% in its read-intensive phase. Variants of the TPC-B database benchmark exhibit improvements between 2% and 60%. Proportional-share schedulers are seen to achieve their contracts accurately and efficiently.
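The core anticipation decision can be sketched as a simple cost comparison. This is an illustration of the idea only; the FreeBSD implementation layers per-process statistics and timeouts on top of it:

```python
def should_wait(expected_think_time, seek_cost_other, seek_cost_same):
    """Anticipation as a cost comparison: after a synchronous read completes,
    keep the disk idle if (expected wait for the same process's next, nearby
    request + its short seek) is cheaper than immediately servicing another
    process's distant request."""
    return expected_think_time + seek_cost_same < seek_cost_other

# A short think time and a nearby next request favor waiting; a slow
# process does not justify holding the disk idle.
print(should_wait(1.0, 10.0, 2.0), should_wait(8.0, 5.0, 2.0))
```

Waiting deliberately makes the scheduler non-work-conserving, which is exactly what defeats the deceptive idleness described in the abstract.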

Proceedings ArticleDOI
30 May 2001
TL;DR: An architecture based on a feedback control loop that enforces desired relative delays among classes via dynamic connection scheduling and process reallocation is presented, together with the use of feedback control theory to design the feedback loop with proven performance guarantees.
Abstract: The paper presents the design, implementation, and evaluation of an adaptive architecture to provide relative delay guarantees for different service classes on Web servers under HTTP 1.1. The first contribution of the paper is the architecture based on a feedback control loop that enforces desired relative delays among classes via dynamic connection scheduling and process reallocation. The second contribution is our use of feedback control theory to design the feedback loop with proven performance guarantees. In contrast with ad hoc approaches that often rely on laborious tuning and design iterations, our control theory approach enables us to systematically design an adaptive Web server with established analytical methods. The design methodology includes using system identification to establish a dynamic model, and using the Root Locus method to design a feedback controller to satisfy performance specifications of a Web server. The adaptive architecture has been implemented by modifying an Apache Web server. Experimental results demonstrate that our adaptive server achieves robust relative delay guarantees even when workload varies significantly. Properties of our adaptive Web server include guaranteed stability, and satisfactory efficiency and accuracy in achieving the desired relative delay differentiation.
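The feedback loop can be illustrated with a toy discrete PI controller driving a hypothetical linear plant. The gains, the plant model, and the variable names below are our assumptions; the paper derives its gains via system identification and the Root Locus method:

```python
def pi_controller(kp, ki):
    """Discrete PI controller, returned as a closure over its integral state."""
    integral = 0.0
    def step(error):
        nonlocal integral
        integral += error
        return kp * error + ki * integral
    return step

# Hypothetical plant: the basic/premium delay ratio responds linearly to
# the fraction of server processes granted to the premium class.
ratio_target = 2.0
share = 0.2                               # initial premium-class process share
ctrl = pi_controller(kp=0.05, ki=0.02)
for _ in range(200):
    measured_ratio = 4.0 * share          # toy linear plant
    share += ctrl(ratio_target - measured_ratio)
    share = min(max(share, 0.05), 0.95)   # actuator saturation
print(round(4.0 * share, 2))
```

The integral term is what removes steady-state error when the workload shifts, mirroring the robustness-to-workload-variation claim in the abstract.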

Journal ArticleDOI
TL;DR: The algorithmic techniques used in FF are described in comparison to HSP, and their benefits are evaluated in terms of run-time and solution-length behavior.
Abstract: Fast-Forward (FF) was the most successful automatic planner in the Fifth International Conference on Artificial Intelligence Planning and Scheduling (AIPS '00) planning systems competition. Like the well-known HSP system, FF relies on forward search in the state space, guided by a heuristic that estimates goal distances by ignoring delete lists. It differs from HSP in a number of important details. This article describes the algorithmic techniques used in FF in comparison to HSP and evaluates their benefits in terms of run-time and solution-length behavior.

Proceedings ArticleDOI
01 Jan 2001
TL;DR: This paper considers the problem of exploiting multiuser diversity in MIMO systems, especially those with zero forcing linear receivers and proposes a number of different scheduling disciplines and compares them in terms of average throughput as a function of the number of users and number of antennas.
Abstract: MIMO communication links, i.e. those with multiple transmit and receive antennas, offer significant advantages in terms of rate and reliability. In cellular systems, however, gains may be limited due to fading and interference. One potential solution is known as multiuser diversity, in which a packet scheduler improves throughput by exploiting the independence of the fading and interference statistics of different users. In this paper, we consider the problem of exploiting multiuser diversity in MIMO systems, especially those with zero forcing linear receivers. We propose a number of different scheduling disciplines and compare them in terms of average throughput as a function of the number of users and number of antennas.

Journal ArticleDOI
TL;DR: A parallel and easily implemented hybrid optimization framework is presented, which combines a genetic algorithm with simulated annealing and applies the result to job-shop scheduling problems.

Proceedings ArticleDOI
Sem Borst1, Philip Whiting
28 Feb 2001
TL;DR: It is shown that the 'best' user may be identified as the maximum-rate user when the feasible rates are weighted with appropriately determined coefficients, so that the optimal strategy may be viewed as a revenue-based policy.
Abstract: The relative delay tolerance of data applications, together with the bursty traffic characteristics, opens up the possibility for scheduling transmissions so as to optimize throughput. A particularly attractive approach, in fading environments, is to exploit the variations in the channel conditions, and transmit to the user with the currently 'best' channel. We show that the 'best' user may be identified as the maximum-rate user when the feasible rates are weighted with some appropriately determined coefficients. Interpreting the coefficients as shadow prices, or reward values, the optimal strategy may thus be viewed as a revenue-based policy. Calculating the optimal revenue vector directly is a formidable task, requiring detailed information on the channel statistics. Instead, we present adaptive algorithms for determining the optimal revenue vector on-line in an iterative fashion, without the need for explicit knowledge of the channel behavior. Starting from an arbitrary initial vector, the algorithms iteratively adjust the reward values to compensate for observed deviations from the target throughput ratios. The algorithms are validated through extensive numerical experiments. Besides verifying long-run convergence, we also examine the transient performance, in particular the rate of convergence to the optimal revenue vector. The results show that the target throughput ratios are tightly maintained, and that the algorithms are well able to track changes in the channel conditions or throughput targets.

Proceedings ArticleDOI
Flavius Gruian1
06 Aug 2001
TL;DR: In this article, the authors propose an approach to reduce the energy consumption of hard real-time tasks with fixed priorities assigned in a rate-monotonic or deadline-monotonic manner.
Abstract: This paper addresses scheduling for reduced energy of hard real-time tasks with fixed priorities assigned in a rate-monotonic or deadline-monotonic manner. The approach described can be implemented entirely in the RTOS. It targets energy consumption reduction by using both on-line and off-line decisions, taken both at task level and at task-set level. We consider sets of independent tasks running on processors with dynamic voltage supplies (DVS). Taking into account the real behavior of a real-time system, which is often better than the worst case, our methods employ stochastic data to derive energy-efficient schedules. The experimental results show that our approach achieves greater energy reductions than other policies of the same class.
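The first-order observation DVS scheduling builds on can be shown in a few lines (a textbook CMOS energy model, not the paper's stochastic method): stretching execution toward the deadline lowers the required speed, and energy per cycle falls roughly with the square of speed because supply voltage scales with frequency.

```python
def stretch_speed(wcet, deadline):
    """Static slowdown: run just fast enough that the worst-case execution
    time (at full speed) still meets the deadline."""
    return min(1.0, wcet / deadline)

def energy(cycles, speed):
    # Dynamic power ~ V^2 * f with V ~ f, so energy per cycle ~ speed^2
    # (first-order CMOS model; leakage and overheads ignored).
    return cycles * speed ** 2

full = energy(1e6, 1.0)
slowed = energy(1e6, stretch_speed(wcet=5.0, deadline=10.0))
print(slowed / full)   # 0.25: half speed quarters the energy
```

The paper's on-line/off-line decisions amount to choosing such speeds per task and per task set, using observed (usually better-than-worst-case) execution times rather than the static WCET bound alone.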

Proceedings ArticleDOI
03 Dec 2001
TL;DR: A fast and simple algorithm for sharing resources in multiprocessor systems is presented, together with an innovative procedure for assigning preemption thresholds to tasks that guarantees the schedulability of hard real-time task sets while minimizing RAM usage.
Abstract: The research on real-time software systems has produced algorithms that make it possible to schedule system resources effectively while guaranteeing the deadlines of the application, and to group tasks into a very small number of non-preemptive sets which require much less RAM memory for stack. Unfortunately, up to now the research focus has been on time guarantees rather than on the optimization of RAM usage. Furthermore, these techniques do not apply to multiprocessor architectures, which are likely to be widely used in future microcontrollers. This paper presents a fast and simple algorithm for sharing resources in multiprocessor systems, together with an innovative procedure for assigning preemption thresholds to tasks. This makes it possible to guarantee the schedulability of hard real-time task sets while minimizing RAM usage. The experimental part shows the effectiveness of a simulated-annealing-based tool that finds a near-optimal task allocation. When used in conjunction with our preemption threshold assignment algorithm, the tool further reduces the RAM usage in multiprocessor systems.

Journal ArticleDOI
TL;DR: This paper develops a tabu search algorithm which integrates some important features including an efficient neighborhood, a dynamic tabu tenure mechanism, techniques for constraint handling, intensification and diversification, and large numbers of binary and ternary “logical” constraints.
Abstract: The daily photograph scheduling problem of earth observation satellites such as Spot 5 consists of scheduling a subset of mono or stereo photographs from a given set of candidates to different cameras. The schedule must maximize a profit function while satisfying a large number of constraints. In this paper, we first formulate the problem as a generalized version of the well-known knapsack model, which includes large numbers of binary and ternary “logical” constraints. We then develop a tabu search algorithm that integrates several important features, including an efficient neighborhood, a dynamic tabu tenure mechanism, constraint-handling techniques, and intensification and diversification mechanisms. Extensive experiments on a set of large, realistic benchmark instances show the effectiveness of this approach.
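A tabu search of this kind keeps a short-term memory of recent moves so the search can escape local optima. The skeleton below illustrates the mechanism on a plain 0/1 knapsack with single-flip neighborhoods, a fixed tabu tenure, and a standard aspiration criterion; it is a generic sketch, and the paper's neighborhood, dynamic tenure, and logical-constraint handling are considerably more elaborate.

```python
# Minimal tabu-search skeleton for a 0/1 knapsack-style selection problem.

def tabu_search(profits, weights, capacity, iters=200, tenure=5):
    n = len(profits)
    x = [0] * n                       # current selection (one bit per item)
    tabu_until = [0] * n              # iteration until which flipping i is tabu
    best, best_profit = x[:], 0

    def value(sel):
        w = sum(weights[i] for i in range(n) if sel[i])
        if w > capacity:
            return -1                 # infeasible neighbors are penalized
        return sum(profits[i] for i in range(n) if sel[i])

    for it in range(iters):
        cand = None
        for i in range(n):            # evaluate all single-flip neighbors
            x[i] ^= 1
            v = value(x)
            x[i] ^= 1
            aspiration = v > best_profit       # tabu override on a new best
            if (it >= tabu_until[i] or aspiration) and (cand is None or v > cand[1]):
                cand = (i, v)
        if cand is None:
            continue                  # every move is tabu this iteration
        i, v = cand
        x[i] ^= 1
        tabu_until[i] = it + tenure   # forbid undoing the move for a while
        if v > best_profit:
            best, best_profit = x[:], v
    return best, best_profit
```

On a small instance such as profits (10, 5, 8), weights (3, 2, 4) and capacity 5, the search first takes the most profitable item and then adds the second, reaching the optimum of 15; the tabu list keeps it from immediately undoing those moves.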

Patent
05 Jan 2001
TL;DR: In this paper, the authors propose a data flow operator (DFO) to dynamically partition row sources for parallel processing, based on the ability to parallelize a row source, the partitioning requirements of consecutive row sources and the entire row source tree.
Abstract: The present invention implements parallel processing in a database management system. It locates transaction and recovery information in one place and eliminates the need for read locks and two-phase commits. It also provides the ability to dynamically partition row sources for parallel processing. Parallelism is based on the ability to parallelize a row source, the partitioning requirements of consecutive row sources and of the entire row source tree, and any specification in the SQL statement. A query coordinator assumes control of the processing of an entire query and can execute serial row sources. Additional threads of control, the query servers, execute parallel operators. Parallel operators are called data flow operators (DFOs). A DFO is represented as a structured query language (SQL) statement and can be executed concurrently by multiple processes, or query slaves. A central scheduling mechanism, the data flow scheduler, controls the parallelized portion of an execution plan and becomes invisible for serial execution. Table queues are used to partition and transport rows between sets of processes. Node linkages provide the ability to divide the plan into independent lists, each of which can be executed by a set of query slaves. The invention maintains a bit vector that a subsequent producer uses to determine whether any rows need to be produced for its consumers, and it uses states, together with a count of the slaves that have reached those states, to perform its scheduling tasks.
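A table queue, at its simplest, is a hash-partitioned hand-off of rows from one set of slaves to another, and the bit vector then records which partitions actually received rows so a later producer can skip work for empty ones. The toy sketch below uses assumed names and integer keys (Python's `hash` is stable for integers); it illustrates the concept only, not the patented mechanism.

```python
# Toy sketch of a "table queue": producers hash each row's partitioning
# key to route it to one of the consumer slaves.

def table_queue(rows, key, n_consumers):
    """Partition rows among n_consumers by hashing the partitioning key."""
    queues = [[] for _ in range(n_consumers)]
    for row in rows:
        queues[hash(row[key]) % n_consumers].append(row)
    return queues

def needed_partitions(queues):
    """Bit vector of partitions that actually received rows, so a
    subsequent producer can skip producing for the empty ones."""
    return [bool(q) for q in queues]
```

If all keys hash to one consumer, the bit vector comes back as `[True, False, ...]` and downstream work for the empty partitions can be skipped entirely.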

Journal ArticleDOI
TL;DR: A generation scheme for precedence constraints that achieves a target density which is uniform in the precedence constraint graph and a generation scheme that explicitly considers the correlation of routings in a job shop is presented.
Abstract: The operations research literature provides little guidance about how data should be generated for the computational testing of algorithms or heuristic procedures. We discuss several widely used data generation schemes, and demonstrate that they may introduce biases into computational results. Moreover, such schemes are often not representative of the way data arises in practical situations. We address these deficiencies by describing several principles for data generation and several properties that are desirable in a generation scheme. This enables us to provide specific proposals for the generation of a variety of machine scheduling problems. We present a generation scheme for precedence constraints that achieves a target density which is uniform in the precedence constraint graph. We also present a generation scheme that explicitly considers the correlation of routings in a job shop. We identify several related issues that may influence the design of a data generation scheme. Finally, two case studies illustrate, for specific scheduling problems, how our proposals can be implemented to design a data generation scheme.
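The pitfall the authors highlight can be seen with the naive per-pair generator below: sampling each arc independently with probability p controls the density of the generated arcs, but not the density of the transitively closed precedence graph that actually constrains a schedule. This is a baseline sketch for contrast, not the paper's scheme, and all names are assumptions.

```python
import random

# Naive random precedence-constraint generator: each ordered pair (i, j)
# with i < j gets an arc independently with the target probability.
# Orienting arcs from lower to higher index keeps the graph acyclic.

def random_precedence(n_jobs, density, seed=0):
    rng = random.Random(seed)
    arcs = set()
    for i in range(n_jobs):
        for j in range(i + 1, n_jobs):
            if rng.random() < density:
                arcs.add((i, j))
    return arcs

def transitive_closure(n_jobs, arcs):
    """Reachability closure (Floyd-Warshall style): the constraints a
    schedule must really satisfy, including implied ones."""
    reach = [[False] * n_jobs for _ in range(n_jobs)]
    for i, j in arcs:
        reach[i][j] = True
    for k in range(n_jobs):
        for i in range(n_jobs):
            if reach[i][k]:
                for j in range(n_jobs):
                    if reach[k][j]:
                        reach[i][j] = True
    return {(i, j) for i in range(n_jobs) for j in range(n_jobs) if reach[i][j]}
```

Even a 3-job chain {(0, 1), (1, 2)}, generated at a nominal density well below 1, implies the extra constraint (0, 2) in its closure, which is why a scheme that targets density in the closed graph behaves differently from per-pair sampling.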