
Showing papers on "Scheduling (computing) published in 1992"


Journal ArticleDOI
TL;DR: The stability of a queueing network with interdependent servers is considered; a policy is obtained which is optimal in the sense that its stability region is a superset of the stability region of every other scheduling policy, and this stability region is characterized.
Abstract: The stability of a queueing network with interdependent servers is considered. The dependency among the servers is described by the definition of their subsets that can be activated simultaneously. Multihop radio networks provide a motivation for the consideration of this system. The problem of scheduling the server activation under the constraints imposed by the dependency among servers is studied. The performance criterion of a scheduling policy is its throughput that is characterized by its stability region, that is, the set of vectors of arrival and service rates for which the system is stable. A policy is obtained which is optimal in the sense that its stability region is a superset of the stability region of every other scheduling policy, and this stability region is characterized. The behavior of the network is studied for arrival rates that lie outside the stability region. Implications of the results in certain types of concurrent database and parallel processing systems are discussed.
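The optimal policy described here is what is now commonly called max-weight scheduling: at each decision instant, activate the feasible subset of servers with the largest total queue-length-weighted service rate. A minimal sketch of that selection step (the function name, example topology, and data are illustrative, not from the paper):

```python
def max_weight_activation(queues, rates, feasible_sets):
    """Return the activatable server subset maximizing
    sum(queue_length[i] * service_rate[i]) over its members."""
    return max(feasible_sets,
               key=lambda s: sum(queues[i] * rates[i] for i in s))

# Illustrative topology: servers 0 and 1 interfere (cannot be active
# together), while server 2 is compatible with either.
queues = [5, 2, 7]           # current backlogs
rates = [1.0, 1.0, 1.0]      # service rates
feasible = [(0, 2), (1, 2)]
print(max_weight_activation(queues, rates, feasible))  # (0, 2)
```

Repeating this selection every slot is what yields the superset stability region characterized in the paper.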

3,018 citations


Journal ArticleDOI
TL;DR: Problems that require large amounts of computer time under existing approaches are rapidly solved by the procedure using the dominance rules described, with a significant reduction in the variability of solution times.
Abstract: In this paper a branch-and-bound procedure is described for scheduling the activities of a project of the PERT/CPM variety subject to precedence and resource constraints where the objective is to minimize project duration. The procedure is based on a depth-first solution strategy in which nodes in the solution tree represent resource and precedence feasible partial schedules. Branches emanating from a parent node correspond to exhaustive and minimal combinations of activities, the delay of which resolves resource conflicts at each parent node. Precedence and resource-based bounds described in the paper are combined with new dominance pruning rules to rapidly fathom major portions of the solution tree. The procedure is programmed in the C language for use on both a mainframe and a personal computer. The procedure has been validated using a standard set of test problems with between 7 and 50 activities requiring up to three resource types each. Computational experience on a personal computer indicates that ...

612 citations


Posted Content
01 Jan 1992
TL;DR: It is shown that hard instances, far smaller in size than presumed in the literature, may not be solved to optimality even within a large amount of computation time.
Abstract: The paper describes an algorithm for the generation of a general class of precedence- and resource-constrained scheduling problems. Easy and hard instances for the single- and multi-mode resource-constrained project scheduling problem are benchmarked by using state-of-the-art branch-and-bound procedures. The strong impact of the chosen parametric characterization of the problems is shown via an in-depth computational study. The results demonstrate that the classical benchmark instances used by several researchers over decades belong to the subset of the very easy ones. In addition, it is shown that hard instances, far smaller in size than presumed in the literature, may not be solved to optimality even within a huge amount of computation time.

609 citations


Journal ArticleDOI
TL;DR: In this paper, search heuristics are developed for generic sequencing problems with emphasis on job shop scheduling, and two methods are proposed, both of which are based on novel definitions of solution spaces and of neighborhoods in these spaces.
Abstract: In this paper search heuristics are developed for generic sequencing problems with emphasis on job shop scheduling. The proposed methods integrate problem specific heuristics common to Operations Research and local search approaches from Artificial Intelligence in order to obtain desirable properties from both. The applicability of local search to a wide range of problems, and the incorporation of problem-specific information are both properties of the proposed algorithms. Two methods are proposed, both of which are based on novel definitions of solution spaces and of neighborhoods in these spaces. Applications of the proposed methodology are developed for job shop scheduling problems, and can be easily applied with any scheduling objective. To demonstrate effectiveness, the method is tested on the job shop scheduling problem with the minimum makespan objective. Encouraging results are obtained.

520 citations


Journal ArticleDOI
TL;DR: In this article, the problem of scheduling semiconductor burn-in operations is modeled as batch processing machines, where the processing time of a batch is equal to the largest processing time among all jobs in the batch.
Abstract: In this paper, we study the problem of scheduling semiconductor burn-in operations, where burn-in ovens are modeled as batch processing machines. A batch processing machine is one that can process up to B jobs simultaneously. The processing time of a batch is equal to the largest processing time among all jobs in the batch. We present efficient dynamic programming-based algorithms for minimizing a number of different performance measures on a single batch processing machine. We also present heuristics for a number of problems concerning parallel identical batch processing machines and we provide worst case error bounds.
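For one of the simpler cases in this setting, minimizing makespan on a single batch machine when all jobs are available at time zero, a well-known observation is that sorting jobs in nonincreasing order of processing time and filling full batches of B consecutive jobs is optimal, since the makespan is the sum of each batch's longest processing time. A sketch of that rule (illustrative, not one of the paper's dynamic programming algorithms):

```python
def full_batch_makespan(proc_times, B):
    """Minimum makespan on one batch machine of capacity B with all jobs
    available at time 0: sort processing times in nonincreasing order and
    sum every B-th element (each batch's longest job)."""
    p = sorted(proc_times, reverse=True)
    return sum(p[i] for i in range(0, len(p), B))

print(full_batch_makespan([3, 1, 4, 1, 5, 9], B=2))  # 14, batches {9,5},{4,3},{1,1}
```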

433 citations


Journal ArticleDOI
TL;DR: A general model which combines batching and lot-sizing decisions with scheduling is proposed, and a review of research on this type of model is given.
Abstract: In many practical situations, batching of similar jobs to avoid set-ups is performed whilst constructing a schedule. On the other hand, each job may consist of many identical items. Splitting a job often results in improved customer service or in reduced throughput time. Thus, implicit in determining a schedule is a lot-sizing decision which specifies how a job is to be split. This paper proposes a general model which combines batching and lot-sizing decisions with scheduling. A review of research on this type of model is given. Some important open problems for which further research is required are also highlighted.

430 citations


Proceedings ArticleDOI
02 Dec 1992
TL;DR: A novel algorithm for servicing soft deadline aperiodic tasks in a real-time system in which hard deadline periodic tasks are scheduled using a fixed priority algorithm is presented and is proved to be optimal in the sense that it provides the shortest aperiodic response time among all possible aperiodic service methods.
Abstract: A novel algorithm for servicing soft deadline aperiodic tasks in a real-time system in which hard deadline periodic tasks are scheduled using a fixed priority algorithm is presented. This algorithm is proved to be optimal in the sense that it provides the shortest aperiodic response time among all possible aperiodic service methods. Simulation studies show that it offers substantial performance improvements over current approaches, including the sporadic server algorithm. Moreover, standard queuing formulas can be used to predict aperiodic response times over a wide range of conditions. The algorithm can be extended to schedule hard deadline aperiodics and to efficiently reclaim unused periodic service time when periodic tasks have stochastic execution times.

414 citations


Journal ArticleDOI
TL;DR: The scheduling schemes just discussed are extremes; between the two lie schemes that attempt to minimize the cumulative contribution of uneven processor finishing times and of scheduling overhead.
Abstract: To take advantage of the capability of parallel machines, application programs must contain sufficient parallelism, and this parallelism must be effectively scheduled on multiple processors. Loops without dependences among their iterations are a rich source of parallelism in scientific code. Restructuring compilers for sequential programs have been particularly successful in determining when loop iterations are independent and can be executed in parallel. Because of the prevalence of parallel loops, optimal parallel-loop scheduling has received considerable attention in both academic and industrial communities. The fundamental trade-off in scheduling parallel-loop iterations is that of maintaining balanced processor workloads vs. minimizing scheduling overhead. Consider, for example, a loop with N iterations that contain an IF statement. Depending on whether the body of the IF statement is executed, an iteration has LONG or SHORT execution time. If we naively schedule the iterations on P processors in chunks of N/P iterations, a strategy called static chunking (SC), a chunk of one processor may consist of iterations that are all LONG, while a chunk of another processor may consist of iterations that are all SHORT. Hence, different processors may finish at widely different times. Since the loop finishing time is equal to the latest finishing time of any of the processors executing the loop, the overall finishing time may be greater than optimal with SC. Alternatively, if we (also naively) schedule the iterations one at a time, a strategy called self-scheduling (SS), then there will be N scheduling operations. With SS, a processor obtains a new iteration whenever it becomes idle, so the processors finish at nearly the same time and the workload is balanced. Because of the scheduling overhead, however, the overall finishing time may be greater than optimal. The characteristics of the iterations determine which scheme performs better.
For instance, variable-length, coarse-grained iterations favor SS, while constant-length, fine-grained iterations favor SC. Even when iterations do not contain conditional statements, their running times are likely to be variable because of interference from their environment (other iterations, the operating system, and other programs). The scheduling schemes just discussed are extremes; between the two lie schemes that attempt to minimize the cumulative contribution of uneven processor finishing times and of scheduling overhead. Such schemes schedule iterations in chunks of sizes greater than one but less than N/P, where size is the number of iterations in the chunk. Both fixed-size and variable-size chunking schemes have been proposed. In …
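The spectrum the passage describes, from static chunking through intermediate chunk sizes down to self-scheduling, can be illustrated by the chunk sizes each strategy hands out. Guided self-scheduling, which dispatches chunks of remaining/P iterations, is one well-known intermediate scheme; this sketch is illustrative, not the paper's own code:

```python
def chunk_sizes(N, P, strategy):
    """Chunk sizes dispatched for N loop iterations on P processors:
    'sc'  static chunking       : one chunk of ~N/P iterations per processor
    'ss'  self-scheduling       : N chunks of one iteration each
    'gss' guided self-scheduling: next chunk = remaining/P (intermediate)"""
    if strategy == "sc":
        base, extra = divmod(N, P)
        return [base + (1 if i < extra else 0) for i in range(P)]
    if strategy == "ss":
        return [1] * N
    if strategy == "gss":
        sizes, remaining = [], N
        while remaining:
            c = max(1, remaining // P)
            sizes.append(c)
            remaining -= c
        return sizes
    raise ValueError(strategy)

print(chunk_sizes(100, 4, "sc"))       # [25, 25, 25, 25]
print(chunk_sizes(100, 4, "gss")[:5])  # [25, 18, 14, 10, 8]
```

The decreasing chunk sizes of the intermediate scheme spend most scheduling operations late in the loop, where small chunks help even out processor finishing times.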

357 citations


Journal ArticleDOI
TL;DR: A model to evaluate the performance of different combinations of synchronization mechanisms and scheduling policies leads to the conclusion that gang scheduling is required for efficient fine-grain synchronization on multiprogrammed multiprocessors.

308 citations


Journal ArticleDOI
TL;DR: A new procedure for scheduling projects where the availability of resources is constrained is presented; it outperforms the chosen heuristic rules and can generate near-optimal schedules.

268 citations


Proceedings ArticleDOI
01 Jul 1992
TL;DR: To the best of the authors' knowledge, this 4/3-competitive algorithm is the first randomized algorithm for the original, m-machine, on-line scheduling problem.
Abstract: We consider the on-line version of the original m-machine scheduling problem: given m machines and n jobs with positive real processing times, schedule the n jobs on the m machines so as to minimize the makespan, the completion time of the last job. In the on-line version, as soon as job j arrives, it must be assigned immediately to one of the m machines. We present two main results. The first is a (2 − ε)-competitive deterministic algorithm for all m. The competitive ratio of all previous algorithms approaches 2 as m → ∞. Indeed, the problem of improving the competitive ratio for large m had been open since 1966, when the first algorithm for this problem appeared. The second result is an optimal randomized algorithm for the case m = 2. To the best of our knowledge, our 4/3-competitive algorithm is the first randomized algorithm for the original, m-machine, on-line scheduling problem.
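For context, the 1966 baseline this paper improves on is Graham's list scheduling, which assigns each arriving job to the currently least-loaded machine and is (2 − 1/m)-competitive for online makespan. A sketch (illustrative; the paper's own algorithms are more involved):

```python
import heapq

def greedy_online_makespan(jobs, m):
    """Graham's list scheduling: place each arriving job on the
    currently least-loaded of m machines; returns the makespan."""
    heap = [(0.0, i) for i in range(m)]   # (load, machine id) min-heap
    for p in jobs:
        load, i = heapq.heappop(heap)     # least-loaded machine
        heapq.heappush(heap, (load + p, i))
    return max(load for load, _ in heap)

print(greedy_online_makespan([2, 3, 4, 5, 6], m=2))  # 12 (optimal offline: 10)
```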

Proceedings ArticleDOI
John Turek1, Joel L. Wolf1, Philip S. Yu
01 Jun 1992
TL;DR: This paper gives an algorithm that selects a family of candidate allotments of processors to tasks, thereby allowing us to use as a subroutine any algorithm A that "solves" the simpler multiprocessor scheduling problem in which the number of processors allotted to a task is fixed.
Abstract: A parallelizable task is one that can be run on an arbitrary number of processors with a running time that depends on the number of processors allotted to it. Consider a parallel system having m identical processors and n independent parallelizable tasks to be scheduled on those processors. The goal is to find (1) for each task j, an allotment of processors βj, and (2) overall, a nonpreemptive schedule assigning the tasks to the processors which minimizes the makespan, or latest task completion time. This multiprocessor scheduling problem is known to be NP-complete in the strong sense. We therefore concentrate on providing a heuristic that has polynomial running time with provable worst case bounds on the suboptimality of the solution. In particular, we give an algorithm that selects a family of (up to n(m − 1) + 1) candidate allotments of processors to tasks, thereby allowing us to use as a subroutine any algorithm A that "solves" the simpler multiprocessor scheduling problem in which the number of processors allotted to a task is fixed. Our algorithm has the property that for a large class of previously studied algorithms our extension will match the same worst case bounds on the suboptimality of the solution while increasing the computational complexity of A by at most a factor of O(nm). As consequences we get polynomial time algorithms for the following:

Journal ArticleDOI
TL;DR: This paper analyzes a class of two-machine batching and scheduling problems in which the batch processor plays an important role, presents polynomial procedures for some problems, proposes a heuristic, and establishes an upper bound on the worst case performance ratio of the heuristic for the NP-complete problem.
Abstract: We consider a situation in which the manufacturing system is equipped with batch and discrete processors. Each batch processor can process a limited number of jobs simultaneously. Once the process begins, no job can be released from the batch processor until the entire batch is processed. In this paper, we analyze a class of two-machine batching and scheduling problems in which the batch processor plays an important role. Specifically, we consider two performance measures: the makespan and the sum of job completion times. We analyze the complexity of this class of problems, present polynomial procedures for some problems, propose a heuristic, and establish an upper bound on the worst case performance ratio of the heuristic for the NP-complete problem. In addition, we extend our analysis to the case of multiple families and to the case of three-machine batching.

Proceedings ArticleDOI
01 Aug 1992
TL;DR: A parallel programming tool for scheduling static task graphs and generating the appropriate target code for message passing MIMD architectures and preliminary experiments show performance comparable to the "best" hand-written programs.
Abstract: We describe a parallel programming tool for scheduling static task graphs and generating the appropriate target code for message passing MIMD architectures. The computational complexity of the system is almost linear in the size of the task graph, and preliminary experiments show performance comparable to the "best" hand-written programs.

Journal ArticleDOI
TL;DR: An NP-completeness proof for the problem of minimizing the sum of job flow times subject to scheduled maintenance is provided and the shortest processing time (SPT) sequence is shown to have a worst case error bound of 2/7.
Abstract: In this paper, we investigate a single machine scheduling problem of minimizing the sum of job flow times subject to scheduled maintenance. We first provide an NP-completeness proof for the problem. This proof is much shorter than the one given in Adiri et al. [1]. The shortest processing time (SPT) sequence is then shown to have a worst case error bound of 2/7. Furthermore, an example is provided to show that the bound is tight. This example also serves as a counter-example to the 1/4 error bound provided in Adiri et al. [1].
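A minimal sketch of the setting, SPT on a single machine with one fixed, nonresumable maintenance window (a job that cannot finish before the window starts only after it). The function and example data are illustrative, not from the paper:

```python
def spt_with_maintenance(proc_times, maint_start, maint_len):
    """Total flow time (sum of completion times) of the SPT sequence on
    one machine with a fixed nonresumable maintenance window: a job that
    would straddle the window is instead started after it ends."""
    t, total = 0, 0
    for p in sorted(proc_times):          # shortest processing time first
        if t < maint_start and t + p > maint_start:
            t = maint_start + maint_len   # wait out the maintenance
        t += p
        total += t
    return total

print(spt_with_maintenance([2, 1, 3], maint_start=4, maint_len=2))  # 13
```

Here jobs 1 and 2 complete at times 1 and 3; job 3 would straddle the window [4, 6) and so runs from 6 to 9, giving total flow time 1 + 3 + 9 = 13.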

Journal ArticleDOI
TL;DR: A new formulation based on the treatment of the time window constraints as soft constraints that can be violated at a cost and heuristically decompose the problem into an assignment/clustering component and a series of routing and scheduling components is presented.
Abstract: The Vehicle Routing and Scheduling Problem with Time Window constraints is formulated as a mixed integer program, and optimization-based heuristics which extend the cluster-first, route-second algorithm of Fisher and Jaikumar are developed for its solution. We present a new formulation based on the treatment of the time window constraints as soft constraints that can be violated at a cost and we heuristically decompose the problem into an assignment/clustering component and a series of routing and scheduling components. Numerical results based on randomly generated and benchmark problem sets indicate that the algorithm compares favorably to state-of-the-art local insertion and improvement heuristics.

Proceedings ArticleDOI
04 May 1992
TL;DR: The author discusses how this channel can be closed and the performance effects of closing it; the lattice scheduler is introduced, and its use in closing the cache channel is demonstrated.
Abstract: The lattice scheduler is a process scheduler that reduces the performance penalty of certain covert-channel countermeasures by scheduling processes using access class attributes. The lattice scheduler was developed as part of the covert-channel analysis of the VAX security kernel. The VAX security kernel is a virtual-machine monitor security kernel for the VAX architecture designed to meet the requirements of the A1 rating from the US National Computer Security Center. After describing the cache channel, a description is given of how this channel can be exploited using the VAX security kernel as an example. The author discusses how this channel can be closed and the performance effects of closing the channel. The lattice scheduler is introduced, and its use in closing the cache channel is demonstrated. Finally, the work illustrates the operation of the lattice scheduler through an extended example and concludes with a discussion of some variations of the basic scheduling algorithm.

Proceedings ArticleDOI
01 Dec 1992
TL;DR: Simulation results show that, given a directed acyclic graph (DAG), the graph parallelism of the DAG can accurately predict the number of processors to be used such that a good scheduling length and a good resource utilization can be achieved simultaneously.
Abstract: The authors discuss applications of BTDH (bottom-up top-down duplication heuristic) to list scheduling algorithms (LSAs). There are two ways to use BTDH for LSAs. BTDH can be used with an LSA to form a new scheduling algorithm (LSA/BTDH), and it can be used as a pure optimization algorithm for an LSA (LSA-BTDH). BTDH has been applied with two well-known LSAs: the highest level first with estimated time (HLFET) and the earliest task first (ETF) heuristics. Simulation results show that, given a directed acyclic graph (DAG), the graph parallelism of the DAG can accurately predict the number of processors to be used such that a good scheduling length and a good resource utilization (or efficiency) can be achieved simultaneously. In terms of speedups, LSA/BTDH ≥ LSA-BTDH ≥ ETF ≥ HLFET. Experimental results of scheduling FFT programs, which are written in a single program multiple data (SPMD) programming approach, on NCUBE-2 are also presented. The results confirm the simulation results and show that the speedups of LSA/BTDH and LSA-BTDH are better than the speedups of LSAs.

Proceedings ArticleDOI
01 Oct 1992
TL;DR: New algorithms for transmission scheduling in multihop broadcast radio networks are presented, showing that, for both types of scheduling, the new algorithms (experimentally) perform consistently better than earlier methods.
Abstract: New algorithms for transmission scheduling in multihop broadcast radio networks are presented. Both link scheduling and broadcast scheduling are considered. In each instance scheduling algorithms are given that improve upon existing algorithms both theoretically and experimentally. Theoretically, it is shown that tree networks can be scheduled optimally, and that arbitrary networks can be scheduled so that the schedule is bounded by a length that is proportional to a function of the network thickness times the optimum. Previous algorithms could guarantee only that the schedules were bounded by a length no worse than the maximum node degree; the algorithms presented here thus represent a considerable theoretical improvement. Experimentally, a realistic model of a radio network is given and the performance of the new algorithms is studied. These results show that, for both types of scheduling, the new algorithms (experimentally) perform consistently better than earlier methods.

Journal ArticleDOI
TL;DR: This work considers the scheduling problem in which jobs with release dates and delivery times are to be scheduled on one machine, and presents a 4/3-approximation algorithm for the problem with precedence constraints among the jobs, and two polynomial approximation schemes for the problems without precedence constraints.
Abstract: We consider the scheduling problem in which jobs with release dates and delivery times are to be scheduled on one machine. We present a 4/3-approximation algorithm for the problem with precedence constraints among the jobs, and two polynomial approximation schemes for the problem without precedence constraints. At the core of each of the algorithms presented is Jackson's rule, a simple but seemingly robust heuristic for the problem.
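Jackson's rule itself is easy to state: whenever the machine becomes free, start the released job with the largest delivery time. A sketch for the objective of minimizing the maximum delivery completion time, max over jobs of C_j + q_j (illustrative, not the paper's refined algorithms):

```python
import heapq

def jacksons_rule(jobs):
    """jobs = [(r, p, q), ...] with release date r, processing time p,
    delivery time q. Whenever the machine frees up, start the released
    job with the largest q. Returns max over jobs of C_j + q_j."""
    jobs = sorted(jobs)              # by release date
    t, i, obj, n = 0, 0, 0, len(jobs)
    avail = []                       # max-heap on q via negation
    while i < n or avail:
        if not avail and t < jobs[i][0]:
            t = jobs[i][0]           # machine idles until next release
        while i < n and jobs[i][0] <= t:
            r, p, q = jobs[i]
            heapq.heappush(avail, (-q, p))
            i += 1
        neg_q, p = heapq.heappop(avail)
        t += p
        obj = max(obj, t - neg_q)    # completion time plus delivery time
    return obj

print(jacksons_rule([(0, 3, 2), (1, 2, 5), (4, 2, 0)]))  # 10
```

Because the job with q = 5 is not yet released at time 0, the rule commits to the q = 2 job first; the approximation schemes in the paper work around exactly this kind of forced early commitment.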

Journal ArticleDOI
TL;DR: In this article, it was shown that no online scheduling algorithm can guarantee a cumulative value greater than 1/4th the value obtainable by a clairvoyant scheduler, i.e. if a task request is successfully scheduled to completion, a value equal to the task's execution time is obtained; otherwise no value is obtained.
Abstract: With respect to on-line scheduling algorithms that must direct the service of sporadic task requests we quantify the benefit of clairvoyance, i.e., the power of possessing knowledge of various task parameters of future events. Specifically, we consider the problem of preemptively scheduling sporadic task requests in both uni- and multi-processor environments. If a task request is successfully scheduled to completion, a value equal to the task's execution time is obtained; otherwise no value is obtained. We prove that no on-line scheduling algorithm can guarantee a cumulative value greater than 1/4th the value obtainable by a clairvoyant scheduler; i.e., we prove a 1/4th upper bound on the competitive factor of on-line real-time schedulers. We present an on-line uniprocessor scheduling algorithm TD 1 that actually has a competitive factor of 1/4; this bound is thus shown to be tight. We further consider the effect of restricting the amount of overloading permitted (the loading factor), and quantify the relationship between the loading factor and the upper bound on the competitive factor. Other results of a similar nature deal with the effect of value densities (measuring the importance of a task). Generalizations to dual-processor on-line scheduling are also considered. For the dual-processor case, we prove an upper bound of 1/2 on the competitive factor. This bound is shown to be tight in the special case when all the tasks have the same density and zero laxity.

Patent
12 May 1992
TL;DR: The scheduler of the present invention comprises an integrated system of hardware and software which is integrated into the already existing training system as mentioned in this paper, and is embedded as a software subsystem in the training system, and is delivered on a type 80386 integrated circuit based computer element at each training site.
Abstract: A scheduling system and method for use with training systems. The exemplary embodiment of the scheduler is incorporated into an aircrew training system for a military aircraft. A training system for training aircrews involves the use of academic media such as classrooms, training devices such as ground-based flight simulation trainers, and training flights in the air. In addition, it involves a computer network having terminals located at a central site, a plurality of training sites, and other remote sites. The computer data base is located at a central site, and the training facilities are located at training sites. Typically, computer terminals are connected together in a computer network by both dedicated and dial-up telephone lines, and typically the network may employ Intel™ 80386 machines running UNIX™ V, release 3.2. The scheduler of the present invention comprises an integrated system of hardware and software which is integrated into the already existing training system. It is embedded as a software subsystem in the training system, and is delivered on a type 80386 integrated circuit based computer element at each training site.

Journal ArticleDOI
Soo-Mook Moon1, Kemal Ebcioglu
10 Dec 1992
TL;DR: A new algorithm for parallelization of sequential code that eliminates anti and output dependences by renaming registers on an as-needed basis during scheduling is described, and preliminary results on AIX utilities indicate that it requires significantly less compilation time than the percolation scheduling approach.
Abstract: We describe a new algorithm for parallelization of sequential code that eliminates anti and output dependences by renaming registers on an as-needed basis during scheduling. A dataflow attribute at the beginning of each basic block indicates what operations are available for moving up through this basic block. Scheduling consists of choosing the best operation from the set of operations that can move to a point, moving the instances of the operation to the point, making bookkeeping copies for edges that join the moving path but are not on it, and updating the dataflow attributes of basic blocks only on the paths that were traversed by the instances of the moved operations. The code motions are done globally without going through atomic transformations of percolation scheduling, for better efficiency. For performing the code motions, we use an intermediate representation that is directly executable as sequential RISC code, rather than VLIW code. As a result, the new algorithm can be used to generate parallelized code for multiple-ALU superscalar processors as well. The enhanced pipeline scheduling algorithm for software pipelining of arbitrary code is reformulated within the framework of the new sequential RISC representation. The new algorithm has been implemented, and preliminary results on AIX utilities indicate that it requires significantly less compilation time than the percolation scheduling approach.

Proceedings ArticleDOI
Kazutoshi Wakabayashi1, H. Tanaka
01 Jul 1992
TL;DR: An algorithm is proposed which generates a single finite state machine controller from parallel individual control sequences derived in the global parallelization process, which can parallelize multiple nests of conditional branches and optimize across the boundaries of basic blocks.
Abstract: The authors present a global scheduling method based on condition vectors. The proposed method exploits global parallelism. The technique can schedule operations independent of control dependencies. It transforms the control structure of the given behavior drastically, while preserving semantics to minimize the number of states in final schedule. The method can parallelize multiple nests of conditional branches and optimize across the boundaries of basic blocks. It can also optimize all possible execution paths. An algorithm is proposed which generates a single finite state machine controller from parallel individual control sequences derived in the global parallelization process. Experimental results prove that the global parallelization is very effective.

Patent
12 May 1992
TL;DR: In this article, an improved method of executing a plurality of computer application programs on a multicomputer is disclosed, which consists of an allocator and scheduler component, which comprises processing logic and data for implementing the task scheduler of the present invention.
Abstract: An improved method of executing a plurality of computer application programs on a multicomputer is disclosed. The present invention pertains to a task scheduling system in a multicomputer having nodes arranged in a network. The present invention comprises an allocator and scheduler component, which comprises processing logic and data for implementing the task scheduler of the present invention. The allocator and scheduler operates in conjunction with a partition to assign tasks to a plurality of nodes. A partition is an object comprising a plurality of items of information and optionally related processing functions for maintaining a logical environment for the execution of tasks of one or more application programs. Application programs are allowed to execute on one or more nodes of a partition. Moreover, a node may be assigned to more than one partition and more than one application program may be loaded on a single node. The allocator and scheduler provides allocator procedures used by application programs for identifying a node or group of nodes for inclusion in a partition. The allocator and scheduler also provides several data areas for the storage of information relevant to the allocation and scheduling of tasks. These data areas of the allocator and scheduler include a partition data area, an application data area, and a layer data area. The present invention provides a means and method for hierarchically linking application programs, layers, and partitions together to provide an optimal execution environment for the execution of a plurality of tasks in a multicomputer.

Proceedings ArticleDOI
01 Sep 1992
TL;DR: A set of architectural features and compile-time scheduling support referred to as sentinel scheduling is introduced, which provides an effective framework for compiler-controlled speculative execution that accurately detects and reports all exceptions.
Abstract: Speculative execution is an important source of parallelism for VLIW and superscalar processors. A serious challenge with compiler-controlled speculative execution is to accurately detect and report all program execution errors at the time of occurrence. In this paper, a set of architectural features and compile-time scheduling support referred to as sentinel scheduling is introduced. Sentinel scheduling provides an effective framework for compiler-controlled speculative execution that accurately detects and reports all exceptions. Sentinel scheduling also supports speculative execution of store instructions by providing a store buffer which allows probationary entries. Experimental results show that sentinel scheduling is highly effective for a wide range of VLIW and superscalar processors.

Proceedings ArticleDOI
02 Dec 1992
TL;DR: The proposed model for the analysis of processor scheduling policies is novel in that it incorporates minimum as well as maximum processing time requirements of tasks.
Abstract: The problem of scheduling a set of sporadic tasks that share a set of serially reusable, single unit software resources on a single processor is considered. The correctness conditions are that: each invocation of each task completes execution at or before a well-defined deadline; and a resource is never accessed by more than one task simultaneously. An optimal online algorithm for scheduling a set of sporadic tasks is presented. The algorithm results from the integration of a synchronization scheme for access to shared resources with the earliest deadline first algorithm. A set of relations on task parameters that are necessary and sufficient for a set of tasks to be schedulable is also derived. The proposed model for the analysis of processor scheduling policies is novel in that it incorporates minimum as well as maximum processing time requirements of tasks. The scheduling algorithm and the sporadic tasking model have been incorporated into an operating system kernel and used to implement several real-time systems.

Journal ArticleDOI
TL;DR: The modelling approach presented in this paper has been implemented in a commercial courier vehicle scheduling system and was judged to be ‘very useful’ by users in a number of different metropolitan areas in the United States.
Abstract: Many research papers have presented mathematical models for vehicle scheduling. Several of these models have been embedded in commercial decision support systems for intra-city vehicle scheduling for laundries, grocery stores, banks, express mail customers, etc. Virtually all of these models ignore the important issue of time-dependent travel speeds for intra-city travel. Travel speeds (and times) in nearly all metropolitan areas change drastically during the day because of congestion in certain parts of the city road network. The assumption of constant (time-independent) travel speeds seriously affects the usefulness of these models. This is particularly true when time windows (earliest and latest stop time constraints) and other scheduling issues are important. This research proposes a parsimonious model for time-dependent travel speeds and several approaches for estimating the parameters for this model. An example is presented to illustrate the proposed modelling approach. The issue of developing algorithms to find near-optimal vehicle schedules with time-dependent travel speeds is also discussed. The modelling approach presented in this paper has been implemented in a commercial courier vehicle scheduling system and was judged to be ‘very useful’ by users in a number of different metropolitan areas in the United States.
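One common way to make travel times time-dependent is a piecewise-constant speed model: the day is split into periods, each with its own speed, and a trip's duration is computed by stepping across period boundaries. This is an illustrative parameterization, not necessarily the paper's exact model.

```python
# Travel time for a trip of `distance` departing at time `depart`,
# under piecewise-constant speeds. breakpoints[i] is the start time of
# period i; speeds[i] is the speed in effect during that period.

def travel_time(depart, distance, breakpoints, speeds):
    t, remaining = depart, distance
    while remaining > 1e-9:
        # locate the period containing the current time t
        i = max(j for j, b in enumerate(breakpoints) if b <= t)
        end = breakpoints[i + 1] if i + 1 < len(breakpoints) else float("inf")
        reachable = speeds[i] * (end - t)   # distance coverable this period
        if reachable >= remaining:
            t += remaining / speeds[i]
            remaining = 0.0
        else:
            remaining -= reachable          # ride out the period, continue
            t = end
    return t - depart

# 60 km trip: congestion (30 km/h) until t=9.0, free flow (60 km/h) after.
# Departing at 8.0: one hour covers 30 km, the rest takes 0.5 h -> 1.5 h.
assert travel_time(8.0, 60.0, [0.0, 9.0], [30.0, 60.0]) == 1.5
```

Note how the same trip departing at 10.0 takes only 1.0 h, which is exactly the time-of-day sensitivity that constant-speed models cannot capture when time windows matter.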

Proceedings ArticleDOI
09 Jun 1992
TL;DR: Algorithms for scheduling a class of systems in which all the tasks execute on different processors in turn in the same order are described and a heuristic for the NP-hard general case is evaluated.
Abstract: Algorithms for scheduling a class of systems in which all the tasks execute on different processors in turn in the same order are described. This end-to-end scheduling problem is known as the flow-shop problem. Two cases in which the problem is tractable are presented, and a heuristic for the NP-hard general case is evaluated. The traditional flow-shop model is generalized in two directions. First, an algorithm for scheduling flow shops in which tasks can be serviced more than once by some processors is presented. Second, a heuristic algorithm for scheduling flow shops with periodic tasks is described. Scheduling systems with more than one flow shop are considered.
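The best-known tractable case of flow-shop scheduling is the two-machine shop, solved optimally by Johnson's classic rule (shown here for illustration; the paper presents its own algorithms): jobs with first-stage time no larger than second-stage time go first in increasing first-stage time, and the rest go last in decreasing second-stage time.

```python
# Johnson's rule for the two-machine flow shop (minimizes makespan).

def johnson(jobs):
    """jobs: dict name -> (p1, p2), processing times on machines 1 and 2.
    Returns a makespan-optimal job sequence."""
    front = sorted((j for j in jobs if jobs[j][0] <= jobs[j][1]),
                   key=lambda j: jobs[j][0])
    back = sorted((j for j in jobs if jobs[j][0] > jobs[j][1]),
                  key=lambda j: jobs[j][1], reverse=True)
    return front + back

def makespan(seq, jobs):
    """Completion time of the last job on machine 2 for sequence seq."""
    t1 = t2 = 0
    for j in seq:
        p1, p2 = jobs[j]
        t1 += p1                   # machine 1 runs jobs back to back
        t2 = max(t2, t1) + p2      # machine 2 waits for machine 1's output
    return t2

jobs = {"A": (3, 6), "B": (5, 2), "C": (1, 2)}
seq = johnson(jobs)                # ["C", "A", "B"], makespan 12
```

Checking the alternatives by hand confirms optimality here: for instance the sequence A, C, B yields a makespan of 13, one unit worse.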

Proceedings ArticleDOI
02 Dec 1992
TL;DR: An optimal online scheduling algorithm for overloaded systems is presented; it is optimal in the sense that it gives the best competitive factor possible relative to an offline (i.e., clairvoyant) scheduler, while still obtaining 100% of the clairvoyant value when the system is underloaded.
Abstract: An optimal online scheduling algorithm for overloaded systems is presented. It is optimal in the sense that it gives the best competitive factor possible relative to an offline (i.e., clairvoyant) scheduler. It also gives 100% of the value of a clairvoyant scheduler when the system is underloaded. In fact the performance guarantee of D^over is even stronger: D^over schedules to completion all tasks in underloaded periods and achieves at least 1/(1+√k)^2 of the value a clairvoyant algorithm can get during overloaded periods. The model accounts for different value densities and generalizes to soft deadlines.
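The overload guarantee is a simple closed form: with importance ratio k (the largest value density divided by the smallest), D^over obtains at least 1/(1+√k)^2 of the clairvoyant value during overloaded periods. A one-line helper makes the bound concrete.

```python
# Competitive-factor bound for D^over during overload, as a function of
# the importance ratio k of the task set.

import math

def dover_guarantee(k):
    """Fraction of the clairvoyant value D^over is guaranteed to obtain
    during overloaded periods, given importance ratio k >= 1."""
    return 1.0 / (1.0 + math.sqrt(k)) ** 2

# Uniform value density (k = 1) gives the well-known 1/4 bound.
assert dover_guarantee(1.0) == 0.25
```

The bound degrades gracefully as value densities spread out: at k = 4 the guarantee is 1/9, and it tends to zero only as k grows without bound.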