
Showing papers on "Task (computing) published in 1989"


Book
03 Jan 1989
TL;DR: In this paper, the problem of multiprogram scheduling on a single processor is studied from the viewpoint of the characteristics peculiar to the program functions that need guaranteed service, and it is shown that an optimum fixed priority scheduler possesses an upper bound to processor utilization which may be as low as 70 percent for large task sets.
Abstract: The problem of multiprogram scheduling on a single processor is studied from the viewpoint of the characteristics peculiar to the program functions that need guaranteed service. It is shown that an optimum fixed priority scheduler possesses an upper bound to processor utilization which may be as low as 70 percent for large task sets. It is also shown that full processor utilization can be achieved by dynamically assigning priorities on the basis of their current deadlines. A combination of these two scheduling techniques is also discussed.

5,397 citations
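The two bounds described above can be checked numerically. A minimal sketch (our own illustration, not the paper's code): the least upper bound for an optimum fixed-priority scheduler of n periodic tasks is n(2^(1/n) - 1), which falls toward ln 2 ≈ 0.693 for large task sets, while dynamic deadline-driven assignment succeeds whenever total utilization is at most 1.

```python
# Hypothetical illustration (not from the paper) of the two results above:
# the fixed-priority least upper bound n * (2^(1/n) - 1), and the fact that
# deadline-driven (EDF) assignment achieves full processor utilization.

def rm_utilization_bound(n: int) -> float:
    """Least upper bound on processor utilization for n fixed-priority tasks."""
    return n * (2 ** (1.0 / n) - 1)

def is_schedulable_edf(tasks) -> bool:
    """Deadline-driven scheduling works iff total utilization is at most 1."""
    return sum(c / t for c, t in tasks) <= 1.0

tasks = [(1, 4), (2, 6), (1, 8)]            # (computation time, period) pairs
utilization = sum(c / t for c, t in tasks)  # ~0.708
print(round(rm_utilization_bound(3), 3))    # bound for 3 tasks, above 0.708
print(is_schedulable_edf(tasks))            # True
```

For one task the bound is 1.0; it shrinks monotonically toward ln 2, which is where the "as low as 70 percent" figure comes from.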


13 Apr 1989
TL;DR: In this article, the authors develop efficient scheduling algorithms based on heuristic functions to schedule a set of tasks on a multiprocessor real-time system.
Abstract: Hard real-time systems require both functionally correct executions and results that are produced on time. This means that the task scheduling algorithm is an important component of these systems. In this paper, efficient scheduling algorithms based on heuristic functions are developed to schedule a set of tasks on a multiprocessor system. The tasks are characterized by worst-case computation times, deadlines and resource requirements. Starting with an empty partial schedule, each step of the search extends the current partial schedule with one of the tasks yet to be scheduled. The heuristic functions used in the algorithm actively direct the search for a feasible schedule, i.e., they help choose the task that extends the current partial schedule. Two scheduling algorithms are evaluated via simulation. For extending the current partial schedule, one of the algorithms considers, at each step of the search, all the tasks that are yet to be scheduled as candidates. The second algorithm is shown to be very effective when the maximum allowable scheduling overhead is fixed. This algorithm is hence appropriate for dynamic scheduling in real-time systems.

339 citations
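The search style described above can be sketched in a few lines. This is our own illustration, not the paper's algorithm: the heuristic here is simply earliest deadline first, whereas the paper's heuristic functions also weigh resource requirements, and the task format is an assumption.

```python
# Illustrative sketch: starting from an empty partial schedule, each step
# extends it with one unscheduled task chosen by a heuristic (here: earliest
# deadline), placing it on the earliest-available processor, and the search
# reports infeasibility as soon as an extension would miss a deadline.

import heapq

def schedule(tasks, m):
    """tasks: list of (name, worst_case_time, deadline); m processors."""
    procs = [0.0] * m                     # next-free time of each processor
    heapq.heapify(procs)
    partial = []                          # the growing partial schedule
    for name, c, d in sorted(tasks, key=lambda t: t[2]):  # heuristic choice
        start = heapq.heappop(procs)      # earliest-available processor
        finish = start + c
        if finish > d:                    # this extension is infeasible
            return False, partial
        heapq.heappush(procs, finish)
        partial.append((name, start, finish))
    return True, partial

ok, plan = schedule([("a", 2, 3), ("b", 2, 4), ("c", 3, 6)], m=2)
print(ok, plan)
```

A real search would backtrack or consider several candidates per step; the sketch keeps only the greedy core.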


Journal ArticleDOI
TL;DR: This paper introduces a dynamic strategy called WorkCrews for controlling the use of parallelism on small-scale, tightly-coupled multiprocessors and favors coarse-grained subtasks, which reduces further the overhead of task decomposition.
Abstract: In implementing parallel programs, it is important to find strategies for controlling parallelism that make the most effective use of available resources. In this paper, we introduce a dynamic strategy called WorkCrews for controlling the use of parallelism on small-scale, tightly-coupled multiprocessors. In the WorkCrew model, tasks are assigned to a finite set of workers. As in other mechanisms for specifying parallelism, each worker can enqueue subtasks for concurrent evaluation by other workers as they become idle. The WorkCrew paradigm has two advantages. First, much of the work associated with task division can be deferred until a new worker actually undertakes the subtask and avoided altogether if the original worker ends up executing the subtask serially. Second, the ordering of queue requests under WorkCrews favors coarse-grained subtasks, which further reduces the overhead of task decomposition.

107 citations
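The two advantages above can be made concrete with a toy model. The sketch below is our own single-threaded illustration of the idea, not the paper's implementation: a worker *offers* a subtask rather than packaging it eagerly; if no idle worker has claimed the offer by the time the result is needed, the offer is retracted and runs serially, so division cost is avoided, and idle workers claim the oldest (coarsest-grained) offer first.

```python
# Hypothetical sketch of the WorkCrew idea (names and API are our invention).

from collections import deque

class WorkCrew:
    def __init__(self):
        self.offers = deque()      # pending help requests

    def request_help(self, thunk):
        """Offer a subtask for concurrent evaluation; cheap to post."""
        self.offers.append(thunk)
        return thunk               # handle used later to retract or join

    def claim(self):
        """An idle worker takes the *oldest* offer, i.e. the coarsest subtask."""
        return self.offers.popleft() if self.offers else None

    def join(self, thunk):
        """If nobody claimed the offer, retract it and run serially."""
        if thunk in self.offers:
            self.offers.remove(thunk)
            return thunk()         # task division was avoided altogether
        return None                # claimed: result is produced elsewhere

crew = WorkCrew()
h = crew.request_help(lambda: sum(range(10)))
print(crew.join(h))                # no idle worker claimed it -> 45
```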


Journal ArticleDOI
TL;DR: A distributed program is modelled as a stochastic network of tasks related by their rendezvous requests, which is assumed to have maximum concurrency as if each task were executed on its own processor.

92 citations


Proceedings ArticleDOI
01 Aug 1989
TL;DR: An algorithm is presented for automatically detecting non-determinacy in parallel programs that utilize event style synchronization instructions, using the Post, Wait, and Clear primitives.
Abstract: One of the major difficulties of explicit parallel programming for a shared memory machine model is detecting the potential for nondeterminacy and identifying its causes. There will often be shared variables in a parallel program, and the tasks comprising the program may need to be synchronized when accessing these variables. This paper discusses this problem and presents a method for automatically detecting non-determinacy in parallel programs that utilize event style synchronization instructions, using the Post, Wait, and Clear primitives. With event style synchronization, especially when there are many references to the same event, the difficulty lies in computing the execution order that is guaranteed given the synchronization instructions and the sequential components of the program. The main result in this paper is an algorithm that computes such an execution order and yields a Task Graph upon which a nondeterminacy detection algorithm can be applied. We have focused on events because they are a frequently used synchronization mechanism in parallel versions of Fortran, including Cray [Cray87], IBM [IBM88], Cedar [GPHL88], and PCF Fortran [PCF88].

75 citations
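A much-simplified sketch of the core step above (our illustration, not the paper's algorithm): build a guaranteed-execution-order graph from sequential program order within each task plus Post-to-Wait synchronization edges, take its transitive closure, and flag two conflicting accesses as potentially nondeterminate when neither is guaranteed to precede the other.

```python
# Illustrative only: node names and the tiny two-task program are assumptions.

def guaranteed_order(edges, nodes):
    """Naive transitive closure: reach[u] = nodes guaranteed to run after u."""
    reach = {n: {n} for n in nodes}
    changed = True
    while changed:
        changed = False
        for u, v in edges:
            new = reach[v] - reach[u]
            if new:
                reach[u] |= new
                changed = True
    return reach

nodes = ["t1.write_x", "t1.post_e", "t1.write_y",
         "t2.wait_e", "t2.read_x", "t2.read_y"]
edges = [("t1.write_x", "t1.post_e"), ("t1.post_e", "t1.write_y"),  # task 1 order
         ("t2.wait_e", "t2.read_x"), ("t2.read_x", "t2.read_y"),    # task 2 order
         ("t1.post_e", "t2.wait_e")]                                # Post -> Wait

r = guaranteed_order(edges, nodes)
ordered = lambda a, b: b in r[a] or a in r[b]
print(ordered("t1.write_x", "t2.read_x"))   # ordered via the event: race-free
print(ordered("t1.write_y", "t2.read_y"))   # unordered: potential nondeterminacy
```

The hard part the paper addresses, which this sketch skips, is computing which Post can satisfy which Wait when the same event is referenced many times and may be Cleared.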


Patent
23 Oct 1989
TL;DR: A Unit of Work object class, as described in this paper, provides concurrent processing through Unit of Work levels and instances while maintaining the integrity of the data in the database; each new Unit of Work assigned to a task is an instance of the Unit of Work object class.
Abstract: A Unit of Work object class for an object oriented database management system provides concurrent processing through Unit of Work levels and instances while maintaining the integrity of the data in the database. Each new Unit of Work assigned to a task is an instance of the Unit of Work object class. A Unit of Work manager controls each step such that manipulation of the data occurs to the copies at that particular level for that particular instance. Only after all levels have been completed satisfactorily will a "Commit" occur to the data in the database. If completion is not satisfactory, rollback of the levels occurs, thus preserving data integrity. The Unit of Work manager can also switch control between Unit of Work instances, thus permitting simultaneous performance of tasks.

63 citations


Book
03 Jan 1989
TL;DR: In this paper, the authors present a scheduling algorithm which works dynamically and on loosely coupled distributed systems for tasks with hard real-time constraints; i.e., the tasks must meet their deadlines.
Abstract: Most systems which are required to operate under severe real-time constraints assume that all tasks and their characteristics are known a priori. Scheduling of such tasks can be done statically. Further, scheduling algorithms operating under such conditions are usually limited to multiprocessor configurations. The authors present a scheduling algorithm which works dynamically and on loosely coupled distributed systems for tasks with hard real-time constraints; i.e., the tasks must meet their deadlines. It uses a scheduling component local to every node and a distributed scheduling scheme which is specifically suited to hard real-time constraints and other timing considerations. Periodic tasks, nonperiodic tasks, scheduling overheads, communication overheads due to scheduling and preemption are all accounted for in the algorithm. Simulation studies are used to evaluate the performance of the algorithm.

63 citations


Patent
17 Jul 1989
TL;DR: A task queue is structured as a single-keyed indexed file in which the key has a most significant portion indicating a priority level and a less significant portion that is ordered with the loading of the tasks into the queue.
Abstract: A task queue is structured as a single-keyed indexed file in which the key has a most significant portion indicating a priority level and a less significant portion that is ordered with the loading of the tasks into the queue. For any given task record in the queue, the less significant portion of the key is determinable from a respective task identifier. Preferably the less significant portion of the key is a "time stamp" including the current date when the task was created and a representation of the data processor's internal 24 hour time clock, and a task identification number is formed by appending a node number to the time stamp in the event that the system has multiple processors capable of creating different tasks at the same time. This format of the single key causes an internal ordering of the records in the queue that is sequential with respect to the less significant portion of the key within blocks of records having the same priority. Therefore, due to the relationship between the key and the task identification number for each task record in the queue, it is possible to quickly search for the record having a requested task identification number. Conventional memory management facilities for accessing key-indexed files can be used for searching the queue. In this case the queue is searched by random or "key next" access to repeatedly step through the possible priority levels until a record having a matching key is found or the end of file is reached. When a record having a matching key is found, the requested task identification number is compared to the identification field of the record. If there is a match, the desired record has been found. If not, then searching continues by sequential access until there is a match of the identification numbers or until the keys no longer match.

40 citations
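The key layout above can be mimicked in memory. The sketch below is a hedged stand-in, using a sorted list instead of a keyed indexed file: the single key packs a priority (most significant) with a time stamp (less significant), so records sort by priority and then insertion order, and a record can be found from its task identifier by probing each priority level at the time-stamp position the identifier encodes.

```python
# Illustrative only: priority range, key shape, and single-node ids assumed.

import bisect

queue = []   # sorted list of (key, record); key = (priority, timestamp)

def enqueue(priority, timestamp, payload):
    task_id = timestamp        # single-node system: the id *is* the time stamp
    bisect.insort(queue, ((priority, timestamp),
                          {"id": task_id, "payload": payload}))
    return task_id

def find(task_id, levels=range(4)):
    """Step through the possible priority levels, keyed lookup at each one."""
    for p in levels:
        i = bisect.bisect_left(queue, ((p, task_id),))
        if i < len(queue) and queue[i][0] == (p, task_id) \
                and queue[i][1]["id"] == task_id:
            return queue[i][1]
    return None

enqueue(1, 100, "compile"); enqueue(0, 101, "interrupt"); enqueue(1, 102, "link")
print([k for k, _ in queue])   # priority first, then creation order within it
print(find(102)["payload"])
```

Because the identifier determines the less significant key portion, the search touches at most one record per priority level instead of scanning the whole queue.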


Book
03 Jan 1989
TL;DR: A task allocation model that allocates application tasks among processors in distributed computing systems satisfying: 1) minimum interprocessor communication cost, 2) balanced utilization of each processor, and 3) all engineering application requirements is presented.

29 citations


Journal ArticleDOI
01 Dec 1989
TL;DR: The optimization problem discussed in this paper is the translation of an SQL query into an efficient parallel execution plan for a multiprocessor database machine under the performance goal of reduced response times as well as increased throughput in a multiuser environment.
Abstract: The optimization problem discussed in this paper is the translation of an SQL query into an efficient parallel execution plan for a multiprocessor database machine under the performance goal of reduced response times as well as increased throughput in a multiuser environment. We describe and justify the most important research problems which have to be solved to achieve this task, and we explain our approach to solve these problems.

27 citations


Proceedings ArticleDOI
M.S. Lakshmi1, Philip S. Yu1
06 Feb 1989
TL;DR: The effectiveness of parallel processing of relational join operations is examined and the skew in the distribution of join attribute values and the stochastic nature of the task processing times are identified as the major factors that can affect the effective utilization of parallelism.
Abstract: The effectiveness of parallel processing of relational join operations is examined. The skew in the distribution of join attribute values and the stochastic nature of the task processing times are identified as the major factors that can affect the effective utilization of parallelism. When many small processors are used in the parallel architecture, the skew can result in some processors becoming sources of bottleneck while other processors are underutilized. Even in the absence of skew, the variations in the processing times of the parallel tasks belonging to a query can lead to high task synchronization delay and impact the maximum speedup achievable through parallel execution. Analytic expressions for join execution time are developed for different task time distributions with or without skew.
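The bottleneck effect described above follows from one observation: the elapsed time of a parallel join is the *maximum* of the per-processor times, so a skewed join-attribute distribution caps speedup even when total work is unchanged. The numbers below are illustrative, not the paper's.

```python
# Illustrative: speedup = serial time / parallel makespan (max processor time).

def speedup(work_per_processor):
    total = sum(work_per_processor)
    return total / max(work_per_processor)

even = [10.0] * 10              # no skew: each of 10 processors gets 10 units
skewed = [55.0] + [5.0] * 9     # one frequent join value loads one processor
print(speedup(even))            # ideal: 10x
print(speedup(skewed))          # same total work, but barely 1.8x
```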

Patent
Richard Hatle1
22 Dec 1989
TL;DR: In this article, an improved device driver for a real-time computer system operated by a non-multitasking operating system and controlling the operation of polled peripheral devices which do not have an interrupt generation capability is presented.
Abstract: An improved device driver for a real-time computer system operated by a non-multitasking operating system and controlling the operation of polled peripheral devices which do not have an interrupt generation capability. The improved device driver of the present invention operates on the general principle of releasing control back to the operating system before completion of a task when a peripheral device is not in a state of readiness to perform its task. The technical approach is a single entry software routine utilizing a state variable to keep track of the internal state of execution of the device driver, which relinquishes control back to the operating system while waiting for the device to become ready to respond, thus allowing the CPU to execute other tasks. Before releasing control to the operating system, the state controlled device driver sets up a system timer interrupt, or sets a system request bit, with a locally optimized time interval which will bring control back to the device driver to assure subsequent continuation and completion of the task.

Book ChapterDOI
01 Jan 1989
TL;DR: In this article, a more general feedback mechanism is proposed: when a customer completes his i-th service, he departs from the system with probability 1-p(i) and is fed back with probability p(i), and the resulting queueing model has the property that the joint queue-length distribution of type-i customers, i=1,2,⋯, is of product-form type.
Abstract: In many modern computer-communication systems, a job may be processed in several phases, or a job may generate new tasks. Such phenomena can be modeled by service systems with feedback. In the queueing literature, attention has been mainly devoted to single-server queues with so-called Bernoulli feedback: when a customer (task) completes his service, he departs from the system with probability 1-p and is fed back with probability p. In the present study a more general feedback mechanism is allowed: when a customer completes his i-th service, he departs from the system with probability 1-p(i) and is fed back with probability p(i). We mainly restrict ourselves to the case of a Poisson external arrival process and identically, negative exponentially, distributed service times at each service. The resulting queueing model has the property that the joint queue-length distribution of type-i customers, i=1,2,⋯, is of product-form type. This property is exploited to analyse the sojourn-time process.
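One immediate consequence of the feedback mechanism above (standard queueing reasoning, not a formula quoted from the paper): with feedback probability p(i) after the i-th service, the expected number of services per customer is 1 + p(1) + p(1)p(2) + ..., which reduces to 1/(1-p) under Bernoulli feedback.

```python
# Illustrative computation of the expected number of services per customer.

def expected_services(p, max_terms=10_000):
    """p: function i -> feedback probability after the i-th service."""
    total, survive = 1.0, 1.0
    for i in range(1, max_terms + 1):
        survive *= p(i)        # probability of being fed back i times in a row
        total += survive
        if survive < 1e-12:    # series has effectively converged
            break
    return total

print(expected_services(lambda i: 0.5))            # Bernoulli: 1/(1-0.5) = 2
print(expected_services(lambda i: 1.0 / (i + 1)))  # diminishing feedback
```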

Proceedings ArticleDOI
03 Jan 1989
TL;DR: An architecture is proposed that allows fast procedure calls, low-overhead task switches, and primitives that assist in queue-oriented intertask communications, by managing the registers as noncontiguous register windows whose granularity is hidden from the applications program.
Abstract: An architecture is proposed that allows fast procedure calls, low-overhead task switches, and primitives that assist in queue-oriented intertask communications. This is accomplished by managing the registers as noncontiguous register windows. The details of the register granularity are hidden from the applications program. The architecture is based on a VLSI CPU called the MULTIS, which is capable of handling the dynamically created data of multiple tasks in on-chip storage. This ability enables tasking systems to benefit from the use of large on-chip memories such as those found in RISC (reduced-instruction-set computer) technologies. Other features of the architecture include efficient interrupt handling and provision for register-based task local, procedure-global dynamic storage.

Proceedings ArticleDOI
05 Jun 1989
TL;DR: This method shows substantial execution time performance increases over other methods for problems where the required execution time is unpredictable, and is used for two application areas: distributed execution of recovery blocks and OR-parallelism in Prolog.
Abstract: The task of concurrently computing alternative solutions to a problem where only one of the solutions is needed is examined. In this case the rule for selecting between the solutions is faster first, where the first successful alternative is selected. For problems where the required execution time is unpredictable, this method shows substantial execution time performance increases over other methods. In order to test the utility of the design, it is used for two application areas: distributed execution of recovery blocks and OR-parallelism in Prolog. The authors present: (1) a model for selection of alternatives in a sequential setting; (2) a transformation that allows alternatives to execute concurrently; (3) a description of the semantics-preservation mechanism; and (4) parameterization of where the performance improvements can be expected. Additionally, examples of application areas for the method are given.

Patent
15 Nov 1989
TL;DR: In this article, the authors propose to equalize a load of a CPU by allowing each CPU to execute each task by a task scheduler by load information and task information and executing a load distribution of each CPU.
Abstract: PURPOSE: To equalize CPU load by having a task scheduler assign tasks to each CPU on the basis of load information and task information, thereby distributing the load across the CPUs. CONSTITUTION: A CPU load monitoring mechanism 5 derives the state and load factor of each CPU, from CPU1 to CPUn, at every prescribed period, and records this load information in a load information management table 8. A task scheduler 10 refers to the CPU states and load factors in the load information management table 8, and also to the task information in a task information management table 9, and assigns to a CPU whose load factor is high a task in which the ratio of I/O operation is large and the CPU use ratio is small. A CPU whose load factor is low is made to execute a task in which the ratio of I/O operation is small and the CPU use ratio is large, and a CPU waiting for execution is made to execute the task whose CPU use ratio is the largest. In this way, the CPU load can be equalized.
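The matching rule in this patent abstract can be sketched in a few lines. The data shapes and the one-to-one pairing below are our assumptions, not the patent's mechanism: heavily loaded CPUs receive I/O-bound tasks, lightly loaded CPUs receive CPU-bound tasks.

```python
# Illustrative sketch: pair CPUs sorted busiest-first with tasks sorted
# most-I/O-bound-first (lowest CPU use ratio first), so the idle CPU ends
# up with the most CPU-intensive task.

def assign(cpu_loads, tasks):
    """cpu_loads: {cpu: load factor in 0..1}; tasks: {task: CPU use ratio}."""
    cpus = sorted(cpu_loads, key=cpu_loads.get, reverse=True)
    order = sorted(tasks, key=tasks.get)
    return dict(zip(cpus, order))

loads = {"cpu1": 0.9, "cpu2": 0.4, "cpu3": 0.0}   # cpu3 is waiting for work
jobs = {"io_task": 0.1, "mixed": 0.5, "crunch": 0.95}
print(assign(loads, jobs))
```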

Patent
20 Jan 1989
TL;DR: Early start mode data transfer as mentioned in this paper is a method and apparatus for loosely coupling a plurality of processors in a multiprocessor system to perform a coordinated task, where the coordination of the task is implemented by a data transfer apparatus which coordinates the reading and writing of data files into and out of the data buffer (101) of the control unit (100).
Abstract: A method and apparatus for loosely coupling a plurality of processors in a multiprocessor system to perform a coordinated task. The coordination of the task is implemented by an early start mode data transfer apparatus which coordinates the reading and writing of data files into and out of the data buffer (101) of the control unit (100) so that a data file can be written into the data buffer (101) while another file is concurrently being read out of the data buffer (101). Thus, both the host computer (130) and the associated tape drive units (140) can be active at the same time that data files are being read from or written into the data buffer (101) of the tape control unit (100).

Patent
13 Jun 1989
TL;DR: In this article, a set of "tasks" are defined for changing the states of multiple-state resources and causing software resources to produce output data, and a sequence of tasks is produced -to control the systems so as to assure valid data collection and protect physical resources from abuse.
Abstract: Computer-controlled test and measurement systems, including resources having multiple states and resources having multiple inputs, are modeled as data flow diagrams of topologically interconnected resources. A set of "tasks" are defined for changing the states of multiple-state resources and causing software resources to produce output data. Methods and apparatus, including internal and external task ordering rules, are provided to automatically interleave such tasks and implement input-ordering restrictions. Thereby, a sequence of tasks is produced to control the systems so as to assure valid data collection and protect physical resources from abuse. Data structures are illustrated for implementing the invention in an object-oriented programming environment.

Patent
27 Jan 1989
TL;DR: In this paper, a scheduler checks whether or not the stack positions of tasks A-C are beyond the borders of the stack areas of tables 61-63 indicating the states of the tasks A -C when an interruption S1 for the tasks under control is initiated.
Abstract: PURPOSE: To prevent a system from running away by applying an interruption and stopping the system when first and second check means decide that a stack position is out of its stack area or that the border flag of a task is broken. CONSTITUTION: When an interruption S1 for the tasks A-C under control is initiated, a scheduler 5 checks whether or not the stack positions of tasks A-C are beyond the borders of the stack areas recorded in tables 61-63 indicating the states of the tasks A-C. When they are within range, it is checked whether or not the border flags of the stack areas 71-73 have been rewritten. When the stack position of a task A-C is beyond the border of its stack area, or when a border flag of the stack areas 71-73 has been rewritten, the system is stopped. Consequently, an incipient system runaway is precluded, and it is known which of the tasks A-C is broken.

DOI
01 Jan 1989
TL;DR: Rather than using the traditional approach of multiple computers cooperating on the solution to a problem, this method achieves a solution competitively, using "fastest first," where the first successful alternative is selected.
Abstract: We examine the task of concurrently computing alternative solutions to a problem. We restrict our interest to the case where only one solution is needed; in this case we need some rule for selecting between the solutions. We use "fastest first," where the first successful alternative is selected. For problems where the required execution time is unpredictable this method can show substantial execution time performance increases. These increases are dependent on the mean execution time of the alternatives, the fastest execution time, the overhead involved in concurrent computation, and the overhead of selecting and deleting alternatives. Rather than using the traditional approach of multiple computers cooperating on the solution to a problem, this method achieves a solution competitively. Among the problems with exploring multiple alternatives in parallel are side-effects and combinatorial explosion in the amount of state which must be preserved. These are solved by process management and an application of "copy-on-write" virtual memory management. The side effects resulting from interprocess communication are handled by a specialized message layer which interacts with process management. We show how the scheme for parallel execution can be applied to several application areas. The applications are distributed execution of recovery blocks, OR-parallelism in Prolog, and polynomial root-finding.
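The "fastest first" selection rule itself is easy to demonstrate with standard concurrency primitives. The sketch below is our illustration only; the dissertation's actual mechanism is process management with copy-on-write virtual memory, not Python threads.

```python
# Illustrative: start all alternatives concurrently, select the first one
# that completes successfully, discard the rest.

from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
import time

def fastest_first(alternatives):
    with ThreadPoolExecutor(max_workers=len(alternatives)) as pool:
        pending = {pool.submit(f) for f in alternatives}
        while pending:
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                if fut.exception() is None:   # first *successful* alternative
                    return fut.result()
    raise RuntimeError("all alternatives failed")

slow = lambda: (time.sleep(0.2), "slow")[1]
fast = lambda: (time.sleep(0.01), "fast")[1]
print(fastest_first([slow, fast]))
```

Note that the losing thread here merely finishes and is ignored; deleting alternatives cheaply, with their side effects undone, is exactly what the copy-on-write machinery in the dissertation is for.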

Journal ArticleDOI
TL;DR: A scheme for macropipelining an L-stage image-analysis algorithm into a message-passing Ns-processor multicomputer system in which Ns > L is presented, believing that this scheme yields effective architectures for high-speed processing of long sequences of images.
Abstract: We present a scheme for macropipelining an L-stage image-analysis algorithm into a message-passing Ns-processor multicomputer system in which Ns > L. The resulting architectures achieve high speeds in processing multiple images. Most image-processing applications consist of a sequence of tasks, e.g., preprocessing, detection, segmentation, feature extraction, and classification. This sequence lends itself to an assignment of the tasks to a series-connected set of processors, or pipelining of the tasks. We refer to this form of pipelining as macropipelining. To minimize the effects of throughput-limiting tasks, or bottlenecks, in this pipeline, we introduce a performance model that accounts for both the computation aspects and the communication aspects of parallel processing. With the help of this model, we assign the appropriate number of processors to each task so as to balance the workloads. We then generate a problem graph describing the relationships among the tasks. We use an estimator of the frame-time of the image-processing system as an objective function for choosing a mapping of the problem processing graph into a system graph. This estimator takes account of the computation times and communication intensities of the tasks in the problem graph, and it accounts for link contentions. To find an efficient mapping, we use a heuristic optimization in which possible bottlenecks are given high priority in the mapping procedure. We tested our macropipelining scheme on a target-recognition algorithm in a simulated hypercube computer system. The results support our belief that this scheme yields effective architectures for high-speed processing of long sequences of images.
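The workload-balancing step above, assigning an appropriate number of processors to each pipeline stage, can be sketched as a simple apportionment. The proportional rule below is our assumption; the paper uses its own performance model that also accounts for communication.

```python
# Illustrative: give each stage a processor count roughly proportional to its
# workload so that no stage becomes the throughput-limiting bottleneck, then
# hand leftover processors to whichever stage currently has the worst time.

def allocate(workloads, n_procs):
    total = sum(workloads)
    shares = [w * n_procs / total for w in workloads]
    alloc = [max(1, int(s)) for s in shares]   # every stage needs a processor
    while sum(alloc) < n_procs:
        i = max(range(len(alloc)), key=lambda i: workloads[i] / alloc[i])
        alloc[i] += 1
    return alloc

stages = [4.0, 10.0, 2.0]        # per-image work of each image-analysis stage
print(allocate(stages, 8))       # stage times become 2.0, 2.0, 2.0: balanced
```

With the allocation above, every stage takes the same time per image, so pipeline throughput is limited by that common stage time rather than by one overloaded stage.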

Patent
19 May 1989
TL;DR: In this article, an agent engine is used to intercept semantic commands sent from an action processor to a command processor, to prevent the semantic command from being executed by the agent engine.
Abstract: An application program (100) includes an action processor (101) which receives messages containing user syntactic actions. These actions are translated into semantic commands. The semantic commands are sent to a command processor (102) for execution. The preferred embodiment of the computing system additionally includes an agent engine (108). The agent engine (108) may be used to perform many functions. It may be used to receive semantic commands from an application (100), and to record the semantic commands for later playback. It may be used to send semantic commands from a task language file (131) to an application program (100) for execution by the command processor (102). It may be used to intercept semantic commands sent from the action processor (101) to the command processor (102). After the command is intercepted, the agent engine (108) may be used either to allow the semantic command to be executed or to prevent it from being executed.

Patent
31 Aug 1989
TL;DR: In this article, the authors propose to ensure the quick switch of tasks by saving and restoring a register with the use of a tag bit and a saving area address register and using the bus idle time during the execution of a switched task or a procedure or every time register reference and replacement instructions are issued.
Abstract: PURPOSE: To ensure quick switching of tasks by saving and restoring registers with the use of a tag bit and a saving-area address register, using the bus idle time during the execution of a switched-to task or procedure, or at each point when register reference and replacement instructions are issued. CONSTITUTION: The register reference/replacement instructions are carried out after a called task is started. In this case, the present register contents are saved into a saving area pointed to by a calling-side saving-area address register 21 as long as the tag bit of the relevant register is equal to 0. In addition, the data are read out of a saving area pointed to by a called-side saving-area address register 22 and the register is restored. In this way, registers are not saved and restored at the moment of the switch; the tasks are switched first, and the registers are saved and restored over the idle bus during the run of the switched-to task, or at the point when a register is referenced and replaced in task execution mode. As a result, tasks are switched quickly.

01 Jan 1989
TL;DR: In this paper, the authors consider parallel execution of structured jobs with real-time constraints in (possibly heterogeneous) multiprocessor systems, where a job is composed of a set of tasks and a partial order specifying the precedence constraints between the tasks.
Abstract: We consider parallel execution of structured jobs with real-time constraints in (possibly heterogeneous) multiprocessor systems. A job is composed of a set of tasks and a partial order specifying the precedence constraints between the tasks. The task processing times are random variables with known probability distribution functions. The interarrival times of the jobs are also random variables with arbitrary distributions. The real-time constraints are specified by reference times, also called soft real-time deadlines. In the discussion we assume first that all the jobs have the same task graph, i.e. the same task set and the same partial order. We assume that there is a predefined mapping from the set of tasks onto the set of machines, identical for all jobs, that allocates tasks to machines. We focus on dynamic scheduling policies which do not use information on the processing times of the tasks to be scheduled. The policies can be non-preemptive or preemptive-resume. For non-preemptive policies, we assume that task processing times are independently and identically distributed, whereas for the preemptive ones, we assume that they are independently and identically distributed and come from some specific distributions. Some of the results that are obtained generalize to the case where jobs have a random structure. This more difficult case is treated at the end of the paper. We are interested in such performance criteria as the number of jobs in the system, the throughput in terms of jobs, and the job lag times, which correspond to the differences between the reference times and the completion times of tasks or jobs. We show that FCFS policies, applied at task level (which imply FIFO for jobs), stochastically minimize the number of jobs and maximize the system throughput.
We establish that FCFS policies minimize the vector of the transient response times in the sense of Schur convex ordering, and minimize the stationary response times in the convex ordering sense. For real-time applications, we prove that within the class of local order preserving policies that is defined in the paper, the SRF (shortest reference first) and LRF (longest reference first) policies bound respectively from below and from above the lag-time vector of the first n jobs, in the Schur convex sense. This in turn yields an analogous convex ordering among the stationary job lag times.

Proceedings ArticleDOI
13 Dec 1989
TL;DR: An algorithm that is able to estimate adaptively the performance of a human operator in a series of overlapping tasks is presented, which has its foundations in the multiple resource pool model of human operator workload.
Abstract: An algorithm that is able to estimate adaptively the performance of a human operator in a series of overlapping tasks is presented. The algorithm has its foundations in the multiple resource pool model of human operator workload. Each task performed by the operator is split into a number of subtasks, with each subtask in turn modeled by a finite-impulse-response (FIR) filter channel. As each subtask is executed, the channel bandwidths are updated to provide the weights of the FIR filter and estimates of the correlation between the different channels. The correlation gives an indication of the ability to perform tasks simultaneously. The algorithm was tested in a real-time operator framework in which the operator had to perform different simple tasks simultaneously.

30 Jun 1989
TL;DR: The focus is on dynamic scheduling policies which do not use information on the processing times of the tasks to be scheduled; the policies can be non-preemptive or preemptive-resume.
Abstract: We consider parallel execution of structured jobs with real time constraints in (possibly heterogeneous) multiprocessor systems. A job is composed of a set of tasks and a partial order specifying the precedence constraints between the tasks. The task processing times are random variables with known probability distribution functions. The interarrival times of the jobs are also random variables with arbitrary distributions. The real time constraints are specified by reference times, also called soft real-time deadlines. In the discussion we assume first that all the jobs have the same task graph, i.e., the same task set and the same partial order. We assume that there is a predefined mapping from the set of tasks onto the set of machines, identical for all jobs, that allocates tasks to machines. We focus on dynamic scheduling policies which do not use information on the processing times of the tasks to be scheduled. The policies can be non-preemptive or preemptive-resume. For non-preemptive policies, we assume that task processing times are independently and identically distributed, whereas for the preemptive ones, we assume that they are independently and identically distributed and come from some specific distributions. Some of the results that are obtained generalize to the case where jobs have a random structure. This more difficult case is treated at the end of the paper.

Journal ArticleDOI
TL;DR: Introduces a methodology, and an architecture, which greatly reduce this overhead while maintaining the inherent advantages of the register window approach, and presents ways of implementing traditional stacks and queues, as well as hierarchical storage structures, using windows.
Abstract: The organization of large register banks into windows has been shown to be effective in enhancing the performance of sequential programs. One drawback of such an organization, which is of minor importance to sequential languages, is the overhead encountered when the register bank must be replaced during a task switch. With concurrent language paradigms, such as are found in Ada, Occam, and Modula-2, these switches will be more frequent. We introduce here a methodology, and an architecture, which greatly reduces this overhead while maintaining the inherent advantages of the register window approach. In addition, we present ways of implementing traditional stacks and queues, as well as hierarchical storage structures, using windows.
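The overhead argument can be made concrete with a toy cost model: if one task owns the whole register bank, a task switch spills and refills every window in use, whereas a bank partitioned among runnable tasks can switch with no register traffic at all. The window size and the partitioning scheme below are illustrative assumptions, not the paper's architecture.

```python
# Toy cost model (in registers moved) for two task-switch strategies
# on a windowed register bank. REG_PER_WINDOW is an assumed figure.

REG_PER_WINDOW = 16

def switch_cost_shared_bank(windows_in_use):
    """Conventional scheme: one task owns the whole bank, so a task
    switch must save and later restore every window currently in use."""
    return 2 * windows_in_use * REG_PER_WINDOW   # spill + refill

def switch_cost_partitioned(windows_in_use, resident=True):
    """Scheme in the spirit of the paper: the bank is partitioned so
    each runnable task keeps its own windows resident; switching to a
    resident task moves no registers."""
    return 0 if resident else 2 * windows_in_use * REG_PER_WINDOW
```

For a task using four windows the shared-bank switch moves 128 registers, while a switch to a resident task in the partitioned scheme moves none.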

Patent
Matoi Iizuka1, Hitoshi Kubo1
25 Jan 1989
TL;DR: In this paper, the authors propose to simply set up a series of processing function as a task and to make control easy by executing the exclusive control of a resource based upon processing ID.
Abstract: PURPOSE: To simply set up a series of processing functions as a task and to make control easy by executing the exclusive control of a resource based upon a processing ID. CONSTITUTION: When a user's main task 1 outputs a processing ID generating request to a processing ID generating mechanism 7, a file access control mechanism 3 receives the request and executes exclusive control based upon the processing ID. A resource name and the processing ID are registered in a file access control table 5. Even when the user's main task 1 requests processing included in a user's sub-task 2 and the sub-task 2 accesses a record locked by the main task 1, the sub-task can execute the requested processing without waiting for the resource, because the processing ID used by the sub-task 2 is the same as that of the main task 1. COPYRIGHT: (C)1990,JPO&Japio
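The exclusive-control idea can be sketched as a lock table keyed by resource name and owned by a processing ID: a main task and its sub-tasks share one ID, so a sub-task is never blocked by records its own main task already holds. Everything here (class and method names) is a hypothetical illustration of the mechanism, not the patented implementation.

```python
class FileAccessControl:
    """Toy model of exclusive control keyed by processing ID."""
    def __init__(self):
        self._table = {}                  # resource name -> processing ID

    def lock(self, resource, proc_id):
        """Grant access if the resource is free or already held
        under the same processing ID."""
        owner = self._table.get(resource)
        if owner is None or owner == proc_id:
            self._table[resource] = proc_id
            return True                   # granted (or already held)
        return False                      # held under another processing ID

    def unlock(self, resource, proc_id):
        """Release the resource, but only by its owning processing ID."""
        if self._table.get(resource) == proc_id:
            del self._table[resource]
```

A main task locking a record under ID 7 leaves a sub-task using the same ID free to proceed, while a request under a different ID is refused until the record is unlocked.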

01 Jan 1989
TL;DR: A "proof-of-concept" implementation of a dynamic scheduler that is a part of an image understanding task execution environment layered around the PASM parallel processor is completed.
Abstract: An Intelligent Operating System for Executing Image Understanding Tasks on a Reconfigurable Parallel Architecture {1,2}: We have completed a "proof-of-concept" implementation of a dynamic scheduler that is a part of an image understanding task execution environment layered around the PASM parallel processor. The environment is designed to facilitate "system prototyping" [3]: the experimental process of a user testing various strategies for performing a complex image understanding task by trying different component algorithms, different orderings of algorithms, and different strategies for controlling the selection and sequencing of algorithms. The system uses a database of execution characteristics of pre-written image processing routines, rule-based heuristics, a data dependency representation of the task to be executed, and the current system state to produce and continually update a schedule for the subtasks that comprise the overall task. Using characteristics such as number of processors, execution mode (SIMD, MIMD), input and output data format, and data allocation, the scheduler selects from
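A minimal sketch of characteristics-driven routine selection in the spirit of the description: pick the fastest pre-written implementation whose resource requirements fit the current system state, preferring the requested execution mode. The routine database and its field names are invented for illustration.

```python
# Hypothetical database of execution characteristics for one image
# processing operation; names, modes, and times are invented.
ROUTINES = [
    {"name": "smooth_simd",  "mode": "SIMD", "procs": 16, "time": 2.0},
    {"name": "smooth_mimd",  "mode": "MIMD", "procs": 8,  "time": 3.5},
    {"name": "smooth_small", "mode": "SIMD", "procs": 4,  "time": 9.0},
]

def select_routine(free_procs, preferred_mode):
    """Pick the fastest implementation whose processor requirement
    fits the current system state, preferring the requested mode."""
    feasible = [r for r in ROUTINES if r["procs"] <= free_procs]
    if not feasible:
        return None                        # no implementation fits right now
    preferred = [r for r in feasible if r["mode"] == preferred_mode]
    return min(preferred or feasible, key=lambda r: r["time"])
```

With 8 free processors and MIMD preferred the sketch picks the MIMD variant; with only 4 it falls back to the small SIMD routine.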

Journal ArticleDOI
TL;DR: The parallel execution of the reconstruction program is achieved with minimal code modifications by exploiting the fact that each processor runs a full operating system (VMS) and is able to do autonomous random access input/output.