
Showing papers on "Task (computing)" published in 1976


Proceedings ArticleDOI
29 Mar 1976
TL;DR: In this paper, the worst-case performance bound for LPT sequencing approaches unity approximately as 1 + 1/k, where k is the least number of tasks on any processor, or the number of tasks on the processor whose last task terminates the schedule.
Abstract: In this paper we shall generalize Graham's result so as to include a parameter characterizing the number of tasks assigned to processors by the LPT rule. The new result will show that the worst-case performance bound for LPT sequencing approaches unity approximately as 1+1/k, where k is the least number of tasks on any processor, or where k is the number of tasks on a processor whose last task terminates the schedule. Thus, we shall have a result very similar to the parameterized bounds for bin-packing heuristics [JDUGG]. We shall also obtain out of the analysis an alternate proof of Graham's result.
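
The LPT rule itself is simple: sort the tasks by decreasing processing time and place each one on the currently least-loaded processor. A minimal Python sketch of the rule follows; the task times and processor count are made-up illustrative values, not data from the paper. In the bound above, k would be read off the finished schedule as the number of tasks on the processor that completes last.

```python
# Minimal sketch of LPT (Longest Processing Time first) list scheduling.
# Task times and the processor count are illustrative values only.
import heapq

def lpt_schedule(task_times, m):
    """Assign tasks to m identical processors by the LPT rule.

    Returns (makespan, assignment), where assignment[i] is the list of
    task times placed on processor i.
    """
    # Min-heap of (current load, processor index); the least-loaded
    # processor is always popped first.
    loads = [(0, i) for i in range(m)]
    heapq.heapify(loads)
    assignment = [[] for _ in range(m)]
    for t in sorted(task_times, reverse=True):   # longest tasks first
        load, i = heapq.heappop(loads)
        assignment[i].append(t)
        heapq.heappush(loads, (load + t, i))
    makespan = max(sum(tasks) for tasks in assignment)
    return makespan, assignment

# Example: 7 tasks on 3 processors (illustrative numbers only).
print(lpt_schedule([5, 7, 3, 6, 2, 4, 4], 3))
```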

65 citations


Patent
Glenn Y. Louie
28 Jul 1976
TL;DR: In this patent, the dispatcher receives a plurality of dispatcher requests, determines the highest-priority request, and then selects the appropriate program routine; each routine is divided into segments, and the status of each routine is stored in registers.
Abstract: A digital processor programmed to perform multi-tasks which includes a hardware dispatcher for selecting tasks. The dispatcher receives a plurality of dispatcher requests and determines the highest priority request. The dispatcher then selects the appropriate program routine. Each routine is divided into segments, and the status of each routine is stored in registers. When a routine is selected, the appropriate segment in the routine is also selected. At the end of each segment the dispatcher requests are re-examined.
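
The selection logic the hardware dispatcher performs, pick the highest-priority pending request, run one segment of the corresponding routine from its saved status, then re-examine the requests, can be sketched in software. The routine names, priority values, and data structures below are hypothetical, chosen only to illustrate the cycle described in the abstract.

```python
# Software sketch of the dispatch cycle described in the patent:
# pick the highest-priority pending request, run one segment of its
# routine, then re-examine the requests. Names and priorities are
# hypothetical.

routines = {
    "io_service": {"priority": 0, "segment": 0},   # 0 = highest priority
    "timer_tick": {"priority": 1, "segment": 0},
    "background": {"priority": 2, "segment": 0},
}

def dispatch(pending):
    """Return the name of the highest-priority pending routine, or None."""
    candidates = [name for name in pending if name in routines]
    if not candidates:
        return None
    return min(candidates, key=lambda name: routines[name]["priority"])

def run_one_segment(name):
    """Advance the selected routine by one segment and save its status."""
    state = routines[name]
    print(f"running {name}, segment {state['segment']}")
    state["segment"] += 1      # saved status: where to resume next time

pending = {"background", "timer_tick"}
selected = dispatch(pending)   # dispatcher requests are re-examined
if selected:                   # at the end of every segment
    run_one_segment(selected)
```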

49 citations


Proceedings ArticleDOI
13 Oct 1976
TL;DR: An abstract, Markov-like model is used to describe the reliability behavior of SIFT, a fault-tolerant computer in which fault tolerance is achieved primarily by software mechanisms.
Abstract: The SIFT (Software Implemented Fault Tolerance) computer is a fault-tolerant computer in which fault tolerance is achieved primarily by software mechanisms. Tasks are executed redundantly on multiple, independent processors that are loosely synchronized. Each processor is multiprogrammed over a set of distinct tasks. A system of independently accessible busses interconnects the processors. When Task A needs data from Task B, each version of A votes, using software, on the data computed by the different versions of B. (A processor cannot write into another processor; all communication is accomplished by reading.) Thus, errors due to a malfunctioning processor or bus can be confined to the faulty unit and can be masked, and the faulty unit can be identified. An executive routine effects the fault location and reconfigures the system by assigning the tasks, previously assigned to the faulty unit, to an operative unit. Since fault-tolerant computers are used in environments where reliability is at a premium, it is essential that the software of SIFT be correct. The software is realized as a hierarchy of modules in a way that significantly enhances proof of correctness. The behavior of each module is characterized by a formal specification, and the implementation of the module is verified with respect to its specification and those of modules at lower levels of the hierarchy. An abstract, Markov-like model is used to describe the reliability behavior of SIFT. This model is formally related to the specifications of the top-most modules of the hierarchy; thus the model can be shown to describe accurately the behavior of the system. At the time of writing, the verification of the system is not complete. The paper describes the design of SIFT, the reliability model, and the approach to mapping from the system to the model.
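
The fault-masking step at the heart of SIFT is a software vote: each version of Task A reads the value computed by every redundant version of Task B and takes the majority. A small sketch of such a vote is given below; the replica outputs and the assumption of exactly three copies are illustrative, not taken from the paper.

```python
# Sketch of SIFT-style software voting: a task reads the value computed
# by each redundant copy of another task and masks a single faulty unit
# by majority vote. The replica outputs below are illustrative.
from collections import Counter

def vote(replica_outputs):
    """Return the majority value and the indices of disagreeing replicas."""
    counts = Counter(replica_outputs)
    majority, _ = counts.most_common(1)[0]
    faulty = [i for i, v in enumerate(replica_outputs) if v != majority]
    return majority, faulty

# Task B runs on three processors; suppose processor 2 has malfunctioned.
outputs_of_b = [42, 42, 17]
value, suspects = vote(outputs_of_b)
print(value, suspects)   # 42, [2] -- the error is masked and localized
```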

38 citations


Journal ArticleDOI
P. Kunz
TL;DR: A programmable processor, the 168/E, has been designed for use in the LASS multi-processor system; it has an execution speed comparable to the IBM 370/168 and uses the subsets of IBM 370 instructions appropriate to the LASS analysis task.

36 citations


Patent
28 Apr 1976
TL;DR: A device couples several data processing units to a central memory, each unit having means for exchanging data with peripheral devices and computing means; management means control the sequential execution of orders issued from the memory according to a hierarchical classification of the task lists for the exchange and computing processes, and switching devices, operated via decoding means on a signal produced by the processing units, divert execution according to the value of a binary digit associated with a task of a fresh, non-hierarchical list of actuatable tasks.
Abstract: A device for coupling several data processing units to a central memory, each processing unit being provided with means for exchanging data with peripheral devices and with computing means. The device includes management means for controlling the sequential execution of orders issued from the memory according to a hierarchical classification of the lists of tasks for the exchange processes and the computing process. It further includes switching devices, operated under control of means that decode a signal produced by the processing units, in accordance with the value of a binary digit associated with a task of a fresh list of actuatable tasks established in a non-hierarchical system.
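
Read as a scheduling policy, the abstract describes scanning task lists in a fixed hierarchical order (exchange processes before the computing process), with a per-task binary digit that can divert control to a separate, non-hierarchical list of actuatable tasks. The loose Python sketch below follows that reading; all list contents, names, and tie-breaking behavior are assumptions.

```python
# Loose sketch of the scheduling policy implied by the patent:
# exchange (I/O) task lists are scanned before the computing task list,
# and a flag bit on a task can divert control to a non-hierarchical list
# of actuatable tasks. All names and contents are hypothetical.

hierarchical_lists = [
    ["exchange_task_a", "exchange_task_b"],   # highest level: exchanges
    ["compute_task_x"],                       # lower level: computation
]
actuatable_tasks = ["fresh_task_1", "fresh_task_2"]  # non-hierarchical list
divert_bit = {"exchange_task_b": 1}                  # binary digit per task

def next_task(pending):
    """Pick the next pending task, honouring the hierarchy and divert bits."""
    for level in hierarchical_lists:
        for task in level:
            if task not in pending:
                continue
            if divert_bit.get(task):
                # The associated bit is set: switch to the fresh,
                # non-hierarchical list of actuatable tasks.
                return actuatable_tasks.pop(0) if actuatable_tasks else task
            return task
    return None

print(next_task({"exchange_task_b", "compute_task_x"}))
```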

18 citations


Journal ArticleDOI
TL;DR: In a functionally distributed computer system, the system function is partitioned into less complex functions which reside at decreasing functional hierarchy levels, and all software and hardware required to implement an identified function are confined to a node of the system.
Abstract: In a functionally distributed computer system, the system function is partitioned into less complex functions which reside at decreasing functional hierarchy levels. At some point in the partitioning process, all software and hardware required to implement an identified function are confined to a node of the system. The type of hardware elements and the form of the software required at the node are determined by the node function. This principle is illustrated in the case of a task scheduler for the common node of a distributed function laboratory computer system having a star-like configuration.
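
As a rough illustration of the principle, the common node of such a star configuration can be reduced to a pure task scheduler that queues work submitted by the satellite nodes. The sketch below is only a schematic reading of the abstract; the node names and the FIFO policy are assumptions.

```python
# Sketch of a star-configured distributed-function laboratory system:
# satellite nodes submit work to the common node, whose node function
# is task scheduling. Node names and the FIFO policy are assumptions.
from collections import deque

class CommonNode:
    """The hub of the star; its only function is queuing and dispatching tasks."""
    def __init__(self):
        self.ready = deque()

    def submit(self, satellite, task):
        self.ready.append((satellite, task))

    def schedule(self):
        """Dispatch the next task back to the satellite that owns it."""
        if self.ready:
            satellite, task = self.ready.popleft()
            print(f"dispatching {task} for node {satellite}")

hub = CommonNode()
hub.submit("spectrometer_node", "acquire_run_7")
hub.submit("plotter_node", "draw_histogram")
hub.schedule()
```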

10 citations


Journal ArticleDOI
01 Jul 1976
TL;DR: A computer program has been developed that provides a framework within which to develop and compare both analytic and simulation models; a general queuing network architecture has been adopted, in which the system is viewed as a network of servers preceded by queues.
Abstract: Analytic models potentially provide an elegant means of estimating the performance of computer systems at a rather nominal cost. The assumptions which must be made about a system in order to construct a mathematically tractable model, however, are usually severe enough to cast doubt on the validity of the results produced by such a model. Further, relaxation of an assumption often renders an analytic technique inapplicable, such that simulation becomes the preferred approach to system modeling. To investigate the boundary between analytic models and simulation models, a computer program has been developed which provides a framework within which to develop and compare both analytic and simulation models. A general queuing network architecture has been adopted. The program is extensible with respect to analytic techniques, queuing disciplines and probability density functions. Using this program, models of a lightly loaded batch multiprogramming system were constructed. The system was viewed as a network of servers preceded by queues. The central processing unit was one server and each I/O channel was treated as a separate server. The channels were organized in parallel with respect to each other, and in series with respect to the central processing unit. Tasks were viewed as tokens flowing through the network, being delayed by each server for queuing delays and service. Delays between task termination and initiation of a new task were modeled by additional servers operating in parallel to the I/O channel servers. An analytic solution was computed for this model for the steady state case with exponentially distributed service times and the first-come-first-served queuing discipline with unlimited capacity queues. Typical errors of 18% were observed for CPU utilization and I/O activity. The model was then modified to more realistically account for the behavior of partitions. The delay servers were modeled with unit capacity queues. This model was solved by simulation, yielding errors of 4% for CPU utilization and I/O activity.
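
For the steady-state case the program solves analytically (exponential service times, FCFS, unlimited-capacity queues), each server in such an open network can be treated as an independent M/M/1 queue. The sketch below computes per-server utilization and mean queue length in that spirit; the arrival rate, visit ratios, and service times are illustrative assumptions, not values from the paper.

```python
# Analytic sketch of an open queuing-network model of the kind described:
# the CPU and each I/O channel are treated as independent M/M/1 servers.
# Arrival rate, visit ratios, and service times are illustrative only.

def mm1_metrics(arrival_rate, mean_service_time):
    """Utilization, mean number in system, and mean residence time."""
    rho = arrival_rate * mean_service_time
    if rho >= 1.0:
        raise ValueError("server is saturated")
    n_mean = rho / (1.0 - rho)
    r_mean = mean_service_time / (1.0 - rho)
    return rho, n_mean, r_mean

task_arrival_rate = 0.5            # tasks per second entering the system
visit_ratios = {"cpu": 10.0, "channel_1": 6.0, "channel_2": 4.0}
service_times = {"cpu": 0.02, "channel_1": 0.08, "channel_2": 0.10}

for server, visits in visit_ratios.items():
    rate_at_server = task_arrival_rate * visits
    rho, n, r = mm1_metrics(rate_at_server, service_times[server])
    print(f"{server}: utilization={rho:.2f}, mean queue={n:.2f}, "
          f"mean residence={r:.3f}s")
```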

4 citations


Journal ArticleDOI
TL;DR: The paper discusses the system programmer's environment and some experiences during the development of the operating system, which is structured exactly like a customer task with only a few special privileges.
Abstract: The Multiple Console Time Sharing System, MCTS, implemented at the General Motors Research Laboratories is described. An overview of the logical and physical structures of the operating system is presented. Significant concepts embodied in the operating system are discussed. The system multi-programs among a variable number of tasks, each in an independent virtual memory. A virtual input/output access method is the only one provided. All input/output is performed by polling; no interrupts are used. The operating system is structured exactly like a customer task with only a few special privileges. The MCTS operating system is conceptually simple. The system programmers write in a PL/I-like language called MALUS. The system programmers work in an environment very similar to the environment normally associated with an application programmer. The paper discusses the system programmer's environment and some experiences during the development of the operating system.
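
Two of the design points, performing all I/O by polling with no interrupts and multiprogramming among independent tasks, lend themselves to a small sketch of the supervisor loop. The device names, task names, and status flags below are invented; MALUS itself was PL/I-like, so Python here is purely illustrative.

```python
# Sketch of the supervisor loop implied by the abstract: the system
# multiprograms among tasks and performs all I/O by polling device
# status rather than taking interrupts. Devices, tasks, and statuses
# are invented for illustration.

devices = {"console_1": "idle", "console_2": "data_ready"}
tasks = ["editor_task", "compile_task"]   # each task has its own
                                          # virtual memory in MCTS

def poll_devices():
    """Check every device; return those with work pending (no interrupts)."""
    return [name for name, status in devices.items() if status == "data_ready"]

def run_for_a_while(task):
    """Stand-in for giving one task a slice of the processor."""
    print(f"dispatching {task}")

# One round of the (normally endless) main loop.
for task in tasks:
    for device in poll_devices():
        print(f"servicing {device} by polling")
        devices[device] = "idle"
    run_for_a_while(task)
```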