
Showing papers on "Task (computing) published in 1992"


Journal ArticleDOI
TL;DR: An approach to solving the task allocation problem using a technique known as simulated annealing is described; a distributed hard real-time architecture is defined, and new analysis is presented that enables timing requirements to be guaranteed.
Abstract: A distributed hard real-time system can be composed from a number of communicating tasks. One of the difficulties with building such systems is the problem of where to place the tasks. In general there are P^T ways of allocating T tasks to P processors, and the problem of finding an optimal feasible allocation (where all tasks meet physical and timing constraints) is known to be NP-hard. This paper describes an approach to solving the task allocation problem using a technique known as simulated annealing. It also defines a distributed hard real-time architecture and presents new analysis which enables timing requirements to be guaranteed.
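The annealing search over the P^T-sized allocation space can be sketched in a few lines. This is a generic simulated-annealing sketch, not the paper's algorithm: the cost function, neighbourhood move (reassigning one task), and geometric cooling schedule are illustrative stand-ins for the paper's feasibility and timing analysis.

```python
import math
import random

def anneal_allocation(num_tasks, num_procs, cost, iters=5000, t0=10.0, alpha=0.999):
    """Search the P**T allocation space for a low-cost assignment.

    `cost(alloc)` is a caller-supplied energy function; it should return a
    large penalty for allocations that violate placement or timing constraints.
    """
    rng = random.Random(0)
    alloc = [rng.randrange(num_procs) for _ in range(num_tasks)]
    cur_cost = cost(alloc)
    best, best_cost = list(alloc), cur_cost
    temp = t0
    for _ in range(iters):
        t = rng.randrange(num_tasks)              # perturb: move one task
        old = alloc[t]
        alloc[t] = rng.randrange(num_procs)
        new_cost = cost(alloc)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if new_cost <= cur_cost or rng.random() < math.exp((cur_cost - new_cost) / temp):
            cur_cost = new_cost
            if new_cost < best_cost:
                best, best_cost = list(alloc), new_cost
        else:
            alloc[t] = old                        # reject: undo the move
        temp *= alpha                             # geometric cooling
    return best, best_cost
```

With a cost function that returns the makespan (maximum per-processor load), the search settles on a balanced allocation.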

367 citations


Patent
Assaf Marron1
04 Dec 1992
TL;DR: In this article, a dynamic software update facility (DSUF) is installed in a data processing system for the purpose of non-disruptively replacing old operating system programs or modules with new updated versions thereof while providing continuous availability and operation of the system.
Abstract: A dynamic software update facility (DSUF) is installed in a data processing system for the purpose of non-disruptively replacing old operating system programs or modules with new updated versions thereof while providing continuous availability and operation of the system. The new versions are loaded into the system along with change instructions providing information controlling the update. Task or process control blocks contain markers indicating the corresponding tasks are safe or unsafe to run the new programs. The markers are set initially to unsafe. A change descriptor table is stored and contains control information derived from the change instructions. When the DSUF is activated, an interrupt handler is installed and traps are stored in the old programs at entry points and safety points therein. Entry point traps are tripped when a task or process enters the old program and interrupts are generated that are handled by the interrupt handler to route tasks which are unsafe to the old program and tasks which are safe to a new program. When all tasks are safe, the new programs replace the old programs. When safety point traps are tripped, a task or process may change its state from unsafe to safe when predetermined conditions are met.

193 citations


Patent
22 May 1992
TL;DR: In this paper, a stochastic priority-based scheduler for selecting executable tasks in a computer system is presented, which selects tasks on the basis of a random number weighted by task priority.
Abstract: A stochastic priority-based scheduler for selecting executable tasks in a computer system is disclosed. The stochastic priority-based scheduler selects tasks on the basis of a random number weighted by task priority. Since every task has a nonzero finite probability of being selected, the probability being proportional to the task priority, all tasks, even low-priority ones, have a chance of being selected, thus eliminating the lockout problem.
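The weighted random selection the patent describes can be illustrated in a few lines; `pick_task` and the priority values are hypothetical, and a real scheduler would draw the random number in its dispatch path.

```python
import random

def pick_task(tasks, rng=random):
    """Select one task with probability proportional to its priority.

    `tasks` maps task name -> positive priority (assumed non-empty); every
    task has a nonzero chance, so low-priority tasks cannot be locked out.
    """
    total = sum(tasks.values())
    r = rng.uniform(0, total)
    upto = 0.0
    for name, prio in tasks.items():
        upto += prio
        if r <= upto:
            return name
    return name  # guard against float rounding at the upper edge
```

Over many draws the selection frequencies track the priorities, yet the lowest-priority task is still chosen occasionally.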

93 citations


01 Jan 1992
TL;DR: A new debugging paradigm is proposed that lends itself readily to automation; two tasks in this paradigm translate into techniques called dynamic program slicing and execution backtracking, which are discussed along with how they can be automated.
Abstract: Programmers spend considerable time debugging code. Symbolic debuggers provide some help but the task still remains complex and difficult. Other than breakpoints and tracing, these tools provide little high-level help. Programmers must perform many tasks manually that the tools could perform automatically, such as finding which statements in the program affect the value of an output variable under a given testcase, what was the value of a given variable when the control last reached a given program location, and what does the program do differently under one testcase that it does not do under another. If the debugging tools provided explicit support for such tasks, the whole debugging process would be automated to a large extent. In this dissertation, we propose a new debugging paradigm that easily lends itself to automation. Two tasks in this paradigm translate into techniques called dynamic program slicing and execution backtracking. We discuss what these techniques are and how they can be automated. We present ways to obtain accurate dynamic slices of programs that may involve unconstrained pointers and composite variables. Dynamic slicing algorithms spanning a range of time-space-accuracy trade-offs are presented. We also propose ways in which multiple dynamic slices may be combined to provide further fault localization information. A new space-efficient approach to execution backtracking called "structured backtracking" is also proposed. Our experiment with the above techniques has also resulted in the development of a prototype tool, SPYDER, that explicitly supports them.
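A data-dependence-only backward dynamic slice, the simplest member of the family the dissertation studies, can be sketched over a recorded execution trace. This toy version ignores control dependences, pointers, and composite variables, all of which the dissertation's algorithms handle; the trace format is an assumption for illustration.

```python
def dynamic_slice(trace, criterion):
    """Backward dynamic slice over an execution trace.

    `trace` is a list of (line_no, defined_var, used_vars) tuples in
    execution order; `criterion` is the variable whose final value is being
    debugged. Returns the set of line numbers that actually influenced it.
    """
    relevant = {criterion}
    slice_lines = set()
    for line_no, defined, used in reversed(trace):
        if defined in relevant:
            slice_lines.add(line_no)      # this execution step mattered
            relevant.discard(defined)     # its definition is now explained
            relevant.update(used)         # ...in terms of what it read
    return slice_lines
```

For a trace of `a=1; b=2; c=a+5; b=b+1; d=c*2`, slicing on `d` keeps only the lines that fed it and drops the `b` computations.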

82 citations


Patent
27 Jan 1992
TL;DR: In this paper, a multi-task control device is used to switch between a plurality of tasks executed by the CPU by using means other than the CPU, including means for comparing priority orders between tasks and means for generating interrupts from the control device to the CPU to switch the task being executed in accordance with the result of the priority comparison.
Abstract: A computer peripheral device incorporating a multi-task control device, which is especially useful for programs controlling a microcomputer system. The multi-task control device effectively controls a plurality of tasks executed by the CPU by using means other than the CPU, including means for comparing priority orders between a plurality of tasks and means for generating interrupts from the control device to the CPU to switch the task being executed in accordance with the result of the priority comparison.

67 citations


Patent
01 Sep 1992
TL;DR: In this paper, an office information system with a plurality of work stations connected via a network to mutually exchange electronic mail is presented; each work station includes a control information definition unit for defining control information representing what kind of processing can be performed on mail after reception, a processing log memory unit for storing log information on the operations that have been performed on a received mail, and a control unit for guiding a receiver by referring to both the control information and the log information.
Abstract: An office information system has a plurality of work stations connected via a network to mutually exchange electronic mail. Each work station includes a control information definition unit for defining control information representing what kind of processing can be performed on mail after reception, a processing log memory unit for storing log information on the operations that have been performed on a received mail, a control unit for guiding a receiver by referring to both the control information and the log information, a task tracking instruction unit for inquiring about the processing status of a mail, and a task tracking unit for reporting the status in response to such a tracking instruction. The system stores and interprets control information relating to the flow of an OA object on the network, which has conventionally been held in the memory of an office worker, and guides the office worker to the work to be done. As a result, the chance for an OA object to stagnate at a certain location is reduced, and the circulation of OA objects on the network is improved overall.

65 citations


Book ChapterDOI
23 Mar 1992
TL;DR: A dynamic and load-balanced task-oriented database query processing approach that minimizes the completion time of user queries; it consists of three phases: task generation, task acquisition and execution, and task stealing.
Abstract: Most parallel database query processing methods proposed so far adopt the task-oriented approach: decomposing a query into tasks, allocating tasks to processors, and executing the tasks in parallel. However, this strategy may not be effective when some processors are overloaded with time-consuming tasks caused by some unpredictable factors such as data skew. In this paper, we propose a dynamic and load-balanced task-oriented database query processing approach that minimizes the completion time of user queries. It consists of three phases: task generation, task acquisition and execution, and task stealing. Using this approach, a database query is decomposed into a set of tasks. At run-time, these tasks are allocated dynamically to available processors. When a processor completes its assigned tasks and no more new tasks are available, it steals subtasks from other overloaded processors to share their load. A performance study was conducted to demonstrate the feasibility and effectiveness of this approach using a join query as an example. The techniques that can be used to select task donors from overloaded processors and to determine the amount of work to be transferred are discussed. Factors that may affect the effectiveness, such as the number of tasks into which a query is decomposed, are also investigated.
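The three-phase idea (generate tasks, execute them, steal when idle) can be illustrated with a small discrete-event simulation. The stealing policy here (take one task from the tail of the longest queue) is a simplified stand-in for the paper's donor-selection and transfer-amount heuristics, and tasks are reduced to plain execution costs.

```python
import heapq
from collections import deque

def completion_time(task_lists, steal=True):
    """Simulate processors draining their task queues.

    When `steal` is on, an idle processor takes one task from the tail of
    the queue of the processor with the most pending work. Returns the
    makespan (completion time of the whole query).
    """
    queues = [deque(ts) for ts in task_lists]
    events = [(0, i) for i in range(len(queues))]   # (time proc i is free, i)
    heapq.heapify(events)
    finish = 0
    while events:
        now, i = heapq.heappop(events)
        finish = max(finish, now)
        if not queues[i] and steal:
            donor = max(range(len(queues)), key=lambda j: len(queues[j]))
            if queues[donor]:
                queues[i].append(queues[donor].pop())  # steal from the tail
        if queues[i]:
            heapq.heappush(events, (now + queues[i].popleft(), i))
    return finish
```

With a skewed initial allocation such as `[[5, 5, 5, 5], [1], [1]]`, stealing halves the makespan from 20 to 10.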

65 citations


Patent
02 Mar 1992
TL;DR: In this article, a plurality of validation routines are used to validate the plurality of commands when the validation routines are called; each of the commands corresponds to a specific type of I/O operation and a specific one of the device memories to participate in the operation with the main memory.
Abstract: An I/O system including a processor, a multitasking operating system and DMA hardware efficiently controls a transfer of data between a main memory and memories of different types of devices by minimizing context switches between tasks and wait times of the tasks. A plurality of validation routines are used to validate a plurality of commands when the validation routines are called. Each of the commands corresponds to a specific type of I/O operation and a specific one of the device memories to participate in the I/O operation with the main memory. Each of the validation routines is device type specific and command type specific. A general routine responds to each of the commands by identifying and calling the validation routine which corresponds to the type of I/O operation and type of device which are specified in the command. The general routine initiates I/O hardware after the validation routine validates the command. After the I/O hardware completes the I/O operation, it signals a command completion routine which is command specific and device type specific. In response, the command completion routine signals to the general routine a state of the I/O operation. Each of the validation routines executes on the same task as the general routine to minimize context switches, and each of the command completion routines executes on a different task than the general routine to minimize wait time for the command completion routine.

61 citations


Patent
30 Mar 1992
TL;DR: In this paper, the authors propose a tool that comprises the first step of providing a first software component, serving as a timing element, for receiving global synchronization commands as input and issuing global simulation scheduler task dispatch commands as output.
Abstract: The tool comprises the first step of providing a first software component, serving as a timing element, for receiving global synchronization commands as input and issuing global simulation scheduler task dispatch commands as output. A second software component is provided, serving as a global simulation scheduler, for receiving the global simulation scheduler task dispatch commands as input, synchronizing discrete event model and continuous model task dispatch as a function of simulation time, and issuing local simulation scheduler task dispatch commands as output. At least a single third software component is provided, serving as a local simulation scheduler, for receiving the local simulation scheduler task dispatch commands as input and issuing local simulation task execution commands as output. The combination of these steps provides a processing environment wherein the local simulation task execution commands invoke user supplied simulation application tasks in a time synchronized manner.

60 citations


Journal ArticleDOI
TL;DR: The task assignment problem for a linear array network is first transformed into the two-terminal network flow problem, and then solved by applying the Goldberg-Tarjan (1987) network flow algorithm in time no worse than O(n^2 m^3 log n).
Abstract: The problem of assigning tasks to the processors of a distributed computing system such that the sum of execution and communication costs is minimized is discussed. This problem is known to be NP-complete in the general case, and thus intractable for systems with a large number of processors. H.S. Stone's (1977) network flow approach for a two-processor system is extended to the case for a linear array of any number of processors. The task assignment problem for a linear array network is first transformed into the two-terminal network flow problem, and then solved by applying the Goldberg-Tarjan (1987) network flow algorithm in time no worse than O(n^2 m^3 log n), where n and m are the number of processors and the number of tasks, respectively.
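Stone's two-processor construction, which this paper generalizes to linear arrays, can be sketched with a plain Edmonds-Karp max-flow (the paper itself uses the faster Goldberg-Tarjan algorithm). The graph encoding is standard: cutting the source edge of a task means running it on P2, cutting its sink edge means running it on P1, and communication edges are cut only when two tasks are separated. Node names are illustrative, and task names must not collide with the reserved "s"/"t" terminals.

```python
from collections import defaultdict, deque

def min_cut_assignment(exec_cost, comm, procs=("P1", "P2")):
    """Stone-style two-processor task assignment as a min cut.

    exec_cost[task] = (cost on P1, cost on P2); comm[(a, b)] = cost paid
    when tasks a and b sit on different processors.  Returns
    (total cost, {task: processor}).
    """
    res = defaultdict(lambda: defaultdict(int))   # residual capacities
    for task, (c1, c2) in exec_cost.items():
        res["s"][task] += c2        # cutting s->task = run task on P2
        res[task]["t"] += c1        # cutting task->t = run task on P1
    for (a, b), c in comm.items():
        res[a][b] += c
        res[b][a] += c
    total = 0
    while True:                     # Edmonds-Karp: shortest augmenting paths
        parent = {"s": None}
        q = deque(["s"])
        while q and "t" not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if "t" not in parent:
            break
        path, v = [], "t"
        while v != "s":
            path.append((parent[v], v))
            v = parent[v]
        bn = min(res[u][v] for u, v in path)      # bottleneck capacity
        for u, v in path:
            res[u][v] -= bn
            res[v][u] += bn
        total += bn
    # nodes still reachable from s in the residual graph land on P1
    side, q = {"s"}, deque(["s"])
    while q:
        u = q.popleft()
        for v, c in res[u].items():
            if c > 0 and v not in side:
                side.add(v)
                q.append(v)
    assign = {task: procs[0] if task in side else procs[1] for task in exec_cost}
    return total, assign
```

For two tasks that each strongly prefer a different processor, the min cut pays both cheap execution costs plus the communication cost of separating them.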

52 citations


Journal ArticleDOI
01 Nov 1992
TL;DR: It is shown that executing isotropic tasks in a minimum number of steps is equivalent to a matrix decomposition problem, and this property is used to obtain minimum completion time algorithms.
Abstract: We consider a broad class of communication tasks, which we call isotropic, in a hypercube and in a wraparound mesh of processors. These tasks are characterized by a type of symmetry with respect to the origin node. We show that executing such tasks in a minimum number of steps is equivalent to a matrix decomposition problem. We use this property to obtain minimum completion time algorithms. For a special communication task, the total exchange problem, we find algorithms that are simultaneously optimal with respect to completion time and average packet delay. We also prove that a particularly simple type of shortest path algorithm executes isotropic tasks in time which is optimal within a small bound.

Patent
26 Aug 1992
TL;DR: In this paper, a multi-media user task (host) computer is interfaced to a high-speed digital signal processor (DSP) which provides support functions to the host computer via an interprocessor DMA bus master and controller.
Abstract: A multi-media user task (host) computer is interfaced to a high speed digital signal processor DSP which provides support functions to the host computer via an interprocessor DMA bus master and controller. Support of multiple dynamic hard real-time signal processing task requirements are met by posting signal processor support task requests from the host processor through the interprocessor DMA controller to the signal processor and its operating system. The signal processor builds data transfer packet request execution lists in a partitioned queue in its own memory and executes internal signal processor tasks invoked by users at the host system by extracting signal sample data from incoming data packets presented by the interprocessor DMA controller in response to its execution of the DMA packet transfer request queues built by the signal processor in the partitioned queue. Processed signal values etc. are extracted from signal processor memory by the DMA interprocessor controller executing the partitioned packet request lists and delivered to the host processor. A very large number of packet transfers in support of numerous user tasks and implementing a very large number of DMA channels is thus made possible while avoiding the need for arbitration between the channels on the part of the signal processor or the host processor.

Patent
Kohichi Yoshida1, Mikio Ogisu1
22 Dec 1992
TL;DR: In this paper, a common set of a control register group and a queue selection control part are provided for plural task execution queue for executing plural tasks in a time-shared manner, so that the plural tasks may be processed concurrently.
Abstract: A time-shared multitask execution device processes two or more tasks in a time-shared manner by using one CPU. A common set of a control register group and a queue selection control part is provided for the plural task execution queues used to execute the plural tasks. Under the control of the control register group and the queue selection control part, the one CPU is occupied by the plural task execution queues in a time-shared manner, so that the plural tasks may be processed concurrently. The user need only set the necessary information in the control register group according to the user's specification, and the task execution specification is thereby determined. Therefore, even a user with limited knowledge of the program can set the desired specification simply by setting a flag bit to 1 or 0 in a register which is a part of the "hardware".

Patent
30 Jun 1992
TL;DR: In this paper, the authors propose a mechanism that prevents a decrease in performance by letting a processor in a queued state perform other processing until access is granted; after returning, the processor can freely perform other processing.
Abstract: PURPOSE: To prevent a decrease in performance by letting a processor in a queued state perform other processing until access is granted. CONSTITUTION: When the addition to a queue area 62 ends, a processor 11 releases the memory bus 2, which it had locked, and returns. Thus, after the return, the processor 11 can freely perform other processing. When there is a queue, a queue operation is performed: a value identifying a processor or task is taken from the head of the queue area 62 to identify the processor 1n to be placed in operation, and an interrupt is generated and sent to the processor 1n through a signal line 7 to inform the processor 1n that the shared memory 3 has been released.

Proceedings ArticleDOI
09 Nov 1992
TL;DR: A comprehensive approach is given to improving the software maintenance process based on task analysis, where different methods are presented for determining maintenance tasks and their relationships.
Abstract: A comprehensive approach is given to improving the software maintenance process based on task analysis. Techniques are presented for determining maintenance tasks and their relationships. Different methods are then used to derive the information necessary to perform these tasks. This information can then be placed in a database and used by tools for dissemination to software engineers to help them perform maintenance tasks. Case study results for the application of this process are described.

Patent
Kirk I. Hays1, Wayne D. Smith1
21 Dec 1992
TL;DR: An improved method and apparatus for utilizing Translation Lookaside Buffers (TLB) for maintaining page tables in a paging unit on a computer system is described in this paper.
Abstract: An improved method and apparatus for utilizing Translation Lookaside Buffers (TLB) for maintaining page tables in a paging unit on a computer system. TLB contents for executing tasks are retained when the task is swapped out. The contents are then reloaded into the TLB when the task is again scheduled for execution. Spare memory cycles are utilized to transfer outgoing TLB data into memory, and incoming TLB data for a next scheduled task from memory.

Book ChapterDOI
16 Dec 1992
TL;DR: An on-line algorithm, RR, is formulated, which is an idealized version of the so-called Round Robin algorithm, and it is proved that, when tasks may arrive at different times, the competitive ratio of RR is between 2(k-1)/H_{k-1} and 2(k-1), where k is the maximal number of tasks that can exist at any given time.
Abstract: We investigate on-line algorithms that schedule preemptive tasks on single-processor machines when the arrival time of a task is not known in advance and the length of a task is unknown until its termination. The goal is to minimize the sum of the waiting times over all tasks. We formulate an on-line algorithm, RR, which is an idealized version of the so-called Round Robin algorithm. It is known that, if all tasks arrive at one time, RR is 2-competitive [W]. We prove that, when tasks may arrive at different times, the competitive ratio of RR is between 2(k-1)/H_{k-1} and 2(k-1), where k is the maximal number of tasks that can exist at any given time. Our analysis also yields bounds on the sum of response times, and through several criteria we demonstrate the effectiveness of the Round Robin algorithm.
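The idealized RR of the paper is processor sharing: every live task receives an equal fraction of the machine. A small fluid simulation makes the waiting-time objective concrete; the job-tuple format and the 1e-9 tolerance are implementation choices, not from the paper.

```python
def rr_total_waiting(jobs):
    """Idealized Round Robin (processor sharing) on one machine.

    `jobs` is a list of (arrival, length) tuples. Each live task gets a
    1/k share of the processor. Returns the sum over all tasks of
    (completion - arrival - length), i.e. total waiting time.
    """
    remaining, done = {}, {}
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][0])
    t, nxt = 0.0, 0
    while len(done) < len(jobs):
        while nxt < len(jobs) and jobs[order[nxt]][0] <= t:
            remaining[order[nxt]] = jobs[order[nxt]][1]   # admit arrival
            nxt += 1
        if not remaining:
            t = jobs[order[nxt]][0]                       # idle until next arrival
            continue
        k = len(remaining)
        # advance to the next event: a task finishing or a new arrival
        until_finish = min(remaining.values()) * k
        until_arrival = jobs[order[nxt]][0] - t if nxt < len(jobs) else float("inf")
        dt = min(until_finish, until_arrival)
        for i in list(remaining):
            remaining[i] -= dt / k
            if remaining[i] <= 1e-9:
                done[i] = t + dt
                del remaining[i]
        t += dt
    return sum(done[i] - a - l for i, (a, l) in enumerate(jobs))
```

For two unit-length tasks arriving together, RR finishes both at time 2 for a total waiting time of 2, versus 1 under shortest-job-first, matching the known 2-competitiveness for simultaneous arrivals.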

Patent
27 Nov 1992
TL;DR: In this article, a succession of operation phases is executed by a set of concurrent tasks of the real-time executive, under the direction of a driver task which organizes the working of the concurrent tasks, regulates their launching, and detects their completion.
Abstract: The method for testing and, if necessary, validating the primitives of a real-time executive embodying the invention consists in having a succession of operation phases executed by a set of concurrent tasks of the real-time executive, under the direction of a driver task which organizes the working of the concurrent tasks, and which regulates launching and detects completion thereof. Each operation phase comprises the random selection of a launching mode, launching of the concurrent tasks, freeing from their standby status, detection of the arrival of each of the concurrent tasks at the end of its route, and memorization of characteristic primitives of the operation phase that has just taken place. The invention enables validation of the real-time executives used notably in the fields of aeronautics, space, the nuclear sector, railways, medicine and even robotics.

Patent
18 Jun 1992
TL;DR: In this paper, an inter-task synchronizing communication equipment provided with a means for decreasing information in a queue buffer is provided, where the number of communication data stored in queue buffer 1 is stored in a counter 2 and based on the value of the counter 2, a priority control part 3 refers to a priority managing table 4 registering the priority of producer/consumer tasks 6 and 7 to be transmitted/received through the queue buffer.
Abstract: PURPOSE:To provide an inter-task synchronizing communication equipment provided with a means for decreasing information in a queue buffer CONSTITUTION:The number of communication data stored in a queue buffer 1 is stored in a counter 2 and based on the value of the counter 2, a priority control part 3 refers to a priority managing table 4 registering the priority of producer/consumer tasks 6 and 7 to be transmitted/received through the queue buffer 1 By the control of the priority control part 3, the execution tasks of the producer/consumer tasks 6 and 7 are switched by a dispatcher 5

Patent
30 Sep 1992
TL;DR: In this paper, a method for handling a frame overrun wherein the tasks cannot be processed within the frame is presented, where each client must determine the correct action to take, including restarting where they left off, restarting from the beginning, or quitting.
Abstract: In a computer system having a digital signal processor for processing a number of tasks within a frame, a method for handling a frame overrun wherein the tasks cannot be processed within the frame. First, the frame overrun is detected. Next, each of the tasks is compared with the processing time which had been allocated to it. A determination is made as to which of these tasks exceeded its allotted processing time by the greatest amount. The worst-case client is notified that its task has caused an overrun. All other non-system task clients are notified that an overrun has occurred. All tasks except system support tasks are inactivated, and processing continues. Each client must determine the correct action to take, including restarting the task where it left off, restarting from the beginning, or quitting. Methods for handling more serious overruns are also described.
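The overrun policy reads as a short procedure: detect, rank tasks by overshoot, notify the worst offender, and deactivate everything but system support tasks. A minimal sketch follows; the field names and the frame-budget model are hypothetical, not the patent's interfaces.

```python
def handle_overrun(tasks, frame_budget):
    """Apply the frame-overrun policy described above.

    `tasks` is a list of dicts with "name", "allotted", "used", and
    "system" fields. Returns None if the frame fit, otherwise a tuple of
    (worst offender's name, names of tasks left active).
    """
    if sum(t["used"] for t in tasks) <= frame_budget:
        return None                                   # no overrun this frame
    # the worst case is the task that exceeded its allotment by the most
    worst = max(tasks, key=lambda t: t["used"] - t["allotted"])
    # all but system support tasks are inactivated
    active = [t["name"] for t in tasks if t["system"]]
    return worst["name"], active
```

Each notified client would then decide whether to resume, restart, or quit its task.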

Journal ArticleDOI
TL;DR: An experiment was devised to test how well users could utilize the mappings of inefficient commands to help messages from the neural net.
Abstract: In the UNIX operating system, many complex operations can be done using a single command line, in the most efficient method, or they can be done using several command lines of simple commands, in a less efficient method. Recognizing when and how to use these efficient commands is difficult for novice users and for many experts. Five UNIX‐based tasks were constructed that could be done using many simple commands, or they could be performed using one or two more complex and efficient commands. Subjects were asked to perform these tasks using the most efficient methods they could. Many different command sequences were generated from these subjects. These data were then used in a neural net model to map the commands to message markers for task assistance. An experiment was devised to test how well users could utilize the mappings of inefficient commands to help messages from the neural net. In the neural net assisted condition, subjects received assistance for the most efficient command whenever the neural ne...

Proceedings ArticleDOI
01 Mar 1992
TL;DR: Several space servicing tasks utilizing the cooperative dual-arm control capability are described, and experimental results from the tasks are given.
Abstract: A dual-arm task execution primitive has been implemented for cooperative dual-arm telerobotic task execution utilizing multiple sensors concurrently. The primitive has been integrated into a telerobot task execution system and can be called by a task planning system for execution of tasks requiring dual-arm sensor based motion, e.g., force control, teleoperation, and shared control. The primitive has a large input parameter set which is used to specify the desired behavior of the motion. Move-squeeze decomposition is utilized to decompose forces sensed at the wrists of the two manipulators into forces in the move subspace, which cause system motion, and forces in the squeeze subspaces, which cause internal forces. The move and squeeze forces are then separately controlled. Several space servicing tasks utilizing the cooperative dual-arm control capability are described, and experimental results from the tasks are given. The supervisory and shared control tasks include capture of a rotating satellite, orbital replacement unit changeout, fluid coupler seating and locking, and contour following.

Patent
07 May 1992
TL;DR: In this paper, a shared-resources exclusive control processing is proposed to attain outrunning processing for acquisition of the resources by adding priority to the resource-acquiring requests issued by the tasks.
Abstract: PURPOSE: To attain outrunning processing for acquisition of the resources by adding priority to the resource-acquiring requests issued by the tasks. CONSTITUTION: The tasks A, B...n denote the processor groups which acquire and release the shared resources, such as memories, for use. A shared-resources exclusive control processing part 12 excludes simultaneous accesses of plural tasks to a single shared resource based on a queue. Priority is added to the resource-acquiring requests of the tasks when they are generated. Thus, the part 12 carries out exclusive control of the resource-acquiring requests based on the priority. A resource-acquiring cycle time can be given to the resource-acquiring requests when they are generated for each task. Therefore, a resource acquisition processing part 13 enables the tasks having resource-acquiring requests to acquire the resources periodically. Then, even a task issuing a request later can outrun a task that issued a request earlier, according to the processing priority, to acquire the resources. COPYRIGHT: (C)1993,JPO&Japio
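The priority-ordered exclusive control can be modeled with a binary heap, so that a later, more urgent request overtakes an earlier one when the resource is released. The class and method names are illustrative, and the patent's periodic acquisition-cycle timing is omitted.

```python
import heapq
import itertools

class SharedResource:
    """Exclusive control where waiting requests are served by priority
    (lower number = more urgent), so a later request can outrun an
    earlier one, as in the abstract above."""

    def __init__(self):
        self.holder = None
        self._waiting = []                 # heap of (priority, seq, task)
        self._seq = itertools.count()      # FIFO tie-break within a priority

    def acquire(self, task, priority):
        if self.holder is None:
            self.holder = task
            return True                    # granted immediately
        heapq.heappush(self._waiting, (priority, next(self._seq), task))
        return False                       # queued

    def release(self):
        if self._waiting:
            _, _, task = heapq.heappop(self._waiting)
            self.holder = task             # hand off to the most urgent waiter
        else:
            self.holder = None
```

A task that requests later with a more urgent priority is granted the resource before an earlier, less urgent requester.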

Journal ArticleDOI
TL;DR: A multilayered model is proposed: a control layer which directly acts on the dynamics of the manipulator, a coordination/communication layer which makes all the manipulators work together and a programming layer which interfaces with the user.

Proceedings ArticleDOI
01 Jan 1992
TL;DR: Two new enhancements to the program Design Manager's Aide for Intelligent Decomposition are discussed, including rules for ordering a complex assembly process and rules for determining which analysis tasks must be re-executed to compute the output of one task based on a change in input to that or another task.
Abstract: This paper discusses the addition of two new enhancements to the program Design Manager's Aide for Intelligent Decomposition (DeMAID). DeMAID is a knowledge-based tool used to aid a design manager in understanding the interactions among the tasks of a complex design problem. This is done by ordering the tasks to minimize feedback, determining the participating subsystems, and displaying them in an easily understood format. The two new enhancements include (1) rules for ordering a complex assembly process and (2) rules for determining which analysis tasks must be re-executed to compute the output of one task based on a change in input to that or another task.

Patent
30 Jul 1992
TL;DR: In this paper, an improved system for efficiently controlling plural image processing apparatus to enhance the operation rate of each apparatus is presented. But the system is limited to a set of tasks, and each task is assigned to one of the image processing modules according to predetermined priority.
Abstract: The invention provides an improved system for efficiently controlling plural image processing apparatus to enhance the operation rate of each apparatus. The system includes: plural image processing modules (11 through 15) identically constructed to execute some predetermined image processing programs; a communication control unit (31); and a module management unit (41). The module management unit registers contents of plural sets of task to be executed, and allocates a job of each task to one of the plural image processing modules according to predetermined priority of the jobs. The communication control unit transmits data or information between each module and the module management unit.

Proceedings ArticleDOI
01 Dec 1992
TL;DR: The author shows how to parallelize tree-structured computations for d-dimensional (d >= 1) mesh-connected arrays of processors, with linear time algorithms for partitioning and mapping the task tree T onto a p^(1/d) x ... x p^(1/d) mesh-connected array.
Abstract: The author shows how to parallelize tree-structured computations for d-dimensional (d >= 1) mesh-connected arrays of processors. A tree-structured computation T consists of n computational tasks whose dependencies form a task tree T of n constant-degree nodes. Each task can be executed in unit time and sends one value to its parent task after it has been executed. The author presents linear time algorithms for partitioning and mapping the task tree T onto a p^(1/d) x ... x p^(1/d) mesh-connected array of processors so that one can schedule the processors to perform computation T in O(n/p) time, for p >

Patent
Mark F. Anderson1
14 Oct 1992
TL;DR: In this paper, a multitasking data processing system is provided with a hardware-configured operating system kernel, which includes a processor queue (20) that includes a plurality of word stores (52,54), each word store storing a task name, in execution priority order, ready for processing.
Abstract: A multitasking data processing system is provided with a hardware-configured operating system kernel. The system includes a processor queue (20) that includes a plurality of word stores (52,54), each word store (52,54) storing a task name, in execution priority order, that is ready for processing. An event queue (24) in the kernel includes a plurality of word stores (52,54) for storing task names that await the occurrence of an event to be placed in the processor queue (20). When an associated processor (10) signals the occurrence of an event, matching logic (77) searches all word stores (52,54) in the event queue (24), in parallel, to find a task associated with the signalled event and then transfers the task to the processor queue (20). Shift logic (73) is also provided for simultaneously transferring a plurality of task names, in parallel, in the processor queue (20) to make room for a task name transferred from the event queue (24).

Proceedings ArticleDOI
09 Jun 1992
TL;DR: A hierarchical task queue organization is proposed that avoids the task queue bottleneck associated with the centralized organization and provides performance better than centralized and distributed organizations; it is therefore suitable for large parallel systems.
Abstract: A hierarchical task queue organization that avoids the task queue bottleneck associated with the centralized organization and provides performance better than centralized and distributed organizations is proposed. A detailed performance analysis shows that the hierarchical organization is less sensitive to parameters like the branching factor and transfer factor. Therefore, it is suitable for large parallel systems.
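The hierarchical organization can be sketched as a tree of queues in which an empty leaf refills from its parent in batches. The `transfer` parameter plays the role of the paper's transfer factor; the batch-refill policy and class shape here are assumptions for illustration.

```python
class HierarchicalQueue:
    """A node in a tree of task queues.

    Processors take tasks from leaf queues; an empty leaf refills by
    pulling `transfer` tasks from its parent, recursively, so no single
    central queue becomes the bottleneck.
    """

    def __init__(self, tasks=None, parent=None, transfer=2):
        self.tasks = list(tasks or [])
        self.parent = parent
        self.transfer = transfer

    def get(self):
        if not self.tasks and self.parent is not None:
            # refill from the parent in one batch of `transfer` tasks
            for _ in range(self.transfer):
                t = self.parent.get()
                if t is None:
                    break
                self.tasks.append(t)
        return self.tasks.pop(0) if self.tasks else None
```

A three-level tree (root, intermediate node, leaf) drains the root's tasks in order, two at a time per refill, and returns None when the hierarchy is empty.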

Patent
07 Aug 1992
TL;DR: In this paper, a message transmission mechanism is used together with a redundant configuration built into the common memory, and fault-tolerant message transmission is guaranteed which automatically replaces a task operating on the first processor with a backup task executed on the second processor.
Abstract: PURPOSE: To provide a method for transmitting a message between tasks which run on plural processors in an environment wherein the processors are connected to one another by a common intelligent memory. CONSTITUTION: For message transmission between tasks, a means which stores a message sent from a transmission-side task is provided in the common intelligent memory, and each processor includes a service means for delivering the message to the task executed by that processor. With one set of high-level microcode-type commands, the message is transmitted from one processor to the common intelligent memory and further transmitted from the common intelligent memory to another processor. This transmission mechanism is used together with a redundant configuration built into the common memory, and if the first processor fails, fault-tolerant message transmission is guaranteed which automatically replaces a task operating on the first processor with a backup task executed on the second processor.