
Showing papers on "Task (computing) published in 1990"


Proceedings ArticleDOI
01 May 1990
TL;DR: This paper rejects the simpler load-based inlining method, where tasks are combined based on dynamic load level, in favor of the safer and more robust lazy task creation method, which allows efficient execution of naturally expressed algorithms of a substantially finer grain than possible with previous parallel Lisp systems.
Abstract: Many parallel algorithms are naturally expressed at a fine level of granularity, often finer than a MIMD parallel system can exploit efficiently. Most builders of parallel systems have looked to either the programmer or a parallelizing compiler to increase the granularity of such algorithms. In this paper we explore a third approach to the granularity problem by analyzing two strategies for combining parallel tasks dynamically at run-time. We reject the simpler load-based inlining method, where tasks are combined based on dynamic load level, in favor of the safer and more robust lazy task creation method, where tasks are created only retroactively as processing resources become available. These strategies grew out of work on Mul-T [14], an efficient parallel implementation of Scheme, but could be used with other applicative languages as well. We describe our Mul-T implementations of lazy task creation for two contrasting machines, and present performance statistics which show the method's effectiveness. Lazy task creation allows efficient execution of naturally expressed algorithms of a substantially finer grain than possible with previous parallel Lisp systems.
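The contrast between the two strategies can be sketched in miniature. Below is a hypothetical, single-threaded Python model of lazy task creation; the API and names are mine, not Mul-T's. A `pcall` exposes the parent's continuation for stealing, evaluates the child inline, and materializes a real task only if an idle worker actually took the continuation in the meantime.

```python
from collections import deque

class LazyTaskPool:
    """Single-threaded sketch of lazy task creation (hypothetical API).

    pcall() notes that the rest of the caller's work *could* become a
    task, runs the child inline, and only counts a real task if an
    idle worker stole the note before the caller returned to it.
    """
    def __init__(self):
        self.stealable = deque()   # continuations exposed for stealing
        self.tasks_created = 0     # real tasks actually materialized

    def pcall(self, child, parent_continuation):
        self.stealable.append(parent_continuation)
        child_result = child()                  # evaluate child inline
        if self.stealable and self.stealable[-1] is parent_continuation:
            self.stealable.pop()                # nobody stole it: no task
            return child_result, parent_continuation()
        self.tasks_created += 1                 # it became a real task
        return child_result, None

pool = LazyTaskPool()
res = pool.pcall(lambda: 2 * 3, lambda: 4 + 5)  # no idle worker here,
                                                # so both run inline
```

With no idle workers in this toy run, both expressions are evaluated inline and no task is ever created, which is exactly the cheap common case the paper's method optimizes for.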

344 citations


Journal ArticleDOI
TL;DR: Examination of the effects of choice-related variables on the work performance of adults with severe handicaps indicated that clients attended to work tasks almost twice as much when they chose their tasks and when assigned to work on preferred tasks versus when assigned to work on nonpreferred tasks.
Abstract: We evaluated the effects of several choice-related variables on the work performance of adults with severe handicaps. After assessing client work preferences, three choice-related situations were presented: (a) providing clients with the opportunity to choose a work task, (b) assigning a preferred task, and (c) assigning a nonpreferred task. Results indicated that clients attended to work tasks almost twice as much when they chose their tasks and when assigned to work on preferred tasks versus when assigned to work on nonpreferred tasks. Results are discussed regarding the need to assess systematically the effects of choice-related variables.

157 citations


Patent
30 Oct 1990
TL;DR: In this paper, a two level lock management system is used to prevent data corruption due to unsynchronized data access by the multiple processors in a multi-processor computer system, where each processor is under the control of separate system software and accesses a common database.
Abstract: A multi-processor computer system in which each processor is under the control of separate system software and accesses a common database. A two level lock management system is used to prevent data corruption due to unsynchronized data access by the multiple processors. By this system, subsets of data in the database are assigned respectively different lock entities. Before a task running on one of the processors accesses data in the database, it first requests permission to access the data in a given mode with reference to the appropriate lock entity. A first level lock manager handles these requests synchronously, using a simplified model of the locking system having shared and exclusive lock modes to either grant or deny the request. All requests are then forwarded to a second level lock manager which grants or denies the requests based on a more robust model of the locking system and queues denied requests. The denied requests are granted, in turn, as the tasks which have been granted access finish processing data in the database.
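The two-level scheme can be sketched as follows. This is a minimal Python model of my own devising, not the patent's implementation: the "level 1" answer is the synchronous shared/exclusive compatibility check, while "level 2" keeps the authoritative holder lists and the queue of denied requests, granting them in turn as holders release.

```python
from collections import deque

class TwoLevelLockManager:
    """Sketch of a two-level lock manager (names are mine).

    Level 1 answers each request synchronously with a simple
    shared/exclusive compatibility check; level 2 records the
    authoritative state and queues requests that were denied.
    """
    def __init__(self):
        self.holders = {}    # lock entity -> list of (task, mode)
        self.waiting = {}    # lock entity -> deque of (task, mode)

    def _compatible(self, entity, mode):
        held = [m for _, m in self.holders.get(entity, [])]
        if not held:
            return True
        return mode == "shared" and all(m == "shared" for m in held)

    def request(self, task, entity, mode):
        granted = self._compatible(entity, mode)       # level-1 answer
        if granted:                                    # level-2 bookkeeping
            self.holders.setdefault(entity, []).append((task, mode))
        else:
            self.waiting.setdefault(entity, deque()).append((task, mode))
        return granted

    def release(self, task, entity):
        self.holders[entity] = [(t, m) for t, m in self.holders[entity]
                                if t != task]
        q = self.waiting.get(entity, deque())
        while q and self._compatible(entity, q[0][1]):
            t, m = q.popleft()             # grant denied requests in turn
            self.holders.setdefault(entity, []).append((t, m))
```

Two shared holders coexist, an exclusive request is queued, and it is granted only once both shared holders have released, mirroring the abstract's description.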

153 citations


Patent
31 May 1990
TL;DR: In this article, a plurality of task dispatching elements (TDEs) forming a task dispatch queue are scanned in an order of descending priority, for either a specific affinity to a selected one of the processing devices, or a general affinity to all of the processors.
Abstract: In connection with an information processing network in which multiple processing devices have individual cache memories and also share a main storage memory, a process is disclosed for allocating multiple data operations or tasks for subsequent execution by the processing devices. A plurality of task dispatching elements (TDE) forming a task dispatching queue are scanned in an order of descending priority, for either a specific affinity to a selected one of the processing devices, or a general affinity to all of the processing devices. TDEs with specific affinity are assigned immediately if the selected processor is available, while TDEs of general affinity are reserved. TDEs with a specific affinity are bypassed if the selected processor is not available, or reserved if a predetermined bypass threshold has been reached. Following the primary scan a secondary scan, in an order of ascending priority, assigns any reserved tasks to the processing devices still available, without regard to processor affinity. Previously bypassed tasks can be assigned as well, in the event that any processor remains available. A further feature of the network is a means to reset the processor affinity of a selected task from the specific affinity to the general affinity. Resetting is accomplished through an assembly level instruction contained in the task, and either can be unconditional, with reset occurring whenever the task is executed on one of the processing devices, or can occur only upon the failure to meet a predetermined condition while the task is executing.
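The two-pass scan can be illustrated with a small sketch. This is my own Python rendering of the described flow; the data layout and the bypass threshold value are illustrative assumptions, not taken from the patent.

```python
def dispatch(tdes, available):
    """Two-pass dispatch sketch (illustrative, not the patent's code).

    tdes: TDEs as dicts sorted by descending priority, each with an
    'affinity' that is either a processor id (specific) or None (general).
    available: list of free processor ids. Returns {tde name: processor}.
    BYPASS_LIMIT stands in for the patent's "predetermined bypass
    threshold"; the value 2 is an arbitrary choice for the example.
    """
    BYPASS_LIMIT = 2
    assigned, reserved, bypassed = {}, [], 0
    for tde in tdes:                      # primary scan, descending priority
        aff = tde["affinity"]
        if aff is not None:               # specific affinity
            if aff in available:          # processor free: assign immediately
                assigned[tde["name"]] = aff
                available.remove(aff)
            elif bypassed < BYPASS_LIMIT:
                bypassed += 1             # processor busy: bypass for now
            else:
                reserved.append(tde)      # bypass threshold reached: reserve
        else:
            reserved.append(tde)          # general affinity: reserve
    for tde in reversed(reserved):        # secondary scan, ascending priority
        if available:                     # affinity is ignored in this pass
            assigned[tde["name"]] = available.pop(0)
    return assigned
```

A TDE with specific affinity to a free processor is taken in the primary scan; the general-affinity TDE is reserved and picked up by the secondary scan; a TDE whose preferred processor is busy is bypassed.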

127 citations


Patent
03 Apr 1990
TL;DR: In this paper, a distributed office automation system includes workstations and support stations which are interconnected via a network and which make use of the functionality of one another by subcontracting tasks.
Abstract: A distributed office automation system includes workstations and support stations which are interconnected via a network and which make use of the functionality of one another by subcontracting tasks. Various function modules are available in the system for numerous tasks and the system provides a distributed organization structure in which it is always clear what function module is required to perform a specific task. Each of the stations is provided with a coordination unit which is continually aware of the state of the total system and which designates the required function module.

111 citations


Patent
Alaiwan Haissam1
27 Feb 1990
TL;DR: In this paper, the authors propose a message passing mechanism for a plurality of processors interconnected by a shared intelligent memory for secure passing of messages between tasks operated on said processors, where each processor includes serving means for getting the messages to be sent to the tasks it operates.
Abstract: In the environment of a plurality of processors interconnected by a shared intelligent memory, a mechanism for the secure passing of messages between tasks operated on said processors is provided. Inter-task message passing is provided by shared intelligent memory for storing the messages transmitted by sending tasks. Further, each processor includes serving means for getting the messages to be sent to the task operated by said processor. The passing of messages from a processor to the shared intelligent memory and from the latter to another processor is made using a set of high-level microcoded commands. A process is provided using the message passing mechanism together with redundancies built into the shared memory, to ensure fault-tolerant message passing in which the tasks operated primarily on a processor are automatically replaced by back-up tasks executed on another processor if the first processor fails.

106 citations


Patent
29 Jan 1990
TL;DR: In this article, the authors propose a prioritization scheme based on the operational proximity of the request to the instruction currently being executed, which temporarily suspends the higher priority request while the desired data is being retrieved from main memory 14, but continues to operate on a lower priority request so that the overall operation will be enhanced if the lower priority request hits in the cache 28.
Abstract: In a pipelined computer system, memory access functions are simultaneously generated from a plurality of different locations. These multiple requests are passed through a multiplexer 50 according to a prioritization scheme based upon the operational proximity of the request to the instruction currently being executed. In this manner, the complex task of converting virtual-to-physical addresses is accomplished for all memory access requests by a single translation buffer 30. The physical addresses resulting from the translation buffer 30 are passed to a cache 28 of the main memory 14 through a second multiplexer 40 according to a second prioritization scheme based upon the operational proximity of the request to the instruction currently being executed. The first and second prioritization schemes differ in that the memory is capable of handling other requests while a higher priority "miss" is pending. Thus, the prioritization scheme temporarily suspends the higher priority request while the desired data is being retrieved from main memory 14, but continues to operate on a lower priority request so that the overall operation will be enhanced if the lower priority request hits in the cache 28.

85 citations


Journal ArticleDOI
M. Seetha Lakshmi1, Philip S. Yu1
TL;DR: The effectiveness of parallel processing of relational join operations is examined and the skew in the distribution of join attribute values and the stochastic nature of the task processing times are identified as the major factors that can affect the effective exploitation of parallelism.
Abstract: The effectiveness of parallel processing of relational join operations is examined. The skew in the distribution of join attribute values and the stochastic nature of the task processing times are identified as the major factors that can affect the effective exploitation of parallelism. Expressions for the execution time of parallel hash join and semijoin are derived and their effectiveness analyzed. When many small processors are used in the parallel architecture, the skew can result in some processors becoming sources of bottleneck while other processors are being underutilized. Even in the absence of skew, the variations in the processing times of the parallel tasks belonging to a query can lead to high task synchronization delay and impact the maximum speedup achievable through parallel execution. For example, when the task processing time on each processor is exponential with the same mean, the speedup is proportional to P/ln(P) where P is the number of processors. Other factors such as memory size, communication bandwidth, etc., can lead to even lower speedup. These are quantified using analytical models.
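The P/ln(P) figure follows from order statistics: the expected maximum of P i.i.d. exponential task times with mean m is m times the P-th harmonic number H_P, which grows like ln(P). The query waits for its slowest task, so speedup = P*m / (m*H_P) = P/H_P, roughly P/ln(P). A small sketch of the calculation (my own illustration of the stated result, not code from the paper):

```python
def expected_speedup(P):
    """Speedup with P processors when each runs one task whose time is
    exponential with a common mean: the query finishes at the slowest
    task, whose expected time is mean * H_P (H_P = P-th harmonic
    number, ~ ln P), so speedup = P*mean / (mean*H_P) = P / H_P.
    """
    harmonic = sum(1.0 / k for k in range(1, P + 1))
    return P / harmonic
```

For P = 100 this gives a speedup near 19 rather than 100, illustrating how synchronization delay alone caps the benefit of adding processors.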

85 citations


Patent
19 Dec 1990
TL;DR: A queue manager for controlling the execution of requests for the transport of messages from users to destinations is proposed in this paper; it includes a queue for storing pending requests and a dispatcher task for creating a worker task to execute each request.
Abstract: A queue manager for controlling the execution of requests for the transport of messages from users to destinations. Each request includes a message and an identification of a destination. The queue manager includes a queue for storing pending requests and a dispatcher task for creating a worker task to execute each request and provides a method for adapting the execution of requests to constraints and characteristics of destinations and communications links.

76 citations


Proceedings ArticleDOI
Akihiro Matsumoto1, Hajime Asama, Y. Ishida, Koichi Ozaki, I. Endo 
03 Jul 1990
TL;DR: The message protocol is extended and the concept of negotiation is introduced as the framework for the cooperative work of the task allocation problem, applied to object-pushing problem as an example.
Abstract: The new concept for an advanced robot system, ACTRESS (ACTor-based robots and equipments synthetic system), has been designed for the automation of high level tasks. The system assumes that multiple robots and equipments, which are called robotors in general, work independently or cooperatively by communicating each other, depending on the task requirements. The paper discusses the distributed robot system architecture. The basic design of protocol is shown for the communication between robotors in order to achieve various tasks. The message protocol is extended and the concept of negotiation is introduced as the framework for the cooperative work of the task allocation problem. The message protocol has been applied to object-pushing problem as an example. >

62 citations


Journal ArticleDOI
TL;DR: Topology is a programming and operating system construct that allows programmers to describe and efficiently implement such functionality as distributed objects with well-defined operational interfaces; topologies have been used with several large-scale parallel application programs.
Abstract: Application programs written for large-scale multicomputers with interconnection structures known to the programmer (e.g., hypercubes or meshes) use complex communication structures for connecting the applications' parallel tasks. Such structures implement a wide variety of functions, including the exchange of data or control information relevant to the task computations and/or the communications required for task synchronization, message forwarding/filtering under program control, and so on. Topology is a programming and operating system construct that allows programmers to describe and efficiently implement such functionality as distributed objects with well-defined operational interfaces. As with abstract data types, topologies may be reused by any application desiring their functionality. However, in contrast to other research in parallel or distributed object-based operating systems, internally, a topology may be an entirely distributed implementation of the object's functionality, consisting of a communication graph and type-specific computations, which are triggered by messages traversing the graph. Sample computations may perform additions or minimizations of the values traversing a topology, thereby computing a global sum or minimum. Similarly, computations may concatenate or filter messages in order to implement program monitoring, I/O, file storage, or virtual terminal services. Topologies are implemented as an extension of the Intel iPSC hypercube's operating system kernel and have been used with several large-scale parallel application programs.
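The "global sum or minimum" example above can be sketched as a reduction over a communication graph. This is my own toy model of the idea, not the iPSC kernel extension: each node holds a local value, messages traverse tree edges toward the root, and the type-specific computation at each node combines whatever arrives.

```python
def topology_reduce(graph, values, combine):
    """Toy sketch of a topology computing a global reduction.

    graph: {node: parent} tree, with the root mapping to None.
    values: {node: local value}.
    combine: the type-specific computation triggered as a value
    traverses an edge (e.g. addition for a global sum, min for a
    global minimum).
    """
    children = {}
    for node, parent in graph.items():
        if parent is not None:
            children.setdefault(parent, []).append(node)

    def fold(node):
        acc = values[node]
        for child in sorted(children.get(node, [])):
            acc = combine(acc, fold(child))   # message traverses the edge
        return acc

    root = next(n for n, p in graph.items() if p is None)
    return fold(root)
```

Reusing the same graph with a different `combine` mirrors the abstract's point that topologies, like abstract data types, are reusable across applications.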

Proceedings ArticleDOI
02 Dec 1990
TL;DR: The paper presents a solution of the control driven class which realizes execution replay on distributed memory architectures; the technique is adapted to nonblocking primitives and is not dependent on any form of message passing communication.
Abstract: Debugging parallel programs on MIMD machines is a difficult task because successive executions of the same program can lead to different behaviors. To solve this problem, a method called execution replay has been introduced, which guarantees the reexecution of a program to be equivalent to the initial execution. Most of the execution replay techniques proposed until now may be named 'data driven techniques'. Such techniques are relatively easy to implement in the case of the most common communication primitives. However, the time needed to record the large amount of required information is significant, which might modify the initial execution. Execution replay becomes in this case meaningless. Another class of execution replay, named control driven execution replay, allows one to limit the amount of recorded information. The paper presents a solution of the control driven class which realizes execution replay on distributed memory architectures. In contrast to all other proposed approaches, the technique is adapted to nonblocking primitives, and is not dependent on any form of message passing communication.

Proceedings ArticleDOI
04 Nov 1990
TL;DR: Three multi-step teleoperation tasks were successfully modeled with a hidden Markov model (HMM) and the addition of multi-dimensional sensor information significantly improved the ability of the Viterbi decoding algorithm to identify the series of events.
Abstract: Three multi-step teleoperation tasks were successfully modeled with a hidden Markov model (HMM). Tasks that would proceed through different paths as determined by an event either internal or external to the task were designed. The tasks can be described by a state transition diagram containing a fork through which two alternative task outcomes can be followed. The model was used to identify correctly the sequence of task progression from the recorded sensor data. Previous work with HMMs was extended by generalizing the model to encompass multi-dimensional sensor signals consisting of a mix of force, torque, and position signals. The addition of multi-dimensional sensor information significantly improved the ability of the Viterbi decoding algorithm to identify the series of events.
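The Viterbi decoding step can be shown on a toy two-state task model. The states, transition, and emission numbers below are invented for illustration (the paper's signals are multi-dimensional force/torque/position measurements, quantized here to a single "low"/"high" observation):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Minimal Viterbi decoder: most likely state sequence for a
    discrete-observation HMM."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(V[-1], key=V[-1].get)
    return path[best]

# Toy task model: "approach" then "grasp", each emitting a
# characteristic quantized sensor reading most of the time.
states = ("approach", "grasp")
start_p = {"approach": 0.9, "grasp": 0.1}
trans_p = {"approach": {"approach": 0.6, "grasp": 0.4},
           "grasp":    {"approach": 0.1, "grasp": 0.9}}
emit_p = {"approach": {"low": 0.8, "high": 0.2},
          "grasp":    {"low": 0.2, "high": 0.8}}
decoded = viterbi(["low", "low", "high", "high"], states,
                  start_p, trans_p, emit_p)
```

The decoder recovers the progression from "approach" to "grasp" at the point where the sensor reading switches, which is the kind of event-sequence identification the paper evaluates.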

Journal ArticleDOI
TL;DR: This paper examines the problems that arise when load-store RISC architectures with large register sets or compiler-driven register assignment are applied to realtime system design methodologies involving many tasks and frequent context switches, and introduces the threaded windows concept as an efficient mechanism for managing register resources.

Patent
Richard D. Pribnow1
27 Nov 1990
TL;DR: In this article, the maintenance modes of operation of a multiprocessing vector supercomputer system are disclosed, which allow diagnostics to run on a failed portion of the system while simultaneously allowing user tasks to run in a degraded performance mode.
Abstract: Maintenance modes of operation of a multiprocessing vector supercomputer system are disclosed. The modes allow diagnostics to run on a failed portion of the system while simultaneously allowing user tasks to run in a degraded performance mode. This is accomplished by assigning a processor or a group of processors to run diagnostics on an assigned portion of memory, while the operating system and user tasks are run in the remaining processors in the remaining portion of memory. In this manner, the diagnostics can isolate the problem without requiring complete shut down of the user task, while at the same time protecting the integrity of the operating system. The result is significantly reduced preventive maintenance down time, more efficient diagnosis of hardware failures, and a corresponding increase in user task run time.

Proceedings ArticleDOI
12 Mar 1990
TL;DR: Durra is a language designed to support the development of distributed applications consisting of multiple, concurrent, large-grained tasks executing in a heterogeneous network.
Abstract: Durra is a language designed to support the development of distributed applications consisting of multiple, concurrent, large-grained tasks executing in a heterogeneous network. An application-level program is written in Durra as a set of task descriptions that prescribes a way to manage the resources of a heterogeneous machine network. The application describes the tasks to be instantiated and executed as concurrent processes, the intermediate queues required to store the messages as they move from producer to consumer processes, and the possible dynamic reconfigurations of the application. The application-level programming paradigm fits a top-down, incremental method of software development very naturally. It is suggested that a language like Durra would be of great value in the development of large, distributed systems.

Proceedings ArticleDOI
13 May 1990
TL;DR: A proposed implementation of this sensory-motor management facility based on the input-output-timed-automata (IOTA) abstraction is introduced, which acts as a scheduler for various motor requests submitted by the active sensor and manipulation systems.
Abstract: Consideration is given to the management of conflicting demands on the use of the motor units of the robot from the active sensors and manipulatory processes. An operating-system-like facility is introduced for managing the conflicting motor requests produced by multiple active sensing and manipulation tasks. A proposed implementation of this sensory-motor management facility based on the input-output-timed-automata (IOTA) abstraction is introduced. The IOTA abstraction is general enough to allow the enforcement of temporal constraints and to permit the specification and modification of task priorities based on the goals and current state of the robot. The IOTA-based sensory-motor operating system acts as a scheduler for various motor requests submitted by the active sensor and manipulation systems. The sensory-motor system is more general than the subsumption architecture of R. Brooks (1986) and can implement alteration of priorities in a straightforward fashion.

Patent
10 Apr 1990
TL;DR: In this article, a picture is divided into the plural blocks by a task control part and while referring a task table storing information required for controlling the unit processors, a processing block and a processing task optimum to each unit processor module 11 are decided.
Abstract: PURPOSE: To utilize the throughput of a multiprocessor to the maximum by dividing a picture into plural blocks and sharing the processing task equally among plural unit processor modules. CONSTITUTION: The picture is divided into plural blocks by a task control part 7; referring to a task table 8 storing the information required for controlling the unit processors, the processing block and processing task optimum for each unit processor module 11 are decided. Encoding is then executed while the processing task is shared equally among the plural unit processor modules 11. Plural shared memories 10, which store local decoding data or data under encoding and parameters, are connected to the respective unit processor modules 11 through plural independently provided memory buses, and these buses can be utilized for access to the shared memory. Thus, the throughput of the multiprocessor can be utilized to the maximum. COPYRIGHT: (C)1991,JPO&Japio

Journal ArticleDOI
TL;DR: The correlations indicate that the knowledge processing technology, evolved from applied artificial intelligence research, is a fundamental technology for building intelligent systems to support various knowledge-intensive CIM tasks at their decision level.
Abstract: As technological tasks in CIM environments become more complicated, the level of intelligence required to automate and integrate these tasks also evolves with increasing complexity. This paper classifies CIM tasks and their required intelligence into facility, data and decision levels, and discusses the automation and integration of those knowledge-intensive CIM tasks at their decision level. Since decision-level tasks are often more abstract than those at the facility and data levels, a systematic approach is necessary to build research programs for the automation of these tasks. This paper will use the decision-level task of concurrent engineering as an example to explain the five-step approach that we have adapted to form our research programs in this evolving area of CIM research. These five steps are: (1) perform analysis of the task and its needed decision-level supports, (2) conceptualize these analysis results into a concise framework, (3) propose a software paradigm for the conceptual framework, (4) identify functional requirements from this paradigm to guide software implementations, and (5) correlate implementation results to identify a fundamental technology. More specifically, the analysis of concurrent engineering tasks in CIM can be found in Section 2. Section 3 explains the conceptualization process which views decision making activities as mappings and loops between a control and performance space. In Section 4, concurrent engineering is modeled as a team problem-solving process participated in by multiple cooperating knowledge sources (MCKS) with overlapping expertise to perform those loops. Several functional requirements are identified from this MCKS model of concurrent engineering and example research activities to address these challenges are described in Section 5. 
The correlations in Section 6 indicate that the knowledge processing technology, evolved from applied artificial intelligence research, is a fundamental technology for building intelligent systems to support various knowledge-intensive CIM tasks at their decision level.

Patent
27 Nov 1990
TL;DR: In this article, the authors propose to recover the system to a normal communication state, without omission of data, even if a fault occurs in the course of transmitting/receiving the data, by synchronously executing a checkpoint restart processing between devices at the time a fault occurs.
Abstract: PURPOSE: To recover the system to a normal communication state, without omission of data, even if a fault occurs in the course of transmitting/receiving the data, by synchronously executing a checkpoint restart processing between devices at the time a fault occurs. CONSTITUTION: Data processors E1 and E2 each function as a node of this distributed processing system. The processor E1 is constituted of a communication control part A1, a check point task executing part B1, and a restart task executing part C1, and a check point file D1 is connected as an external storage device. The processor E2 is constituted in the same way. In this state, the state of the other device in communication is recognized by the check point task executing means, and an execution program corresponding to its state is saved as check point data. In this manner, between processors that are in the course of communication, the respective corresponding execution programs can be recorded as check point data, and even if a fault occurs in the course of transmitting/receiving the data, recovery can be executed without omission of the data.

Proceedings ArticleDOI
01 Dec 1990
TL;DR: This paper extends earlier work on proof of system specifications to cover more general branching behaviors of individual tasks, including cases of timed task calls and timed rendezvous.
Abstract: ADA/TL is a language for specification of the behavior of systems of communicating tasks. It merges concepts of the specification part of ADA, VDM specification of packages, and temporal logic specification of task behavior. The TL part consists of constructive specification of behaviors of individual tasks and a system specification of the properties of the interaction of tasks. A proof of a system specification consists of showing that the system property holds over all possible interleavings of the task behaviors. This paper extends earlier work on proof of system specifications to cover more general branching behaviors of individual tasks, including cases of timed task calls and timed rendezvous. The constructive specification of each individual task defines a finite state computation model of its possible behaviors with allowed communication between task computations. The proof system uses marker symbols to represent the current state within each task computation, inference rules to justify transformations from one state to the next, and a proof tableau for representing the proof steps. The method rests upon the technique of using an invariant system property to identify a finite computation model of the interaction of all the system tasks. The proof tableau symbolically traces threads of control in all branches of the finite state model of the interaction of all of the system tasks. The proof method is illustrated herein using an example of a traffic walk-light controller with a timed behavior.

Journal ArticleDOI
TL;DR: The authors found that the job profile created with the scale data is highly correlated with the profile created from a much simpler “Do you perform this task?” checklist, and concluded that the relative time-spent scale has limited incremental utility beyond a dichotomous checklist.
Abstract: Recent studies have called for the abandonment of the relative-time-spent scale in task inventories. This recommendation is based on findings that the job profile created with the scale data is highly correlated with the profile created from a much simpler “Do you perform this task?” checklist. We examined this issue using 3 inventories and 42 jobs (N=2252). Profile correlations were computed on only the tasks actually performed by incumbents to avoid possibly inflated rs due to including irrelevant tasks. The specificity of task inventory items was proposed as an explanation for the high correlation between the two job profiles. Specificity of items was examined by looking at both the type (job duties versus tasks) and the amount (number of items in job profile and average number of items relevant to each job) of items used in the inventory. Correlations between time spent and checklist profiles were in the .80's and .90's regardless of the number of irrelevant tasks or the specificity of tasks. We agree with previous military research and conclude that the relative-time-spent scale has limited incremental utility beyond a dichotomous checklist.

Patent
07 May 1990
TL;DR: In this article, the authors propose a shared resource exclusion control function that improves the use efficiency of shared resources by permitting a shared-mode lock request task to outrun a task waiting for an exclusive-mode lock only while the number of shared-mode lock request tasks that have outrun it is less than the maximum outrunning task number.
Abstract: PURPOSE: To improve the use efficiency of shared resources by permitting a shared-mode lock request task to outrun a task waiting for an exclusive-mode lock only while the number of shared-mode lock request tasks that have outrun it is less than the maximum outrunning task number. CONSTITUTION: A shared resource exclusion control function 9 consists of a lock request processing part 6, an unlock request processing part 7, and an exclusion control table 8. If the resources are not locked when a shared-mode lock request arrives, the lock right is given to the requesting task 10; if the resources are locked in shared mode, the lock right is given to the requesting task 10 unless some task is waiting for an exclusive-mode lock. Even when a task is waiting for an exclusive-mode lock, the lock right is still given to the requesting task 10 as long as the number of shared-mode lock request tasks that have outrun the exclusive-mode request (the number of outrunning tasks) is less than the maximum outrunning task number. By preventing permanent blockage in this way, resources can be used efficiently.
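The anti-starvation rule can be sketched directly. The class below is my own minimal model of the described policy, not the patent's implementation; the `MAX_OUTRUN` value of 2 is an arbitrary illustrative choice for the "maximum outrunning task number".

```python
class FairSharedLock:
    """Sketch of shared/exclusive locking with an outrun budget.

    Shared requests may overtake a waiting exclusive request only
    until MAX_OUTRUN of them have done so; after that they must
    queue, so the exclusive waiter cannot be starved forever.
    """
    MAX_OUTRUN = 2          # illustrative "maximum outrunning task number"

    def __init__(self):
        self.shared_holders = 0
        self.exclusive_held = False
        self.exclusive_waiting = False
        self.outruns = 0    # shared grants made past the exclusive waiter
        self.queued = []

    def acquire_shared(self, task):
        if self.exclusive_held:
            self.queued.append(task)
            return False
        if self.exclusive_waiting:
            if self.outruns >= self.MAX_OUTRUN:
                self.queued.append(task)      # overtake budget exhausted
                return False
            self.outruns += 1                 # overtake the waiter
        self.shared_holders += 1
        return True

    def acquire_exclusive(self, task):
        if self.shared_holders or self.exclusive_held:
            self.exclusive_waiting = True
            self.queued.append(task)
            return False
        self.exclusive_held = True
        return True
```

Once two shared requests have overtaken the waiting exclusive request, a third shared request is queued behind it, which is exactly the permanent-blockage prevention the abstract describes.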

Proceedings ArticleDOI
01 Dec 1990
TL;DR: It is found that implementing the entire Ada semantics for distributed systems is difficult, but that a pragmatic approach can give reasonable semantics.
Abstract: A system for distributing a single Ada program to a network of loosely coupled computers is described. The system is applied to the distribution of a missile control system. We have found that implementing the entire Ada semantics for distributed systems is difficult, but that a pragmatic approach can give reasonable semantics. Our approach is based on source code transformation of the single Ada program into multiple Ada programs. These programs are then compiled with a standard compiler. The units of distribution are tasks and task types, subprograms, and variables. Restrictions are implied by the task termination semantics, the exception propagation between nodes, and the synchronization of tasks at activation.

Patent
21 Sep 1990
TL;DR: Unit of Work object classes as mentioned in this paper allow concurrent processing through Unit of Work levels and instances while maintaining the integrity of the data in the database, and each new Unit of Work assigned to a task is an instance of the Unit of Work object class.
Abstract: A Unit of Work object class for an object-oriented database management system provides concurrent processing through Unit of Work levels and instances while maintaining the integrity of the data in the database. Each new Unit of Work assigned to a task is an instance of the Unit of Work object class. A Unit of Work manager controls each step so that manipulation of the data occurs only on the copies at that particular level for that particular instance. Only after all levels have completed satisfactorily does a "Commit" occur to the data in the database. If completion is not satisfactory, the levels are rolled back, preserving data integrity. The Unit of Work manager can also switch control between Unit of Work instances, permitting simultaneous performance of tasks.
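The level/commit/rollback behavior described above can be sketched in a few lines of Python. The class and method names, and the use of a dict as the "database", are illustrative assumptions, not the patent's design:

```python
class UnitOfWork:
    """Nested Unit of Work levels over a dict 'database': each level
    edits a copy, Rollback discards the copy, and only a top-level
    Commit touches the database itself."""

    def __init__(self, database):
        self._db = database        # the committed state
        self._levels = []          # stack of working copies, one per level

    def begin(self):
        base = self._levels[-1] if self._levels else self._db
        self._levels.append(dict(base))    # new level works on a copy

    def set(self, key, value):
        self._levels[-1][key] = value      # manipulate only this level's copy

    def rollback(self):
        self._levels.pop()                 # discard the copy; data unchanged

    def commit(self):
        done = self._levels.pop()
        if self._levels:                   # fold into the enclosing level
            self._levels[-1] = done
        else:                              # top level: only now touch the db
            self._db.clear()
            self._db.update(done)
```

Note that an inner rollback never disturbs the enclosing level, and nothing reaches the database until the outermost commit, which is the integrity property the abstract emphasizes.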

Proceedings ArticleDOI
02 Jan 1990
TL;DR: The authors describe the representation of an externally viewable state of an Ada task and define operators to specify a task behavior as a sequence of state conditions that lead directly to the design of task-rendezvous behavior.
Abstract: The authors report work on a language called Ada/TL for the specification of the temporal behavior of interacting Ada tasks in both concurrent and distributed systems. Ada/TL is an extension of the task-specification declarations required by Ada. The extensions include temporal assertions about rendezvous and other events of external interactions, and nontemporal in and out assertions about parameters and other data items that flow between tasks. Linear-time operators are used to specify the sequential behavior of individual tasks, and branching-time operators to specify global properties about the interaction of tasks. It is intended that task specifications follow the style of Ada declarations and be constructive inasmuch as they lead directly to the design of task-rendezvous behavior. The authors describe the representation of an externally viewable state of an Ada task and define operators to specify a task behavior as a sequence of state conditions. Specifications are illustrated for examples of tasks using shared resources and interaction using both synchronous and asynchronous communication. Specification of timing constraints and analysis of global correctness of specifications are discussed.
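The idea of specifying task behavior as a sequence of state conditions can be illustrated with a tiny trace checker. This is hedged Python pseudocode for the general linear-time notions, not Ada/TL syntax, and the rendezvous states are hypothetical:

```python
# A task's externally viewable behavior is modeled as a sequence (trace)
# of states; linear-time operators check the trace against a specification.

def always(pred):
    return lambda trace: all(pred(s) for s in trace)

def eventually(pred):
    return lambda trace: any(pred(s) for s in trace)

def in_order(*preds):
    """Holds if the predicates are satisfied in order along the trace."""
    def check(trace):
        i = 0
        for state in trace:
            if i < len(preds) and preds[i](state):
                i += 1
        return i == len(preds)
    return check

# Hypothetical states of a task around a rendezvous.
trace = [{"at": "waiting"}, {"at": "accepted"}, {"at": "completed"}]

spec = in_order(lambda s: s["at"] == "waiting",
                lambda s: s["at"] == "accepted",
                lambda s: s["at"] == "completed")
assert spec(trace)
assert eventually(lambda s: s["at"] == "accepted")(trace)
assert always(lambda s: "at" in s)(trace)
```

Branching-time operators, which Ada/TL uses for global properties, would quantify over sets of possible traces rather than a single one.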

Journal ArticleDOI
TL;DR: A simultaneous-access priority queue design that handles p accesses every O(log p) time is presented, and a design that can pipeline accesses in constant time is proposed, achieving a significant performance improvement.

Patent
11 Jan 1990
TL;DR: A MIMD-type computer device is formed from plural computers, each consisting of an operation means 200 and a management unit 201; the management units 201 are connected through a set of links 202 and exchange data with the other computers.
Abstract: PURPOSE: To improve operation speed by separating the management element from the operation means. CONSTITUTION: The computer device is formed from plural computers, each consisting of an operation means 200 and a management unit 201. The operation means 200 comprises plural basic processors 100 and a control unit 504, connected by buses 47; the control unit 504 is connected to the management unit 201 through a bus 204. The management unit 201 monitors the execution of tasks on the operation means 200, and generates and adjusts the schedule. The management units 201 are interconnected through a set of links 202 and exchange data with the other computers. In this way an MIMD-type computer device is constituted.
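The division of labor the patent describes, management separated from computation, can be sketched with threads. Everything here (function names, queue-based dispatch, shutdown markers) is an illustrative assumption, not the patented hardware design:

```python
import queue
import threading

def management_unit(tasks, n_workers=2):
    """Plays the management role: schedules tasks and collects results,
    while the worker threads are pure 'operation means' that only compute."""
    work, results = queue.Queue(), queue.Queue()

    def operation_means():
        while True:
            task = work.get()
            if task is None:          # management signals shutdown
                return
            results.put(task())      # computation only, no scheduling logic

    workers = [threading.Thread(target=operation_means)
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for t in tasks:                  # the 'schedule': dispatch the tasks
        work.put(t)
    for _ in workers:                # one shutdown marker per worker
        work.put(None)
    for w in workers:
        w.join()
    return [results.get() for _ in tasks]
```

The point of the separation is that the computing elements never spend cycles on scheduling decisions, which is the speed improvement the PURPOSE clause claims.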

Patent
13 Feb 1990
TL;DR: In this article, the authors propose to substantially invalidate the software copied illegally and also to improve the reliability of said invalidity processing by using a prescribed procedure of the process against the illegal copy of the software.
Abstract: PURPOSE: To effectively invalidate illegally copied software, and to improve the reliability of that invalidation, by using a prescribed procedure against illegal copying. CONSTITUTION: A nucleus 11 writes illegal data that prevents program operation into a rewritable memory area consulted during program execution, e.g., a prescribed area of the system work area SWE of a RAM 3. When execution of an unlock task 17 starts, a personal identification number is received from outside and, at the same time, a key code previously set in the task 17 is read out; it is then decided whether the entered identification number matches the key code. If it does, the written illegal data is rewritten into normal data and execution of the task 17 proceeds to completion.
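The poison-then-repair scheme reduces to a few lines. The variable names, the guard value, and the key code below are hypothetical stand-ins for the patent's memory area and stored key:

```python
# The nucleus poisons a work area that the program consults at run time;
# the unlock task repairs it only when the entered identification number
# matches the stored key code.
work_area = {"guard": "INVALID"}   # illegal data written by the nucleus
KEY_CODE = "1234"                  # key code previously set in the unlock task

def unlock_task(pin):
    if pin == KEY_CODE:            # identification number matches key code?
        work_area["guard"] = "OK"  # rewrite the illegal data to normal data
    return work_area["guard"] == "OK"
```

An illegal copy that bypasses the unlock task still carries the poisoned work area, so the program refuses to operate, which is how the copy is "substantially invalidated" without any network check.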

Proceedings ArticleDOI
06 Nov 1990
TL;DR: BaLinda (Biddle and Linda) Lisp is a parallel execution Lisp dialect designed to take advantage of the architectural capabilities of Biddle (bidirectional data driven Lisp engine) to provide good support for parallel execution.
Abstract: The authors describe BaLinda (Biddle and Linda) Lisp, a parallel-execution Lisp dialect designed to take advantage of the architectural capabilities of Biddle (bidirectional data-driven Lisp engine). The Future construct is used to initiate parallel execution threads, which may communicate through Linda-like commands operating on a tuple space. These features provide good support for parallel execution and blend together well, with notational consistency and simplicity. Unstructured task initiation and termination commands are avoided, while both mandatory and speculative parallelism (lazy versus eager execution) are supported.
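The two ingredients the abstract combines, futures for thread creation and a tuple space for communication, can be approximated in Python. This is a toy sketch, not BaLinda syntax, and the tuple space is deliberately simplified:

```python
import queue
from concurrent.futures import ThreadPoolExecutor

class TupleSpace:
    """Toy tuple space (far simpler than Linda: take() returns tuples in
    FIFO order instead of matching them against templates)."""
    def __init__(self):
        self._q = queue.Queue()

    def out(self, *tup):      # deposit a tuple into the space
        self._q.put(tup)

    def take(self):           # Linda-style 'in': blocking withdrawal
        return self._q.get()

space = TupleSpace()
with ThreadPoolExecutor() as pool:
    # A future initiates a parallel execution thread...
    f = pool.submit(lambda: sum(range(10)))
    # ...and its result is communicated through the tuple space.
    space.out("result", f.result())
```

Because threads only ever rendezvous through `out`/`take`, no unstructured start/kill commands are needed, which mirrors the design goal the abstract states.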