
Showing papers on "Task (computing) published in 1993"


Patent
06 Aug 1993
TL;DR: In this paper, a method of using a computer system to model work processes that allows access by multiple users and dynamic modification of the work process model is presented, where a plurality of tasks are completed in the course of completing a work process and a set of expected courses of action performed in completing the task is defined.
Abstract: A method of using a computer system to model work processes that allows access by multiple users and dynamic modification of the work process model. The method defines a plurality of tasks that are completed in the course of completing the work process. For each of the tasks, a set of expected courses of action performed in completing the task is defined. For each of the expected courses of action, one or more obligations that must be fulfilled upon the performance of such expected courses of action are associated with the expected courses of action. The method also includes associating obligations with the activation of one or more other tasks. The method tracks the activation and completion of the tasks. After the activation and completion of the tasks begins, tasks, expectations and obligations may be modified or added.

153 citations
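The model above (tasks, expected courses of action, obligations triggered by performing a course, and modification after activation) can be sketched as a small data structure. All class, method, and task names below are hypothetical illustrations, not taken from the patent:

```python
# Hypothetical sketch of the work-process model: tasks carry expected
# courses of action, each course carries obligations, and the model may
# be modified while tasks are active.
class Task:
    def __init__(self, name):
        self.name = name
        self.courses = {}          # course of action -> list of obligations
        self.active = False
        self.completed = False

    def add_course(self, course, obligations):
        self.courses[course] = list(obligations)

class WorkProcess:
    def __init__(self):
        self.tasks = {}

    def define_task(self, name):
        self.tasks[name] = Task(name)
        return self.tasks[name]

    def perform(self, task_name, course):
        """Performing a course of action returns the obligations it triggers."""
        task = self.tasks[task_name]
        task.active = True
        return task.courses.get(course, [])

process = WorkProcess()
review = process.define_task("review-document")
review.add_course("approve", ["notify-author", "archive-copy"])
obligations = process.perform("review-document", "approve")
# Dynamic modification: a new course may be added after activation begins.
review.add_course("reject", ["return-to-author"])
```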


Patent
Tim Wood1
30 Sep 1993
TL;DR: In this paper, a data backup system implements coordination between a Database Server and a Backup Server to produce a recoverable database dump by utilizing a technique referred to as stripe affinity, a mechanism is disclosed for ensuring the integrity of a database backup made to multiple archive devices simultaneously.
Abstract: A data backup system implements coordination between a Database Server and a Backup Server to produce a recoverable database dump. By utilizing a technique referred to as stripe affinity, a mechanism is disclosed for ensuring the integrity of a database backup made to multiple archive devices simultaneously. In addition, by utilizing stripe affinity, archived data may be reloaded from fewer archive devices than were used to make the original backup. A task scheduler mechanism allocates processor time among the tasks that comprise the backup system. In this way the I/O service tasks can process their event queues while the current set of allocation pages are also being processed.

128 citations


Patent
03 Nov 1993
TL;DR: In this paper, a server for remote procedure call processing is described, including a dispatcher, a plurality of worker tasks, and a dispatcher shared memory area and worker control block for each worker task.
Abstract: A server for executing operation calls by a client, including a dispatcher, a plurality of worker tasks, and a dispatcher shared memory area and worker control block for each worker task. Each operation call provided from a client is a sequence of one or more remote procedure call requests and each includes a packed buffer containing parameters. The dispatcher receives a buffer directly into the dispatcher shared memory space of the worker task selected to execute the remote procedure call request, sets the semaphore and sends a request acceptance response. The selected worker task unpacks the buffer into its memory space, executes the request, places the results into a packed buffer in its dispatcher shared memory area and sends a remote procedure call to the dispatcher. The dispatcher executes a remote procedure call to the client and sends the result buffer directly from the shared memory area. The server further includes a dispatcher state save mechanism and the remote procedure call from the worker task includes an identifier of the corresponding saved dispatcher state for the request. The packed buffer associated with a request includes client information and each worker task stores the client information in a worker shared memory space common to the worker tasks and each request of an operation call may be assigned to a different worker task.

103 citations
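The dispatch path described in this entry (a packed request buffer placed directly in the selected worker's shared memory area, a semaphore set to signal the worker, and a packed result produced) can be sketched in miniature. The names and the use of Python threads as a stand-in for worker tasks are assumptions for illustration, not the patent's mechanism:

```python
# Hypothetical sketch: the dispatcher writes the packed buffer into the
# worker's shared area and releases a semaphore; the worker wakes, unpacks
# the buffer into its own memory space, and produces a result.
import threading

class Worker:
    def __init__(self):
        self.shared_buffer = None       # dispatcher shared memory area
        self.semaphore = threading.Semaphore(0)
        self.result = None

    def run_once(self):
        self.semaphore.acquire()        # wait until a request is posted
        params = self.shared_buffer     # unpack buffer into own memory space
        self.result = ("result", params["op"], params["args"])

def dispatch(worker, packed_request):
    worker.shared_buffer = packed_request   # buffer received directly
    worker.semaphore.release()              # set the semaphore

worker = Worker()
t = threading.Thread(target=worker.run_once)
t.start()
dispatch(worker, {"op": "add", "args": (2, 3)})
t.join()
```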


Journal Article
TL;DR: The basic transaction model has evolved over time to incorporate more complex transaction structures and to take advantage of the semantics of higher-level operations that cannot be seen at the level of page reads and writes.
Abstract: The basic transaction model has evolved over time to incorporate more complex transaction structures and to take advantage of the semantics of higher-level operations that cannot be seen at the level of page reads and writes. Well-known examples of such extended transaction models include nested and multi-level transactions. A number of relaxed transaction models have been defined in the last several years that permit a controlled relaxation of transaction isolation and atomicity to better match the requirements of various database applications. Correctness criteria other than global serializability have also been proposed. Several examples of extended and relaxed transaction models are reported in the literature. Recently, transaction concepts have begun to be applied to support applications or activities that involve multiple tasks of possibly different types (including, but not limited to, transactions) executed over different types of entities (including, but not limited to, DBMSs). The designer of such applications may specify inter-task dependencies to define task coordination requirements, and sometimes additional requirements for isolation and failure atomicity of the application. We will refer to such applications as multi-system transactional workflows. While such workflows can be developed using ad hoc methods, it is desirable that they maintain at least some of the safeguards of transactions related to the correctness of computations and data integrity. Below we discuss briefly the specification and execution issues in this evolving field, with emphasis on the role of database transaction concepts. The idea of a workflow can be traced to the Job Control Languages (JCL) of batch operating systems, which allowed the user to specify a job as a collection of steps. Each step was an invocation of a program, and the steps were executed as a sequence; some steps could be executed conditionally. This simple idea was subsequently expanded in many products and research prototypes by allowing structuring of the activity and providing control for concurrency and commitment. The extensions allow the designer of a multitask activity to specify the data and control flow among tasks and to selectively choose transactional characteristics of the activity based on its semantics. The work in this area has been influenced by the concept of long-running activities. Workflows discussed in this paper may be long-running or not. Other related terms used in the database literature are task flow, multitransaction activities, multi-system applications, application multiactivities, and networked applications. Some related issues are also addressed in various relaxed transaction models. A fundamental problem with many extended and relaxed transaction models is that they provide a predefined set of properties that may or may not be required by the semantics of a particular activity. Another problem with adopting these models for designing and implementing workflows is that the systems involved in the processing of a workflow may not provide support for facilities implied by an extended or relaxed transaction model. Furthermore, the extended and relaxed transaction models are mainly geared towards processing entities that are DBMSs providing transaction management features, often assumed to be of a particular restrictive type, with the focus on preserving data consistency rather than on coordinating independent tasks on different entities, including legacy systems.

101 citations


Patent
12 Apr 1993
TL;DR: In this paper, a method for pre-assignment and pre-scheduling of tasks is employed that enables allocation across multiple physical processors arranged in a variety of architectures.
Abstract: A method is employed for pre-assignment and pre-scheduling of tasks that enables allocation across multiple physical processors arranged in a variety of architectures. The method comprises the steps of: constructing a DFG of tasks to be performed to provide a solution for a problem; determining cost values for each task and the overall problem, such cost values taking into account a target multiprocessor architecture and factors such as elapsed task execution times. The method pre-assigns the tasks to logical processors and assures that inter-dependent tasks are executable by logical processors that are within required communications delay criteria of each other. The assigning action attempts to arrive at a minimal cost value for all tasks comprising the problem. The pre-assigned tasks are then pre-scheduled based upon a performance criteria and are converted to machine code. The machine code is then deployed to physical processors in the target multi-processor architecture. The deploying action maps the logical processors' pre-assigned programs (comprising assigned tasks) onto physical processors, using data regarding the multi-processor architecture and the current utilization of the physical processors in the architecture, all while assuring that inter-dependent tasks are mapped so as to fulfill interprocessor communication delay criteria.

99 citations
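The pre-assignment step described above (placing tasks from a data-flow graph onto logical processors while keeping inter-dependent tasks within a communication-delay bound and minimizing a cost value) can be sketched as a greedy heuristic. This is a simplified illustration under assumed inputs, not the patented method; all names are hypothetical, and tasks are assumed to arrive in topological order:

```python
# Hypothetical greedy sketch of DFG pre-assignment: each task is placed on
# the logical processor that minimizes accumulated cost, rejecting
# placements that would put a dependent task outside the allowed
# communication delay of its predecessors.
def preassign(tasks, deps, n_processors, exec_cost, comm_delay, max_delay):
    """tasks: topologically ordered task names; deps: {task: [predecessors]};
    exec_cost: {task: cost}; comm_delay: delay between distinct processors."""
    assignment = {}
    load = [0.0] * n_processors
    for task in tasks:
        best, best_cost = None, None
        for p in range(n_processors):
            # Reject placements violating the interprocessor delay criterion.
            if any(assignment[d] != p and comm_delay > max_delay
                   for d in deps.get(task, [])):
                continue
            cost = load[p] + exec_cost[task]
            if best_cost is None or cost < best_cost:
                best, best_cost = p, cost
        assignment[task] = best
        load[best] += exec_cost[task]
    return assignment

# Toy problem: b depends on a; the delay bound forces them onto the same
# logical processor, while independent c goes to the least-loaded one.
assignment = preassign(["a", "b", "c"], {"b": ["a"]}, 2,
                       {"a": 1.0, "b": 1.0, "c": 1.0},
                       comm_delay=5.0, max_delay=1.0)
```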


Patent
10 Dec 1993
TL;DR: In this article, a multitasking controller is described comprising task storage means (2) for storing up to N tasks (P0,P1,P2,P3) each comprising a sequence of instructions, a microprocessor for processing, by time-sharing, a plurality of such N tasks, and a random access memory for storing variable data created and used by said microprocessor.
Abstract: A multitasking controller comprising task storage means (2) for storing up to N tasks (P0,P1,P2,P3) each comprising a sequence of instructions, a microprocessor for processing, by time-sharing, a plurality of such N tasks, and a random access memory (12) for storing variable data created and used by said microprocessor. The microprocessor further comprises a scheduler (7) realized in hardware for controlling the use of said microprocessor by said tasks, and program counter storage means for storing N program counters (Pc0,Pc1,Pc2,Pc3), each for use by the scheduler (7) to control the instruction sequence of a separate one of said N tasks, so that the scheduler (7) is able to select a different one of the program counters (Pc0,Pc1,Pc2,Pc3) when the task processed by the microprocessor is changed, without the transfer of data from the random access memory (12). FIG. 1

80 citations
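The key idea of this controller, one stored program counter per task so that a task switch only selects a different counter rather than moving context through data memory, can be sketched in software. This is a toy round-robin model with hypothetical names, not the hardware scheduler itself:

```python
# Hypothetical software sketch of the hardware scheduler: each of N tasks
# keeps its own program counter; switching tasks merely selects another
# counter instead of saving/restoring state via RAM.
class Scheduler:
    def __init__(self, programs):
        self.programs = programs              # task id -> list of instructions
        self.pc = {t: 0 for t in programs}    # one program counter per task
        self.order = list(programs)
        self.current = 0

    def step(self):
        """Execute one instruction of the current task, then switch tasks."""
        task = self.order[self.current]
        instr = self.programs[task][self.pc[task] % len(self.programs[task])]
        self.pc[task] += 1
        self.current = (self.current + 1) % len(self.order)  # round-robin
        return task, instr

sched = Scheduler({"P0": ["i0", "i1"], "P1": ["j0", "j1"]})
trace = [sched.step() for _ in range(4)]
```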


Proceedings ArticleDOI
18 Sep 1993
TL;DR: The construction of the virtual work space for pick-and-place tasks with a new 3D interface device named SPIDAR II is discussed, and the results indicate that the appropriate forces are important for the pick- and-place task.
Abstract: The construction of the virtual work space for pick-and-place tasks with a new 3D interface device named SPIDAR II is discussed. The device can measure the motions of the thumb and the forefinger, and can provide the force sensations to the thumb and the forefinger. The operator can manipulate the virtual objects directly in the virtual work space using the device. The pick-and-place tasks are performed in the virtual space. The effects of the force sensations which are provided by the device are estimated. The results indicate that the appropriate forces are important for the pick-and-place task. The virtual block gives the best performance of pick-and-place tasks in virtual work space.

61 citations


Patent
26 Oct 1993
TL;DR: In this article, a multi-media user task (host) computer is interfaced to a high speed DSP which provides support functions to the host computer via an interprocessor DMA bus master and controller.
Abstract: A multi-media user task (host) computer is interfaced to a high speed DSP which provides support functions to the host computer via an interprocessor DMA bus master and controller. Support of multiple dynamic hard real-time signal processing task requirements are met by posting signal processor support task requests from the host processor through the interprocessor DMA controller to the signal processor and its operating system. The signal processor builds data transfer packet request execution lists in a partitioned queue in its own memory and executes internal signal processor tasks invoked by users at the host system by extracting signal sample data from incoming data packets presented by the interprocessor DMA controller in response to its execution of the DMA packet transfer request queues built by the signal processor in the partitioned queue. Processed signal values etc. are extracted from signal processor memory by the DMA interprocessor controller executing the partitioned packet request lists and delivered to the host processor. A very large number of packet transfers in support of numerous user tasks and implementing a very large number of DMA channels is thus made possible while avoiding the need for arbitration between the channels on the part of the signal processor or the host processor.

61 citations


Proceedings ArticleDOI
Keith D. Swenson1
01 Dec 1993
TL;DR: A model for collaborative work process and a graphical language to support this model is presented, which allows for informal flow of communications and flexible access to information along with a formal flow of responsibility.
Abstract: A model for collaborative work process and a graphical language to support this model is presented. The model allows for informal flow of communications and flexible access to information along with a formal flow of responsibility. Work is decomposed into a network of task assignments (actually requests for those tasks), which may be recursively decomposed to finer grained tasks. The model includes consideration for authority and responsibility. Process flow can be dynamically modified. Policies (templates for a process) may be tailored to provide versions of a process customized for different individuals. The visual language is designed to ease the creation of policies and modification of ongoing processes, as well as to display the status of an active process.

56 citations


Proceedings ArticleDOI
01 Dec 1993
TL;DR: It is proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a nondedicated distributed system.
Abstract: The authors address the feasibility of a nondedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks. They develop an analytical model to predict parallel job response times. The model provides insight into how significantly the workstation owner interference degrades parallel program performance. A new term, task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It is proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a nondedicated distributed system.

48 citations
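The task ratio itself is a simple quotient: parallel task demand divided by the mean service demand of the nonparallel workstation processes. A minimal illustration, where the numeric values and the decision threshold are assumptions for the example, not figures from the paper:

```python
# Hypothetical illustration of the "task ratio" metric: parallel task
# demand relative to the mean service demand of local workstation
# processes. Larger ratios suggest the parallel job can use a
# nondedicated system efficiently despite owner interference.
def task_ratio(parallel_task_demand, mean_local_demand):
    return parallel_task_demand / mean_local_demand

def worth_running(ratio, threshold=10.0):
    """Toy decision rule; the threshold is an assumption, not from the paper."""
    return ratio >= threshold

ratio = task_ratio(parallel_task_demand=500.0, mean_local_demand=20.0)
```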


Journal ArticleDOI
TL;DR: It is proved that all three criteria cannot be simultaneously satisfied by any or-parallel execution model based on a finite number of processors but unbounded memory.
Abstract: We discuss fundamental limitations of or-parallel execution models of nondeterministic programming languages. Or-parallelism corresponds to the execution of different nondeterministic computational paths in parallel. A natural way to represent the state of (parallel) execution of a nondeterministic program is by means of an or-parallel tree. We identify three important criteria that underlie the design of or-parallel implementations based on the or-parallel tree: constant-time access to variables, constant-time task creation, and constant-time task switching, where the term constant-time means that the time for these operations is independent of the number of nodes in the or-parallel tree, as well as the size of each node. We prove that all three criteria cannot be simultaneously satisfied by any or-parallel execution model based on a finite number of processors but unbounded memory. We discuss in detail the application of our result to the class of logic programming languages and show how our result can serve as a useful way to categorize the various or-parallel methods proposed in this field. We also discuss the suitability of different or-parallel implementation strategies for different parallel architectures.

Patent
Dennis L. Venable1
15 Jun 1993
TL;DR: In this paper, a control system for pipelined image processing emulates a multi-tasking environment using a single tasking application, where a number of predefined image processing tasks are provided in a library.
Abstract: A control system for pipelined image processing emulates a multi-tasking environment using a single tasking application. A number of predefined image processing tasks are provided in a library. When a host application needs a processed image from an image source, the host application creates a pipeline of initialized instantiations of one or more of the tasks from the library. When the host application invokes the pipeline, the first data request for the header of the image travels upstream in a first channel. The processed image header is returned down the first channel. Then a data request for scanlines of image data is sent upstream in a second data channel. The data request ripples upstreamwardly to the upstream-most instantiation of one of the tasks from the task library. The upstream-most instantiation of a task obtains a scan line from an image data source and returns it downstreamwardly to the host application in the second channel. Each instantiation of a task from the task library further operates on the image data. Once all of the scanlines have been processed, the memory allocations and data structures created during initialization are released to free up that memory.
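The pull-based flow described here, where a data request ripples upstream to the source task and each downstream task operates on the scanline as it returns, can be sketched with a pair of chained task objects. The class and transform names are hypothetical, and the three-pixel "image source" is a stand-in:

```python
# Hypothetical sketch of the pull-based pipeline: a scanline request
# ripples upstream to the source task; each task transforms the returned
# data on its way back toward the host application.
class PipelineTask:
    def __init__(self, transform, upstream=None):
        self.transform = transform
        self.upstream = upstream   # next task toward the image source

    def request_scanline(self, n):
        # The upstream-most task produces the raw scanline; every other
        # task forwards the request, then operates on the returned data.
        raw = (self.upstream.request_scanline(n)
               if self.upstream else [n, n, n])   # stand-in image source
        return self.transform(raw)

source = PipelineTask(lambda line: line)                    # reads raw data
invert = PipelineTask(lambda line: [255 - v for v in line], upstream=source)
line = invert.request_scanline(1)
```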

Patent
21 Jan 1993
TL;DR: In this article, a queue system is described in which each queue is defined by a set of criteria; the queue system comprises a plurality of header registers, where each header register defines a queue in the queue system, and a plurality of task registers, where each task register can be associated with any queue in the queue system.
Abstract: A queue system in which each queue is defined by a set of criteria comprises a plurality of header registers, where each header register defines a queue in the queue system, and a plurality of task registers, where each task register can be associated with any queue in the queue system. Each header register has a unique address and contains a previous field and a next field. Each task register has a unique address and contains a previous field and a next field. Each previous field and next field stores the address of another register in a given queue, such that each queue is formed in a doubly linked structure. Control means is provided for dynamically assigning task registers to queues by controlling the addresses stored in the previous and next fields of each header and task register, such that each of said task registers is always assigned to a queue in the queue system.
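The register layout described here, a header register anchoring a circular doubly linked list of task registers through previous/next address fields, can be modeled directly. The register-file dictionary and the addresses below are illustrative assumptions, not from the patent:

```python
# Hypothetical sketch: a header register and task registers linked through
# "previous" and "next" address fields form a circular doubly linked
# queue; moving a task between queues only rewrites a few address fields.
regs = {}   # register file: address -> register

class Register:
    def __init__(self, addr):
        self.addr, self.prev, self.next = addr, addr, addr  # empty: self-loop
        regs[addr] = self

def enqueue(header_addr, task_addr):
    """Insert the task register at the tail of the queue rooted at header."""
    header, task = regs[header_addr], regs[task_addr]
    tail = regs[header.prev]
    task.prev, task.next = tail.addr, header.addr
    tail.next = task.addr
    header.prev = task.addr

def queue_members(header_addr):
    members, cur = [], regs[header_addr].next
    while cur != header_addr:           # walk the circular list
        members.append(cur)
        cur = regs[cur].next
    return members

Register(0x10)                  # header register defining one queue
Register(0x20); Register(0x21)  # task registers
enqueue(0x10, 0x20)
enqueue(0x10, 0x21)
members = queue_members(0x10)
```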

Patent
22 Dec 1993
TL;DR: In this paper, the authors propose a priority routine for determining which tasks should be performed in each of the time slots, and if a task is not completely performed during the time slot to which the task was allocated, the current status of the task is saved to memory so that the task can be completed during a subsequent time slot.
Abstract: A controller of the type used in process control includes a plurality of modular I/O units. The I/O units include I/O circuits which may be of four basic types: digital input circuits, digital output circuits, analog input circuits and analog output circuits. The controller is microprocessor-controlled and has an operating system that controls the performance of a number of tasks relating to the control of a plurality of I/O devices to which the controller is connected. Each of the tasks is allocated to one of a plurality of successive time slots, and each task is performed during its associated time slot. The controller includes a priority routine for determining which of the tasks should be performed in each of the time slots. If a task is not completely performed during the time slot to which the task is allocated, the current status of the task is saved to memory so that the task can be completed during a subsequent time slot. If a task is completely performed during its allocated time slot, then one or more unfinished tasks can be performed during that time slot.
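The save-and-resume behavior described here, where an unfinished task's status is carried into a later slot and spare slot time is given to unfinished tasks, can be sketched with a simple work-unit model. The budget units and task names are illustrative assumptions:

```python
# Hypothetical sketch of the time-slot scheme: each slot has a fixed work
# budget; an unfinished task has its remaining work "saved to memory" and
# is resumed in a later slot, which may also finish early and absorb it.
def run_slots(tasks, slot_budget):
    """tasks: list of (name, work_units); returns task completion order."""
    saved = []                       # unfinished tasks carried forward
    completed = []
    pending = list(tasks)
    while pending or saved:
        budget = slot_budget
        # Resume saved tasks first, then the task allocated to this slot.
        queue = saved + pending[:1]
        saved, pending = [], pending[1:]
        for name, work in queue:
            done = min(work, budget)
            budget -= done
            if work - done > 0:
                saved.append((name, work - done))   # save status to memory
            else:
                completed.append(name)
    return completed

order = run_slots([("t1", 3), ("t2", 1), ("t3", 2)], slot_budget=2)
```

With a budget of 2 units per slot, t1 overruns its first slot, is saved, and completes at the start of the second slot, which then also finishes t2.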

01 Jan 1993
TL;DR: A new implementation technique for LTC that allows full caching of the stack is described: the message-passing (MP) protocol. The approach builds on lazy task creation (LTC), a dynamic task partitioning mechanism that dramatically reduces the cost of task creation and consequently makes it possible to exploit fine-grain parallelism.
Abstract: This thesis describes a high-performance implementation technique for Multilisp's "future" parallelism construct. This method addresses the non-uniform memory access (NUMA) problem inherent in large scale shared-memory multiprocessors. The technique is based on lazy task creation (LTC), a dynamic task partitioning mechanism that dramatically reduces the cost of task creation and consequently makes it possible to exploit fine grain parallelism. In LTC, idle processors get work to do by "stealing" tasks from other processors. A previously proposed implementation of LTC is the shared-memory (SM) protocol. The main disadvantage of the SM protocol is that it requires the stack to be cached suboptimally on cache-incoherent machines. This thesis proposes a new implementation technique for LTC that allows full caching of the stack: the message-passing (MP) protocol. Idle processors ask for work by sending "work request" messages to other processors. After receiving such a message a processor checks its private stack and task queue and sends back a task if one is available. The message passing protocol has the added benefits of a lower task creation cost and simpler algorithms. Extensive experiments evaluate the performance of both protocols on large shared-memory multiprocessors: a 90 processor GP1000 and a 32 processor TC2000. The results show that the MP protocol is consistently better than the SM protocol. The difference in performance is as high as a factor of two when a cache is available and a factor of 1.2 when a cache is not available. In addition, the thesis shows that the semantics of the Multilisp language does not have to be impoverished to attain good performance. The laziness of LTC can be exploited to support at virtually no cost several programming features including: the Katz-Weise continuation semantics with legitimacy, dynamic scoping, and fairness.
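The core of the MP protocol as described above is that an idle processor asks for work by message, and the receiver checks its private task queue and sends back a task if one is available. A minimal single-threaded sketch, with hypothetical names and no real message transport:

```python
# Hypothetical single-threaded sketch of the MP protocol's core exchange:
# an idle processor's "work request" message reaches a busy processor,
# which replies with a task from its private queue if one is available.
from collections import deque

class Processor:
    def __init__(self, name):
        self.name = name
        self.tasks = deque()   # private task queue

    def handle_work_request(self):
        """Reply to a work-request message with a task, or None if empty."""
        return self.tasks.popleft() if self.tasks else None

busy = Processor("p0")
busy.tasks.extend(["future-a", "future-b"])
idle = Processor("p1")
stolen = busy.handle_work_request()   # idle's request reaches busy
if stolen is not None:
    idle.tasks.append(stolen)
```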

DOI
19 Nov 1993
TL;DR: In this article, the feasibility of a nondedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is investigated and an analytical model is developed to predict parallel job response times.
Abstract: The authors address the feasibility of a nondedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks. They develop an analytical model to predict parallel job response times. The model provides insight into how significantly the workstation owner interference degrades parallel program performance. A new term, task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It is proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a nondedicated distributed system.

Journal ArticleDOI
TL;DR: Four components for achieving an approximate processing blackboard system are described: a parameterized low-level control loop that allows predictable knowledge source execution, multiple execution channels that allow dynamic control over the computation involved in each task, a meta-controller that represents current and future tasks with their estimated durations, and a real-time scheduler that monitors and modifies tasks during execution so that deadlines are met.
Abstract: Approximate processing is an approach to real-time AI problem solving in domains in which compromise is possible between the resources required to generate a solution and the quality of that solution. It is a satisficing approach in which the goal is to produce acceptable solutions within the available time and computational resource constraints. Previous work has shown how to integrate approximate processing knowledge sources within the blackboard architecture. However, in order to solve real-time problems with hard deadlines using a blackboard system, we need to have: (1) a predictable blackboard execution loop, (2) a representation of the set of current and future tasks and their estimated durations, and (3) a model of how to modify those tasks when their deadlines are projected to be missed, and how the modifications will affect the task durations and results. This paper describes four components for achieving this goal in an approximate processing blackboard system. A parameterized low-level control loop allows predictable knowledge source execution, multiple execution channels allow dynamic control over the computation involved in each task, a meta-controller allows a representation of the set of current and future tasks and their estimated durations and results, and a real-time blackboard scheduler monitors and modifies tasks during execution so that deadlines are met. An example is given that illustrates how these components work together to construct a satisficing solution to a time-constrained problem in the Distributed Vehicle Monitoring Testbed (DVMT).

02 Jan 1993
TL;DR: This dissertation proposes, develops, and demonstrates a computational system that performs more office tasks than previously possible and provides powerful tools for communication and coordination.
Abstract: This dissertation proposes, develops, and demonstrates a computational system that performs more office tasks than previously possible. This system is based on highly expressive formal languages and speech act theory. Because of the complexity of communication, this system divides office tasks into two types--those that involve communication (the saying tasks) and those that do not (the doing tasks). This general-purpose system performs, monitors, and coordinates tasks that are distributed over time, place, people, and applications. This system is based on observations about the nature of work and effective and efficient workers. The architecture of the system involves four main parts: (1) a formal language for communication, (2) a system for managing messages in this formal language, (3) a formal language for describing tasks, and (4) a system that performs, coordinates, and monitors these tasks. The formal language for communication, previously defined by Kimbrough and associates, is described and slightly modified. A new theory of formal language communication is derived from an existing theory of natural language communication. The requirements for a system of managing messages are described and investigated. A declarative, formal language is defined for describing the tasks the system can perform. Finally, a system that interprets these task descriptions is described. This system not only executes tasks but monitors their success and coordinates the people and applications that perform them. The fully integrated system provides powerful tools for communication and coordination. Several prototype applications demonstrate the usefulness of the architecture and the underlying communication theory. Both systems are shown to be implementable and plausibly useful for businesses to integrate into their information systems. The communication theory is shown to be useful though many steps in the process must be further investigated.

Patent
Hiroshi Tsukahara1
09 Dec 1993
TL;DR: In this paper, a client server system to which a plurality of client machine devices are coupled is described, where each client machine device executes a database data generation task and database data processing task which are a pluralityof functional tasks each corresponding to a different function.
Abstract: A client server system to which a plurality of client machine devices are coupled. Each client machine device executes a database data generation task and a database data processing task, which are functional tasks each corresponding to a different function. A master task and common memory for controlling each database data generation task and database data processing task are disposed in each client machine device to allow a function originally handled by a client machine device in a work load concentration state or abnormal state to be handled by another client machine device.

Patent
30 Aug 1993
TL;DR: In this paper, a method and system for providing a user interface in a data processing system to be utilized for performing a plurality of tasks on a pluralityof diverse central processing complexes is presented.
Abstract: A method and system for providing a user interface in a data processing system to be utilized for performing a plurality of tasks on a plurality of diverse central processing complexes, wherein processes utilized to perform the plurality of tasks are transparent to a user, and wherein the user interface utilized to perform the plurality of tasks is common across diverse central processor complexes. A library containing interface parameters for each central processing complex is established. The interface parameters include information necessary to tailor the user interface for the specific target central processing complex, as well as processes for performing selected tasks within each of the central processing complexes. The user is prompted to select a task for at least one of the diverse central processing complexes. At least one interface parameter from the library of interface parameters is selected in response to the user selecting a task for at least one of the diverse central processing complexes. The selected task is then performed, utilizing the interface parameter or parameters to execute processes for performing the selected task within a selected one of the diverse central processing complexes. As a result, the user interface allows the user to transparently execute processes utilized to perform a task on a central processing complex.

Book ChapterDOI
25 Aug 1993
TL;DR: This paper proposes an approach to the interleaving of execution and planning for dynamic tasks by groups of multiple agents, where agents are dynamically assigned individual tasks that together achieve some dynamically changing global goal.
Abstract: The subject of multi-agent planning has been of continuing concern in Distributed Artificial Intelligence (DAI). In this paper, we suggest an approach to the interleaving of execution and planning for dynamic tasks by groups of multiple agents. Agents are dynamically assigned individual tasks that together achieve some dynamically changing global goal. Each agent solves (constructs the plan for) its individual task, then the local plans are merged to determine the next activity step of the entire group in its attempt to accomplish the global goal. Individual tasks may be changed during execution (due to changes in the global goal).

01 Jul 1993
TL;DR: The claim is that -- with DBL -- a Programming by Demonstration system can synthesize new functions faster, that it can synthesize more complex functions than without DBL, and that the resulting functions will meet the user's intentions better (because he or she took part in their derivation).
Abstract: Many users of workstation and PC tools often have to perform the same task again and again. For example, a secretary might have to send out a dozen email messages until she finds a free meeting room. Or someone preparing business charts has to draw many special tables with shadowing bars around. Unfortunately, today''s macro facilities of such tools do not support the end user enough in constructing the required automation functions. In this report we propose a mechanism, called dialog-based learning (DBL), that shall provide the user of software tools exactly with a mechanism to teach new functions or to give hints or additional information to a program on how to perform a task better. Two applications will be considered: The first one is our experimental system RAP, a room reservation apprentice that will eventually overtake a secretary''s task to search for a free meeting or lecture room. RAP analyzes the outgoing and incoming email and constructs a finite state machine that can repeat the task of asking all room administrators until a free room is found. The key of RAP''s learning is to ask the user for unknown message types (e.g., request, positive answer, etc.) and keyphrases (e.g., "need a room") and to collect them in a thesaurus. Our second application is a demonstrational graphics editor that allows the user to teach it new functions by giving a few examples. The graphics editor will sometimes ask the user for explanations by showing him a list of its geometrical hypotheses for a certain situation, e.g., "line l1 was doubled" or line "l1 touches line l2 in the middle" (the second hypothesis serves as a strong indication that "touching" is the intended property). By clicking at one of them the use tells the graphics editor his intention and helps it to construct a function and a menu item for a new function. 
Involving the user through dialogs is an easy way to support the heuristics for function induction from examples when the hypothesis space grows. Our claim is that -- with DBL -- a Programming by Demonstration system can synthesize new functions faster, that it can synthesize more complex functions than without DBL, and that the resulting functions will better meet the user's intentions (because he or she took part in their derivation). One can therefore see DBL as a kind of simple inference mechanism, but with a powerful outcome.
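RAP's learned behavior, as described above, can be sketched in a few lines. This is a hypothetical illustration, not code from the report: the function names, the sample keyphrases, and the two-category thesaurus are all assumptions chosen for clarity.

```python
# Hypothetical sketch of RAP's learned finite-state behavior: query room
# administrators in turn until one reply is classified as a positive
# answer via a user-taught keyphrase thesaurus.

# Thesaurus of message types -> keyphrases, collected by asking the user.
THESAURUS = {
    "positive": ["room is free", "is available"],
    "negative": ["already booked", "not available"],
}

def classify(reply: str) -> str:
    """Classify a reply by matching taught keyphrases."""
    text = reply.lower()
    for msg_type, phrases in THESAURUS.items():
        if any(p in text for p in phrases):
            return msg_type
    return "unknown"  # RAP would prompt the user and extend the thesaurus

def find_free_room(administrators, send_request):
    """Ask each administrator in turn until a positive answer arrives."""
    for admin in administrators:
        reply = send_request(admin)   # e.g., send email and await answer
        if classify(reply) == "positive":
            return admin
    return None
```

The dialog step is represented by the `"unknown"` branch: whenever no taught keyphrase matches, the real system would ask the user and grow the thesaurus, which is exactly the DBL interaction the report proposes.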

Patent
10 Dec 1993
TL;DR: In this article, a multitasking controller is presented comprising task storage means (2) for storing up to N tasks (P0, P1, P2, P3), each comprising a sequence of instructions, a microprocessor for processing, by time-sharing, a plurality of said N tasks, and data storage means (12) for storing variable data created and used by said microprocessor.
Abstract: The invention concerns a multitasking controller comprising task storage means (2) for storing up to N tasks (P0, P1, P2, P3) each comprising a sequence of instructions, a microprocessor for processing, by time-sharing, a plurality of said N tasks, and data storage means (12), for storing variable data created and used by said microprocessor. The microprocessor further comprises a scheduler (7) realised in hardware for controlling the use of said microprocessor by said processes, and program counter storage means for storing N program counters (Pc0, Pc1, Pc2, Pc3) each for use by said scheduler (7) to control the instruction sequence of a separate one of said N processes, so that said scheduler (7) is able to select a different one of said program counters (Pc0, Pc1, Pc2, Pc3) when the task processed by said microprocessor is changed without requiring the transfer of data from said data storage means (12).
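The key idea of the patent -- one program counter per task, so that a context switch is merely the selection of a different counter rather than a save/restore through memory -- can be simulated in software. The class below is an illustrative model, not the patented hardware; all names are assumptions.

```python
# Software model of a hardware scheduler with one program counter per
# task: switching tasks selects another counter (Pc0..PcN-1) instead of
# transferring state to or from data storage.

class Controller:
    def __init__(self, tasks):
        self.tasks = tasks               # task id -> list of instructions
        self.pc = {t: 0 for t in tasks}  # one program counter per task
        self.current = None

    def switch_to(self, task):
        # No transfer to/from data storage: just select another counter.
        self.current = task

    def step(self):
        prog = self.tasks[self.current]
        instr = prog[self.pc[self.current]]
        self.pc[self.current] += 1       # advance only this task's counter
        return instr

ctl = Controller({"P0": ["a0", "a1"], "P1": ["b0", "b1"]})
ctl.switch_to("P0"); first = ctl.step()
ctl.switch_to("P1"); second = ctl.step()
ctl.switch_to("P0"); third = ctl.step()  # P0's counter was preserved
```

Because each task owns its counter, `third` resumes P0 exactly where it left off, which is the property the patent claims for its hardware scheduler.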

Proceedings ArticleDOI
01 Nov 1993
TL;DR: A heuristic algorithm for task allocation for any distributed computing system where the subsystems are connected in the form of a local area network and communicate by means of broadcasting is presented, based on minimizing communication cost and balancing the load among its subsystems.
Abstract: Most performance lapses in distributed computing systems can be traced to the lack of a good task allocation strategy for the distributed software. Random assignment of tasks or modules to processors or subsystems can substantially degrade the performance of the entire distributed system. In this paper, a heuristic algorithm is presented for task allocation in any distributed computing system whose subsystems are connected in the form of a local area network and communicate by means of broadcasting. The algorithm is based on minimizing communication cost and balancing the load among the subsystems. An example illustrating the algorithm is also given.
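A greedy allocation in the spirit the abstract describes might look as follows. This is a hedged sketch under assumed inputs (a load per task and a pairwise communication-cost table); the paper's actual heuristic may differ.

```python
# Greedy task allocation: place each task on the subsystem that minimizes
# the communication cost of edges crossing subsystems plus the resulting
# load, keeping the system balanced.

def allocate(tasks, comm, n_procs):
    """tasks: {task: load}; comm: {(t1, t2): cost}; returns {task: proc}."""
    load = [0.0] * n_procs
    placement = {}
    # Place heavy tasks first so balancing has room to correct later.
    for task, work in sorted(tasks.items(), key=lambda kv: -kv[1]):
        def cost(p):
            # Pay communication only for edges whose other endpoint is
            # already placed on a *different* subsystem than p.
            cut = sum(c for (a, b), c in comm.items()
                      if task in (a, b)
                      and placement.get(b if a == task else a, p) != p)
            return cut + load[p] + work
        best = min(range(n_procs), key=cost)
        placement[task] = best
        load[best] += work
    return placement
```

With two heavily communicating tasks and one independent task on two subsystems, the heuristic co-locates the communicating pair and moves the third task to the idle subsystem, which is exactly the cost/balance trade-off the abstract names.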

Patent
23 Nov 1993
TL;DR: In this article, an automated memory cartridge system prioritizes requests for tape retrieval by assigning a priority to each request according to the importance of the request and organizing them so that higher priority requests will be executed ahead of other lower priority requests.
Abstract: An automated memory cartridge system prioritizes requests for tape retrieval. The requests to transfer cartridges may be assigned a priority relating to the importance of the request. The system recognizes these requests and organizes them so that higher priority requests are executed ahead of lower priority requests. This prioritization is accomplished in a manner that allows even very low priority requests to eventually be carried out.
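One standard way to guarantee that low-priority requests are eventually served is priority aging. The sketch below is an assumption about how such a queue could work, not the patent's mechanism: each request's effective priority improves the longer it waits.

```python
import heapq
import itertools

# Priority queue with aging: lower number = more urgent, and waiting
# time gradually boosts a request so it cannot starve forever.

class RequestQueue:
    def __init__(self, aging=1):
        self.heap = []
        self.clock = itertools.count()
        self.aging = aging  # priority boost per tick of waiting

    def submit(self, request, priority):
        # Arrival time breaks ties, giving FIFO order within a priority.
        t = next(self.clock)
        heapq.heappush(self.heap, (priority, t, request))

    def next_request(self):
        # Effective priority = nominal priority minus accumulated age.
        now = next(self.clock)
        best = min(self.heap,
                   key=lambda e: e[0] - self.aging * (now - e[1]))
        self.heap.remove(best)
        heapq.heapify(self.heap)
        return best[2]
```

A freshly arrived high-priority request still jumps ahead, but an old low-priority request eventually accumulates enough age to win, matching the abstract's "eventually carried out" guarantee.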

Proceedings ArticleDOI
16 Aug 1993
TL;DR: Creating highly optimized code for parallel machines is a difficult task, as the efficiency of the transformed code depends on the types of transformations applied.
Abstract: Creating highly optimized code for parallel machines is a difficult task, as the efficiency of the transformed code depends on the types of transformations applied.

Proceedings ArticleDOI
26 Jul 1993
TL;DR: The shared control system was developed for the Self-Mobile Space Manipulator to handle a range of tasks associated with locomotion, manipulation, and material transportation on Space Station Freedom.
Abstract: A shared control system is a modular real-time system which is designed to execute complex tasks through the intelligent coordination of task modules. A state machine is used to control task sequencing and, due to the automatic switching, the accuracy and reliability with which tasks are executed is greatly improved. Tasks consist of sets of independent, modular and reusable subtasks whose outputs are combined to create the robot control. This system has proved itself useful for rapid development of reliable high-level, multiple sensor-based manipulation and control tasks. Additionally, an extensible neural network-based visual servoing system, semi-compliant Cartesian trajectory-following heuristics, and a real-time graphical user interface have been developed. The shared control system was developed for the Self-Mobile Space Manipulator to handle a range of tasks associated with locomotion, manipulation, and material transportation on Space Station Freedom.
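The shared-control architecture can be illustrated with a minimal model. The subtask modules, state names, blending weights, and transition rule below are all illustrative assumptions; the point is the structure: a state machine activates a set of reusable subtask modules and combines their outputs into one robot command.

```python
# Minimal shared-control sketch: states activate weighted subtask
# modules; their outputs are summed into a single control command.

def visual_servo(sensors):
    return sensors["target"] - sensors["pose"]   # move toward target

def compliance(sensors):
    return -0.5 * sensors["force"]               # yield to contact force

# Each state names its active subtasks and their blending weights.
STATES = {
    "approach": [(visual_servo, 1.0)],
    "contact":  [(visual_servo, 0.3), (compliance, 0.7)],
}

def control(state, sensors):
    """Combine the outputs of the state's active subtask modules."""
    return sum(w * module(sensors) for module, w in STATES[state])

def next_state(state, sensors):
    # Automatic switching: contact force triggers the transition from
    # free-space approach to compliant contact.
    if state == "approach" and abs(sensors["force"]) > 1.0:
        return "contact"
    return state
```

The automatic transition in `next_state` stands in for the paper's claim that state-machine-driven switching improves the accuracy and reliability of task execution: no operator has to notice the contact event and switch modes by hand.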

Patent
30 Apr 1993
TL;DR: In this article, the authors propose a data processing system without deteriorating high-speed responsiveness even when the function of the system is more improved by providing independently first and second execution control data management means on the first-and second-execution control means.
Abstract: PURPOSE: To provide a data processing system that retains high-speed responsiveness even as the functionality of the system grows, by providing independent first and second execution-control data management means in the first and second execution control means. CONSTITUTION: The peripheral part of an OS (operating system) 15 operates as an idle task 4 under the management of OS 1. Execution is shifted to OS 15 when tasks 7 to 9 under the management of OS 1 are finished, and tasks 10 to 13 managed by OS 15 are executed. When tasks 10 to 13 issue a system call, OS 15 executes the processing. When a system call is generated, the context of the task under execution is saved in a database 6 for task management. Task scheduling is performed by OS 15, which operates on the data of database 6. The context of a newly scheduled task is restored from database 6 into the register group of the hardware for task execution.
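The save/schedule/restore cycle around a system call can be modeled in a few lines. The register names, the round-robin choice, and the flat dictionary standing in for "database 6" are all assumptions made for illustration.

```python
# Model of the context-switch flow: on a system call, save the caller's
# context into a task-management database, schedule, then restore the
# chosen task's context into the hardware register group.

registers = {"pc": 0, "acc": 0}   # hardware register group for execution
task_db = {}                      # "database 6": saved task contexts

def system_call(current_task, ready_tasks):
    # Save the caller's context into the task-management database.
    task_db[current_task] = dict(registers)
    # Schedule the next task (simple round-robin stand-in).
    nxt = ready_tasks[0]
    # Restore the scheduled task's context from the database.
    registers.update(task_db.get(nxt, {"pc": 0, "acc": 0}))
    return nxt
```

The point of the split in the patent is that each OS manages its own execution-control data, so this save/restore traffic never crosses between the two managers and responsiveness is preserved.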

Patent
14 Dec 1993
TL;DR: In this paper, the authors present a method and apparatus for tagging a service request and the responses to the service request in a pipeline program running on a task in a multitasking computer system.
Abstract: Method and apparatus for tagging a service request and the responses to a service request in a pipeline program running on a task in a multitasking computer system. Each service request made to a service supplier from a pipeline stage is tagged with a unique identifier string that is automatically returned to the pipeline with the response to the service request. A time manager stage monitors the unique identifiers appended to responses directed into the pipeline and uses the identifiers to correlate each response to a specific, previously sent request. The time manager then directs the responses to the appropriate destination stage. The time manager also discards responses that are no longer needed either because the appropriate destination stage or pipeline is no longer active, or because a user specified time-out interval has elapsed.
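The tagging scheme lends itself to a compact sketch. The class and field names below are illustrative assumptions, not the patent's terminology: each outgoing request gets a unique identifier, and the time manager uses the identifier echoed back on the response to route it to the right stage or discard it.

```python
import itertools
import time

# Sketch of a time manager: tag each service request with a unique id,
# correlate returning responses by that id, and drop responses whose
# destination is gone or whose time-out interval has elapsed.

class TimeManager:
    def __init__(self, timeout=5.0):
        self.ids = itertools.count()
        self.pending = {}   # id -> (destination stage, deadline)
        self.timeout = timeout

    def tag(self, destination):
        """Tag an outgoing request; the id is echoed back with the response."""
        rid = next(self.ids)
        self.pending[rid] = (destination, time.monotonic() + self.timeout)
        return rid

    def route(self, rid, response):
        """Correlate a response to its request; discard stale or unknown ones."""
        entry = self.pending.pop(rid, None)
        if entry is None:
            return None      # destination no longer active, or duplicate
        destination, deadline = entry
        if time.monotonic() > deadline:
            return None      # user-specified time-out interval elapsed
        destination.append(response)
        return destination
```

Because the identifier travels with the response, stages never need to block waiting for their own replies, and a response arriving after its stage has terminated is silently dropped rather than misdelivered.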

01 Jan 1993
TL;DR: The notion of time interval, defined by a start and an end event, and denoting the series of its occurrences, is introduced, Associating a time interval to a data-flow process specifies a task i.e., a non-instantaneous activity and its execution interval.
Abstract: The SIGNAL language is a real-time, synchronized data-flow language. Its model of time is based on instants, and its actions are considered instantaneous. Various application domains, such as signal processing and robotics, require the possibility of specifying behaviors composed of successions of different modes of interaction with the environment. To this purpose, we introduce the notion of time interval, defined by a start and an end event, and denoting the series of its occurrences. Associating a time interval to a data-flow process specifies a task, i.e., a non-instantaneous activity and its execution interval. Different ways of sequencing such tasks are described. We propose these basic elements at the programming language level, in the perspective of extensions to SIGNAL. Application domains feature the discrete sequencing of continuous, data-flow tasks, as is the case, for example, of robotic tasks.
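The notion of a time interval gating a data-flow task can be illustrated outside SIGNAL. The Python sketch below is only an analogy (SIGNAL's semantics are synchronous and declarative, which a sequential loop cannot capture): a task processes data only between its start and end events.

```python
# Analogy for a time interval: a task is active from a start event to an
# end event, and applies only to data occurring inside that interval.

def run(events, start, end, task):
    """Apply `task` to data items occurring between start and end events."""
    active, out = False, []
    for e in events:
        if e == start:
            active = True        # interval opens
        elif e == end:
            active = False       # interval closes
        elif active:
            out.append(task(e))  # the task sees only in-interval data
    return out
```

Re-opening the interval later (a second start event) yields another occurrence of the same task, matching the abstract's "series of its occurrences."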