
Showing papers on "Task (computing) published in 1995"


Patent
08 Sep 1995
TL;DR: A player position determining and course management system for a golf course having a plurality of roving units for use by players in playing the course is disclosed in this paper, where each roving unit includes a central processing unit (CPU) including a data processor for executing various tasks ranging from fastest execution of a task to slowest execution on a schedule of priorities of task completion.
Abstract: A player position determining and course management system for a golf course having a plurality of roving units for use by players in playing the course is disclosed. Each roving unit includes a central processing unit (CPU) including a data processor for executing various tasks ranging from fastest execution of a task to slowest execution of a task on a schedule of priorities of task completion, a real-time means for controlling the processor to give the tasks priority ranging from fastest execution of a task with highest priority to slowest execution of a task with lowest priority, and a means for precisely timing functions of the system including modulating means utilizing a common digital modulation technique for digitally modulating data transmitted to and from all of the roving units. Each of the roving units includes a monitor for displaying the golf course including each of the holes with its tee box, fairway, green, cup and hazards, as well as the position of the roving unit on the course in real time. Additionally, the system includes a course management base station for transmitting information to and receiving information from the roving units and a monitor for displaying the location of each roving unit on the golf course in real time.

236 citations


Patent
Dale Boss1, David G. Hicks1
13 Jun 1995
TL;DR: In this article, the authors present a method and apparatus for task-based application sharing in a graphical user interface such as Windows®, where a host user designates an application to be shared, referred to as a shared application, and another user at a remote location shares control of the shared application.
Abstract: Method and apparatus for task-based application sharing in a graphical user interface such as Windows® are provided. A host user designates an application to be shared, referred to as a shared application (14 in Fig. 2). Another user at a remote location, referred to as the client user, shares control of the shared application (16 in Fig. 2). The shared application runs only on the host system. The client system has a rectangular area on the display screen within which all shared applications are displayed (Fig. 7). The client system renders an image of all windows of a shared application including pop-up dialogs and menus without also displaying unshared applications. Further, both the client and the host users continue to perform normal operations outside of the shared rectangular area, and the host user defines the tasks which are to be shared.

165 citations


Journal ArticleDOI
TL;DR: The authors show that adaptive parallelism has the potential to integrate heterogeneous platforms seamlessly into a unified computing resource and to permit more efficient sharing of traditional parallel processors than is possible with current systems.
Abstract: Desktop computers are idle much of the time. Ongoing trends make aggregate LAN "waste"-idle compute cycles-an increasingly attractive target for recycling. Piranha, a software implementation of adaptive parallelism, allows these waste cycles to be recaptured by putting them to work running parallel applications. Most parallel processing is static: programs execute on a fixed set of processors throughout a computation. Adaptive parallelism allows for dynamic processor sets which means that the number of processors working on a computation may vary, depending on availability. With adaptive parallelism, instead of parceling out jobs to idle workstations, a single job is distributed over many workstations. Adaptive parallelism is potentially valuable on dedicated multiprocessors as well, particularly on massively parallel processors. One key Piranha advantage is that task descriptors, not processes, are the basic movable, remappable computation unit. The task descriptor approach supports strong heterogeneity. A process image representing a task in mid computation can't be moved to a machine of a different type, but a task descriptor can be. Thus, a task begun on a Sun computer can be completed by an IBM machine. The authors show that adaptive parallelism has the potential to integrate heterogeneous platforms seamlessly into a unified computing resource and to permit more efficient sharing of traditional parallel processors than is possible with current systems.
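The task-descriptor idea above lends itself to a short sketch: the movable unit is plain data (a function name plus its inputs) that any worker, on any machine type, can resume. This is a hypothetical illustration of the concept, not Piranha's actual implementation; the `REGISTRY` dispatch table and the names are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch: a task descriptor carries only portable data
# (a function name and its inputs), never a process image, so a task
# begun on one machine type can be completed on another.
@dataclass
class TaskDescriptor:
    func_name: str   # name of a function every worker knows locally
    args: tuple      # portable input data

# Each worker resolves the function from its own local registry,
# so only data ever moves between machines.
REGISTRY = {"square": lambda x: x * x}

def run_on_any_worker(td: TaskDescriptor):
    return REGISTRY[td.func_name](*td.args)

result = run_on_any_worker(TaskDescriptor("square", (7,)))  # 49
```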

163 citations


Patent
07 Jun 1995
TL;DR: In this paper, the authors present a system that logs events which occur in the target software, and stores these in a buffer for periodic uploading to a host computer, which provides the ability to perform a logic analyzer function on real-time software.
Abstract: The present invention logs events which occur in the target software, and stores these in a buffer for periodic uploading to a host computer. Such events include the context switching of particular software tasks, and task status at such context switch times, along with events triggering such a context switch, or other events. The host computer reconstructs the real-time status of the target software from the limited event data uploaded to it. The status information is then displayed in a user-friendly manner. This provides the ability to perform a logic analyzer function on real-time software. A display having multiple rows, with one for each task or interrupt level, is provided. Along a time line, an indicator shows the status of each program, with icons indicating events and any change in status.

142 citations


Patent
20 Jul 1995
TL;DR: In this article, a process for creating, maintaining, and executing network applications is described, where a user specifies a network application as an interconnection (102) of tasks, each task being addressed to run on one or more computers (103-105).
Abstract: A process for creating, maintaining, and executing network applications. A user specifies a network application as an interconnection (102) of tasks (100), each task being addressed to run on one or more computers (103-105). Process steps install and execute the application with accommodation for dynamically changing addresses. During execution, process steps compile or interpret source code on remote computers or other devices (106) as needed. Process steps permit application changes during execution subject to limitations and fail-safes that prevent non-programmers from creating invalid changes.

110 citations


Proceedings ArticleDOI
02 Oct 1995
TL;DR: A new, efficient analysis algorithm to derive tight bounds on the execution time required for an application task executing on a distributed system, valid for a general problem model.
Abstract: Many embedded computing systems are distributed systems: communicating processes executing on several CPUs/ASICs connected by communication links. This paper describes a new, efficient analysis algorithm to derive tight bounds on the execution time required for an application task executing on a distributed system. Tight bounds are essential to cosynthesis algorithms. Our bounding algorithms are valid for a general problem model: the system can contain several tasks with different periods; each task is partitioned into a set of processes related by data dependencies; the periods and the computation times of processes are bounded but not necessarily constant. Experimental results show that our algorithm can find tight bounds in small amounts of CPU time.

102 citations


Patent
20 Nov 1995
TL;DR: A method and system of sequencing menu items of a menu-driven user/system interface includes a resequencing of the menu items in response to the frequency of selection and/or a shift in the primary responsibility of a user as mentioned in this paper.
Abstract: A method and system of sequencing menu items of a menu-driven user/system interface includes a resequencing of the menu items in response to the frequency of selection and/or a shift in the primary responsibility of a user. In one embodiment, an initial sequence of the serially presented menu items is stored. The selection of each menu item is counted. The menu items are then rearranged to provide a frequency-based order that presents the most often selected items before the less likely to be selected menu items. The step of resequencing the menu items may be limited to input of a resequencing command by a user or may be limited to predetermined time periods. As another optional feature, the "learning" that occurs by counting the selections can be downloaded from one system and uploaded to another system. In a second embodiment, the rearrangement is task-based. Depending upon the task assigned to the user or depending upon the mode of operation of the system, the menu items are arranged, again with the goal of presenting the most often selected menu items at the beginning of the progression.
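A minimal sketch of the frequency-based resequencing described above, assuming a stable sort so that never-selected items keep their initial stored order; the class and method names are invented for illustration.

```python
from collections import Counter

class AdaptiveMenu:
    """Reorder menu items so the most often selected come first."""
    def __init__(self, items):
        self.items = list(items)   # the stored initial sequence
        self.counts = Counter()

    def select(self, item):
        self.counts[item] += 1     # count every selection

    def resequence(self):
        # Stable sort: items with equal counts keep their relative order,
        # so unselected items stay in the initial sequence.
        self.items.sort(key=lambda it: -self.counts[it])
        return self.items

menu = AdaptiveMenu(["Open", "Save", "Print"])
for _ in range(3):
    menu.select("Print")
menu.select("Save")
menu.resequence()  # → ["Print", "Save", "Open"]
```

Restricting when `resequence` may run (on command, or at fixed periods) matches the optional limits in the abstract.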

100 citations


Patent
07 Nov 1995
TL;DR: In this article, a computer system for distributed processing, particularly of digital audio data, is described, which includes means for assigning to a particular processor a specific processing task or tasks, as well as a means for assigning additional specific processing tasks to that same processor to maximize its use.
Abstract: A computer system for performing distributed processing, particularly of digital audio data, is disclosed. The system has a number of digital signal processors linked to a host computer through a time division multiplex bus. The system includes means for assigning to a particular processor a specific processing task or tasks, as well as a means for assigning additional specific processing tasks to that same processor to maximize its use. When the processor performing a specific processing task has reached its capacity, the system assigns a new processor to perform that task. To enhance the efficiency of the processor to perform the specific processing task, the processor cyclically runs a specific set of instructions for performing that specific processing task, and waits for the system to send it digital data to be processed.

98 citations


Patent
20 Apr 1995
TL;DR: In this paper, a system and method for parallel execution of logic programs on a computer network comprising two or more local-memory processors includes a logic program interpreter resident on all processors in the system.
Abstract: A system and method for parallel execution of logic programs on a computer network comprising two or more local-memory processors includes a logic program interpreter resident on all processors in the system. The interpreter commences execution of a logic program on one processor and, based on the results of its initial execution, generates lists of parallel-executable tasks and distributes them to other processors in the network to which it is coupled. Each processor which receives parallel tasks likewise commences execution, identification of parallel sub-tasks, and further distribution. When there are no parallel tasks at a processor, or no other processors are available for further distribution, the task is executed sequentially and all execution results are returned to the processor which distributed the tasks.

97 citations


Journal ArticleDOI
TL;DR: If all the software bottlenecks can be removed, the performance limit will be due to a conventional hardware bottleneck, and the method for estimating the performance benefit to be obtained is given.
Abstract: Software bottlenecks are performance constraints caused by slow execution of a software task. In typical client-server systems a client task must wait in a blocked state for the server task to respond to its requests, so a saturated server will slow down all its clients. A rendezvous network generalizes this relationship to multiple layers of servers with send-and-wait interactions (rendezvous), to a two-phase model of task behavior, and to a unified model for hardware and software contention. Software bottlenecks have different symptoms, different behavior when the system is altered, and a different cure from the conventional bottlenecks seen in queueing network models of computer systems, caused by hardware limits. The differences are due to the "push-back" effect of the rendezvous, which spreads the saturation of a server to its clients. The paper describes software bottlenecks by examples, gives a definition, shows how they can be located and alleviated, and gives a method for estimating the performance benefit to be obtained. Ultimately, if all the software bottlenecks can be removed, the performance limit will be due to a conventional hardware bottleneck.

90 citations


Patent
27 Sep 1995
TL;DR: In this article, a queue-controller responds to a processor-generated order to queue a first task for execution by listing the first task on a first queue whose assigned priority equals the priority of that task; if a second task is listed on a queue having a higher assigned priority, execution of the second task is attempted before the first; otherwise execution of the first listed task in the first queue is attempted.
Abstract: A procedure controls execution of priority ordered tasks in a multi-node data processing system. The data processing system includes a node with a software-controlled processor and a hardware-configured queue-controller. The queue-controller includes a plurality of priority-ordered queues, each queue listing tasks having an assigned priority equal to a priority order assigned to the queue. The queue-controller responds to a processor generated order to queue a first task for execution, by performing a method which includes the steps of: listing said first task on a first queue having an assigned priority that is equal to a priority of said first task; if a second task is listed on a queue having a higher assigned priority, attempting execution of the second task before execution of the first task; if no tasks are listed on a queue having a higher assigned priority than said first queue, attempting execution of a first listed task in the first queue; and upon completion of execution of the task or a stalling of execution of the task, attempting execution of a further task on the first queue only if another order has not been issued to place a task on a queue having a higher assigned priority. The method further handles chained subtasks by attempting execution of each subtask of a task in response to the processor generated order; and if execution of any subtask does not complete, attempting execution of another task in lieu of a subtask chained to the subtask that did not complete.
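The queueing discipline in the claim — always attempt the task on the highest-priority non-empty queue, FIFO within a queue — can be sketched in software. The names are invented, and a single heap stands in for the hardware's set of priority-ordered queues.

```python
import heapq

class QueueController:
    """Software sketch of priority-ordered task queues: a lower number
    means higher priority; ties within a level are served FIFO."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # insertion counter, the FIFO tiebreaker

    def queue_task(self, priority, task):
        heapq.heappush(self._heap, (priority, self._seq, task))
        self._seq += 1

    def attempt_next(self):
        # A task on a higher-priority queue is always attempted first.
        return heapq.heappop(self._heap)[2] if self._heap else None

qc = QueueController()
qc.queue_task(2, "low")
qc.queue_task(1, "high")
first = qc.attempt_next()   # "high" is attempted before "low"
```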

Proceedings ArticleDOI
01 Aug 1995
TL;DR: This paper addresses the problem of optimizing throughput in task pipelines and presents two new solution algorithms; the first, based on dynamic programming, finds the optimal mapping of k tasks onto P processors in O(P⁴k²) time.
Abstract: Many applications in a variety of domains including digital signal processing, image processing and computer vision are composed of a sequence of tasks that act on a stream of input data sets in a pipelined manner. Recent research has established that these applications are best mapped to a massively parallel machine by dividing the tasks into modules and assigning a subset of the available processors to each module. This paper addresses the problem of optimally mapping such applications onto a massively parallel machine. We formulate the problem of optimizing throughput in task pipelines and present two new solution algorithms. The formulation uses a general and realistic model for inter-task communication, takes memory constraints into account, and addresses the entire problem of mapping which includes clustering tasks into modules, assignment of processors to modules, and possible replication of modules. The first algorithm is based on dynamic programming and finds the optimal mapping of k tasks onto P processors in O(P⁴k²) time. We also present a heuristic algorithm that is linear in the number of processors and establish with theoretical and practical results that the solutions obtained are optimal in practical situations. The entire framework is implemented as an automatic mapping tool for the Fx parallelizing compiler for High Performance Fortran. We present experimental results that demonstrate the importance of choosing a good mapping and show that the methods presented yield efficient mappings and predict optimal performance accurately.
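A toy version of the dynamic-programming idea, under strong simplifying assumptions the paper does not make (ideal linear speedup, no communication cost or memory constraints, contiguous modules, no replication): choose module boundaries and per-module processor counts to minimize the bottleneck stage time, which fixes the pipeline's throughput. The function name is invented for the sketch.

```python
from functools import lru_cache

def best_throughput(times, P):
    # Toy model: split a linear pipeline of tasks into contiguous
    # modules and give each module q of the P processors; with ideal
    # speedup a module's stage time is (its total work) / q.  The
    # slowest stage limits throughput, so minimize the bottleneck.
    k = len(times)
    prefix = [0]
    for t in times:
        prefix.append(prefix[-1] + t)

    @lru_cache(maxsize=None)
    def solve(i, p):
        # Best bottleneck for tasks i..k-1 using at most p processors.
        if i == k:
            return 0.0
        if p == 0:
            return float("inf")
        best = float("inf")
        for j in range(i + 1, k + 1):      # module = tasks i..j-1
            for q in range(1, p + 1):      # processors for this module
                stage = (prefix[j] - prefix[i]) / q
                best = min(best, max(stage, solve(j, p - q)))
        return best

    return solve(0, P)

best_throughput([4, 4], 2)  # → 4.0: one task per single-processor module
```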

Patent
Cheryl Senter1, Johannes Wang1
05 Jun 1995
TL;DR: In this paper, a load store unit is provided whose main purpose is to make load requests out of order whenever possible to get the load data back for use by an instruction execution unit as quickly as possible.
Abstract: The present invention provides a system and method for managing load and store operations necessary for reading from and writing to memory or I/O in a superscalar RISC architecture environment. To perform this task, a load store unit is provided whose main purpose is to make load requests out of order whenever possible to get the load data back for use by an instruction execution unit as quickly as possible. A load operation can only be performed out of order if there are no address collisions and no write pendings. An address collision occurs when a read is requested at a memory location where an older instruction will be writing. Write pending refers to the case where an older instruction requests a store operation, but the store address has not yet been calculated. The data cache unit returns 8 bytes of unaligned data. The load/store unit aligns this data properly before it is returned to the instruction execution unit. Thus, the three main tasks of the load store unit are: (1) handling out of order cache requests; (2) detecting address collision; and (3) alignment of data.
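The two bypass conditions above — no address collision with an older store, and no write pending — can be sketched directly. This is a simplified model that ignores operand widths and the 8-byte alignment handling; the function name is invented.

```python
def can_load_out_of_order(load_addr, older_stores):
    """A load may be performed out of order only if no older store
    writes to the same address (address collision) and every older
    store's address has already been calculated (no write pending).
    older_stores: store addresses of older instructions, with None
    marking a store whose address is not yet known."""
    for store_addr in older_stores:
        if store_addr is None:        # write pending: address unknown
            return False
        if store_addr == load_addr:   # address collision
            return False
    return True

can_load_out_of_order(0x100, [0x200, 0x300])  # True: safe to reorder
can_load_out_of_order(0x100, [0x100])         # False: collision
can_load_out_of_order(0x100, [None])          # False: write pending
```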

Patent
Yoshiaki Sudo1
14 Jul 1995
TL;DR: Load distribution is performed by expanding a distributed task operating in a heavily loaded node to a lightly loaded node, and compressing the distributed task from the heavily loaded node and transferring threads within the distributed task.
Abstract: In a load distribution method, the load in the entire distributed system is uniformly distributed. In a system in which a plurality of information processing apparatuses (nodes) are connected with a network, the degree of distribution of each distributed task is controlled by expanding or compressing the task. Load distribution is performed by expanding a distributed task operating in a heavily loaded node to a lightly loaded node, and compressing the distributed task from the heavily loaded node and transferring threads within the distributed task. Load distribution servers execute the load distribution method.

Patent
Carl Murray1
11 Jul 1995
TL;DR: In this paper, a method for generating a schedule for the performance of a process, wherein the process comprises a plurality of steps, each step comprises one or more operations, is presented.
Abstract: The invention provides in one aspect a method for generating a schedule for the performance of a process, wherein the process comprises a plurality of steps. Each step of the process comprises one or more operations. The method comprises first obtaining input containing information defining the process and then generating from the input a series of node definitions, each node definition corresponding to at least one step of the process. Each node definition includes duration and device usage information relating to at least one step and further includes information identifying at least one previous step and at least one subsequent step. Then, one or more tasks are generated using the series of node definitions, each task comprising at least one step of the process, such that every step in the process is associated with at least one task. Once the tasks are defined, a schedule of operations is generated using the generated tasks and the node definitions, said schedule of operations corresponding to the defined process. In a preferred embodiment, the method of the present invention employs a graphical user interface to obtain the input. Also, preferably, the graphical user interface consists of icons representative of operations that may be performed on the work sample.

Patent
Kazuo Nakamura1
18 Sep 1995
TL;DR: In this article, a micro-ROM assigns the task number specified by the first memory address part 22 to an index value, reads a register list of the task from a context save information specified by second memory access part 23, and selectively saves or restores the content of the register specified by register list.
Abstract: An instruction code includes a first memory address part 22 and a second memory address part 23, and when a microinstruction decoder 14 decodes an operation code of save or restore instruction of the instruction code prefetched by an instruction queue 13, a micro-ROM 15 assigns the task number specified by the first memory address part 22 to an index value, reads a register list of the task from a context save information specified by the second memory address part 23, and selectively saves or restores the content of the register specified by the register list. With this configuration, the central processing unit can save or restore the context with a small occupied memory quantity and at high speed.
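Functionally, the selective save/restore amounts to copying only the registers named in the task's register list, which is why the saved context occupies little memory. A software sketch of that behavior (the hardware does this through the micro-ROM and the two memory address parts; the function names are invented):

```python
def save_context(registers, reg_list):
    # Save only the registers in the task's register list, so the
    # saved context occupies a small amount of memory.
    return {r: registers[r] for r in reg_list}

def restore_context(registers, saved):
    # Restore exactly the registers that were selectively saved.
    registers.update(saved)

regs = {"r0": 1, "r1": 2, "r2": 3}
saved = save_context(regs, ["r0", "r2"])  # r1 is never saved
regs["r0"], regs["r2"] = 9, 9             # another task clobbers them
restore_context(regs, saved)              # r0, r2 restored to 1, 3
```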

Proceedings ArticleDOI
22 Jan 1995
TL;DR: This work introduces a new scheduling problem that is motivated by applications in the area of access and flow control in high-speed and wireless networks and presents fair and efficient scheduling algorithms for the case where the conflict graph is an interval graph.
Abstract: We introduce a new scheduling problem that is motivated by applications in the area of access and flow control in high-speed and wireless networks. An instance of the problem consists of a set of persistent tasks that have to be scheduled repeatedly. Each task has a demand to be scheduled "as often as possible." There is no explicit limit on the number of tasks that can be scheduled concurrently. However, such limits are imposed implicitly because some tasks may be in conflict and cannot be scheduled simultaneously. These conflicts are presented in the form of a conflict graph. We define parameters which quantify the fairness and regularity of a given schedule. We then proceed to show lower bounds on these parameters and present fair and efficient scheduling algorithms for the case where the conflict graph is an interval graph. Some of the results presented here extend to the case of perfect graphs and circular-arc graphs as well.
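For intuition on why interval conflict graphs help: greedy coloring in left-endpoint order uses the minimum number of colors on an interval graph, each color class is conflict-free, and cycling through the classes schedules every task once per cycle. The sketch below shows only that coloring step, not the paper's algorithms, whose fairness and regularity guarantees are stronger.

```python
def color_intervals(intervals):
    # Greedy coloring in left-endpoint order is optimal for interval
    # graphs: tasks sharing a color never conflict, so the color
    # classes can be scheduled round-robin, one class per time slot.
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    colors = {}
    for i in order:
        li, ri = intervals[i]
        # Colors already taken by intervals that overlap interval i.
        used = {colors[j] for j in colors
                if intervals[j][0] < ri and li < intervals[j][1]}
        colors[i] = next(c for c in range(len(intervals)) if c not in used)
    return colors

# Tasks 0 and 1 conflict (overlapping intervals); task 2 conflicts
# with neither, so two colors (schedule slots) suffice:
color_intervals([(0, 2), (1, 3), (4, 5)])  # → {0: 0, 1: 1, 2: 0}
```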

Patent
12 Sep 1995
TL;DR: In this article, a client environment process that requests an operation on an object is notified of a task server to which selected offload operations should be sent, and the client preferably stores the task server identifier and thereafter sends such operation request directly to the identified task server.
Abstract: Selected server operations that affect objects in a distributed computing system can be off-loaded from servers at which the objects are stored to other servers without the requirement of vertical partitioning of the affected objects and without off-loading entire affected objects. A client environment process that requests an operation on an object is notified of a task server to which selected off-load operations should be sent. The client preferably stores the task server identifier and thereafter sends such operation request directly to the identified task server. The object metadata information can be stored in the client environment, if desired. The object metadata at the owning repository server is maintained, if affected by the requested operation. A single task server can perform off-loaded functions from several other repository servers at the same node and at other nodes, and in that way reduce the workload of many servers. The functions that can be off-loaded include named pipe functions and byte range file lock operations.

Patent
03 May 1995
TL;DR: In this paper, the cell administrator's preexisting restriction on root user access to the machine at least with respect to the identified local configuration/unconfiguration task is enforced.
Abstract: Methods of configuring and unconfiguring resources in and from a distributed computing environment are effected by identifying those configuration/unconfiguration tasks which must be carried out locally (that is, on the resource being configured/unconfigured) and then enforcing the cell administrator's preexisting restriction on root user access to the machine at least with respect to the identified local configuration/unconfiguration task.

01 Dec 1995
TL;DR: An experiment was conducted investigating the effects of different parameters of VE in the performance of simple, representative tasks, and results raise questions on the claimed general gain in task performance through the increased reality of stereoscopic presentations and head-coupling.
Abstract: The U.S. Army Research Institute for the Behavioral and Social Sciences has an ongoing program of investigation into the requirements for using Virtual Environments (VE) to train dismounted soldiers. As a part of this program, an experiment was conducted investigating the effects of different parameters of VE in the performance of simple, representative tasks. This report provides background information about VE display problems, head-coupling in VE, presence, field dependence, and simulator sickness. The tasks used in the experiment are generic to performance in VEs and would form the basis both of training programs and general soldier tasks. Visual presentation of the tasks was either through a Stereoscopic Head Mounted Display (HMD) or a Monoscopic HMD, and subjects could either move their Field of View (FOV) by moving their head (coupled) or could not move the FOV (uncoupled). The five tasks used were (1) movement through a sequence of rooms and doorways, (2) acquisition of a fixed target, (3) tracking a moving object, (4) manipulation of an object, and (5) distance estimation. In general, performance of all tasks improved over repeated trials. In the distance estimation task the estimations were relatively worse at shorter distances. However, the error was significantly lessened with stereoscopic presentations, and was also significantly improved when coupling was used, although these factors did not interact with one another. Performance in the other tasks was not significantly affected by presentation mode or head-coupling. The distance task errors and the lack of significant differences in performance of the other tasks raise questions on the claimed general gain in task performance through the increased reality of stereoscopic presentations and head-coupling.

Patent
09 Nov 1995
TL;DR: In this article, a stack-based processing of multiple real-time tasks operates on a net list of tasks which operate essentially simultaneously with system resources shared between tasks in a dynamic configuration.
Abstract: A system and method for stack-based processing of multiple real-time tasks operates on a net list of tasks which operate essentially simultaneously with system resources shared between tasks in a dynamic configuration. This system and method operate to control dispatching of messages which activate signal processing tasks, sequencing of processes activated by the messages and management of signal flow. Tasks are dynamically activated and deactivated in accordance with the specification by the net list by manipulating the task signals on the stack, thereby substantially reducing high-speed memory requirements of the system.

Book
01 Aug 1995
TL;DR: This book explains new Windows features and provides tips and techniques for simplified access to frequently used resources and the CD-ROM features hundreds of time-saving Windows 95 utilities and accessories.
Abstract: Geared to new and experienced Windows users, this book explains new Windows features and provides tips and techniques for simplified access to frequently used resources. Includes tutorial vignettes that are organized by type of task. The CD-ROM features hundreds of time-saving Windows 95 utilities and accessories.

Patent
23 May 1995
TL;DR: In this paper, a data processing system for executing multimedia applications which interface with multimedia devices that consume or produce at least one of real-time and asynchronous streamed data includes a CPU for execution of one or more multimedia applications and a DSP for processing data including streamed data.
Abstract: A data processing system for executing multimedia applications which interface with multimedia devices that consume or produce at least one of real-time and asynchronous streamed data includes a CPU for execution of one or more multimedia applications and a DSP for processing data including streamed data. A plurality of modular multimedia software tasks may be called by the multimedia application for execution in the DSP. A plurality of data communication modules are provided for linking selected ones of the software tasks with selected others of the software tasks, and linking selected multimedia devices with selected ones of the software tasks. Each of the communications modules allows continuous, real-time and unidirectional communication of streamed data. The modularity of the processing system defines an open architecture for real-time processing which allows additional modular multimedia tasks to be added to the software tasks and selectively linked to at least one of (a) selected ones of the software tasks, and (b) selected ones of the multimedia devices. A DSP manager is resident in the CPU which dynamically monitors resource allocation to allow at least one software task to be loaded and executed while at least one other software task is being executed by the DSP without interference with execution.

Book ChapterDOI
23 Oct 1995
TL;DR: The case-based reasoning system adaptive to cognitive tasks presented here is capable of adapting to analysis tasks as well as synthesis tasks; its adaptability comes from its memory composition, both cases and concepts, and from its hierarchical memory organization.
Abstract: Case-based reasoning systems are generally devoted to the realization of a single cognitive task. The need for such systems to perform various cognitive tasks raises the question of how to organize their memory to permit them to be task-adaptive. The case-based reasoning system adaptive to cognitive tasks presented here is capable of adapting to analysis tasks as well as to synthesis tasks. Its adaptability comes from its memory composition, both cases and concepts, and from its hierarchical memory organization, based on multiple points of view, some of them associated with the various cognitive tasks it performs. For analysis tasks, the most specific cases are preferentially used for the reasoning process. For synthesis tasks, the most specific concepts, learnt by conceptual clustering, are used. An example of this system's abilities, in the domain of eating disorders in psychiatry, is briefly presented.

Book ChapterDOI
01 Jan 1995
TL;DR: These experiments measure the impact of the process architecture on connectionless and connection-oriented protocol stacks in a shared-memory multi-processor operating system and indicate that the choice of process architecture significantly affects communication subsystem performance.
Abstract: A communication subsystem consists of protocol functions and operating system mechanisms that support the implementation and execution of protocol stacks. To effectively parallelize a communication subsystem, careful consideration must be given to the process architecture used to structure multiple processing elements. A process architecture binds one or more processing elements with the protocol tasks and messages associated with protocol stacks in a communication subsystem. This paper outlines the two fundamental types of process architectures (task-based and message-based) and describes performance experiments conducted on three representative examples of these two types of process architectures — Layer Parallelism, which is a task-based process architecture, and Message-Parallelism and Connectional Parallelism, which are message-based process architectures. These experiments measure the impact of the process architecture on connectionless and connection-oriented protocol stacks (based upon UDP and TCP) in a shared-memory multi-processor operating system. The results from these experiments indicate that the choice of process architecture significantly affects communication subsystem performance.

Book ChapterDOI
01 Jan 1995
TL;DR: This paper examines how windows can be identified in a computer-aided fashion in TRIDENT using an Activity Chaining Graph (ACG), resulting from task analysis, and a complete methodological approach using these elements is provided.
Abstract: Multi-windowing capabilities of modern graphical workstations and personal computers provide features that are fundamental to achieving a usable user interface (UI). This usability depends heavily on judicious application of these capabilities; the variables involved include reliance on task analysis, adherence to guidelines, and respect for ergonomic criteria. The problem of correct window identification is therefore crucial in this context. After reviewing the state of the art, this paper examines how windows can be identified in a computer-aided fashion in TRIDENT. The use of an Activity Chaining Graph (ACG), resulting from task analysis, is one of the basic assumptions of this project. Theoretical elements for formalizing UI presentation are defined with respect to this ACG. A complete methodological approach using these elements is then provided, consisting of:
- an identification of presentation units for each interactive task, leading to five types of window identification;
- an identification of windows for each presentation unit: according to interaction styles and priorities, a particular type of window identification is retained and applied algorithmically to obtain windows of first rank; these windows become higher-rank windows when they are aggregated by techniques applied if specific criteria are satisfied.

Patent
Kirk I. Hays1, Wayne D. Smith1
13 Dec 1995
TL;DR: An improved method and apparatus for utilizing Translation Lookaside Buffers (TLB) for maintaining page tables in a paging unit on a computer system is described in this article.
Abstract: An improved method and apparatus for utilizing Translation Lookaside Buffers (TLB) for maintaining page tables in a paging unit on a computer system. TLB contents for executing tasks are retained when the task is swapped out; the contents are then reloaded into the TLB when the task is again scheduled for execution. Spare memory cycles are utilized to transfer outgoing TLB data into memory and to load incoming TLB data for the next scheduled task from memory.
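The retain-and-reload idea can be sketched in a few lines, assuming a toy TLB modeled as a small map from virtual pages to physical frames; the class, the eviction rule, and the `saved` store are all illustrative assumptions, not the patent's mechanism:

```python
# Hypothetical sketch: snapshot a task's TLB entries at swap-out and
# restore them at swap-in, instead of flushing and refilling via page walks.

class TLB:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = {}  # virtual page -> physical frame

    def fill(self, vpage, pframe):
        if len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))  # crude FIFO eviction
        self.entries[vpage] = pframe

saved = {}  # task id -> snapshot kept in memory (the "spare cycles" copy)

def swap_out(tlb, task_id):
    saved[task_id] = dict(tlb.entries)   # store outgoing TLB data in memory
    tlb.entries.clear()

def swap_in(tlb, task_id):
    # Reload the task's previous translations instead of taking TLB misses.
    tlb.entries = dict(saved.get(task_id, {}))

tlb = TLB()
tlb.fill(0x1000, 0xA000)
tlb.fill(0x2000, 0xB000)
swap_out(tlb, task_id=1)
swap_in(tlb, task_id=1)
print(tlb.entries)  # the task resumes with its translations intact
```

The payoff is that a rescheduled task avoids the burst of compulsory TLB misses it would otherwise suffer right after a context switch.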

Patent
06 Feb 1995
TL;DR: In this article, a multi-media user task (host) computer is interfaced to a high speed DSP which provides support functions to the host computer via an interprocessor DMA bus master and controller.
Abstract: A multi-media user task (host) computer is interfaced to a high speed DSP which provides support functions to the host computer via an interprocessor DMA bus master and controller. Support of multiple dynamic hard real-time signal processing task requirements is provided by posting signal processor support task requests from the host processor, through the interprocessor DMA controller, to the signal processor and its operating system. The signal processor builds data transfer packet request execution lists in a partitioned queue in its own memory, and executes internal signal processor tasks invoked by users at the host system by extracting signal sample data from incoming data packets presented by the interprocessor DMA controller in response to its execution of the DMA packet transfer request queues built by the signal processor in the partitioned queue. Processed signal values and other results are extracted from signal processor memory by the interprocessor DMA controller executing the partitioned packet request lists and are delivered to the host processor. A very large number of packet transfers, supporting numerous user tasks and implementing a very large number of DMA channels, is thus made possible while avoiding any need for arbitration between the channels by either the signal processor or the host processor.
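The partitioned-queue arrangement can be sketched as follows, assuming per-channel partitions drained round-robin; the class name, channel names, and the drain policy are illustrative assumptions, not the patent's exact design:

```python
# Hypothetical partitioned request queue: the signal processor posts
# packet-transfer requests into per-channel partitions, and the DMA
# controller drains the partitions round-robin, so neither processor
# arbitrates between channels.
from collections import deque

class PartitionedQueue:
    def __init__(self, channels):
        self.partitions = {ch: deque() for ch in channels}

    def post(self, channel, request):
        # Signal processor side: append to that channel's partition.
        self.partitions[channel].append(request)

    def drain_round_robin(self):
        # DMA controller side: take one request per non-empty channel per pass.
        executed = []
        while any(self.partitions.values()):
            for ch, part in self.partitions.items():
                if part:
                    executed.append((ch, part.popleft()))
        return executed

q = PartitionedQueue(channels=["audio", "video"])
q.post("audio", "xfer A0")
q.post("audio", "xfer A1")
q.post("video", "xfer V0")
print(q.drain_round_robin())  # interleaves channels without arbitration
```

Because each channel owns its own partition, posting and draining never contend on a shared list head, which is the property the abstract credits for scaling to many DMA channels.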

Patent
08 Jun 1995
TL;DR: In this paper, a multi-media user task (host) computer is interfaced to a high speed DSP which provides support functions to the host computer via an interprocessor DMA bus master and controller.
Abstract: A multi-media user task (host) computer is interfaced to a high speed DSP which provides support functions to the host computer via an interprocessor DMA bus master and controller. Support of multiple dynamic hard real-time signal processing task requirements is provided by posting signal processor support task requests from the host processor, through the interprocessor DMA controller, to the signal processor and its operating system. The signal processor builds data transfer packet request execution lists in a partitioned queue in its own memory, and executes internal signal processor tasks invoked by users at the host system by extracting signal sample data from incoming data packets presented by the interprocessor DMA controller in response to its execution of the DMA packet transfer request queues built by the signal processor in the partitioned queue. Processed signal values and other results are extracted from signal processor memory by the interprocessor DMA controller executing the partitioned packet request lists and are delivered to the host processor. A very large number of packet transfers, supporting numerous user tasks and implementing a very large number of DMA channels, is thus made possible while avoiding any need for arbitration between the channels by either the signal processor or the host processor.

Patent
21 Nov 1995
TL;DR: In this paper, the authors proposed a multitask execution method and device for information processing by multi-task execution which are minimized in power consumption, where the total electric power of hardware resources that each task uses is detected and on the basis of the total power, the operation attributes of the respective tasks are determined.
Abstract: PURPOSE: To provide a method and device for information processing by multitask execution that minimize power consumption. CONSTITUTION: A CPU 1 executes a program in its main memory 3. The main memory 3 includes memories such as a DRAM and a ROM. A power source control part 4 can turn the power supplied to each unit ON and OFF independently, according to instructions from the CPU 1. The device is further equipped with a display unit 31, an FDD unit 32, an HDD unit 33, a CD-ROM unit 34, a printer unit 35, and a communication unit 36. With this constitution, the total electric power of the hardware resources that each task uses is detected, and on the basis of this total power the operation attributes of the respective tasks are determined. That is, the hardware resources each task has acquired and their power consumption are detected, and the execution priority of a task whose acquired hardware resources consume a large total power is set high; the scheduler of the operating system then selects the task with the highest execution priority and grants it the execution right.
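The scheduling rule described above can be sketched in a few lines, assuming each task holds a set of hardware units with known power draws; the unit names echo the abstract, but the wattage figures and function names are invented for illustration:

```python
# Hypothetical power-aware scheduling rule: sum the power draw of the
# hardware units each task holds, and grant the execution right to the
# task with the largest total, so the power-hungry units stay powered
# for the shortest wall-clock time.

UNIT_POWER_W = {"display": 3.0, "fdd": 2.0, "hdd": 4.0,
                "cdrom": 5.0, "printer": 6.0, "comm": 1.5}

def total_power(task_units):
    return sum(UNIT_POWER_W[u] for u in task_units)

def pick_next(tasks):
    # tasks: name -> set of hardware units the task has acquired
    return max(tasks, key=lambda name: total_power(tasks[name]))

tasks = {
    "editor": {"display"},           # 3.0 W
    "backup": {"hdd", "cdrom"},      # 9.0 W
    "report": {"printer"},           # 6.0 W
}
print(pick_next(tasks))  # "backup": its held units draw the largest total
```

Once the high-draw task finishes, the power source control part can switch its units off, which is what makes "largest total power first" a power-minimizing order rather than a conventional priority scheme.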