
Showing papers on "Task (computing) published in 1996"


Patent
24 May 1996
TL;DR: In this article, the authors present a system for assessing the performance of a server application that acquires performance information from the perspective of a simulated user and has significantly reduced hardware requirements.
Abstract: Method and system for assessing the performance of a server application that acquires performance information from the perspective of a simulated user and has significantly reduced hardware requirements. Particularly, actual user behavior is modeled so that accurate determinations can be made as to the number of users a given server application can adequately support. User behavior is modeled in a client profile that contains user parameters corresponding to the nature, timing, and frequency of user activities in operating a client that in turn corresponds to client tasks. A plurality of processes and process threads are initiated to contact a server as a plurality of simulated clients from a single client computer, each simulated client making a separate logical connection to the server. A task scheduler will schedule the simulated client tasks that are determined for each simulated user by reference to the user parameters in the client profile throughout a work day. The scheduler also introduces a random element so that the tasks simulate natural variability in user behavior. User-perceivable response times for the tasks corresponding to simulated user activity are maintained in a log file, and the 95th-percentile time or score for each task type is calculated. The individual task-type scores may be weighted and averaged together to arrive at a weighted average response time. The weighted average response time can then be used as a threshold value to determine the total number of users a server application can adequately support.
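The scoring step described above (per-task-type 95th-percentile response times combined into a weighted average) can be sketched as follows. This is an illustrative reconstruction under the abstract's description, not the patented implementation; the function names, the nearest-rank percentile rule, and the sample data are all assumptions.

```python
def percentile_95(times):
    """95th percentile of a list of response times, by nearest rank."""
    s = sorted(times)
    idx = max(0, int(round(0.95 * len(s))) - 1)
    return s[idx]

def weighted_average_response(log, weights):
    """log: {task_type: [response times]}; weights: {task_type: weight}."""
    scores = {t: percentile_95(ts) for t, ts in log.items()}
    total_w = sum(weights[t] for t in scores)
    return sum(scores[t] * weights[t] for t in scores) / total_w

# Hypothetical log of simulated-user response times per task type.
log = {"send_mail": [0.2, 0.4, 0.3, 1.0], "read_mail": [0.1, 0.1, 0.2, 0.5]}
weights = {"send_mail": 2.0, "read_mail": 1.0}
print(weighted_average_response(log, weights))
```

With the sample log, the 95th-percentile score of each task type is its slowest response, and the weighted average leans toward the more heavily weighted task type.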

199 citations


Patent
02 Jul 1996
TL;DR: In this paper, a lock manager decomposes the single spin lock traditionally employed to protect shared, global Lock Manager structures into multiple spin locks, each protecting individual hash buckets or groups of hash buckets which index into particular members of those structures.
Abstract: Database system and methods are described for improving scalability of multi-user database systems by improving management of locks used in the system. The system provides multiple server engines, with each engine having a Parallel Lock Manager. More particularly, the Lock Manager decomposes the single spin lock traditionally employed to protect shared, global Lock Manager structures into multiple spin locks, each protecting individual hash buckets or groups of hash buckets which index into particular members of those structures. In this manner, contention for shared, global Lock Manager data structures is reduced, thereby improving the system's scalability. Further, improved "deadlock" searching methodology is provided. Specifically, the system provides a "deferred" mode of deadlock detection. Here, a task simply goes to sleep on a lock; it does not initiate a deadlock search. At a later point in time, the task is awakened to carry out the deadlock search. Often, however, a task can be awakened with the requested lock being granted. In this manner, the "deferred" mode of deadlock detection allows the system to avoid deadlock detection for locks which are soon granted.
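The spin-lock decomposition can be illustrated with a generic lock-striping sketch: each group of hash buckets is protected by its own lock, so threads touching different stripes never contend. This is a minimal stand-in using Python's `threading.Lock` rather than spin locks; the class and its parameters are hypothetical, not the patent's structures.

```python
import threading

class StripedLockTable:
    """Hash table with one lock per group (stripe) of buckets."""

    def __init__(self, n_buckets=64, n_stripes=8):
        self.n_buckets = n_buckets
        self.locks = [threading.Lock() for _ in range(n_stripes)]
        self.n_stripes = n_stripes
        self.buckets = [{} for _ in range(n_buckets)]

    def _stripe(self, key):
        # Stripe is derived from the bucket index, so all accesses to
        # one bucket always take the same lock.
        return (hash(key) % self.n_buckets) % self.n_stripes

    def put(self, key, value):
        with self.locks[self._stripe(key)]:   # only this stripe is contended
            self.buckets[hash(key) % self.n_buckets][key] = value

    def get(self, key):
        with self.locks[self._stripe(key)]:
            return self.buckets[hash(key) % self.n_buckets].get(key)
```

Compared with a single global lock, contention drops roughly in proportion to the number of stripes, which is the scalability effect the abstract describes.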

169 citations


Patent
Yoshiei Endo1
13 Feb 1996
TL;DR: In this paper, a work flow system is described in which a manager server unit divides a work effort into a plurality of tasks and, when executing those tasks, allocates them to the client units of task execution persons fitted for the tasks.
Abstract: A work flow system is disclosed in which, when a manager server unit of the work flow system divides a work effort into a plurality of tasks and executes them, the manager server unit allocates the tasks to the client units of task execution persons fitted for those tasks. The work flow system includes a personal data file that stores the personal data of each task execution person and an ability rating for each executable task, and a function for allocating tasks to the client units of task execution persons. This allocation function assigns tasks based on the personal data file, and the personal data file is updated based on the results of completed tasks.

159 citations


Patent
21 Nov 1996
TL;DR: In this paper, a control system is extremely flexible and modular and can be equipped or upgraded to have any number or combination of features, such as security, home theater/audio, HVAC, energy management, and lighting with each feature having a separate task unit.
Abstract: A control system is extremely flexible and modular and can be equipped or upgraded to have any number or combination of features, such as security, home theater/audio, HVAC, energy management, and lighting, with each feature having a separate task unit. The task units can be added or removed from a core set of units with minimal impact on the core set of units. The core set of units includes a control database unit that stores sets of commands in a relational database according to an input/output event and a command execution unit that routes the commands to the appropriate task units for execution. The core set of units does not need to understand the input/output event or the commands but rather routes the commands to the task units addressed for execution. The system has a variable database that contains a relational database of variables shared between the various task units and stores such things as all keypad displays. The individual task units query the variable database unit for the values of all shared variables and automatically receive any updates in the values of the shared variables from the variable database unit. The software for the system is stored in flash ROM and can be automatically upgraded through a download task unit. The system may have an energy task unit that adjusts consumption of electricity based on any changes in rate and which provides a pathway of communication with the electrical utility company.
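The routing behavior of the core units might be sketched as an event-to-command lookup that dispatches to registered task units without interpreting the commands. All names here are illustrative, not from the patent.

```python
class Core:
    """Routes commands bound to input/output events to task units."""

    def __init__(self):
        self.db = {}     # event -> list of (task_unit_name, command)
        self.units = {}  # task_unit_name -> handler callable

    def register(self, name, handler):
        self.units[name] = handler

    def bind(self, event, unit, command):
        self.db.setdefault(event, []).append((unit, command))

    def on_event(self, event):
        # The core never interprets the command; it only routes it to
        # the task unit addressed for execution.
        for unit, cmd in self.db.get(event, []):
            self.units[unit](cmd)
```

Because the core is command-agnostic, new task units (say, a hypothetical lighting unit) can be registered and bound without changing the core, which is the modularity claim of the abstract.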

145 citations


Patent
29 Jan 1996
TL;DR: In this paper, a schedule-control method for managing and controlling projects is described, implemented on components including an electronic user interface, relational database, and computational component, which are designed to process input data in a well-defined format called a receivable/deliverable (rec/del) format.
Abstract: A schedule-control method for managing and controlling projects is described. The method is implemented on components including an electronic user interface, a relational database, and a computational component. These components are designed to process input data in a well-defined format called a receivable/deliverable (rec/del) format. Using this format, the project is broken down into a series of smaller components or "tasks". Each task involves a contract between a supplier and a receiver, and results in the production of a "product". Suppliers and receivers can enter up-to-the-minute input data in the rec/del format concerning a particular product. Input data are entered through the electronic user interface, which can be e-mail or a user-interface computer program. Data are entered into tables of the relational database in the rec/del format. The input data are then rapidly processed with the computational component to generate output data indicating the status of the project.

143 citations


Patent
22 Feb 1996
TL;DR: In this article, a method for executing tasks in a multiprocessor system including a plurality of processors, each processor taking either an "idle" state or a "run" state, is presented.
Abstract: The present invention provides a method for executing tasks in a multiprocessor system including a plurality of processors, each processor taking either an "idle" state or a "run" state, wherein the method includes the steps of: detecting, when a first processor among the plurality of processors that is executing a first task newly generates a second task, whether or not a second processor taking the "idle" state exists among the plurality of processors; assigning the second task, if a second processor taking the "idle" state is detected, to the second processor, so as to begin execution of the second task by the second processor, change the state of the second processor from the "idle" state to the "run" state, and store a flag having a first value indicating that the execution of the first task has not been suspended; and suspending, if no second processor taking the "idle" state is detected, the execution of the first task by the first processor, beginning execution of the second task by the first processor, and storing a flag having a second value indicating that the execution of the first task has been suspended.
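A minimal sketch of the dispatch rule above, assuming states and flags are plain Python values: prefer an idle processor for the newly generated task; if none exists, preempt the parent on its own processor and record the suspension flag. Names are illustrative.

```python
IDLE, RUN = "idle", "run"

def spawn(states, parent, flags):
    """Assign a newly generated task; return the processor that runs it."""
    for p, s in enumerate(states):
        if s == IDLE:
            states[p] = RUN                  # idle CPU found: run new task there
            flags[parent] = "not_suspended"  # parent keeps running
            return p
    flags[parent] = "suspended"              # no idle CPU: preempt the parent
    return parent
```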

141 citations


Proceedings ArticleDOI
26 Feb 1996
TL;DR: It is shown that spatial joins are well suited to processing on a parallel hardware platform, and the most efficient algorithm variant shows an almost optimal speed-up under the assumption that the number of disks is sufficiently large.
Abstract: We show that spatial joins are well suited to processing on a parallel hardware platform. The parallel system is equipped with a so-called shared virtual memory, which is well suited for the design and implementation of parallel spatial join algorithms. We start with an algorithm that consists of three phases: task creation, task assignment and parallel task execution. In order to reduce CPU and I/O cost, the three phases are processed in a fashion that preserves spatial locality. Dynamic load balancing is achieved by splitting tasks into smaller ones and reassigning some of the smaller tasks to idle processors. In an experimental performance comparison, we identify the advantages and disadvantages of several variants of our algorithm. The most efficient one shows an almost optimal speed-up under the assumption that the number of disks is sufficiently large.

134 citations


Patent
13 Aug 1996
TL;DR: In this article, a process is disclosed to serialize instructions that are to be processed serially in a multiprocessor system, with the use of a token, where the token can be assigned on request to one of the processors, which thereupon has the right to execute the command.
Abstract: A process is disclosed to serialize instructions that are to be processed serially in a multiprocessor system, with the use of a token, where the token can be assigned on request to one of the processors, which thereupon has the right to execute the command. If the command consists of distributed tasks, the token remains blocked until the last dependent task belonging to the command has also been executed. Only then can the token be assigned to another instruction. Moreover, a device is described to manage this token, which features three states: a first state, in which the token is available; a second state, in which the token is assigned to one of the processors; and a third state, in which the token is blocked because dependent tasks still have to be carried out. Moreover, a circuit is disclosed with which the token principle can be implemented in a simple manner. The token is only available if none of the processors is in possession of the token and if no dependent task is pending at any of the processors. The OR chaining of signals to form a signal C, which is set if the token is not available, represents the basic circuitry with which the serialization of commands consisting of distributed tasks is carried out. The invention applies particularly to commands such as IPTE (invalidate page-table entry) and SSKE (set storage key extended), which modify the address translation tables in memory that are used in common by all processors.
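The three token states and the blocking-until-drained behavior might be modeled as a small state machine. This is an abstract software sketch of the protocol; the patent describes a hardware circuit, and all names here are assumptions.

```python
AVAILABLE, ASSIGNED, BLOCKED = range(3)

class SerializationToken:
    def __init__(self):
        self.state = AVAILABLE
        self.pending = 0            # dependent tasks not yet finished

    def acquire(self, n_dependent_tasks):
        """A processor requests the token; grant only if available."""
        if self.state != AVAILABLE:
            return False
        self.state = ASSIGNED
        self.pending = n_dependent_tasks
        return True

    def release(self):
        # Owner is done issuing; the token stays blocked until the
        # last dependent task belonging to the command completes.
        self.state = BLOCKED if self.pending else AVAILABLE

    def dependent_done(self):
        self.pending -= 1
        if self.state == BLOCKED and self.pending == 0:
            self.state = AVAILABLE
```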

111 citations


Patent
Mohan Sharma1, Leo Yue Tak Yeung1
08 Mar 1996
TL;DR: In this article, the authors present a method, system and product for dynamically managing a pool of execution units in a server system, the pool devoted to a communication process between client and server processes.
Abstract: A method, system and product for dynamically managing a pool of execution units in a server system, the pool devoted to a communication process between client and server processes. A minimum and a maximum number of execution units in the communication process pool are established. The minimum number of execution units is the number necessary to support a typical client load. The maximum number of execution units is an upper bound to support a peak client load without overloading the server system. As client requests for service are received by the server system, a number of determinations are made. It is determined whether assigning an execution unit to the request would bring the current number of execution units in the communication process pool over the maximum number of execution units. If so, the client request is rejected. It is determined whether assigning an execution unit to the request would bring the number of execution units assigned to the client task making the request over the allotted number of execution units for that client task. If so, the client request is rejected. If the determinations are negative, the client request is granted, thereby assigning an execution unit in the communication process pool to the client request. The number of unused execution units in the communication pool is periodically reviewed to determine whether it should be increased or decreased to improve system performance.
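The two admission checks described above can be condensed into one predicate. A minimal sketch with hypothetical parameter names:

```python
def admit(pool_used, pool_max, client_used, client_allot):
    """Return True if an execution unit may be assigned to the request."""
    if pool_used + 1 > pool_max:
        return False          # would push the pool over its maximum
    if client_used + 1 > client_allot:
        return False          # requesting client task over its allotment
    return True
```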

101 citations


Journal ArticleDOI
TL;DR: Using the Japanese version of NASA-TLX on three laboratory experimental tasks, TLX scores were obtained that responded sensitively to changes in task difficulty.
Abstract: Workload was measured for three laboratory experimental tasks using the Japanese version of NASA-TLX; in every task, TLX scores were obtained that responded sensitively to changes in task difficulty. Furthermore, because the scale weights computed from paired comparisons of scale importance showed a different pattern for each task, it was verified that the three experimental tasks differed from one another in the nature of their workload, confirming that TLX scores are applicable to various kinds of work of differing character. Correlation and multiple-regression analyses were also performed, and the results were compared with reports on the original version of NASA-TLX. Finally, when the various indices obtainable from a simplified method that omits the paired comparisons were estimated, the between-subject variability was similar to, or even somewhat smaller than, that of the regular NASA-TLX score (WWL), and the correlations with WWL and with "overall workload" proved to be high.

93 citations


Journal ArticleDOI
TL;DR: An important approach to high-performance computing is to construct a heterogeneous computing (HC) environment, consisting of a variety of machines interconnected by high-speed links, orchestrated to perform an application whose subtasks have diverse execution requirements.
Abstract: As a result of advances in high-speed digital communications, researchers have begun to use collections of different high-performance machines in concert to execute computationally intensive application tasks. Existing high-performance machines typically achieve only a fraction of their peak performance on certain portions of such application programs; that is, there is a gap between average sustained performance and the machine’s peak performance. One reason for this is that different subtasks of an application can have different computational requirements that are best processed by different types of machine architectures. Thus, an important approach to high-performance computing is to construct a heterogeneous computing (HC) environment, consisting of a variety of machines interconnected by high-speed links, orchestrated to perform an application whose subtasks have diverse execution requirements. In addition to how well a subtask matches a machine, many factors must be considered to exploit optimally the power of an HC suite of machines. These include the time to move data shared by subtasks executed on different machines, the operating system overhead involved in stopping a task on one machine and restarting it on another, the ability to execute subtasks concurrently on all or some subset of the machines in the suite, and the machine and intermachine network load caused by other users of the HC system. There are many instances of successful implementations of application tasks across suites of heterogeneous machines. Typically, however, current users of HC systems must decompose the application task into appropriate subtasks themselves, decide on which machine to execute each subtask, code each subtask specifically for its target machine, and determine the relative execution schedule for the subtasks. 
The automation of this process is a long-term goal in the field of HC, but research conducted toward this goal should produce tools that will aid the users of HC systems until full automation is possible. Even though the field of HC is relatively new, it is very active. This paper is a brief outline of the software support challenges for HC addressed in Siegel et al. [1996], and readers interested in more details are referred to Eshaghian [1996], Freund and Siegel [1993], Freund and Sunderam [1994], Siegel et al. [1996], and Sunderam [1995]. The first step in using an HC system is to construct the application program. A programming language used in an HC environment must be compilable into efficient code for any machine in the HC suite, and the program specification should facilitate the decomposition of an application task into appropriate subtasks. One model for automated HC consists of four stages: (1) determina-

01 Jan 1996
TL;DR: A new allocation strategy is introduced, the Bold strategy, which outperforms other strategies suggested in the literature in a number of simulations and is shown to achieve a small expected overall finishing time.
Abstract: We study a scheduling or allocation problem with the following characteristics: The goal is to execute a number of unspecified tasks on a parallel machine in any order and as quickly as possible. The tasks are maintained by a central monitor that will hand out batches of a variable number of tasks to requesting processors. A processor works on the batch assigned to it until it has completed all tasks in the batch, at which point it returns to the monitor for another batch. The time needed to execute a task is assumed to be a random variable with known mean and variance, and the execution times of distinct tasks are assumed to be independent. Moreover, each time a processor obtains a new batch from the monitor, it suffers a fixed delay. The challenge is to allocate batches to processors in such a way as to achieve a small expected overall finishing time. We introduce a new allocation strategy, the Bold strategy, and show that it outperforms other strategies suggested in the literature in a number of simulations.
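The monitor/batch cost model (a fixed delay per batch plus the batch's task times) can be illustrated with a toy makespan simulation. Note this sketch uses a fixed batch size for clarity; the Bold strategy itself chooses batch sizes adaptively from the tasks' mean and variance, which this code does not attempt.

```python
def makespan(task_times, n_procs, batch, h):
    """Overall finishing time when processors fetch fixed-size batches.

    Each time a processor obtains a batch from the monitor it suffers a
    fixed delay h, then works through all tasks in the batch.
    """
    tasks = list(task_times)
    finish = [0.0] * n_procs
    while tasks:
        p = finish.index(min(finish))        # next processor to request work
        work, tasks = tasks[:batch], tasks[batch:]
        finish[p] += h + sum(work)           # fixed delay + batch work
    return max(finish)
```

The trade-off the allocation strategies navigate is visible here: large batches amortize the fixed delay h but risk load imbalance near the end, while small batches balance load at the cost of more per-batch delays.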

Book ChapterDOI
04 Nov 1996
TL;DR: A system that can generate and execute plans for multiple interacting goals which arrive asynchronously and whose task structure is not known a priori, interrupting and suspending tasks when necessary, and a system which can compensate for minor problems in its domain knowledge.
Abstract: This paper describes ROGUE, an integrated planning and executing robotic agent. ROGUE is designed to be a roving office gopher unit, doing tasks such as picking up & delivering mail and returning & picking up library books, in a setup where users can post tasks for the robot to do. We have been working towards the goal of building a completely autonomous agent which can learn from its experiences and improve upon its own behaviour with time. This paper describes what we have achieved to date: (1) a system that can generate and execute plans for multiple interacting goals which arrive asynchronously and whose task structure is not known a priori, interrupting and suspending tasks when necessary, and (2) a system which can compensate for minor problems in its domain knowledge, monitoring execution to determine when actions did not achieve expected results, and re-planning to correct failures.

Proceedings Article
01 Jan 1996
TL;DR: A new heuristic model, the HMLM/SA, is presented, which performs static allocation of such program modules in a heterogeneous distributed computing system in a manner that is designed to minimize the application program's parallel execution time.
Abstract: In heterogeneous distributed computing systems, partitioning of the application software into modules and the proper allocation of these modules among dissimilar processors are important factors which determine the efficient utilization of resources. This paper presents a new heuristic model, the HMLM/SA, which performs static allocation of such program modules in a heterogeneous distributed computing system in a manner that is designed to minimize the application program's parallel execution time. The new methodology augments the Maximally Linked Module concept by using stochastic techniques and by adding constructs which take into account the limited and uneven distribution of hardware resources often associated with heterogeneous systems. The execution time of the resulting HMLM/SA algorithm and the quality of the allocations produced are shown to be superior to that of the base HMLM algorithm, pure simulated annealing and the randomized algorithm when they were applied to randomly-generated systems and synthetic structures which were derived from real-world problems.

Proceedings ArticleDOI
20 Sep 1996
TL;DR: This paper presents a two-phase scheme for instruction selection which exploits available instruction-level parallelism, and may significantly increase the code quality compared to previous work, which is demonstrated for a widespread DSP.
Abstract: We address the problem of instruction selection in code generation for embedded digital signal processors. Recent work has shown that this task can be efficiently solved by tree covering with dynamic programming, even in combination with the task of register allocation. However, performing instruction selection by tree covering alone does not exploit available instruction-level parallelism, for instance in the form of multiply-accumulate instructions or parallel data moves. In this paper we investigate how such complex instructions may affect detection of optimal tree covers, and we present a two-phase scheme for instruction selection which exploits available instruction-level parallelism. At the expense of higher compilation time, this technique may significantly increase the code quality compared to previous work, which is demonstrated for a widespread DSP.

Proceedings ArticleDOI
06 Aug 1996
TL;DR: The work required to modify an MPI implementation to allow for task migration is described, along with "Hector", the heterogeneous computing task allocator that is used to migrate tasks automatically and improve the overall performance of a parallel program.
Abstract: In order to use networks of workstations in parallel processing applications, several schemes have been devised to allow processes on different, possibly heterogeneous, platforms to communicate with one another. The Message-Passing Interface (MPI) is one such scheme that allows for message-passing across different architectures. The MPI specification does not make provisions for the migration of a process between machines. This paper describes the work required to modify an MPI implementation to allow for task migration. It also describes "Hector", our heterogeneous computing task allocator that is used to migrate tasks automatically and improve the overall performance of a parallel program.

Journal ArticleDOI
01 Aug 1996
TL;DR: The system software for heterogeneous computing systems is presented according to an original three-dimensional (3-D) taxonomy whose criteria rely on the level of heterogeneity support implementation, the programming approach, and the data access technique applied.
Abstract: This survey of heterogeneous computing concepts and systems is based on the authors' recently proposed "EM³" (Execution Modes/Machine Models) taxonomy of computer systems in general. The taxonomy is based on two criteria: the number of execution modes supported by the system and the number of machine models present in the system. Since these two criteria are orthogonal, four classes exist: Single Execution mode/Single machine Model (SESM), Single Execution mode/Multiple machine Models (SEMM), Multiple Execution modes/Single machine Model (MESM), and Multiple Execution modes/Multiple machine Models (MEMM). In Section II, heterogeneous computing concepts are viewed through three phases of the compilation and execution of any heterogeneous application: parallelism detection, parallelism characterization, and resource allocation. The parallelism detection phase discovers fine-grain parallelism inside every task. This phase is not an exclusive feature of heterogeneous computing, so it is not dealt with in greater detail. The task of the parallelism characterization phase is to estimate the behavior of each task in the application on every architecture in the heterogeneous system. In the parallelism characterization domain, an original taxonomy is given; it contains scheme classes such as vector and matrix, static and dynamic, implicit and explicit, algorithmic and heuristic, and numeric and symbolic. The resource allocation phase determines the place and the moment of execution of every task to optimize a certain performance measure related to some criteria. In the resource allocation domain, the existing Casavant-Kuhl taxonomy is extended and used. This well-known taxonomy is supplemented with scheme classes such as noncooperative competitive, noncooperative noncompetitive, and load sharing.
In Section III, heterogeneous systems characterized with multiple execution modes ("fully" heterogeneous systems falling in the MESM and the MEMM class) are surveyed. The MESM class systems are described and illustrated with three case studies, two of which support SIMD/MIMD and one supports scalar/vector combination of execution modes. The MEMM class systems are described and illustrated with two representative examples of fully heterogeneous networks supporting multiple execution modes. The system software for heterogeneous computing systems is presented according to an original three-dimensional (3-D) taxonomy whose criteria rely on the level of heterogeneity support implementation, the programming approach, and the data access technique applied. In Section III, several representative heterogeneous applications are described with their computation requirements and the systems used for their execution. Each topic covered in the paper contains several concise examples.

Proceedings ArticleDOI
17 Nov 1996
TL;DR: This paper shows how a coordination library implementing the Message Passing Interface (MPI) can be used to represent common parallel program structures, and concludes that this synergistic combination of two parallel programming standards represents a useful approach to task parallelism in a data-parallel framework.
Abstract: High Performance Fortran (HPF) does not allow efficient expression of mixed task/data-parallel computations or the coupling of separately compiled data-parallel modules. In this paper, we show how a coordination library implementing the Message Passing Interface (MPI) can be used to represent these common parallel program structures. This library allows data-parallel tasks to exchange distributed data structures using calls to simple communication functions. We present microbenchmark results that characterize the performance of this library and that quantify the impact of optimizations that allow reuse of communication schedules in common situations. In addition, results from two-dimensional FFT, convolution, and multiblock programs demonstrate that the HPF/MPI library can provide performance superior to that of pure HPF. We conclude that this synergistic combination of two parallel programming standards represents a useful approach to task parallelism in a data-parallel framework, increasing the range of problems addressable in HPF without requiring complex compiler technology.

Patent
29 Aug 1996
TL;DR: In this paper, a multitasking data processing system having a plurality of tasks and a shared resource, and a method of controlling allocation of shared resources within a multitasking data processing system, are disclosed.
Abstract: A multitasking data processing system having a plurality of tasks and a shared resource and a method of controlling allocation of shared resources within a multitasking data processing system are disclosed. In response to a resource request for a portion of a shared resource by a particular task among the plurality of tasks, a determination is made whether or not granting the resource request would cause a selected level of resource allocation to be exceeded. In response to a determination that granting the resource request would not cause the selected level of resource allocation to be exceeded, the resource request is granted. However, in response to a determination that granting the resource request would cause the selected level of resource allocation to be exceeded, execution of the particular task is suspended for a selected penalty time. In one embodiment of the present invention, the shared resource is a memory.
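The grant-or-penalize rule can be sketched in a few lines, with `time.sleep` standing in for suspending the requesting task. Names and the penalty default are illustrative, not from the patent.

```python
import time

def request(allocated, amount, threshold, penalty=0.01):
    """Grant a resource request while under the selected allocation level;
    otherwise suspend the requesting task for a penalty time."""
    if allocated + amount <= threshold:
        return allocated + amount, True       # grant the request
    time.sleep(penalty)                       # suspend the task instead
    return allocated, False                   # request not granted
```

In the memory embodiment the abstract mentions, `allocated` and `threshold` would be byte counts; the penalty throttles tasks that push allocation past the selected level.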

Patent
Yuji Yokoya1
18 Jul 1996
TL;DR: In this article, a processor allocating apparatus employed in a multiprocessor system capable of executing a plurality of tasks in parallel is discussed, in which a compiler compiles a source program constructed of parallel tasks to produce a target program and also produces a communication amount table that holds the data amount of communication operations executed among the respective tasks.
Abstract: In a processor allocating apparatus employed in a multiprocessor system capable of executing a plurality of tasks in parallel, a compiler compiles a source program constructed of parallel tasks to produce a target program, and also produces a communication amount table for tasks, which holds the data amount of the communication operations executed among the respective tasks. While referring to both the communication amount table for tasks and a processor communication cost table, which defines the data communication time per unit of data for each pair of processors in the system, the scheduler decides that the processor for which the communication time among the tasks becomes minimum is allocated to each task of the parallel tasks, and registers this decision in a processor management table.
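The scheduler's decision rule (place each task on the processor minimizing inter-task communication time, given the communication-amount and cost tables) might be approximated by a greedy sketch like the following. The greedy placement order is an assumption; the patent does not specify the search procedure, and all names are illustrative.

```python
def comm_time(assign, task, proc, amounts, cost):
    """Communication time added if `task` were placed on `proc`,
    counting only communication with already-placed tasks."""
    t = 0.0
    for other, amount in amounts.get(task, {}).items():
        if other in assign:
            t += amount * cost[proc][assign[other]]
    return t

def allocate(tasks, procs, amounts, cost):
    """Greedily place each task on its communication-minimizing processor.

    amounts: {task: {other_task: data amount}}
    cost:    {proc: {other_proc: time per unit of data}}
    """
    assign = {}  # the "processor management table" of the abstract
    for task in tasks:
        assign[task] = min(
            procs, key=lambda p: comm_time(assign, task, p, amounts, cost)
        )
    return assign
```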

Patent
20 Dec 1996
TL;DR: In this article, a computer-implemented method of managing a computer network including a plurality of devices is provided, wherein a plurality of network management tasks are performable upon the devices.
Abstract: A computer-implemented method of managing a computer network including a plurality of devices is provided, wherein a plurality of network management tasks are performable upon the devices. Data is gathered about a present configuration of the network, including the types of devices in the network, the quantity of each type of device present in the network, the relationships between the devices, and the tasks performable upon each of the devices. The data is then stored in a database representing a network map. A display is generated corresponding to the network map using the data in the database (200). The display shows an association of the devices (201) with the tasks performable on the devices (202) using bitmap representations (205, 207) of the devices and tasks. The display may include hierarchical, schematic, or geographical representations of the devices on the network (201). The devices are organized into a plurality of groups. In response to a user input selecting a device or group, the tasks performable by that device or group are identified on the display (209, 211). A user may initiate any one of the displayed tasks by applying a user input selecting that task.

Proceedings ArticleDOI
03 Jan 1996
TL;DR: The paper addresses the jitter control issue in time-triggered real-time systems where tasks are activated according to time and presents solutions that can prevent jitter, as well as other solutions that may result in minor jitter.
Abstract: The paper addresses the jitter control issue in time-triggered real-time systems, where tasks are activated according to time. Our model is based on the temporal distance constrained task model (DCTS). However, many real-time systems have tasks that do not fit the simple assumptions of the DCTS model, because tasks may occasionally access shared resources and certain tasks may arrive irregularly. We address the issues that cause jitter in DCTS. We first review the algorithm developed for DCTS and present an extension for an optimal algorithm. Three jitter issues are then discussed: sharing resources between tasks, handling tasks that have fixed and unchangeable periods, and handling tasks that arrive aperiodically. We present solutions that can prevent jitter, as well as other solutions that may result in minor jitter.
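To make the distance-constrained notion concrete, here is a small illustrative check (not the paper's algorithm): under DCTS, consecutive completions of a task's instances must be separated by at most its distance constraint, and jitter shows up as variation in those separations.

```python
# Illustrative DCTS-style check (assumed formulation, not the paper's):
# `completions` is the sorted list of completion times of one task's instances,
# `c` is its temporal distance constraint.
def check_distance_constraint(completions, c):
    gaps = [b - a for a, b in zip(completions, completions[1:])]
    satisfied = all(g <= c for g in gaps)          # every separation within c
    jitter = max(gaps) - min(gaps) if gaps else 0  # variation in separations
    return satisfied, jitter
```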

Patent
05 Jan 1996
TL;DR: In this paper, the authors propose to suppress the power consumption of the whole system by positively generating inactive processors even in a state where the number of tasks is over that of processors, and the other processor is brought into an inactive state so that power source supply for the inactive processor is stopped or a clock frequency is lowered.
Abstract: PROBLEM TO BE SOLVED: To effectively suppress the power consumption of the whole system by deliberately generating inactive processors even in a state where the number of tasks exceeds that of processors. SOLUTION: In this information processing system, the OS knows in advance the processor resource request quantity of each task, including the OS itself, and concentrates the group of tasks on the specific processor 11 within a range in which the resources of the processors 11 and 12 are not exhausted (a range in which the sum of the requested processor resources does not exceed 100%). The other processor 12 is thereby brought into an inactive state, so that the power supply for the inactive processor is stopped or its clock frequency is lowered. The power consumption of the whole system is thereby effectively suppressed. COPYRIGHT: (C)1997,JPO
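The consolidation idea amounts to a first-fit packing of tasks onto as few processors as possible, leaving the rest idle so they can be powered down or clocked down. A minimal sketch, with all names and the percent-capacity convention assumed:

```python
# First-fit consolidation sketch (hypothetical interface):
# task_demands - per-task processor resource requests, in percent
# Returns (per-processor task lists, indices of processors left inactive).
def consolidate(task_demands, num_procs, capacity=100):
    loads = [0] * num_procs
    assignment = [[] for _ in range(num_procs)]
    for task, demand in enumerate(task_demands):
        for proc in range(num_procs):  # prefer low-index CPUs, filling them first
            if loads[proc] + demand <= capacity:
                loads[proc] += demand
                assignment[proc].append(task)
                break
    # Processors with no load can have power cut or clocks lowered.
    inactive = [p for p in range(num_procs) if loads[p] == 0]
    return assignment, inactive
```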

Patent
George Kraft1, John Anthony Moore1
31 Oct 1996
TL;DR: In this paper, a system and method for automatically adjusting priority assigned to execution of applications, tasks, or workspaces is presented, where a display of visual indicators is provided, corresponding to a differing task.
Abstract: A system and method for automatically adjusting the priority assigned to execution of applications, tasks, or workspaces. A display of visual indicators is provided, each corresponding to a differing task. By selecting an indicator, the priority given to task execution is altered as the task is moved into a focused state as a result of such selection. A window manager between a server and application registers in the server the adjusted state of a particular application as either in focus or cleared. An application, through its corresponding window-id, detects from the server that an adjustment in priority is desired. A mapping function such as a lookup table maps the window-id to a corresponding process-id, which is then utilized by the application in a process table. The information from the window manager, passed through the display server, is utilized by the application to adjust its own priority relative to the remaining applications in the operating system's process table. A WM_PROCESS atom is introduced to the X Server for the window-id to/from process-id mapping. The CPU resource directed to the particular application as a result of the priority alteration is thereby altered. A focused application is dynamically provided with more CPU resource relative to remaining tasks, applications, or suites thereof associated with a workspace executing in the multitasking environment.
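The window-id to process-id mapping and the focus-driven priority bump can be sketched as follows. The table and nice values are illustrative assumptions; a real implementation would read a WM_PROCESS-style property from the X server and call `setpriority()` on the mapped process.

```python
# Hypothetical focus-to-priority mapping (assumed values and names):
FOCUSED_NICE = -5   # assumed boost for the focused application
DEFAULT_NICE = 0    # baseline for unfocused applications

def refocus(window_to_pid, priorities, focused_window):
    """Update nice values: boost the focused app, restore all others.

    window_to_pid plays the role of the lookup table mapping each
    window-id to its process-id; priorities stands in for the
    operating system's process table."""
    for window, pid in window_to_pid.items():
        priorities[pid] = FOCUSED_NICE if window == focused_window else DEFAULT_NICE
    return priorities
```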

Patent
George Kraft1, John Anthony Moore1
31 Oct 1996
TL;DR: In this article, a system and method for automatically adjusting priority assigned to execution of applications, tasks, or workspaces to improve performance relative to other such applications, task or workspace in a computerized multitasking graphical user interface environment is provided.
Abstract: A system and method are provided for automatically adjusting priority assigned to execution of applications, tasks, or workspaces to thereby improve performance relative to other such applications, tasks or workspaces in a computerized multitasking graphical user interface environment. A display of a plurality of visual indicators is provided, each of which corresponds to a differing task. By selection of one of the indicators, the priority given to execution of the task is altered as the task is thereby moved into a focused state as a result of such selection. A window manager interposed between a server and application registers in the server the adjusted state of a particular application as being either set in focus or cleared. An application may detect from the server a window-id corresponding to the application for which an adjustment in priority is desired. A mapper function, lookup table, or the like for mapping window-id to a corresponding process-id is obviated as a result of employing messaging/signalling. The amount of CPU resource then directed to the particular application as a result of the priority alteration is thereby in turn altered. In this manner, a focused application is dynamically provided with more CPU resource relative to remaining tasks, applications, or suites thereof associated with a workspace executing in the multitasking environment.

Proceedings ArticleDOI
03 Jan 1996
TL;DR: While process migration initial costs are less than 17% of task migration costs, the expected benefits can easily be as high as a 50% improvement in run-time cost over task migration.
Abstract: This paper describes task and process migration for the OSF/1 AD operating system, an OS server for massively parallel processors and clusters of workstations. OSF/1 AD runs in user space, on top of the Mach microkernel. A process in the OSF/1 AD server is composed of the Mach task state (memory, capabilities, threads, etc.) and of the UNIX-related state (open file descriptors, signal masks, etc.). Process migration invokes task migration to transfer the Mach task state, and supports transparent transfer of the UNIX process state. Process and task migration rely on a single system image provided at both the OSF/1 AD and Mach levels. We compare the initial and run-time costs of task and process migration for several programs with different needs. While process migration initial costs are less than 17% of task migration costs, the expected benefits can easily be as high as a 50% improvement in run-time cost over task migration. We conducted the experiments on a cluster of PCs; however, the results are also applicable to massively parallel processors.

Patent
30 Sep 1996
TL;DR: In this article, the optimal utilization of all active processors in a multiprocessor system is achieved for processing static task packets, where the processors wait at a synchronization point until conditions determined by the synchronization point have been satisfied.
Abstract: A multiprocessor system is disclosed wherein optimum utilization of all active processors is achieved for processing of static task packets. Active processors in a multiprocessor system independently fetch tasks from a central task control table of common memory for processing. The processors wait at a synchronization point until the conditions determined by the synchronization point have been satisfied.
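The pattern the patent describes, where processors independently fetch tasks from a central task control table and then wait at a synchronization point, can be sketched with worker threads draining a shared queue and blocking on a barrier. The function and variable names below are assumptions for illustration.

```python
import queue
import threading

def run(task_fns, num_workers):
    """Workers independently fetch tasks from a central table, then
    wait at a synchronization point until all workers have arrived."""
    table = queue.Queue()          # stands in for the central task control table
    for fn in task_fns:
        table.put(fn)
    barrier = threading.Barrier(num_workers)  # the synchronization point
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                fn = table.get_nowait()   # independently fetch the next task
            except queue.Empty:
                break                     # table drained: proceed to sync point
            r = fn()
            with lock:
                results.append(r)
        barrier.wait()                    # block until every worker reaches here

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Because each worker pulls from the shared table as soon as it is free, faster processors automatically take on more tasks, which is the load-balancing effect the patent aims for.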

Patent
Mark S. Shipman1
10 Dec 1996
TL;DR: In this article, a computer system is programmed with basic input/output services (BIOS), including an initialization service, and an associated virtual mode execution monitor, and the initialization service scans for option ROMs of add-on devices at power on/reset.
Abstract: A computer system is programmed with basic input/output services (BIOS), including an initialization service and an associated virtual mode execution monitor. The initialization service scans for option ROMs of add-on devices at power on/reset. For each detected option ROM, the initialization service creates the runtime definition of its initialization task, setting up the initialization task to be executed in a virtual mode and redirecting all interrupts and exceptions arising during execution of the initialization task to the virtual mode execution monitor. For each redirected interrupt/exception, the virtual mode execution monitor either allows the attempted operation that triggered it to be performed, substitutes one or more fail-safe recovery operations for an impermissible attempted operation, or simply terminates the "ill-behaving" initialization task.

01 Jun 1996
TL;DR: The design of SMART is presented and measured performance results of its effectiveness based on a prototype implementation in the Solaris operating system are provided.
Abstract: We have created SMART, a Scheduler for Multimedia And Real-Time applications. SMART supports both real-time and conventional computations and provides flexible and accurate control over the sharing of processor time. SMART is able to satisfy real-time constraints in an optimal manner and provide proportional sharing across all real-time and conventional tasks. Furthermore, when not all real-time constraints can be met, SMART satisfies each real-time task's proportional share of deadlines, and adjusts its execution rate dynamically. This technique is especially important for multimedia applications that can operate at different rates depending on the loading condition. This paper presents the design of SMART and provides measured performance results of its effectiveness based on a prototype implementation in the Solaris operating system.
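For intuition on proportional sharing of processor time, here is an illustrative stride-scheduling sketch (not SMART's actual algorithm): each task advances a "pass" value inversely proportional to its share, and the task with the smallest pass gets the next quantum, so quanta are delivered in proportion to the shares.

```python
# Stride-scheduling sketch of proportional sharing (assumed names; SMART's
# real mechanism also handles deadlines, which this omits).
def schedule(shares, quanta):
    """shares: {task: weight}. Returns how many quanta each task received."""
    BIG = 1 << 20
    stride = {t: BIG // w for t, w in shares.items()}  # smaller share -> bigger stride
    passes = {t: stride[t] for t in shares}
    counts = {t: 0 for t in shares}
    for _ in range(quanta):
        # Dispatch the task with the smallest pass value (ties broken by name).
        t = min(passes, key=lambda k: (passes[k], k))
        counts[t] += 1
        passes[t] += stride[t]
    return counts
```

Over any long window, a task with weight 2 receives twice the quanta of a task with weight 1, which is the flavor of proportional control the abstract describes.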

Proceedings ArticleDOI
07 May 1996
TL;DR: The authors assume agents to be computer programs that help a user to perform a task or set of tasks and consider primitives for agent communication, interaction styles and blackboard systems.
Abstract: The authors assume agents to be computer programs that help a user to perform a task or set of tasks. They consider primitives for agent communication, interaction styles and blackboard systems.