
Showing papers on "Task (computing)" published in 1999


Patent
11 May 1999
TL;DR: In this paper, a layered network model is utilized in which computing tasks that are typically performed in network applications are instead offloaded to the network interface card (NIC) peripheral.
Abstract: Offloading specific processing tasks that would otherwise be performed in a computer system's processor and memory, to a peripheral device. The computing task is then performed by the peripheral, thereby saving computer system resources for other computing tasks. In one preferred embodiment, the disclosed method is utilized in a layered network model, wherein computing tasks that are typically performed in network applications are instead offloaded to the network interface card (NIC) peripheral.

199 citations


Journal ArticleDOI
TL;DR: A method to determine an allocation that introduces safety into a heterogeneous distributed system and at the same time attempts to maximize its reliability is described, and a new heuristic, based on the concept of clustering, to allocate tasks for maximizing reliability is devised.
Abstract: Distributed computer systems are increasingly being employed for critical applications, such as aircraft control, industrial process control, and banking systems. Maximizing performance has been the conventional objective in the allocation of tasks for such systems. Inherently, distributed systems are more complex than centralized systems. The added complexity could increase the potential for system failures. Some work has been done in the past in allocating tasks to distributed systems, considering reliability as the objective function to be maximized. Reliability is defined to be the probability that none of the system components fails while processing. This, however, does not give any guarantees as to the behavior of the system when a failure occurs. A failure, not detected immediately, could lead to a catastrophe. Such systems are unsafe. In this paper, we describe a method to determine an allocation that introduces safety into a heterogeneous distributed system and at the same time attempts to maximize its reliability. First, we devise a new heuristic, based on the concept of clustering, to allocate tasks for maximizing reliability. We show that for task graphs with precedence constraints, our heuristic performs better than previously proposed heuristics. Next, by applying the concept of task-based fault tolerance, which we have previously proposed, we add extra assertion tasks to the system to make it safe. We present a new heuristic that does this in such a way that the decrease in reliability for the added safety is minimized. For the purpose of allocating the extra tasks, this heuristic performs as well as previously known methods and runs an order of magnitude faster. We present a number of simulation results to prove the efficacy of our scheme.

145 citations


Patent
Jan Van Ee1, Yevgeniy E. Shteyn1
09 Dec 1999
TL;DR: Tasking systems and methods are provided that support user interfaces for displaying objects, the displayed objects enabling user access to resources that provide for effecting tasks among the system and devices of the systems' environment as discussed by the authors.
Abstract: Tasking systems and methods are provided that support user interfaces for displaying objects, the displayed objects enabling user access to resources that provide for effecting tasks among the system and devices of the systems' environment. More particularly, tasking systems and methods are provided that support the foregoing features, wherein the systems and methods support clustering operations respecting such task-associated objects so as to enhance the effecting of the associated tasks, such clustering operations responding to context. The clustering operations preferably are both adaptive and dynamic. Tasking systems and methods preferably support the tracking of selected states, including, as examples, one or more of environment states, device states, and system states. Tracked states typically also include states respecting other relevant criteria, such as temporal criteria.

105 citations


Patent
03 Jun 1999
TL;DR: In this article, a method for enhancing concurrency in a multiprocessor computer system is described, where various tasks in a computer system communicate using commonly accessible mailboxes to access valid data from a location in the mailbox.
Abstract: A method for enhancing concurrency in a multiprocessor computer system is described. Various tasks in a computer system communicate using commonly accessible mailboxes to access valid data from a location in the mailbox. A task holding the valid data places that data in a mailbox, and other tasks read the valid data from the mailbox. The task that inputs the valid data into the mailbox is caused to notify other tasks addressing the mailbox that the valid data is contained therein. Thus, no central coordination of mailbox access is required. Further, busy waits for valid data are minimized and the resources of tasks and processors are used more efficiently. By coordinating several lists associated with each mailbox, conflicts in accessing data and delays in obtaining data are also minimized.

90 citations
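
The mailbox mechanism in the patent abstract above can be pictured with a short sketch. The following Python fragment is illustrative only, not the patent's implementation: the Mailbox class, its condition-variable notification, and the example payload are all assumptions chosen to show how readers can block without busy-waiting until a producer task deposits valid data and notifies them.

```python
import threading

class Mailbox:
    """Illustrative commonly accessible mailbox: a producer task deposits valid
    data and notifies every task addressing the mailbox, so no central
    coordinator of mailbox access is needed (a sketch, not the patented design)."""

    def __init__(self):
        self._cond = threading.Condition()
        self._data = None
        self._valid = False

    def put(self, data):
        # The task holding the valid data places it in the mailbox and
        # notifies all tasks currently waiting on the mailbox.
        with self._cond:
            self._data = data
            self._valid = True
            self._cond.notify_all()

    def get(self):
        # Readers block (no busy wait) until valid data is present.
        with self._cond:
            while not self._valid:
                self._cond.wait()
            return self._data

# Usage sketch: one task produces, several tasks consume.
box = Mailbox()
readers = [threading.Thread(target=lambda: print(box.get())) for _ in range(3)]
for r in readers:
    r.start()
box.put({"sensor": 42})   # hypothetical payload
for r in readers:
    r.join()
```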


Proceedings ArticleDOI
13 Dec 1999
TL;DR: This paper presents a framework for dynamically allocating the CPU resource to tasks whose execution times are not known a priori and shows how to adjust the fraction of the CPU bandwidth assigned to each task using a feedback mechanism.
Abstract: In this paper we present a framework for dynamically allocating the CPU resource to tasks whose execution times are not known a priori. Tasks are partitioned into three classes: those that require uniform execution but impose no temporal constraints, periodic tasks that operate on continuous media, and event-driven tasks that respond to external interrupts. For the last two classes, we show how to adjust the fraction of the CPU bandwidth assigned to each task using a feedback mechanism.

83 citations
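
The feedback idea in the paper above, adjusting each task's CPU bandwidth fraction from observed behaviour, can be sketched as a simple control step. The function below is a toy illustration, not the paper's actual control law: the proportional gain, the error definition, and the clamping bounds are assumptions.

```python
def adjust_bandwidth(fraction, measured_exec_time, deadline, gain=0.1,
                     min_frac=0.01, max_frac=1.0):
    """Feedback step (illustrative): if a task overruns its budget, grow its CPU
    share; if it finishes early, shrink it. The gain and error term are assumed."""
    error = (measured_exec_time - deadline) / deadline
    fraction = fraction * (1.0 + gain * error)
    return max(min_frac, min(max_frac, fraction))

# Example: a periodic media task that keeps missing its 10 ms budget.
frac = 0.2
for observed in (12.0, 11.0, 10.5, 9.8):   # measured execution times in ms
    frac = adjust_bandwidth(frac, observed, deadline=10.0)
    print(round(frac, 3))
```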


Journal ArticleDOI
TL;DR: It is shown that no algorithm exists for deciding whether a finite task for three or more processors is wait-free solvable in the asynchronous read-write shared-memory model, which implies that there is no constructive (recursive) characterization of wait-free solvable tasks.
Abstract: We show that no algorithm exists for deciding whether a finite task for three or more processors is wait-free solvable in the asynchronous read-write shared-memory model. This impossibility result implies that there is no constructive (recursive) characterization of wait-free solvable tasks. It also applies to other shared-memory models of distributed computing, such as the comparison-based model.

77 citations


Journal ArticleDOI
TL;DR: A new approach for developing adaptive Web-based courses is described by means of teaching tasks, which correspond to basic knowledge units, and rules, which describe how teaching tasks are divided into subtasks.

67 citations


Patent
26 May 1999
TL;DR: In this article, a portable electronic device (10) assists persons with learning disabilities and attention deficit disorders in performing daily living tasks such as making a bed, applying makeup, brushing teeth, getting dressed, and eating a meal.
Abstract: A portable electronic device (10) assists persons with learning disabilities and attention deficit disorders in performing daily living tasks. These tasks can include, e.g., making a bed, applying makeup, brushing teeth, getting dressed, and eating a meal, or hundreds of other tasks. The device is designed to allow users to develop a personal schedule of these tasks and special events. It alerts users at predetermined times to perform scheduled tasks and coaches and motivates the user in completing the tasks through text, audio and animation. The user is given a predetermined period of time to complete the task and rewarded with points if the task is completed on time. The device also records the user's performance of tasks and creates a task log of the user's performance over a given period of time.

64 citations


Journal ArticleDOI
TL;DR: The performances of a genetic algorithm, a commercial 0–1 integer programming software package and a hybrid approach from the literature are compared in solving real instances of the problem of designing a distributed computing system for handling a set of repetitive tasks on a periodic basis.
Abstract: We consider the problem of designing a distributed computing system for handling a set of repetitive tasks on a periodic basis. Tasks assigned to different processors need communication link capacity; tasks executing on the same processor do not. The aim is to develop a design of minimum total cost that can handle all the tasks. We compare the performances of a genetic algorithm, a commercial 0–1 integer programming software package and a hybrid approach from the literature in solving real instances of the problem. Copyright © 1999 John Wiley & Sons, Ltd.

62 citations


Patent
16 Nov 1999
TL;DR: In this paper, an ultrasound system operating on a personal computer architecture comprising multiple processors controlled to operate in parallel to share the ultrasound operations of the system is described; each task is assigned by the operating system to a CPU, and any CPU may carry out any of the tasks.
Abstract: An ultrasound system operating on a personal computer architecture comprising multiple processors controlled to operate in parallel to share ultrasound operations of the system. The multiple processors are controlled by software to share the operations associated with system setup, system control, scanning, data acquisition, beamforming, user interface service, signal processing, and scan conversion. The ultrasound system utilizes management software which divides operations associated with each function (such as signal processing and scan conversion) into parallel sub-operations or tasks. Each task is assigned by the operating system to a CPU. Any of the CPUs may be capable of performing any of the tasks. The CPUs operate in parallel to carry out the assigned tasks. Once all of the CPUs have completed the assigned tasks, the system may serially advance to the next ultrasound function.

62 citations
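
The divide-into-parallel-sub-tasks-then-advance-serially behaviour described in the abstract above can be sketched as a barrier over a worker pool. The Python fragment below is an assumption-laden illustration; the stage names and chunking scheme are invented for the example and are not taken from the patent.

```python
from concurrent.futures import ThreadPoolExecutor

def run_ultrasound_function(sub_tasks, workers):
    """Run one ultrasound function (e.g. signal processing) as parallel sub-tasks
    and return only when every sub-task has completed, mirroring the
    'advance serially once all CPUs finish' behaviour described above."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda t: t(), sub_tasks))
    return results

# Illustrative pipeline: each stage is split into per-chunk sub-tasks.
stages = {
    "signal_processing": [lambda i=i: f"filtered chunk {i}" for i in range(4)],
    "scan_conversion":   [lambda i=i: f"converted chunk {i}" for i in range(4)],
}
for name, tasks in stages.items():
    print(name, run_ultrasound_function(tasks, workers=4))
```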


Patent
Eric Bauer1, Gang Yang1
26 Oct 1999
TL;DR: In this article, a real-time admission control scheme is proposed, where the current resource utilization is evaluated in real time and admission control decisions are based on this real-time evaluation.
Abstract: A real-time admission control scheme is proposed. The real-time admission control scheme of the present invention does not rely on pre-determined threshold limits. Instead, the current resource utilization is evaluated in real time and admission control decisions are based on this real-time evaluation. As soon as a new service request is received, a real-time evaluation of the current resource utilization of each active task (i.e., in all classes, not just the corresponding class) that consumes resources is made. These tasks include the scheduled service requests (e.g., point-to-point calls, conference calls) as well as failure recovery functions. Then, total resource utilization is computed by summing the resource utilization of each active task. Accordingly, a measure of the available system resources is computed.
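
A minimal sketch of the admission test described above: sum the measured utilization of every active task at the moment a request arrives and admit only if the remainder covers the new request. The function and field names, and the unit capacity of 1.0, are assumptions for illustration.

```python
def admit(new_request_cost, active_tasks, capacity):
    """Real-time admission check (illustrative): sum the current utilization of
    every active task, in all classes including failure-recovery work, then admit
    the new request only if it still fits. No pre-determined threshold is used;
    the decision relies on the utilization measured at request time."""
    total = sum(task["utilization"] for task in active_tasks)
    available = capacity - total
    return new_request_cost <= available

active = [
    {"name": "p2p_call",      "utilization": 0.20},
    {"name": "conference",    "utilization": 0.35},
    {"name": "failover_scan", "utilization": 0.10},
]
print(admit(0.25, active, capacity=1.0))  # True: 0.65 in use, 0.35 free
print(admit(0.40, active, capacity=1.0))  # False: request exceeds the headroom
```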

Patent
07 Aug 1999
TL;DR: In this paper, the authors present a network verification tool that allows a user to easily create tasks for a collection of task types, hosted by probe network devices that are coupled to a network under test.
Abstract: A network verification tool is presented that allows a user to easily create tasks for a collection of task types. The collection of task types are hosted by probe network devices that are coupled to a network under test. The network under test includes network devices executing generic network software and particular hardware or software that is being tested. The probe network devices are coupled to an NVT server, which transmits tasks to the task types and interfaces with one or more NVT clients. The NVT clients can create tasks by entering the appropriate parameters within templates supplied by the NVT server. Any collection of task types can be included in the network verification tool, including traffic generators, traffic analyzers, large network emulators, session emulators, device queries and script tasks.

Journal ArticleDOI
01 Jun 1999
TL;DR: An integrated system in which an operator uses a simulated environment to program part-mating and contact tasks, which aims to make robotic programming easy and intuitive for untrained users working with standard desktop hardware.
Abstract: We present an integrated system in which an operator uses a simulated environment to program part-mating and contact tasks. Generation of models within this virtual environment is facilitated using a fast, occlusion tolerant, 3D grey-scale vision system which can recognize and accurately locate objects within the work site. A major goal of this work is to make robotic programming easy and intuitive for untrained users working with standard desktop hardware. Simulation offers the ease-of-use benefits of "programming by demonstration", coupled with the ability to create a programmer-friendly virtual environment. Within a simulated environment, it is also straightforward to track and interpret an operator's actions. The simulator models objects as polyhedra and implements full 3D contact dynamics. When a manipulation task is completed, local planning techniques are used to turn the virtual environment's motion sequence history into a set of robot motion commands capable of realizing the prescribed task.

Patent
Scott T. Marcotte1
25 Mar 1999
TL;DR: In this paper, the authors describe queuing update requests on a lock to a shared resource: some queued updates are made while the lock is held, and others are deferred for post processing after the lock is released.
Abstract: Tasks make updates requested by calling tasks to a shared resource serially in a first come first served manner, atomically, but not necessarily synchronously, such that a current task holding an exclusive lock on the shared resource makes the updates on behalf of one or more calling tasks queued on the lock. Updates waiting in a queue on the lock to the shared resource may be made while the lock is held, and others deferred for post processing after the lock is released. Some update requests may also, at the calling application's option, be executed synchronously. Provision is made for nested asynchronous locking. Data structures (wait_elements) describing update requests may be queued in a wait queue for update requests awaiting execution by a current task, other than the calling task, currently holding an exclusive lock on the shared resource. Other queues are provided for queuing data structures removed from the wait queue but not yet processed; data structures for requests to unlock or downgrade a lock; data structures for requests which have been processed and need to be returned to free storage; and data structures for requests that need to be awakened or that describe post processing routines that are to be run while the lock is not held.
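
The queue-and-drain behaviour described above, where whichever task currently holds the exclusive lock applies queued updates on behalf of other callers, can be sketched as follows. This is a simplified Python illustration of the idea, not the patented design: it omits nested locking, downgrade requests, post-processing queues, and free-storage recycling, and the class and method names are invented.

```python
import threading
from collections import deque

class AsyncUpdateLock:
    """Sketch: callers enqueue update requests; whichever task currently holds
    the exclusive lock drains the wait queue and applies the updates on the
    callers' behalf, first come first served."""

    def __init__(self, resource):
        self._resource = resource
        self._mutex = threading.Lock()      # protects the wait queue
        self._wait_queue = deque()          # queued update requests ("wait elements")
        self._holder = threading.Lock()     # exclusive lock on the shared resource

    def request_update(self, update_fn):
        # Enqueue the caller's update request first come, first served.
        with self._mutex:
            self._wait_queue.append(update_fn)
        while True:
            # If another task already holds the exclusive lock, return at once:
            # that task will apply our queued update on our behalf (asynchronous).
            if not self._holder.acquire(blocking=False):
                return
            try:
                while True:
                    with self._mutex:
                        if not self._wait_queue:
                            break
                        fn = self._wait_queue.popleft()
                    fn(self._resource)       # apply queued updates serially
            finally:
                self._holder.release()
            # Re-check so a request enqueued just as we released is not lost.
            with self._mutex:
                if not self._wait_queue:
                    return

shared = {"count": 0}
lock = AsyncUpdateLock(shared)
for _ in range(3):
    lock.request_update(lambda r: r.__setitem__("count", r["count"] + 1))
print(shared["count"])  # 3 in this single-threaded example
```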

Patent
11 Mar 1999
TL;DR: In this article, each task is allocated to one of the at least one DSP according to the total current task processing load of each DSP, the maximum processing capability of each DSP, and the processing requirement of the task, with the task being allocated to the DSP that can handle the additional processing load.
Abstract: A communication system (100) includes at least one digital signal processor (DSP) and a WAN driver (80) operating on a processor that is electrically coupled to a memory. The WAN driver (80) receives task allocation requests from a host to open/close communication channels that are handled by the at least one DSP. Each task is allocated to one of the at least one DSP according to a total current task processing load for each of the at least one DSP, a maximum processing capability for each of the at least one DSP, and a processing requirement for each task being allocated, to the one of the at least one DSP that can handle the additional processing load of the task being allocated. A configuration controller (92) keeps track of the MIPS processing requirement of each task available for allocation across the plurality of DSPs and the maximum processing capability of each DSP of the plurality of DSPs in response to changes in configuration of the communication system (100).
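
The allocation rule in the abstract above can be sketched as a capacity check over the available DSPs. The Python fragment below is illustrative: the "most spare capacity first" tie-break and the dictionary field names are assumptions, since the patent only requires choosing a DSP that can handle the additional load.

```python
def allocate_task(task_mips, dsps):
    """Pick a DSP whose spare capacity (max MIPS minus current load) can absorb
    the new channel's processing requirement. Choosing the DSP with the most
    spare capacity is an assumed tie-break, not taken from the patent."""
    candidates = [d for d in dsps if d["max_mips"] - d["load_mips"] >= task_mips]
    if not candidates:
        return None                      # reject: no DSP can handle the task
    chosen = max(candidates, key=lambda d: d["max_mips"] - d["load_mips"])
    chosen["load_mips"] += task_mips
    return chosen["id"]

dsps = [{"id": 0, "max_mips": 100, "load_mips": 80},
        {"id": 1, "max_mips": 120, "load_mips": 30}]
print(allocate_task(25, dsps))  # 1
print(allocate_task(50, dsps))  # 1 again (still has 65 MIPS spare)
```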

Patent
13 Aug 1999
TL;DR: In this paper, the main processor sequences a plurality of tasks to be executed to complete an operation, and the operations sequencer coordinates the execution of those tasks, which offloads at least some of the main processor overhead to improve processor efficiency.
Abstract: The present invention provides storage system controllers and methods of controlling storage systems therewith. The controller ( 10 ) includes a main processor ( 12 ), a memory ( 14 ), a device interface ( 18 ) adapted to interface a peripheral component ( 28-32 ), such as a RAID storage device, with the storage system controller, and an operations sequencer ( 24 ). The main processor sequences a plurality of tasks to be executed to complete an operation. The operations sequencer coordinates an execution of the plurality of tasks. Methods of the invention include receiving a task status for each of the plurality of tasks that is executed, and issuing an interrupt to the main processor after all of the plurality of tasks of the operation are finished executing. In this manner, the operations sequencer offloads at least some of the main processor overhead to improve processor efficiency.

Patent
Shell S. Simpson1
09 Jul 1999
TL;DR: A method and memory device is presented that enables a user at a management computer to invoke a function on one or more plural managed units of a peripheral system, each managed unit providing one or more services for one or more client computers.
Abstract: A method and memory device enabling a user at a management computer to invoke a function on one or more plural managed units of a peripheral system, each managed unit providing one or more services for one or more client computers. Each service has an associated managed entity (ME) interface that includes reference(s) to one or more management interfaces (MIs), each MI including one or more method(s) for controlling a managed unit to execute a desired action. The method responds to a user selecting an Operation object by (i) determining which MEs associated with the managed units support a designated MI object that will enable execution of the desired Operation, by invoking execution of one or more management interface provider objects on each managed unit to return an answer; (ii) passing the list of determined MEs to an Operate method of the Operation object whose task is to perform the desired action by invoking methods on the designated MI(s) associated with one or more of the managed units; and (iii) executing the Operate method to initiate execution, in a managed unit associated with the one or more listed MEs, of said designated management interface object (or objects) which, in turn, cause performance of the desired action by the managed unit.

Proceedings ArticleDOI
09 Jun 1999
TL;DR: This work considers two varieties of global multiprocessor scheduling: in the frame-based model, an aperiodic task set is scheduled to create a template (frame), and that schedule may be executed periodically; in the periodic model, each task in the set has a separate period and is executed with no explicitly predetermined schedule.
Abstract: Many real-time multiprocessor scheduling techniques have been proposed to guarantee the timely execution of periodic preemptive real-time tasks. However timeliness is usually only guaranteed in the absence of faults, which may be unacceptable for some critical systems. We therefore address the problem of multiprocessor scheduling for preemptive real-time tasks so that the timeliness of the system can be guaranteed even in the presence of faults. This work focuses on global scheduling where tasks can migrate across processors. We consider two varieties of global multiprocessor scheduling: in the frame-based model, an aperiodic task set is scheduled to create a template (frame), and that schedule may be executed periodically. In the periodic model, each task in the set has a separate period, and is executed with no explicitly predetermined schedule. For each model, we show how to guarantee timely execution and recovery in the general case. We also propose solutions that improve upon this general case when all tasks require the same amount of time to recover from a fault.

Patent
07 Oct 1999
TL;DR: In this paper, a fair scheduler is proposed in which each task (TA to TE) is assigned a counter (CNT) and a threshold value (THD), with the threshold value specifying a maximum number of execute cycles within which the task need not be executed immediately.
Abstract: Instead of a conventional task scheduler in which tasks having a high priority are preferentially scheduled, so that the execution of tasks with very low priorities is blocked, a “fair scheduler” is proposed in which each task (TA to TE) is assigned a counter (CNT) and a threshold value (THD), with the threshold value specifying a maximum number of execute cycles within which the task need not be executed immediately, and the counter counting those execute cycles within which the task is not executed. At the beginning of each execute cycle, a test is made to determine whether one of the counters exceeds the associated threshold value. If that is the case, one (TD) of the corresponding tasks (TC, TD) is selected by a selection criterion and executed, and its counter is reset. The counters assigned to the remaining tasks are incremented by one, and the execute cycle is repeated if it is found that at least one more of the tasks (TC) is waiting to be processed.
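
The counter-and-threshold mechanism is concrete enough to sketch one execute cycle directly. In the Python fragment below, the choice of selection criterion (lowest task id) and the decision to increment only waiting tasks' counters are assumptions; the rest follows the description above.

```python
def fair_schedule_cycle(tasks):
    """One execute cycle of the 'fair scheduler' described above: if a waiting
    task's counter exceeds its threshold, run one such task (lowest id is the
    assumed selection criterion) and reset its counter; counters of the other
    waiting tasks are incremented by one."""
    overdue = [t for t in tasks if t["waiting"] and t["counter"] > t["threshold"]]
    chosen = min(overdue, key=lambda t: t["id"]) if overdue else None
    for t in tasks:
        if t is chosen:
            t["counter"] = 0
            t["waiting"] = False          # the selected task is executed this cycle
        elif t["waiting"]:
            t["counter"] += 1             # one more cycle without being executed
    return chosen["id"] if chosen else None

tasks = [{"id": "TC", "counter": 3, "threshold": 4, "waiting": True},
         {"id": "TD", "counter": 6, "threshold": 5, "waiting": True},
         {"id": "TE", "counter": 0, "threshold": 2, "waiting": False}]
print(fair_schedule_cycle(tasks))  # TD (its counter exceeded its threshold)
print(fair_schedule_cycle(tasks))  # None; TC's counter becomes 5, so TC runs next cycle
```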

Patent
Yoram Yeivin1, Eliezer Weitz1, Moti Kurnick1, Avi Shalev1, Avi Hagai1 
02 Apr 1999
TL;DR: A communication controller for handling high-speed multi-protocol data streams is presented; the second processor initializes the first processor and handles high-level management and protocol functions, while the first processor handles the data stream transactions.
Abstract: A communication controller for handling high-speed multi-protocol data streams, wherein a stream is comprised of frames. The communication controller has two processors: the second processor initializes the first processor and handles high-level management and protocol functions, while the first processor handles the data stream transactions. The first and second processors are coupled to two external buses. The first processor handles the transactions of a frame by executing a task. The first processor performs a task switch when there is a need to fetch information from an external unit coupled to either the first or second external bus, when it has processed a whole frame, or when there is a need to fetch a portion of a frame from a communication channel.

Patent
17 May 1999
TL;DR: In this paper, a system and method for handling laboratory information includes a graphical user interface with a plurality of windows, where a palette of icons is provided in a first one of the windows, each icon representing a predetermined task to be executed by a processor in communication with the graphical interface.
Abstract: A system and method for handling laboratory information includes a graphical user interface with a plurality of windows. A palette of icons is provided in a first one of the windows, each icon representing a predetermined task to be executed by a processor in communication with the graphical user interface. The processor is also in communication with a database containing static laboratory data (such as the type of sample to be analysed) as well as dynamic laboratory data (such as the name of the specific sample to be tested and the results of that test). A user can select icons from the first window and 'drag and drop' them into a second window. A sequence of tasks may thus be built up, in the form of a tree structure, and when run, the processor executes the sequence of tasks in turn by reference to the static and dynamic laboratory data.

01 Jan 1999
TL;DR: In this article, the authors consider the problem of efficiently performing a set of tasks using a network of processors in the setting where the network is subject to dynamic reconfigurations, including partitions and merges.
Abstract: This work considers the problem of efficiently performing a set of tasks using a network of processors in the setting where the network is subject to dynamic reconfigurations, including partitions and merges. A key challenge for this setting is the implementation of dynamic load balancing that reduces the number of tasks that are performed redundantly because of the reconfigurations. We explore new approaches for load balancing in dynamic networks that can be employed by applications using a group communication service (GCS). The GCS that we consider includes a membership service (establishing new groups to reflect dynamic changes) but does not include maintenance of a primary component. For the n-processor, n-task load balancing problem defined in this work, the following specific results are obtained. For the case of fully dynamic changes, including fragmentation and merges, we show that the termination time of any on-line task assignment algorithm is greater than the termination time of an off-line task assignment algorithm by a factor greater than n/12. We present a load balancing algorithm that guarantees completion of all tasks in all fragments caused by partitions with work O(n + f·n) in the presence of f fragmentation failures. We develop an effective scheduling strategy for minimizing the task execution redundancy and we prove that our strategy provides each of the n processors with a schedule of Θ(n^(1/3)) tasks such that at most one task is performed redundantly by any two processors.

Patent
19 Mar 1999
TL;DR: In this paper, a linkage editor generates an output file by iteratively analyzing the program for references to other software components and extracting those components from their parent classes, and then sends the completed output file to an interface task, which transmits it to the client.
Abstract: A linkage editor executing at a server receives instructions for packaging software components that are required for program execution at a client. The linkage editor generates an output file by iteratively analyzing the program for references to other software components and extracting those components from their parent classes. The linkage editor sends the completed output file to an interface task, which transmits it to the client.
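
The iterative reference analysis described above is essentially a worklist closure over component references. The sketch below is a schematic illustration, assuming hypothetical `references` and `extract` inputs; it is not the patented linkage editor.

```python
def package_components(entry, references, extract):
    """Worklist sketch of the linkage step described above: starting from the
    client's entry component, repeatedly collect every referenced component,
    extract it from its parent class, and add it to the output file.
    'references' maps a component to the components it refers to; 'extract'
    pulls one component out of its parent class. Both are stand-ins."""
    output, pending = {}, [entry]
    while pending:
        name = pending.pop()
        if name in output:
            continue                      # already packaged
        output[name] = extract(name)
        pending.extend(references.get(name, []))
    return output                         # ready to hand to the interface task

# Hypothetical reference graph for illustration.
refs = {"Main": ["Util", "Net"], "Net": ["Util"], "Util": []}
print(sorted(package_components("Main", refs, lambda n: f"<bytecode of {n}>")))
```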

Patent
Herman Rodriguez1
30 Jun 1999
TL;DR: In this paper, a scheduler program is used to configure the system to run certain recurring tasks but control system operation with varying results based on the controlling inducement factors received from the calendar program.
Abstract: A user's calendar program is configured to “induce” execution of scheduled programs or system activities. Utilizing a scheduler program, the user can configure the system to run certain recurring tasks but control system operation with varying results based on the controlling inducement factors received from the calendar program. When creating an event or activity entry in the calendar program, the user associates an inducement value with that entry. On the date of the entry, the scheduler program, before initiating execution of any scheduled tasks, obtains the inducement value(s) for that date. The inducement value(s) are logically combined with execution values to control execution of scheduled tasks. For instance, if a user is attending a remote conference, a recurring task for system backup and virus scanning will run on a different schedule than if the user is actively using the machine on a daily basis between backups, while if the user is on vacation, system pop-ups and dialogs for an application may not be executed.
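
One way to picture the logical combination of calendar inducement values with task execution values is the small sketch below. The flag names and the "all required flags must hold today" rule are assumptions made for illustration; the patent leaves the combination logic more general.

```python
def should_run(task, inducements):
    """Combine the calendar entry's inducement flags for today with the task's
    own execution flags. Requiring every flag the task names to be inducible
    today is an assumed rule, chosen only to make the example concrete."""
    return all(inducements.get(flag, False) for flag in task["requires"])

# Today's calendar entry says the user is away at a remote conference.
today = {"user_away": True, "user_active": False}

tasks = [
    {"name": "full_backup",    "requires": ["user_away"]},    # run while away
    {"name": "virus_scan",     "requires": ["user_away"]},
    {"name": "reminder_popup", "requires": ["user_active"]},  # suppress while away
]
for t in tasks:
    print(t["name"], "->", "run" if should_run(t, today) else "skip")
```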

Patent
16 Nov 1999
TL;DR: In this article, the authors present a method, system, and apparatus for loading and managing tasks within a process instance on a computer system, which distinguishes groups of threads as a task and manages the execution of the threads in the task in the manner specified in a configuration file.
Abstract: A method, system, and apparatus for loading and managing tasks within a process instance on a computer system. The present embodiment novelly distinguishes groups of threads as a task and manages the execution of the threads in the task in the manner specified in a configuration file. The configuration file contains names of tasks and configuration information associated with each task. For example the order of execution of tasks may be defined to depend on the progress of the execution, such as the state, of one or more other tasks. The termination of a single task or multiple tasks may be managed by the present embodiment. The output from a task may be directed to a computer-based input/output (I/O) device, such as a monitor, or to a file, or may be discarded.

Patent
19 May 1999
TL;DR: In this paper, the authors propose a method for assigning tasks received for processing from one or several client data processing nodes (C1 through C5) within a group of at least two server data processing nodes (S1 through S4) to one of the server data processing nodes (S2).
Abstract: The subject of the invention is a method for assigning tasks for processing received from one or several client data processing nodes (C1 through C5) within a group of at least two server data processing nodes (S1 through S4) to one of the server data processing nodes (S2), as well as a server data processing system, a client data processing node and a machine-readable storage medium for carrying out this process. A client data processing node (C2) that has a task to assign, first selects the server data processing node from the group that is the next server data processing node to be selected based on a predefined cyclical order. If the server data processing node that is selected first denies the processing of the task, the client data processing node randomly selects a different server data processing node for processing the task. Otherwise, the client data processing node assigns the task to the selected server data processing node.
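
The selection rule above, next server in a predefined cyclic order, then a random different server if the first one declines, is easy to sketch. In the Python fragment below, `try_accept` stands in for the real accept/deny exchange, and treating the random fallback as final is an assumption.

```python
import random

def assign_task(task, servers, state):
    """Pick the next server in the predefined cyclical order; if it denies the
    task, fall back to a randomly chosen different server, as described above."""
    first = servers[state["next"] % len(servers)]
    state["next"] += 1                       # advance the cyclic pointer
    if first.try_accept(task):
        return first
    others = [s for s in servers if s is not first]
    fallback = random.choice(others)
    fallback.try_accept(task)                # assumed: the second choice is final
    return fallback

class Server:
    def __init__(self, name, busy=False):
        self.name, self.busy = name, busy
    def try_accept(self, task):
        return not self.busy                 # stand-in for the accept/deny exchange

servers = [Server("S1"), Server("S2", busy=True), Server("S3"), Server("S4")]
state = {"next": 1}                          # next cyclic pick is S2, which is busy
print(assign_task("job-42", servers, state).name)   # prints S1, S3 or S4 at random
```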

Patent
Joseph Franklin Garvey1
25 Jun 1999
TL;DR: In this paper, a method for performing configuration tasks prior to and including memory configuration within a processor based system is disclosed, where a memory location is first reserved by a basic input/output system (BIOS) firmware for each individual BIOS task.
Abstract: A method for performing configuration tasks prior to and including memory configuration within a processor based system is disclosed. A memory location is first reserved by a basic input/output system (BIOS) firmware for each individual BIOS task. A target routine is then performed using the reserved memory location by the BIOS firmware. The target routine is designed to perform a specific BIOS task. Finally, the reserved memory location is released by the BIOS firmware, after the target routine has been successfully completed.

Patent
17 Mar 1999
TL;DR: A timer has at least two timing intervals between which tasks are to be performed by a user, at least one of the intervals being switchable between different preset intervals by the user.
Abstract: A timer has at least two timing intervals between which tasks are to be performed by a user, at least one of the intervals being switchable between different preset intervals by the user. Upon the expiration of each timing interval, the user is alerted that a task is to be performed. A display panel indicates the number of days remaining in one of the timing intervals and may temporarily be switched to indicate the number of days remaining in the other timing interval. A particular use for the timer of this invention is to remind a user to service a humidifier. Another particular use is to remind a user of chores to be performed to maintain indoor or outdoor plants.

Patent
29 Jul 1999
TL;DR: In this article, a core program object interacts with and controls operation of a plurality of plug-in program objects operable to carry out data processing tasks, the apparatus providing for communication between the core program object and each such data processing task.
Abstract: Data processing apparatus is disclosed in which a core program object interacts with and controls operation of a plurality of plug-in program objects operable to carry out data processing tasks, the apparatus providing for communication between the core program object and each such data processing task: (i) a synchronous interface to allow interaction between the core program object and a plug-in program object operable to carry out that task; and (ii) an asynchronous interface to allow interaction between the core program object and a hardware device operable to carry out that task.

Proceedings ArticleDOI
01 Jan 1999
TL;DR: The set of tasks which can be performed with different types of uncalibrated camera models is characterized in a constructive manner, providing a principled foundation both for a specification language and for automatic execution monitoring in uncalibrated environments.
Abstract: Most of the work in robotic manipulation and visual servoing has emphasized how to specify and perform particular tasks. Recent results have formally shown what tasks are possible with uncalibrated imaging systems. This paper extends those results by characterizing in a constructive manner the set of tasks which can be performed with different types of uncalibrated camera models. The resulting task structure provides a principled foundation both for a specification language and for automatic execution monitoring in uncalibrated environments.