scispace - formally typeset
Topic

Task (computing)

About: Task (computing) is a research topic. Over its lifetime, 9,718 publications have been published within this topic, receiving 129,364 citations.


Papers
Patent
13 Jul 1998
TL;DR: This patent describes a system of multiple micro-processing units, each operating under its own control program, that read tasks posted to an electronic bulletin board and execute those they are capable of performing.
Abstract: A system including a plurality of micro-processing units, each operating under its own control program and capable of performing at least one of a plurality of tasks for manipulating electronic data, and an electronic bulletin board for posting the one or more tasks required to manipulate the electronic data. The posted tasks are readable by the micro-processing units; a micro-processing unit capable of performing a posted task executes that task on the electronic data in response to reading the electronic bulletin board and determining that the posted task should be executed.
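The bulletin-board pattern above resembles a shared task board polled by heterogeneous workers. A minimal sketch in Python (class and method names are illustrative assumptions, not from the patent) might look like:

```python
import threading

class BulletinBoard:
    """A shared board where tasks are posted and claimed by workers."""
    def __init__(self):
        self._lock = threading.Lock()
        self._tasks = []  # (task_name, payload) pairs awaiting a worker

    def post(self, task_name, payload):
        with self._lock:
            self._tasks.append((task_name, payload))

    def claim(self, capabilities):
        """Atomically remove and return the first task the caller can perform."""
        with self._lock:
            for i, (name, _payload) in enumerate(self._tasks):
                if name in capabilities:
                    return self._tasks.pop(i)
        return None

class Worker:
    """A processing unit with its own set of supported task handlers."""
    def __init__(self, handlers):
        self.handlers = handlers  # maps task_name -> function

    def poll(self, board):
        claimed = board.claim(self.handlers.keys())
        if claimed is None:
            return None
        name, payload = claimed
        return self.handlers[name](payload)

board = BulletinBoard()
board.post("uppercase", "electronic data")
worker = Worker({"uppercase": str.upper})
result = worker.poll(board)  # -> "ELECTRONIC DATA"
```

Each worker only claims tasks it can handle, mirroring the patent's "capable of performing at least one of the posted tasks" condition.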

115 citations

Proceedings Article
07 Sep 2019
TL;DR: The results show that LAMOL prevents catastrophic forgetting without any sign of intransigence and can perform five very different language tasks sequentially with only one model.
Abstract: Most research on lifelong learning applies to images or games, but not language. We present LAMOL, a simple yet effective method for lifelong language learning (LLL) based on language modeling. LAMOL replays pseudo-samples of previous tasks while requiring no extra memory or model capacity. Specifically, LAMOL is a language model that simultaneously learns to solve the tasks and generate training samples. When the model is trained for a new task, it generates pseudo-samples of previous tasks for training alongside data for the new task. The results show that LAMOL prevents catastrophic forgetting without any sign of intransigence and can perform five very different language tasks sequentially with only one model. Overall, LAMOL outperforms previous methods by a considerable margin and is only 2-3% worse than multitasking, which is usually considered the LLL upper bound. The source code is available at this https URL.
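The core LAMOL idea, a single model that both solves tasks and generates pseudo-samples of earlier tasks for replay, can be sketched as follows. This is a toy illustration with an assumed stand-in model, not the paper's implementation:

```python
import random

class ToyLM:
    """Hypothetical stand-in for a language model that both solves tasks
    and generates training samples -- the core LAMOL idea."""
    def __init__(self):
        self.memory = []

    def train_step(self, sample):
        self.memory.append(sample)

    def generate(self):
        # A real LM would sample text conditioned on a task token;
        # here we simply replay a stored sample.
        return random.choice(self.memory)

def train_lifelong(model, tasks, replay_ratio=0.2):
    """Before each new task, generate pseudo-samples of earlier tasks
    and mix them into the new task's training data."""
    for i, new_data in enumerate(tasks):
        pseudo = []
        if i > 0:
            n_pseudo = max(1, int(len(new_data) * replay_ratio))
            pseudo = [model.generate() for _ in range(n_pseudo)]
        mixed = list(new_data) + pseudo
        random.shuffle(mixed)
        for sample in mixed:
            model.train_step(sample)
    return model

lm = train_lifelong(ToyLM(), [["t1-a", "t1-b"], ["t2-a", "t2-b"]])
```

Because the pseudo-samples are drawn from the model itself, no stored replay buffer or extra capacity is needed, which is what distinguishes LAMOL from conventional rehearsal methods.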

114 citations

Patent
12 Jun 1998
TL;DR: In this paper, the authors present a method and computer program product for offloading specific processing tasks that would otherwise be performed in a computer system's processor and memory, to a peripheral device, or devices, that are connected to the computer.
Abstract: The present invention is directed to a method and computer program product for offloading specific processing tasks that would otherwise be performed in a computer system's processor and memory, to a peripheral device, or devices, that are connected to the computer. The computing task is then performed by the peripheral, thereby saving computer system resources for other computing tasks and increasing the overall computing efficiency of the computer system. In one preferred embodiment, the disclosed method is utilized in a layered network model, wherein computing tasks that are typically performed in network applications are instead offloaded to the network interface card (NIC) peripheral. An application executing on the computer system first queries the processing, or task offload capabilities of the NIC, and then selectively enables those capabilities that may be subsequently needed by the application. The specific processing capabilities of a NIC are made available by creating a task offload buffer data structure, which contains data indicative of the processing capabilities of the corresponding NIC. Once an application has discerned the capabilities of a particular NIC, it will selectively utilize any of the enabled task offload capabilities of the NIC by appending packet extension data to the network data packet that is forwarded to the NIC. The device driver of the NIC will review the data contained in the packet extension, and then cause the NIC to perform the specified operating task(s). This offloading of computing tasks on a per-packet basis allows an application to selectively offload tasks on a dynamic, as-needed basis. As such, applications executing on the computer system processor are able to offload tasks in instances where it is busy processing other computing tasks and processor overhead is high. Multiple tasks can also be offloaded in batches to a particular peripheral.
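The query-enable-offload flow described above can be modeled compactly. The following sketch uses assumed names (`TaskOffloadNIC`, the capability strings, the packet dict layout); the patent specifies the mechanism, not this API:

```python
class TaskOffloadNIC:
    """Hypothetical model of a NIC exposing a task-offload capability buffer."""
    def __init__(self, capabilities):
        self.capabilities = set(capabilities)  # e.g. {"checksum", "segmentation"}
        self.enabled = set()

    def query_capabilities(self):
        """The application first discovers what the NIC can offload."""
        return frozenset(self.capabilities)

    def enable(self, tasks):
        """The application selectively enables capabilities it may need."""
        self.enabled |= self.capabilities & set(tasks)

    def send(self, packet):
        # The driver inspects the per-packet extension and performs only
        # the offload tasks that are both requested and enabled.
        performed = [t for t in packet.get("extension", []) if t in self.enabled]
        return {"data": packet["data"], "offloaded": performed}

nic = TaskOffloadNIC(["checksum", "segmentation"])
app_needs = {"checksum", "encryption"}
nic.enable(app_needs & nic.query_capabilities())
result = nic.send({"data": b"payload", "extension": ["checksum", "encryption"]})
```

Attaching the request to each packet (the "packet extension") is what makes the offloading dynamic and per-packet rather than a global switch.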

114 citations

Proceedings ArticleDOI
15 Jun 2015
TL;DR: This paper considers how to best integrate container technology into an existing workflow system, using Makeflow, Work Queue, and Docker as examples of current technology.
Abstract: Workflows are a widely used abstraction for representing large scientific applications and executing them on distributed systems such as clusters, clouds, and grids. However, workflow systems have been largely silent on the question of precisely what environment each task in the workflow is expected to run in. As a result, a workflow may run correctly in the environment in which it was designed, but when moved to another machine, is highly likely to fail due to differences in the operating system, installed applications, available data, and so forth. Lightweight container technology has recently arisen as a potential solution to this problem, by providing well-defined execution environments at the operating system level. In this paper, we consider how to best integrate container technology into an existing workflow system, using Makeflow, Work Queue, and Docker as examples of current technology. A brief performance study of Docker shows very little overhead in CPU and I/O performance, but significant costs in creating and deleting containers. Taking this into account, we describe four different methods of connecting containers to different points of the infrastructure, and explain several methods of managing the container images that must be distributed to executing tasks. We explore the performance of a large bioinformatics workload on a Docker-enabled cluster, and observe the best configuration to be locally-managed containers that are shared between multiple tasks.
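One of the integration points discussed is wrapping each workflow task's command in a container invocation. A minimal sketch of such a wrapper, assuming a hypothetical image name and mount layout (this is not the Makeflow implementation):

```python
import shlex

def containerize(command, image, workdir="/workspace", mounts=None):
    """Build a `docker run` argument vector that executes one workflow
    task inside a container with the given bind mounts."""
    mounts = mounts or {}
    parts = ["docker", "run", "--rm", "-w", workdir]
    for host, guest in sorted(mounts.items()):
        parts += ["-v", f"{host}:{guest}"]  # host_path:container_path
    parts.append(image)
    parts += shlex.split(command)  # the task's original command line
    return parts

# Hypothetical bioinformatics task wrapped for a Docker-enabled cluster:
cmd = containerize("blastall -i in.fa", "biotools:latest",
                   mounts={"/data": "/workspace"})
```

Because the paper found container creation and deletion to be the dominant cost, the best-performing configuration reuses a locally-managed container across tasks instead of invoking a fresh `docker run` per task as this simple wrapper does.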

113 citations

Journal ArticleDOI
TL;DR: A novel multitask evolutionary algorithm with an online dynamic resource allocation strategy that adaptively assigns resources to each task according to its requirements, together with an adaptive method to control the resources invested in cross-domain search.
Abstract: Evolutionary multitasking is a recently proposed paradigm to simultaneously solve multiple tasks using a single population. Most of the existing evolutionary multitasking algorithms treat all tasks equally and then assign the same amount of resources to each task. However, when the resources are limited, it is difficult for some tasks to converge to acceptable solutions. This paper aims at investigating the resource allocation in the multitasking environment to efficiently utilize the restrictive resources. In this paper, we design a novel multitask evolutionary algorithm with an online dynamic resource allocation strategy. Specifically, the proposed dynamic resource allocation strategy allocates resources to each task adaptively according to the requirements of tasks. We also design an adaptive method to control the resources invested into cross-domain searching. The proposed algorithm is able to allocate the computational resources dynamically according to the computational complexities of tasks. The experimental results demonstrate the superiority of the proposed method in comparison with the state-of-the-art algorithms on benchmark problems of multitask optimization.
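The allocation idea, giving more of a fixed evaluation budget to tasks that still need it while guaranteeing every task a minimum share, can be illustrated with a simple proportional rule. The concrete formula below is an illustrative assumption, not the paper's strategy:

```python
def allocate(budget, improvements, floor=0.1):
    """Split a per-generation evaluation budget across tasks in proportion
    to each task's recent improvement, reserving a `floor` fraction of the
    budget split evenly so that no task is starved of resources."""
    k = len(improvements)
    total = sum(improvements)
    shares = []
    for imp in improvements:
        prop = imp / total if total > 0 else 1.0 / k
        shares.append(floor / k + (1 - floor) * prop)
    alloc = [int(budget * s) for s in shares]
    # Hand any rounding leftover to the task with the largest share.
    alloc[shares.index(max(shares))] += budget - sum(alloc)
    return alloc

# Task 0 improved 3x as much as task 1, so it receives most of the budget:
per_task = allocate(100, [3.0, 1.0], floor=0.2)  # -> [70, 30]
```

A still-improving (harder) task keeps drawing evaluations, while a converged task falls back to its guaranteed floor, matching the paper's goal of allocating by computational complexity rather than uniformly.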

113 citations


Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2022      10
2021     695
2020     712
2019     784
2018     721
2017     565