Topic

Task (computing)

About: Task (computing) is a research topic. Over the lifetime, 9718 publications have been published within this topic receiving 129364 citations.


Papers
Patent
11 Dec 2007
TL;DR: In this article, historical data is collected regarding the data retrieval subtasks, such as service requests, performed to generate dynamically requested documents, so that these subtasks may be initiated preemptively at or near the outset of the associated document generation task.
Abstract: In a system in which documents are generated dynamically in response to user requests, historical data is collected regarding data retrieval subtasks, such as service requests, that are performed to generate such documents. This data is used to predict the specific subtasks that will be performed to respond to specific document requests, such that these subtasks may be initiated preemptively at or near the outset of the associated document generation task. A subtask that would ordinarily be postponed pending the outcome of a prior subtask can thereby be performed in parallel with the prior subtask, reducing document generation times. In one embodiment, the historical data is included within, or is used to generate, a mapping table that maps document generation tasks (which may correspond to specific URLs) to the data retrieval subtasks that are frequently performed within such tasks.
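
A minimal sketch of the prefetching idea, assuming a Python service layer; the mapping table, fetch_subtask, and the URL keys below are hypothetical stand-ins for the patent's service requests and document generation tasks:

    import concurrent.futures

    # Hypothetical mapping table built from historical request logs:
    # document URL -> data retrieval subtasks frequently performed for it.
    SUBTASK_MAP = {
        "/product/123": ["price_service", "review_service", "inventory_service"],
    }

    def fetch_subtask(name: str) -> str:
        """Stand-in for a real service request (RPC, HTTP call, etc.)."""
        return f"result of {name}"

    def generate_document(url: str) -> dict:
        predicted = SUBTASK_MAP.get(url, [])
        # Launch all predicted subtasks preemptively and in parallel at the
        # outset, instead of discovering and running them one after another
        # as earlier subtasks complete.
        with concurrent.futures.ThreadPoolExecutor() as pool:
            futures = {name: pool.submit(fetch_subtask, name) for name in predicted}
            return {name: f.result() for name, f in futures.items()}

    print(generate_document("/product/123"))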

59 citations

Posted Content
TL;DR: In this paper, a parameter-efficient transfer learning architecture, termed PeterRec, is proposed, which can be configured on-the-fly for various downstream tasks, from cross-domain recommendations to user profile predictions.
Abstract: Inductive transfer learning has had a big impact on computer vision and NLP domains but has not been used in the area of recommender systems. Even though there has been a large body of research on generating recommendations based on modeling user-item interaction sequences, few attempts have been made to represent and transfer these models for serving downstream tasks where only limited data exists. In this paper, we delve into the task of effectively learning a single user representation that can be applied to a diversity of tasks, from cross-domain recommendations to user profile predictions. Fine-tuning a large pre-trained network and adapting it to downstream tasks is an effective way to solve such tasks. However, fine-tuning is parameter-inefficient considering that an entire model needs to be re-trained for every new task. To overcome this issue, we develop a parameter-efficient transfer learning architecture, termed PeterRec, which can be configured on-the-fly for various downstream tasks. Specifically, PeterRec allows the pre-trained parameters to remain unaltered during fine-tuning by injecting a series of re-learned neural networks, which are small but as expressive as learning the entire network. We perform extensive experimental ablations to show the effectiveness of the learned user representation on five downstream tasks. Moreover, we show that PeterRec performs efficient transfer learning in multiple domains, where it achieves comparable or sometimes better performance relative to fine-tuning the entire model parameters. Codes and datasets are available at this https URL.
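
A minimal PyTorch sketch of the parameter-efficient pattern the abstract describes: the pre-trained parameters stay frozen while small injected networks are trained per task. The bottleneck adapter below is a generic stand-in, not necessarily PeterRec's exact "model patch" design:

    import torch
    import torch.nn as nn

    class Adapter(nn.Module):
        """Small trainable bottleneck network injected into a frozen backbone."""
        def __init__(self, dim: int, bottleneck: int = 16):
            super().__init__()
            self.down = nn.Linear(dim, bottleneck)
            self.up = nn.Linear(bottleneck, dim)

        def forward(self, x):
            # Residual connection: the adapter learns a small correction.
            return x + self.up(torch.relu(self.down(x)))

    class AdaptedBlock(nn.Module):
        """Wraps one pre-trained layer with a trainable adapter."""
        def __init__(self, pretrained_layer: nn.Module, dim: int):
            super().__init__()
            self.layer = pretrained_layer
            self.adapter = Adapter(dim)

        def forward(self, x):
            return self.adapter(self.layer(x))

    dim = 64
    backbone = nn.Sequential(*[AdaptedBlock(nn.Linear(dim, dim), dim) for _ in range(4)])

    # Freeze everything except the adapters, so only a small fraction of the
    # parameters is re-learned for each new downstream task.
    for name, p in backbone.named_parameters():
        p.requires_grad = "adapter" in name

    trainable = sum(p.numel() for p in backbone.parameters() if p.requires_grad)
    total = sum(p.numel() for p in backbone.parameters())
    print(f"trainable: {trainable}/{total} parameters")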

59 citations

Patent
24 Sep 1997
TL;DR: In this article, the authors present a structure and method for implementing a configurable and scalable A/V system that enables a user to perform processes across one or more audio/video processing devices coupled together via a network.
Abstract: The system and method of the present invention provide a structure and method for implementing a configurable and scalable A/V system that enables a user to perform processes across one or more A/V processing devices coupled together via a network. In one embodiment, a plurality of configurable A/V systems are coupled via a network. At least two of the A/V systems include digital signal processors (DSPs) that are programmable. The A/V systems also include other resources, such as data storage, synchronizers, analog-to-digital converters, and digital-to-analog converters, to support the variety of audio/video processing to be performed. In one embodiment, the user inputs at least one task to be performed. The task is broken down into basic processing components, or primitives. These primitives are defined in a processor descriptor block maintained by the system. The processor descriptor block indicates the processing requirements and distributability of the process across the network. For example, in one embodiment, the processor descriptor block identifies the number of cycles necessary to perform the process, any resource dependencies, and whether the process can be performed across multiple networked systems. A control process therefore references the processor descriptor block and determines the bandwidth and resource requirements. The bandwidth and resource requirements are then compared to the device and system configurations and allocations to determine whether the primitive can be performed using the bandwidth and resources available in the device and in devices coupled via the network.
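
A minimal sketch of the control logic outlined above, using simplified Python data structures; the field names, cycle counts, and devices are hypothetical, and the availability check is reduced to a single pass over candidate devices:

    from dataclasses import dataclass

    @dataclass
    class ProcessorDescriptorBlock:
        primitive: str
        cycles: int            # processing cycles needed for the primitive
        resources: set[str]    # resource dependencies, e.g. {"dsp", "storage"}
        distributable: bool    # may the primitive run on networked systems?

    @dataclass
    class Device:
        name: str
        free_cycles: int
        resources: set[str]

    def can_run(desc: ProcessorDescriptorBlock, devices: list[Device]) -> bool:
        # If the primitive is not distributable, only the local (first)
        # device is considered; otherwise any networked device may host it.
        candidates = devices if desc.distributable else devices[:1]
        for dev in candidates:
            if desc.resources <= dev.resources and dev.free_cycles >= desc.cycles:
                return True
        return False

    mix = ProcessorDescriptorBlock("audio_mix", cycles=5000,
                                   resources={"dsp"}, distributable=True)
    net = [Device("av1", 8000, {"dsp", "adc"}), Device("av2", 2000, {"dsp"})]
    print(can_run(mix, net))  # True: av1 has the DSP and spare cycles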

59 citations

Patent
17 Jun 1983
TL;DR: The computer system for missile guidance comprises five parallel processors interconnected by a global bus, with each processor having its own CPU, program memory, temporary memory, and two critical variable memories as mentioned in this paper.
Abstract: The computer system for missile guidance comprises five parallel processors interconnected by a global bus, with each processor having its own CPU, program memory, temporary memory, and two critical variable memories, interconnected by a local bus. The program memory and critical variable memories are radiation-hardened MNOS, to survive nuclear radiation. Each processor has its own cycle time, synchronized by a master clock. In each processor, the cycle has three phases: intercommunication, task processing, and critical variable storage. Thus the critical variables are stored only after task processing is completed.
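
A minimal sketch of one processor's three-phase cycle, with Python dictionaries standing in for the temporary and hardened critical-variable memories; all names are illustrative:

    def run_cycle(processor_state: dict, inbox: list) -> None:
        # Phase 1: intercommunication over the global bus.
        messages = list(inbox)
        inbox.clear()

        # Phase 2: task processing using the local CPU and temporary memory.
        processor_state["temp"] = sum(messages) + processor_state.get("temp", 0)

        # Phase 3: store critical variables only after task processing has
        # completed, standing in for the hardened MNOS critical memories.
        processor_state["critical"] = processor_state["temp"]

    state: dict = {}
    run_cycle(state, [1, 2, 3])
    print(state)  # {'temp': 6, 'critical': 6}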

59 citations

Proceedings ArticleDOI
01 Jan 2018
TL;DR: This work proposes a new system, called FlexPS, which introduces a novel multi-stage abstraction to support flexible parallelism control and achieves significant speedups and resource savings compared with state-of-the-art PS systems such as Petuum and Multiverso.
Abstract: As a general abstraction for coordinating the distributed storage and access of model parameters, the parameter server (PS) architecture enables distributed machine learning to handle large datasets and high-dimensional models. Many systems, such as Parameter Server and Petuum, have been developed based on the PS architecture and are widely used in practice. However, none of these systems supports changing parallelism during runtime, which is crucial for the efficient execution of machine learning tasks with dynamic workloads. We propose a new system, called FlexPS, which introduces a novel multi-stage abstraction to support flexible parallelism control. With the multi-stage abstraction, a machine learning task can be mapped to a series of stages, and the parallelism for a stage can be set according to its workload. Optimizations such as a stage scheduler, a stage-aware consistency controller, and direct model transfer are proposed for the efficiency of multi-stage machine learning in FlexPS. As a general and complete PS system, FlexPS also incorporates many optimizations that are not limited to multi-stage machine learning. We conduct extensive experiments using a variety of machine learning workloads, showing that FlexPS achieves significant speedups and resource savings compared with state-of-the-art PS systems such as Petuum and Multiverso.
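
A minimal sketch of the multi-stage abstraction, using thread pools as stand-ins for distributed workers; Stage and run_task are illustrative names, not the FlexPS API:

    from dataclasses import dataclass
    from typing import Callable
    import concurrent.futures

    @dataclass
    class Stage:
        name: str
        parallelism: int             # number of workers for this stage
        work: Callable[[int], None]  # per-worker body, given the worker id

    def run_task(stages: list[Stage]) -> None:
        for stage in stages:
            # Parallelism is (re)set per stage instead of being fixed for
            # the whole task, matching each stage to its workload.
            with concurrent.futures.ThreadPoolExecutor(stage.parallelism) as pool:
                list(pool.map(stage.work, range(stage.parallelism)))

    run_task([
        Stage("feature_extraction", parallelism=8,
              work=lambda i: print(f"extract shard {i}")),
        Stage("model_update", parallelism=2,
              work=lambda i: print(f"update partition {i}")),
    ])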

58 citations


Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    10
2021    695
2020    712
2019    784
2018    721
2017    565