Topic

Task (computing)

About: Task (computing) is a research topic. Over the lifetime of the topic, 9,718 publications have appeared, receiving a total of 129,364 citations.


Papers
Posted Content
TL;DR: The authors show that tasks with less data benefit greatly from joint training with other tasks, while performance on large tasks degrades only slightly if at all, and that adding a computational block to the model never hurts performance and in most cases improves it on all tasks.
Abstract: Deep learning yields great results across many fields, from speech recognition and image classification to translation. But for each problem, getting a deep model to work well involves research into the architecture and a long period of tuning. We present a single model that yields good results on a number of problems spanning multiple domains. In particular, this single model is trained concurrently on ImageNet, multiple translation tasks, image captioning (COCO dataset), a speech recognition corpus, and an English parsing task. Our model architecture incorporates building blocks from multiple domains: convolutional layers, an attention mechanism, and sparsely-gated layers. Each of these computational blocks is crucial for a subset of the tasks we train on. Interestingly, even if a block is not crucial for a task, we observe that adding it never hurts performance and in most cases improves it on all tasks. We also show that tasks with less data benefit greatly from joint training with other tasks, while performance on large tasks degrades only slightly if at all.

309 citations
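
Below is a minimal sketch of the paper's central pattern: a single trunk mixing convolutional, attention, and sparsely-gated (mixture-of-experts) blocks that all tasks share. It assumes PyTorch; the block sizes, the hard top-1 routing, and all names are illustrative simplifications, not the authors' implementation.

```python
# Illustrative sketch only, not the authors' code: one trunk of heterogeneous
# blocks shared by all tasks; per-task input/output adapters are omitted.
import torch
import torch.nn as nn

class SparselyGatedFFN(nn.Module):
    """Toy mixture-of-experts layer: route each position to its top-1 expert."""
    def __init__(self, dim, num_experts=4):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))

    def forward(self, x):                       # x: (batch, seq, dim)
        scores = self.gate(x).softmax(dim=-1)   # (batch, seq, num_experts)
        top = scores.argmax(dim=-1)             # hard top-1 routing (simplified)
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = (top == i).unsqueeze(-1)     # which positions chose expert i
            out = out + mask * expert(x)
        return out

class SharedTrunk(nn.Module):
    """Convolution + attention + sparsely-gated blocks, shared across tasks."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.moe = SparselyGatedFFN(dim)

    def forward(self, x):                       # x: (batch, seq, dim)
        x = x + self.conv(x.transpose(1, 2)).transpose(1, 2)
        x = x + self.attn(x, x, x)[0]
        return x + self.moe(x)
```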

Posted Content
TL;DR: This work empirically analyzes the effectiveness of a very small episodic memory in a CL setup where each training example is seen only once, and finds that repeated training on even tiny memories of past tasks does not harm generalization; on the contrary, it improves it.
Abstract: In continual learning (CL), an agent learns from a stream of tasks, leveraging prior experience to transfer knowledge to future tasks. It is an ideal framework for decreasing the amount of supervision in existing learning algorithms. But for successful knowledge transfer, the learner needs to remember how to perform previous tasks. One way to endow the learner with the ability to perform tasks seen in the past is to maintain a small memory, dubbed episodic memory, that stores a few examples from previous tasks, and then to replay these examples when training for future tasks. In this work, we empirically analyze the effectiveness of a very small episodic memory in a CL setup where each training example is seen only once. Surprisingly, across four rather different supervised learning benchmarks adapted to CL, a very simple baseline that jointly trains on examples from the current task as well as examples stored in the episodic memory significantly outperforms specifically designed CL approaches, with and without episodic memory. Interestingly, we find that repetitive training on even tiny memories of past tasks does not harm generalization; on the contrary, it improves it, with gains between 7% and 17% when the memory is populated with a single example per class.

300 citations
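
The baseline described above, jointly training on the incoming batch plus a handful of examples replayed from a tiny memory, is simple enough to sketch. The following is a hedged illustration assuming PyTorch-style tensors; the reservoir-sampling write policy and all names here are illustrative choices, not the paper's exact code.

```python
# Hedged sketch of experience replay with a tiny episodic memory.
import random
import torch

class EpisodicMemory:
    def __init__(self, capacity):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, example):
        """Reservoir sampling: keep a uniform sample of the stream seen so far."""
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

def train_step(model, opt, loss_fn, batch, memory, replay_size=10):
    """One step: train jointly on the incoming batch and a replayed mini-batch."""
    xs, ys = batch
    replay = memory.sample(replay_size)
    if replay:
        rx, ry = zip(*replay)
        xs = torch.cat([xs, torch.stack(rx)])
        ys = torch.cat([ys, torch.stack(ry)])
    opt.zero_grad()
    loss_fn(model(xs), ys).backward()
    opt.step()
    for x, y in zip(*batch):        # each stream example is seen only once
        memory.add((x, y))
```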

Journal ArticleDOI
TL;DR: The authors present lazy task creation, a method for combining fine-grained parallel tasks dynamically at runtime in Mul-T, a parallel implementation of Scheme, and argue for it over the simpler load-based inlining method.
Abstract: When a parallel algorithm is written naturally, the resulting program often produces tasks of a finer grain than an implementation can exploit efficiently. Two solutions to the granularity problem that combine parallel tasks dynamically at runtime are discussed. The simpler load-based inlining method, in which tasks are combined based on dynamic load level, is rejected in favor of the safer and more robust lazy task creation method, in which tasks are created only retroactively as processing resources become available. The strategies grew out of work on Mul-T, an efficient parallel implementation of Scheme, but could be used with other languages as well. Mul-T implementations of lazy task creation are described for two contrasting machines, and performance statistics that show the method's effectiveness are presented. Lazy task creation is shown to allow efficient execution of naturally expressed algorithms of a substantially finer grain than was possible with previous parallel Lisp systems.

300 citations
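
The mechanism is easiest to see in code. Below is a conceptual, single-threaded Python sketch of lazy task creation (Mul-T itself is a Scheme system, so everything here is an illustrative analogy): the parent runs a "spawned" child inline, cheaply, but first exposes its continuation on a deque from which an idle processor could retroactively steal it as a real task.

```python
# Conceptual sketch, not Mul-T: lazy spawning via stealable continuations.
from collections import deque

stealable = deque()              # continuations that *could* become tasks

def future(child, continuation):
    """Lazy spawn: inline the child, exposing the continuation to thieves."""
    stealable.append(continuation)
    result = child()             # run the fine-grained work inline (cheap path)
    if stealable and stealable[-1] is continuation:
        stealable.pop()          # nobody stole it: keep running sequentially
        return continuation(result)
    # In a real runtime, an idle processor would have popped the continuation
    # from the other end (stealable.popleft()) and turned it into a task that
    # combines results elsewhere; this branch never runs in this 1-thread demo.
    return result

def fib(n):
    if n < 2:
        return n
    # "Spawn" fib(n-1); the continuation computes fib(n-2) and adds.
    return future(lambda: fib(n - 1), lambda a: a + fib(n - 2))

print(fib(10))  # -> 55
```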

Proceedings ArticleDOI
18 Jun 2018
TL;DR: This paper proposes a novel multi-task guided prediction-and-distillation network (PAD-Net), which first predicts a set of intermediate auxiliary tasks ranging from low level to high level; the predictions from these intermediate auxiliary tasks are then used as multi-modal input, via the authors' proposed multi-modal distillation modules, for the final tasks.
Abstract: Depth estimation and scene parsing are two particularly important tasks in visual scene understanding. In this paper we tackle the problem of simultaneous depth estimation and scene parsing in a joint CNN. The task can typically be treated as a deep multi-task learning problem [42]. Different from previous methods that directly optimize multiple tasks given the input training data, this paper proposes a novel multi-task guided prediction-and-distillation network (PAD-Net), which first predicts a set of intermediate auxiliary tasks ranging from low level to high level; the predictions from these intermediate auxiliary tasks are then utilized as multi-modal input, via our proposed multi-modal distillation modules, for the final tasks. During joint learning, the intermediate tasks not only act as supervision for learning more robust deep representations but also provide rich multi-modal information for improving the final tasks. Extensive experiments are conducted on two challenging datasets (i.e., NYUD-v2 and Cityscapes) for both the depth estimation and scene parsing tasks, demonstrating the effectiveness of the proposed approach.

288 citations
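
A hedged sketch of the PAD-Net pattern follows, assuming PyTorch; the layer choices, the set of auxiliary tasks, and the sum-based fusion are stand-ins for the paper's actual architecture and distillation modules. The idea is that intermediate predictions receive their own supervision during training and are re-encoded as input for the two final heads.

```python
# Illustrative sketch of the prediction-and-distillation pattern.
import torch
import torch.nn as nn

class PADNetSketch(nn.Module):
    def __init__(self, feat=64, aux_tasks=("depth", "normals", "edges", "semantics")):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        # One head per intermediate auxiliary task (1-channel each, for brevity).
        self.aux_heads = nn.ModuleDict({t: nn.Conv2d(feat, 1, 1) for t in aux_tasks})
        # Re-encode each auxiliary prediction before fusing ("distilling") them.
        self.encoders = nn.ModuleDict({t: nn.Conv2d(1, feat, 3, padding=1)
                                       for t in aux_tasks})
        self.final_depth = nn.Conv2d(feat, 1, 1)
        self.final_parsing = nn.Conv2d(feat, 21, 1)   # e.g. 21 semantic classes

    def forward(self, img):
        f = self.backbone(img)
        aux = {t: head(f) for t, head in self.aux_heads.items()}
        # Multi-modal distillation, simplified here to a sum of encodings;
        # training would place losses on both aux and final outputs.
        fused = sum(self.encoders[t](p) for t, p in aux.items())
        return aux, self.final_depth(fused), self.final_parsing(fused)
```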

Proceedings ArticleDOI
10 Jan 2005
TL;DR: A novel Publisher-Subscriber architecture for collecting and processing users' activity data is presented, several different user interfaces tried with TaskTracer are described, and the possibility of applying machine learning techniques to recognize/predict users' tasks is discussed.
Abstract: This paper reports on TaskTracer, a software system being designed to help highly multitasking knowledge workers rapidly locate, discover, and reuse past processes they used to successfully complete tasks. The system monitors users' interaction with a computer, collects detailed records of users' activities and the resources accessed, associates (automatically or with users' assistance) each interaction event with a particular task, and enables users to access records of past activities and quickly restore task contexts. We present a novel Publisher-Subscriber architecture for collecting and processing users' activity data, describe several different user interfaces tried with TaskTracer, and discuss the possibility of applying machine learning techniques to recognize/predict users' tasks.

280 citations
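
The Publisher-Subscriber design mentioned above decouples the instrumented applications (publishers of interaction events) from the components that log those events and associate them with tasks (subscribers). Here is a minimal sketch in Python; the event fields and component names are illustrative assumptions, not TaskTracer's actual API.

```python
# Minimal publish-subscribe pipeline for user-activity events (illustrative).
from collections import defaultdict
from dataclasses import dataclass
import time

@dataclass
class InteractionEvent:
    kind: str          # e.g. "file_open", "window_focus" (assumed names)
    resource: str      # document path, URL, window title, ...
    timestamp: float

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, kind, handler):
        self._subscribers[kind].append(handler)

    def publish(self, event):
        for handler in self._subscribers[event.kind]:
            handler(event)

# Subscribers: one logs events; another would associate them with tasks.
bus = EventBus()
log = []
bus.subscribe("file_open", log.append)
bus.subscribe("file_open",
              lambda e: print(f"associate {e.resource} with current task"))

bus.publish(InteractionEvent("file_open", "report.doc", time.time()))
```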


Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2022        10
2021       695
2020       712
2019       784
2018       721
2017       565