Task (computing)

About: Task (computing) is a research topic. Over its lifetime, 9,718 publications have been published within this topic, receiving 129,364 citations.


Papers
Patent
06 Oct 1997
TL;DR: In this patent, an intelligent agent executes tasks using learning modules that store the information necessary to execute them; the agent acts on an explicit command or on data that causes a task request to be generated.
Abstract: An intelligent agent executes tasks by using intelligent agent learning modules which store information necessary to execute the tasks. A computer receives a command to execute a task or receives data which causes a task request to be generated. The computer accesses appropriate information in the learning modules to execute the task, and outputs instructions for output devices to execute the tasks. The tasks may be executed at a future time and on a periodic basis. The learning modules build up a database of information from previously executed tasks, and the database is used to assist in executing future tasks. The tasks include physical commercial transactions. Portions of the intelligent agent may be remotely located and interconnected via remote communication devices.

541 citations
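As a rough illustration of the mechanism this abstract describes, here is a minimal Python sketch of an agent that consults a learning module and records executed tasks; all names (LearningModule, IntelligentAgent, execute_task) and the key-value design are hypothetical illustrations, not the patent's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class LearningModule:
    """Hypothetical store of the information needed to execute tasks."""
    knowledge: dict = field(default_factory=dict)   # task name -> stored parameters
    history: list = field(default_factory=list)     # previously executed tasks

    def record(self, task: str, result: str) -> None:
        # Build up a database of previously executed tasks,
        # used to assist in executing future tasks.
        self.history.append((datetime.now(), task, result))

class IntelligentAgent:
    def __init__(self, module: LearningModule):
        self.module = module

    def execute_task(self, task: str) -> str:
        # Access the appropriate stored information to execute the task.
        params = self.module.knowledge.get(task, {})
        result = f"executed {task} with {params}"
        self.module.record(task, result)
        return result

agent = IntelligentAgent(LearningModule(knowledge={"reorder": {"qty": 10}}))
print(agent.execute_task("reorder"))
```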

Proceedings ArticleDOI
01 Sep 2017
TL;DR: The authors introduce a joint many-task model together with a strategy for successively growing its depth to solve increasingly complex tasks, and use a simple regularization term to allow for optimizing all model weights to improve one task's loss without exhibiting catastrophic interference of the other tasks.
Abstract: Transfer and multi-task learning have traditionally focused on either a single source-target pair or very few, similar tasks. Ideally, the linguistic levels of morphology, syntax and semantics would benefit each other by being trained in a single model. We introduce a joint many-task model together with a strategy for successively growing its depth to solve increasingly complex tasks. Higher layers include shortcut connections to lower-level task predictions to reflect linguistic hierarchies. We use a simple regularization term to allow for optimizing all model weights to improve one task’s loss without exhibiting catastrophic interference of the other tasks. Our single end-to-end model obtains state-of-the-art or competitive results on five different tasks from tagging, parsing, relatedness, and entailment tasks.

541 citations
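The "simple regularization term" the abstract mentions can be illustrated with a minimal NumPy sketch that penalizes drift from a snapshot of previously learned weights; the squared-distance form and the delta coefficient here are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def successively_regularized_loss(task_loss: float,
                                  weights: np.ndarray,
                                  prev_weights: np.ndarray,
                                  delta: float = 0.01) -> float:
    """Penalize drift from the weights learned on earlier tasks.

    Keeping new weights close to the previous snapshot lets one task's
    loss improve without catastrophically interfering with what the
    shared layers learned for the other tasks.
    """
    drift = np.sum((weights - prev_weights) ** 2)
    return task_loss + delta * drift

prev = np.array([0.5, -0.2, 1.0])   # snapshot after training earlier tasks
curr = np.array([0.6, -0.1, 0.9])   # weights while training the current task
print(successively_regularized_loss(task_loss=0.42, weights=curr, prev_weights=prev))
```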

Journal ArticleDOI
TL;DR: It is shown that attention-aware systems could substantially mitigate the effects of interruption by deferring the presentation of peripheral information until coarse boundaries are reached during task execution.

536 citations
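A minimal sketch of the deferral policy this TL;DR describes, assuming a simple queue that is flushed at coarse task boundaries; the class and method names are illustrative, not from the paper.

```python
from collections import deque

class BoundaryAwareNotifier:
    """Defer peripheral notifications until a coarse task boundary."""

    def __init__(self):
        self.pending = deque()

    def notify(self, message: str) -> None:
        # Queue instead of interrupting mid-task.
        self.pending.append(message)

    def on_task_boundary(self) -> list:
        # Flush everything at a coarse boundary,
        # where the cost of interruption is lowest.
        delivered = list(self.pending)
        self.pending.clear()
        return delivered

n = BoundaryAwareNotifier()
n.notify("new email")
n.notify("build finished")
print(n.on_task_boundary())   # ['new email', 'build finished']
```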

Patent
Gonzalo Ramos, Ken Hinckley
31 Oct 2005
TL;DR: In this patent, a first gesture input is received at a first mobile computing device and a second gesture input at a second mobile computing device; a determination is then made as to whether the second gesture is accepted at the initiating device, and if so, the resources of the two devices are combined to jointly execute a particular task associated with the shared resources.
Abstract: Methods and apparatus of the various embodiments allow the coordination of resources of devices to jointly execute tasks or perform actions on one of the devices. In the method, a first gesture input is received at a first mobile computing device. A second gesture input is received at a second mobile computing device. In response, a determination is made as to whether the second gesture is accepted at the initiating device. If it is determined that the second gesture input is accepted, then resources of the devices are combined to jointly execute a particular task associated with the shared resources.

533 citations
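A toy sketch of the cross-device flow this abstract outlines; the pairing rule used here (two gestures arriving within a short time window) and all names are illustrative assumptions, not the patent's method.

```python
import time

class Device:
    """Toy model of a gesture-receiving mobile computing device."""

    def __init__(self, name: str):
        self.name = name
        self.gesture_time = None

    def receive_gesture(self) -> None:
        self.gesture_time = time.monotonic()

def accept_pairing(initiator: Device, responder: Device,
                   window: float = 2.0) -> bool:
    # The initiating device decides whether to accept the second
    # gesture; here acceptance requires the two gestures to be
    # close together in time (an illustrative rule).
    if initiator.gesture_time is None or responder.gesture_time is None:
        return False
    return abs(initiator.gesture_time - responder.gesture_time) <= window

a, b = Device("phone"), Device("tablet")
a.receive_gesture()
b.receive_gesture()
if accept_pairing(a, b):
    print(f"combining resources of {a.name} and {b.name} for a joint task")
```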

Journal ArticleDOI
TL;DR: It is argued that for many common machine learning problems, although in general the authors do not know the true (objective) prior for the problem, they do have some idea of a set of possible priors to which the true prior belongs.
Abstract: A Bayesian model of learning to learn by sampling from multiple tasks is presented. The multiple tasks are themselves generated by sampling from a distribution over an environment of related tasks. Such an environment is shown to be naturally modelled within a Bayesian context by the concept of an objective prior distribution. It is argued that for many common machine learning problems, although in general we do not know the true (objective) prior for the problem, we do have some idea of a set of possible priors to which the true prior belongs. It is shown that under these circumstances a learner can use Bayesian inference to learn the true prior by learning sufficiently many tasks from the environment. In addition, bounds are given on the amount of information required to learn a task when it is simultaneously learnt with several other tasks. The bounds show that if the learner has little knowledge of the true prior, but the dimensionality of the true prior is small, then sampling multiple tasks is highly advantageous. The theory is applied to the problem of learning a common feature set or equivalently a low-dimensional-representation (LDR) for an environment of related tasks.

496 citations
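A minimal empirical-Bayes style sketch of the abstract's central idea, learning the prior by sampling many related tasks, assuming a Gaussian environment; the numbers and the estimator are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Environment of related tasks: each task's mean is drawn from the
# true (objective) prior N(true_mu, true_tau^2), which the learner
# does not know in advance.
true_mu, true_tau = 3.0, 0.5
n_tasks, n_samples = 50, 20

task_means = rng.normal(true_mu, true_tau, size=n_tasks)
data = rng.normal(task_means[:, None], 1.0, size=(n_tasks, n_samples))

# Pool the per-task estimates to recover the prior. With sufficiently
# many tasks, the estimated prior approaches the true one, matching
# the abstract's claim about learning the prior from multiple tasks.
per_task_est = data.mean(axis=1)
est_mu = per_task_est.mean()
est_tau2 = max(per_task_est.var() - 1.0 / n_samples, 0.0)  # subtract sampling noise

print(f"estimated prior: mu={est_mu:.2f}, tau={np.sqrt(est_tau2):.2f}")
print(f"true prior:      mu={true_mu:.2f}, tau={true_tau:.2f}")
```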


Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2022    10
2021    695
2020    712
2019    784
2018    721
2017    565