Topic

Task (computing)

About: Task (computing) is a research topic. Over the lifetime, 9718 publications have been published within this topic receiving 129364 citations.


Papers
Patent
Franck R. Diard1, Hassane S. Azar1
21 Feb 2006
TL;DR: In this paper, a system for processing video data includes a host processor, a first media processing device coupled to a first buffer, and a second media processing device coupled to a second buffer, with each device performing a different processing task on a frame of video data.
Abstract: A system for processing video data includes a host processor, a first media processing device coupled to a first buffer, the first media processing device configured to perform a first processing task on a frame of video data, and a second media processing device coupled to a second buffer, the second media processing device configured to perform a second processing task on the processed frame of video data. The architecture allows the two devices to have asymmetric video processing capabilities. Thus, the first device may advantageously perform a first task, such as decoding, while the second device performs a second task, such as post-processing, according to the respective capabilities of each device, thereby increasing processing efficiency relative to prior-art systems. Further, one driver may be used for both devices, enabling applications to take advantage of the system's accelerated processing capabilities without requiring code changes.
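The two-stage, two-buffer arrangement the abstract describes can be sketched as a simple producer-consumer pipeline. This is an illustrative sketch only, not the patent's implementation: the `decode` and `post_process` functions are hypothetical placeholders, and Python threads stand in for the two media processing devices.

```python
import queue
import threading

def decode(frame):
    # Placeholder for the first device's task (e.g., decoding).
    return f"decoded({frame})"

def post_process(frame):
    # Placeholder for the second device's task (e.g., post-processing).
    return f"post({frame})"

def run_pipeline(frames):
    buf = queue.Queue(maxsize=2)  # bounded buffer between the two stages
    results = []

    def decoder():
        for f in frames:
            buf.put(decode(f))
        buf.put(None)             # sentinel: no more frames

    def post_processor():
        while (f := buf.get()) is not None:
            results.append(post_process(f))

    t1 = threading.Thread(target=decoder)
    t2 = threading.Thread(target=post_processor)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results
```

Because the stages run concurrently, the second device can work on frame N while the first device decodes frame N+1, which is the source of the claimed efficiency gain.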

60 citations

Patent
23 Sep 1970
TL;DR: This patent describes a method of activity implementation using automatic computation whereby program tasks may be executed simultaneously and maximum utilization of system facilities is realized; the overall activity is partitioned into separate tasks with input requirements, desired outputs, and an execution priority specified for each task.
Abstract: A method of activity implementation through use of automatic computation means whereby simultaneous execution of program tasks may be performed and maximum utilization of system facilities is realized. The overall activity is partitioned into separate tasks with input requirements, desired outputs, and execution priority specified for each task. As tasks receive ready status, they are assigned to appropriate system work stations for implementation. Each task is defined by a program control instruction and may include a plurality of sub-tasks which are performed within the system.
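The readiness-and-priority dispatch described above can be sketched as follows. This is a hedged illustration, not the patent's method: a task is "ready" once all of its declared inputs are available, and ready tasks are dispatched in priority order (lower number meaning higher priority here, an assumption of this sketch).

```python
import heapq

def schedule(tasks, available_inputs):
    """Dispatch ready tasks in priority order.

    tasks: list of (name, priority, required_inputs) tuples.
    available_inputs: collection of inputs currently available.
    Returns task names in dispatch order.
    """
    ready = []
    for name, priority, inputs in tasks:
        # A task reaches ready status when all its inputs are available.
        if set(inputs) <= set(available_inputs):
            heapq.heappush(ready, (priority, name))
    order = []
    while ready:
        _, name = heapq.heappop(ready)
        order.append(name)
    return order
```

In a fuller model, completing a task would add its outputs to `available_inputs`, making further tasks (and sub-tasks) ready in turn.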

60 citations

Journal ArticleDOI
TL;DR: This paper proposes an approach to automatically generate at run time a functional configuration of a network robot system to perform a given task in a given environment, and to dynamically change this configuration in response to failures, based on artificial intelligence planning techniques.

59 citations

Proceedings ArticleDOI
01 Aug 2021
TL;DR: In this article, a two-stream multi-modal Transformer encoder is proposed to model the interaction among text, layout, and image in a single multimodal framework.
Abstract: Pre-training of text and layout has proved effective in a variety of visually-rich document understanding tasks due to its effective model architecture and the advantage of large-scale unlabeled scanned/digital-born documents. We propose LayoutLMv2 architecture with new pre-training tasks to model the interaction among text, layout, and image in a single multi-modal framework. Specifically, with a two-stream multi-modal Transformer encoder, LayoutLMv2 uses not only the existing masked visual-language modeling task but also the new text-image alignment and text-image matching tasks, which make it better capture the cross-modality interaction in the pre-training stage. Meanwhile, it also integrates a spatial-aware self-attention mechanism into the Transformer architecture so that the model can fully understand the relative positional relationship among different text blocks. Experiment results show that LayoutLMv2 outperforms LayoutLM by a large margin and achieves new state-of-the-art results on a wide variety of downstream visually-rich document understanding tasks, including FUNSD (0.7895 to 0.8420), CORD (0.9493 to 0.9601), SROIE (0.9524 to 0.9781), Kleister-NDA (0.8340 to 0.8520), RVL-CDIP (0.9443 to 0.9564), and DocVQA (0.7295 to 0.8672).
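The "spatial-aware self-attention" idea, as the abstract describes it, amounts to biasing attention scores with terms derived from the relative positions of text blocks. The sketch below is an assumption-laden illustration, not LayoutLMv2's actual code: shapes, the bias matrix, and single-head attention are all simplifications.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def spatial_attention(q, k, v, rel_pos_bias):
    """Single-head attention with an additive spatial bias.

    q, k, v: (n, d) query/key/value matrices for n text blocks.
    rel_pos_bias: (n, n) learned bias indexed by relative block position
    (hypothetical stand-in for the model's 1-D and 2-D position biases).
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d) + rel_pos_bias
    return softmax(scores) @ v
```

With the bias added before the softmax, nearby or aligned blocks can attend to each other strongly even when their content embeddings alone would not produce high scores.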

59 citations

Patent
06 Mar 2007
TL;DR: In this article, a preemptive neural network database load balancer is configured to observe, learn, and predict the resources that incoming tasks will utilize, allowing for efficient execution and use of system resources.
Abstract: A preemptive neural network database load balancer is configured to observe, learn, and predict the resources that incoming tasks will utilize, allowing for efficient execution and use of system resources. It preemptively assigns incoming tasks to particular servers based on the predicted CPU, memory, disk, and network utilization of each incoming task, directing write-based tasks to a master server and using slave servers to handle read-based tasks. Read-based tasks are analyzed with a neural network to learn and predict the amount of resources each task will utilize. Tasks are assigned to a database server based on the predicted utilization of the incoming task and the predicted and observed resource utilization on each database server. The predicted resource utilization may be updated over time as the number of records, lookups, images, PDFs, fields, BLOBs, and the width of fields in the database change.
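The routing policy described above can be sketched as: writes always go to the master, and reads go to whichever slave minimizes predicted total load. This is an illustrative sketch under stated assumptions; `predict_cost` is a hypothetical stand-in for the patent's neural network predictor.

```python
def route(task, slave_loads, predict_cost):
    """Pick a server for a task.

    task: dict with a 'kind' key ('read' or 'write').
    slave_loads: mapping of slave server name -> current observed load.
    predict_cost: callable (task, server) -> predicted added load.
    """
    if task["kind"] == "write":
        # Write-based tasks always go to the master server.
        return "master"
    # Read-based tasks go to the slave with the lowest predicted total load.
    return min(slave_loads, key=lambda s: slave_loads[s] + predict_cost(task, s))
```

A fuller model would also feed the observed cost of each completed task back into the predictor, which is how the balancer's predictions could track changes in record counts, field widths, and so on over time.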

59 citations


Performance Metrics

No. of papers in the topic in previous years:

Year   Papers
2022   10
2021   695
2020   712
2019   784
2018   721
2017   565