scispace - formally typeset
Topic

Task (computing)

About: Task (computing) is a research topic. Over its lifetime, 9,718 publications have been published on this topic, receiving 129,364 citations.


Papers
Patent
26 Jun 2008
TL;DR: An adaptive semi-synchronous parallel processing system and method for flow cytometry data analysis applications is presented, in which tasks are assigned to one or more processor queues according to their dependencies and an optimal execution strategy.
Abstract: There is provided an adaptive semi-synchronous parallel processing system and method, which may be adapted to various data analysis applications such as flow cytometry systems. By identifying the relationship and memory dependencies between tasks that are necessary to complete an analysis, it is possible to significantly reduce the analysis processing time by selectively executing tasks after careful assignment of tasks to one or more processor queues, where the queue assignment is based on an optimal execution strategy. Further strategies are disclosed to address optimal processing once a task undergoes computation by a computational element in a multiprocessor system. Also disclosed is a technique to perform fluorescence compensation to correct spectral overlap between different detectors in a flow cytometry system due to emission characteristics of various fluorescent dyes.
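The abstract describes assigning dependency-related tasks to processor queues before execution. The patented strategy itself is not given here; as a rough illustration of the general idea, the following is a minimal greedy list-scheduling sketch in Python, where all function and task names are hypothetical:

```python
from collections import defaultdict, deque

def assign_tasks(deps, durations, n_queues):
    """Greedy list scheduling: topologically order tasks by their
    dependencies, then place each ready task on the processor queue
    that frees up earliest. A simplification of the patent's idea --
    it ignores cross-queue dependency timing."""
    # deps: {task: set of prerequisite tasks}; durations: {task: cost}
    indegree = {t: len(p) for t, p in deps.items()}
    children = defaultdict(list)
    for t, parents in deps.items():
        for p in parents:
            children[p].append(t)
    ready = deque(t for t, d in indegree.items() if d == 0)
    queue_free = [0.0] * n_queues          # time at which each queue frees up
    assignment = {}
    while ready:
        t = ready.popleft()
        q = min(range(n_queues), key=lambda i: queue_free[i])
        assignment[t] = q
        queue_free[q] += durations[t]
        for c in children[t]:              # release tasks whose deps are done
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    return assignment
```

For example, with a toy flow-cytometry-style pipeline (load, then gate and compensate in parallel, then report), the two middle tasks land on different queues.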

39 citations

Proceedings ArticleDOI
01 Dec 2016
TL;DR: In this paper, the authors propose a principled multi-task learning (MTL) framework for distributed and asynchronous optimization, addressing the problems of high data volume and privacy that arise in real-world machine learning applications.
Abstract: Many real-world machine learning applications involve several learning tasks which are inter-related. For example, in the healthcare domain, we need to learn a predictive model of a certain disease for many hospitals. The models for each hospital may be different because of the inherent differences in the distributions of the patient populations. However, the models are also closely related because the learning tasks model the same disease. By simultaneously learning all the tasks, the multi-task learning (MTL) paradigm performs inductive knowledge transfer among tasks to improve generalization performance. When the datasets for the learning tasks are stored at different locations, it may not always be feasible to transfer the data to a centralized computing environment, due to practical issues such as high data volume and privacy. In this paper, we propose a principled MTL framework for distributed and asynchronous optimization to address these challenges. In our framework, a gradient update does not wait for the gradient information from all the tasks to be collected. Therefore, the proposed method is very efficient when the communication delay is too high for some task nodes. We show that many regularized MTL formulations can benefit from this framework, including low-rank MTL for shared-subspace learning. Empirical studies on both synthetic and real-world datasets demonstrate the efficiency and effectiveness of the proposed framework.
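The key mechanism in this abstract is that gradient updates are applied as they arrive, without a synchronization barrier across task nodes. A minimal simulation of that idea (not the paper's regularized formulation; the mean-pulling regularizer and all names are assumptions for illustration) might look like:

```python
import numpy as np

def async_mtl(task_data, n_features, lr=0.1, lam=0.01, steps=100, seed=0):
    """Simulate asynchronous multi-task updates: whichever task node
    reports next has its gradient applied immediately, rather than
    waiting for all nodes. A simple regularizer pulls each task's
    weights toward the across-task mean to share knowledge."""
    rng = np.random.default_rng(seed)
    T = len(task_data)
    W = np.zeros((T, n_features))            # one weight row per task
    for _ in range(steps):
        t = rng.integers(T)                  # an arbitrary node reports next
        X, y = task_data[t]
        grad = X.T @ (X @ W[t] - y) / len(y) # local least-squares gradient
        grad += lam * (W[t] - W.mean(axis=0))  # pull toward the shared mean
        W[t] -= lr * grad
    return W
```

Two tasks generated from nearly identical ground-truth weights converge toward those weights even though updates arrive in an arbitrary interleaved order.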

39 citations

Journal ArticleDOI
TL;DR: A History-based Auto-Tuning (HAT) MapReduce scheduler is proposed, which calculates the progress of tasks accurately, adapts to a continuously varying environment automatically, and can significantly improve the performance of MapReduce applications.
Abstract: In the MapReduce model, a job is divided into a series of map tasks and reduce tasks. The execution time of the job is seriously prolonged by slow tasks, especially in heterogeneous environments. To finish the slow tasks as soon as possible, current MapReduce schedulers launch a backup task on another node for each slow task. However, traditional MapReduce schedulers cannot detect slow tasks correctly, since they cannot estimate the progress of tasks accurately (Hadoop home page, http://hadoop.apache.org/, 2011; Zaharia et al., in 8th USENIX Symposium on Operating Systems Design and Implementation, ACM, New York, pp. 29–42, 2008). To solve this problem, this paper proposes a History-based Auto-Tuning (HAT) MapReduce scheduler, which calculates the progress of tasks accurately and adapts to the continuously varying environment automatically. HAT tunes the weight of each phase of a map task and a reduce task according to the values observed in historical tasks, and uses these phase weights to calculate the progress of current tasks. Based on the accurately calculated progress, HAT estimates the remaining time of tasks and launches backup tasks for the tasks that have the longest remaining time. Experimental results show that HAT can improve the performance of MapReduce applications by up to 37% compared with Hadoop and up to 16% compared with the LATE scheduler.

39 citations

Patent
29 Aug 2002
TL;DR: A data mining system, method, and computer program product for a database management system that provide improved functionality over synchronous data mining systems, including features such as interruptible tasks and status output.
Abstract: A data mining system for a database management system, and a method and computer program product therefor, that provide improved functionality over synchronous data mining systems, including features such as interruptible tasks and status output. The data mining system comprises a plurality of data mining task objects operable to perform data mining functions, a data mining task queue table operable to maintain at least one queue to manage execution of the data mining tasks, and a data mining system task monitor operable to monitor execution of currently executing data mining tasks, examine the data mining task queue table, select at least one task for execution, dequeue data mining tasks from the data mining task queue table, and initiate execution of the dequeued tasks.
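The queue-table-plus-monitor pattern in this abstract can be sketched in a few lines. The class below is a toy in-memory stand-in (all names are hypothetical; the patent's system persists its queue in a database table and would run tasks on separate workers):

```python
import queue
import threading

class TaskMonitor:
    """Minimal sketch of the pattern: a monitor examines a task queue,
    dequeues the next task, initiates its execution, and records status
    output; an interrupt flag lets queued work be stopped early."""
    def __init__(self):
        self.tasks = queue.Queue()
        self.status = {}                 # task name -> status output
        self._stop = threading.Event()

    def submit(self, name, fn):
        self.status[name] = "queued"
        self.tasks.put((name, fn))

    def interrupt(self):
        """Stop processing further queued tasks (interruptible tasks)."""
        self._stop.set()

    def run(self):
        # A real monitor would loop on a thread; this sketch drains
        # the queue inline for simplicity.
        while not self.tasks.empty() and not self._stop.is_set():
            name, fn = self.tasks.get()
            self.status[name] = "running"
            fn()
            self.status[name] = "done"
```

Submitting a task, running the monitor, and then reading `status` shows the queued → running → done lifecycle the abstract describes.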

39 citations

Journal ArticleDOI
TL;DR: The results show that the proposed offloading strategy converges quickly, and that user cost is less sensitive to the number of users, vehicle speed, and MEC computing power than under other offloading schemes.
Abstract: With the rapid increase in vehicles, the explosive growth of data traffic, and the increasing shortage of spectrum resources, existing task offloading schemes perform poorly and on-board terminals cannot compute efficiently. Therefore, this article proposes a task offloading strategy based on reinforcement learning for the edge computing architecture of the Internet of Vehicles. First, the system architecture of the Internet of Vehicles is designed: the Road Side Unit receives vehicle data in its community and transmits it to a Mobile Edge Computing (MEC) server for analysis, while the control center collects all vehicle information. Then, the computation model, communication model, interference model, and privacy constraints are constructed to ensure that task offloading in the Internet of Vehicles is well posed. Finally, minimizing the user cost function is taken as the objective, and a double-layer deep Q-network, a deep reinforcement learning algorithm, is used to solve the problem under the real-time changes of network state caused by user movement. The results show that the proposed offloading strategy achieves fast convergence. Moreover, compared with other offloading schemes, user cost is least sensitive to the number of users, vehicle speed, and MEC computing power. The proposed strategy also achieves the highest task offloading rate and better overall performance, making it well suited to Internet of Vehicles scenarios.
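The core of the approach is learning an offloading decision (compute locally vs. offload to the MEC server) that minimizes user cost while the network state changes. This is not the paper's double-layer deep Q-network, but the underlying double-estimator idea can be sketched in tabular form with a toy cost table (all names, states, and numbers are hypothetical):

```python
import random

def double_q_offload(cost, n_states, n_actions, episodes=2000,
                     lr=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular double Q-learning for a toy offloading choice
    (e.g. action 0 = compute locally, action 1 = offload to MEC).
    `cost(s, a)` is the immediate user cost; lower is better.
    Two estimators decouple action selection from evaluation,
    reducing the overestimation bias of plain Q-learning."""
    random.seed(seed)
    QA = [[0.0] * n_actions for _ in range(n_states)]
    QB = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = random.randrange(n_states)            # observed network state
        if random.random() < eps:                 # epsilon-greedy exploration
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda x: QA[s][x] + QB[s][x])
        r = -cost(s, a)                           # reward = negative user cost
        s2 = random.randrange(n_states)           # state drifts with mobility
        if random.random() < 0.5:                 # update one estimator,
            a_star = max(range(n_actions), key=lambda x: QA[s2][x])
            QA[s][a] += lr * (r + gamma * QB[s2][a_star] - QA[s][a])
        else:                                     # evaluate with the other
            a_star = max(range(n_actions), key=lambda x: QB[s2][x])
            QB[s][a] += lr * (r + gamma * QA[s2][a_star] - QB[s][a])
    return [max(range(n_actions), key=lambda x: QA[s][x] + QB[s][x])
            for s in range(n_states)]
```

With a two-state toy cost table (good channel: offloading is cheap; congested channel: local computing is cheap), the learned policy offloads only when the channel is good.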

39 citations


Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    10
2021    695
2020    712
2019    784
2018    721
2017    565