Topic

Task (computing)

About: Task (computing) is a research topic. Over its lifetime, 9,718 publications have been published within this topic, receiving 129,364 citations.


Papers
Journal ArticleDOI
TL;DR: This work addresses the scheduling of dynamically arriving real-time tasks with primary-backup (PB) fault-tolerance requirements onto a set of processors, such that both versions of each task are feasible in the schedule.
Abstract: Many time-critical applications require dynamic scheduling with predictable performance. Tasks corresponding to these applications have deadlines that must be met despite the presence of faults. In this paper, we propose an algorithm to dynamically schedule arriving real-time tasks with resource and fault-tolerance requirements onto multiprocessor systems. The tasks are assumed to be nonpreemptable, and each task has two copies (versions) that are mutually excluded in space, as well as in time, in the schedule, to handle permanent processor failures and to obtain better performance, respectively. Our algorithm can tolerate more than one fault at a time, and employs performance-improving techniques such as 1) the distance concept, which decides the relative position of the two copies of a task in the task queue, 2) flexible backup overloading, which introduces a trade-off between the degree of fault tolerance and performance, and 3) resource reclaiming, which reclaims resources both from deallocated backups and from early-completing tasks. We quantify, through simulation studies, the effectiveness of each of these techniques in improving the guarantee ratio, defined as the percentage of tasks arriving in the system whose deadlines are met. We also compare, through simulation studies, the performance of our algorithm with that of the best-known algorithm for the problem, and show analytically the importance of the distance parameter in fault-tolerant dynamic scheduling in multiprocessor real-time systems.

167 citations
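
The primary/backup scheme and distance parameter described above can be made concrete with a toy example. The following Python sketch is a simplified illustration under assumed slotted timing, not the paper's algorithm: it places each task's backup on a different processor at least `distance` slots after the primary and rejects tasks whose backup would miss the deadline; backup overloading and resource reclaiming are omitted.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    exec_time: int
    deadline: int

@dataclass
class Processor:
    pid: int
    busy_until: int = 0  # next free time slot on this processor

def schedule_pb(task, processors, distance):
    """Place the primary on the earliest-free processor and the backup on a
    different processor, starting at least `distance` slots later."""
    procs = sorted(processors, key=lambda p: p.busy_until)
    primary, backup = procs[0], procs[1]
    p_start = primary.busy_until
    b_start = max(backup.busy_until, p_start + distance)
    if b_start + task.exec_time > task.deadline:  # backup must also meet the deadline
        return None                               # reject: task cannot be guaranteed
    primary.busy_until = p_start + task.exec_time
    backup.busy_until = b_start + task.exec_time
    return (primary.pid, p_start), (backup.pid, b_start)

procs = [Processor(0), Processor(1), Processor(2)]
for t in [Task("T1", 3, 12), Task("T2", 2, 10), Task("T3", 4, 9)]:
    result = schedule_pb(t, procs, distance=2)
    print(t.name, result if result else "REJECTED")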

Proceedings ArticleDOI
24 Sep 2012
TL;DR: This paper comprehensively characterizes the job/task load and host load in a real-world production data center at Google Inc., using a detailed trace of over 25 million tasks across over 12,500 hosts.
Abstract: A new era of Cloud computing has emerged, but the characteristics of Cloud load in data centers are not perfectly clear. Yet this characterization is critical for the design of novel Cloud job and resource management systems. In this paper, we comprehensively characterize the job/task load and host load in a real-world production data center at Google Inc. We use a detailed trace of over 25 million tasks across over 12,500 hosts. We study the differences between a Google data center and other Grid/HPC systems, from the perspective of both workload (w.r.t. jobs and tasks) and host load (w.r.t. machines). In particular, we study the job length, job submission frequency, and resource utilization of jobs in the different systems, and also investigate statistics of machines' maximum load, queue state, and relative usage levels, across different job priorities and resource attributes. We find that the Google data center exhibits finer-grained resource allocation with respect to CPU and memory than Grid/HPC systems. Google jobs are submitted at a much higher frequency and are much shorter than Grid jobs. As such, Google host load exhibits higher variance and noise.

167 citations
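
The kind of trace characterization described above (job length, submission frequency, per-host load variance) can be sketched in a few lines of Python with pandas. The miniature trace and its column names below are invented for illustration; the real Google trace has a richer schema (events, priorities, normalized resource units) and millions of rows.

import pandas as pd

# Hypothetical miniature trace; columns are assumptions for this sketch.
trace = pd.DataFrame({
    "timestamp": [0, 120, 3700, 3900, 7300],   # seconds
    "job_id":    ["j1", "j2", "j2", "j3", "j4"],
    "host_id":   ["h1", "h1", "h2", "h2", "h1"],
    "cpu_usage": [0.20, 0.55, 0.60, 0.15, 0.80],
    "runtime":   [30, 300, 280, 45, 60],        # task runtime, seconds
})

# Job-level view: length and submission frequency.
job_len = trace.groupby("job_id")["runtime"].sum()
print("median job length (s):", job_len.median())
per_hour = trace.groupby(trace["timestamp"] // 3600)["job_id"].nunique()
print("mean job submissions/hour:", per_hour.mean())

# Host-level view: load variance ("noise") was a key finding above.
host_cpu = trace.groupby("host_id")["cpu_usage"]
print("per-host CPU-load variance:", host_cpu.var().to_dict())
print("max observed host load:", host_cpu.max().max())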

Proceedings ArticleDOI
18 Nov 2002
TL;DR: A simulation-based evaluation of an initial MSSP implementation shows speedups of up to 1.7 (harmonic mean 1.25) on the SPEC2000 integer benchmarks.
Abstract: Master/Slave Speculative Parallelization (MSSP) is an execution paradigm for improving the execution rate of sequential programs by parallelizing them speculatively for execution on a multiprocessor. In MSSP one processor - the master - executes an approximate version of the program to compute selected values that the full program's execution is expected to compute. The master's results are checked by slave processors that execute the original program. This validation is parallelized by cutting the program's execution into tasks. Each slave uses its predicted inputs (as computed by the master) to validate the input predictions of the next task, inductively validating the entire execution. The performance of MSSP is largely determined by the execution rate of the approximate program. Since approximate code has no correctness requirements (in essence it is a software value predictor), it can be optimized more effectively than traditionally generated code. It is free to sacrifice correctness in the uncommon case to maximize performance in the common case. A simulation-based evaluation of an initial MSSP implementation achieves speedups of up to 1.7 (harmonic mean 1.25) on the SPEC2000 integer benchmarks. Performance is currently limited by the effectiveness with which our current automated infrastructure approximates programs, which can likely be improved significantly.

166 citations
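
The inductive validation chain in the abstract above can be illustrated with a toy, sequential Python sketch. All names and functions here are illustrative assumptions; the paper targets real parallel execution on a multiprocessor, while this sketch merely shows how each task's exact result is checked against the master's prediction for the next task.

def exact_task(state):          # one task of the original program
    return state * 2 + 1

def approx_task(state):         # master's distilled version: a value
    if state > 50:              # predictor, deliberately wrong for large
        return state * 2        # states to force one misprediction below
    return state * 2 + 1

initial, n_tasks = 3, 5

# Master: run ahead, emitting a predicted input at every task boundary.
predictions = [initial]
for _ in range(n_tasks):
    predictions.append(approx_task(predictions[-1]))

# Slaves: each validates one task; in MSSP these would run in parallel.
for i in range(n_tasks):
    actual = exact_task(predictions[i])
    if actual != predictions[i + 1]:
        print(f"misprediction at task {i}: squash and restart from here")
        break
else:
    print("all input predictions validated; execution committed")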

Proceedings ArticleDOI
17 Aug 2014
TL;DR: This paper describes DREAM, an adaptive measurement framework that dynamically adjusts the resources devoted to each measurement task while ensuring a user-specified level of accuracy.
Abstract: Software-defined networks can enable a variety of concurrent, dynamically instantiated measurement tasks that provide fine-grained visibility into network traffic. Recently, there have been many proposals to configure TCAM counters in hardware switches to monitor traffic. However, TCAM memory at switches is fundamentally limited, and the accuracy of the measurement tasks is a function of the resources devoted to them on each switch. This paper describes an adaptive measurement framework, called DREAM, that dynamically adjusts the resources devoted to each measurement task while ensuring a user-specified level of accuracy. Since the trade-off between resource usage and accuracy can depend upon the type of task, its parameters, and traffic characteristics, DREAM does not assume an a priori characterization of this trade-off, but instead dynamically searches for a resource allocation that is sufficient to achieve the desired level of accuracy. A prototype implementation and simulations with three network-wide measurement tasks (heavy hitter, hierarchical heavy hitter, and change detection) and diverse traffic show that DREAM can support more concurrent tasks with higher accuracy than several alternatives.

166 citations
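
The adaptive search the abstract describes, growing a task's allocation when its accuracy falls below target and reclaiming counters when it can afford to give some back, can be sketched as a simple feedback loop. The budget, step size, accuracy estimator, and task parameters below are all illustrative placeholders, not DREAM's actual policy.

TCAM_BUDGET = 1500   # total counters available across tasks (assumed)
STEP = 50            # allocation granularity (assumed)

def estimated_accuracy(task, counters):
    # Placeholder: accuracy grows with counters and saturates at 1.0.
    return min(1.0, counters / task["full_accuracy_counters"])

tasks = [
    {"name": "heavy-hitter",    "target": 0.90, "full_accuracy_counters": 400, "alloc": 100},
    {"name": "hierarchical-hh", "target": 0.80, "full_accuracy_counters": 600, "alloc": 100},
    {"name": "change-detect",   "target": 0.85, "full_accuracy_counters": 300, "alloc": 100},
]

for epoch in range(20):  # periodic re-balancing, as traffic and tasks change
    for t in tasks:
        used = sum(x["alloc"] for x in tasks)
        if estimated_accuracy(t, t["alloc"]) < t["target"]:
            if used + STEP <= TCAM_BUDGET:
                t["alloc"] += STEP                      # under target: grow
        elif estimated_accuracy(t, t["alloc"] - STEP) >= t["target"]:
            t["alloc"] -= STEP                          # still accurate after shrinking: give back

for t in tasks:
    print(t["name"], t["alloc"], round(estimated_accuracy(t, t["alloc"]), 2))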

Patent
Dale Boss, David G. Hicks
13 Jun 1995
TL;DR: In this article, the authors present a method and apparatus for task-based application sharing in a graphical user interface such as Windows®, where a host user designates an application to be shared, referred to as a shared application, and another user at a remote location shares control of the shared application.
Abstract: A method and apparatus for task-based application sharing in a graphical user interface such as Windows® are provided. A host user designates an application to be shared, referred to as a shared application (14 in Fig. 2). Another user at a remote location, referred to as the client user, shares control of the shared application (16 in Fig. 2). The shared application runs and executes only on the host system. The client system has a rectangular area on the display screen within which all shared applications are displayed (Fig. 7). The client system renders an image of all windows of a shared application, including pop-up dialogs and menus, without also displaying unshared applications. Further, both the client and the host users can continue to perform normal operations outside of the shared rectangular area, and the host user defines the tasks which are to be shared.

165 citations
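
The host-side behavior the patent describes, forwarding only window updates that belong to shared applications while unshared windows never leave the host, can be sketched as a simple event filter. The event and message shapes in this Python sketch are invented for illustration and do not reflect the patent's actual implementation.

shared_apps = {"spreadsheet"}          # host user designates shared tasks

def host_filter(window_events):
    """Forward only events from shared applications to the client."""
    for ev in window_events:
        if ev["app"] in shared_apps:   # unshared apps stay private to the host
            yield {"app": ev["app"], "window": ev["window"], "pixels": ev["pixels"]}

events = [
    {"app": "spreadsheet", "window": "main",         "pixels": "..."},
    {"app": "email",       "window": "inbox",        "pixels": "..."},  # never forwarded
    {"app": "spreadsheet", "window": "popup-dialog", "pixels": "..."},  # dialogs shared too
]

for msg in host_filter(events):
    print("render in client's shared area:", msg["app"], msg["window"])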


Performance Metrics

No. of papers in the topic in previous years:

Year   Papers
2022   10
2021   695
2020   712
2019   784
2018   721
2017   565