
Showing papers on "Task (computing) published in 2009"


Journal ArticleDOI
TL;DR: This work presents a scalable approach to dynamically allocating a swarm of homogeneous robots to multiple tasks, which are to be performed in parallel, following a desired distribution, and employs a decentralized strategy that requires no communication among robots.
Abstract: We present a scalable approach to dynamically allocating a swarm of homogeneous robots to multiple tasks, which are to be performed in parallel, following a desired distribution. We employ a decentralized strategy that requires no communication among robots. It is based on the development of a continuous abstraction of the swarm obtained by modeling population fractions and defining the task allocation problem as the selection of rates of robot ingress and egress to and from each task. These rates are used to determine probabilities that define stochastic control policies for individual robots, which, in turn, produce the desired collective behavior. We address the problem of computing rates to achieve fast redistribution of the swarm subject to constraint(s) on switching between tasks at equilibrium. We present several formulations of this optimization problem that vary in the precedence constraints between tasks and in their dependence on the initial robot distribution. We use each formulation to optimize the rates for a scenario with four tasks and compare the resulting control policies using a simulation in which 250 robots redistribute themselves among four buildings to survey the perimeters.
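The rate-to-probability mechanism described in this abstract can be sketched in a few lines. Below is a hypothetical toy version (the rate design, names, and parameters are invented for illustration, not taken from the paper): choosing the switching rate from task i to task j proportional to the desired fraction for task j makes the desired distribution the stationary distribution of each robot's Markov chain, so the swarm converges to it with no inter-robot communication.

```python
import random

def make_rates(desired, rate_scale=1.0):
    """Transition rate from task i to task j. Picking the rate
    proportional to desired[j] is one simple design whose stationary
    distribution equals `desired`; the paper instead optimizes rates
    for fast redistribution."""
    n = len(desired)
    return [[0.0 if i == j else rate_scale * desired[j] for j in range(n)]
            for i in range(n)]

def step(assignment, rates, dt, rng):
    """Each robot independently switches task with probability rate*dt
    (valid approximation only for small dt)."""
    for r, i in enumerate(assignment):
        for j, k in enumerate(rates[i]):
            if k > 0.0 and rng.random() < k * dt:
                assignment[r] = j
                break

def fractions(assignment, n_tasks):
    counts = [0] * n_tasks
    for i in assignment:
        counts[i] += 1
    return [c / len(assignment) for c in counts]

rng = random.Random(0)
desired = [0.4, 0.3, 0.2, 0.1]      # target split over four tasks
rates = make_rates(desired)
robots = [0] * 250                   # all 250 robots start at task 0
for _ in range(2000):
    step(robots, rates, dt=0.05, rng=rng)
print(fractions(robots, 4))          # close to `desired`
```

The per-robot policy needs only the rate table, which matches the abstract's point that the stochastic control policies are decentralized.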

263 citations


Patent
26 Jun 2009
TL;DR: In this article, a resource broker receives a request for a computing task that is to be performed from a customer, and the resource broker selects one of the cloud computing providers to perform at least a part of the computing task.
Abstract: Embodiments for interacting with cloud computing providers are disclosed. In accordance with at least one embodiment, a resource broker receives a request for a computing task that is to be performed from a customer. The resource broker selects one of the cloud computing providers to perform at least a part of the computing task. In turn, the resource broker may obtain a gain from performance of the at least one part of the computing task by the cloud computing provider.

253 citations


Proceedings ArticleDOI
04 Apr 2009
TL;DR: This paper presents a theory of behavior along the multitasking continuum, from concurrent tasks with rapid switching to sequential tasks with longer time between switching, and unifies several theoretical effects to better understand and predict multitasking behavior.
Abstract: Multitasking in user behavior can be represented along a continuum in terms of the time spent on one task before switching to another. In this paper, we present a theory of behavior along the multitasking continuum, from concurrent tasks with rapid switching to sequential tasks with longer time between switching. Our theory unifies several theoretical effects - the ACT-R cognitive architecture, the threaded cognition theory of concurrent multitasking, and the memory-for-goals theory of interruption and resumption - to better understand and predict multitasking behavior. We outline the theory and discuss how it accounts for numerous phenomena in the recent empirical literature.

243 citations


Journal ArticleDOI
25 Oct 2009
TL;DR: The Task Parallel Library is a library for .NET that makes it easy to take advantage of potential parallelism in a program and relies heavily on generics and delegate expressions to provide custom control structures expressing structured parallelism such as map-reduce in user programs.
Abstract: The Task Parallel Library (TPL) is a library for .NET that makes it easy to take advantage of potential parallelism in a program. The library relies heavily on generics and delegate expressions to provide custom control structures expressing structured parallelism such as map-reduce in user programs. The library implementation is built around the notion of a task as a finite CPU-bound computation. To capture the ubiquitous apply-to-all pattern the library also introduces the novel concept of a replicable task. Tasks and replicable tasks are assigned to threads using work stealing techniques, but unlike traditional implementations based on the THE protocol, the library uses a novel data structure called a 'duplicating queue'. A surprising feature of duplicating queues is that they have sequentially inconsistent behavior on architectures with weak memory models, but capture this non-determinism in a benign way by sometimes duplicating elements. TPL ships as part of the Microsoft Parallel Extensions for the .NET framework 4.0, and forms the foundation of Parallel LINQ queries (however, note that the productized TPL library may differ in significant ways from the basic design described in this article).
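The duplicating queue's exact semantics are TPL-specific and not given here; what can be illustrated is the classic work-stealing discipline it builds on. The single-threaded sketch below (all names hypothetical) shows the owner popping from one end of its deque while idle workers steal from the other:

```python
import collections
import random

class WorkStealingDeque:
    """Classic work-stealing deque: the owning worker pushes and pops
    at the bottom (LIFO); thieves steal single tasks from the top
    (FIFO). TPL's 'duplicating queue' differs by tolerating benign
    duplication on weak memory models; this shows only the baseline."""
    def __init__(self):
        self._d = collections.deque()
    def push(self, task):            # owner only
        self._d.append(task)
    def pop(self):                   # owner only
        return self._d.pop() if self._d else None
    def steal(self):                 # any thief
        return self._d.popleft() if self._d else None

def run(n_workers=4, n_tasks=20, seed=1):
    """Toy round-robin scheduler: worker 0 owns all tasks initially;
    a worker with an empty deque tries to steal from a random victim."""
    rng = random.Random(seed)
    deques = [WorkStealingDeque() for _ in range(n_workers)]
    for t in range(n_tasks):
        deques[0].push(t)
    done = []
    while len(done) < n_tasks:
        for w in range(n_workers):
            task = deques[w].pop()
            if task is None:                      # nothing local
                task = deques[rng.randrange(n_workers)].steal()
            if task is not None:
                done.append(task)
    return done

print(sorted(run()) == list(range(20)))  # every task runs exactly once
```

Stealing from the opposite end is the key design choice: it keeps the owner's hot, cache-friendly tasks local while handing thieves the oldest (often largest) work.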

230 citations


Patent
29 Sep 2009
TL;DR: In this paper, a system-level management unit generates a system processing task and makes a processing request to a task allocation unit of a user-level management unit, which schedules the system processing according to a procedure of the introduced user-level scheduling.
Abstract: A system-level management unit generates a system processing task and makes a processing request to a task allocation unit of a user-level management unit. The task allocation unit schedules the system processing according to a procedure of the introduced user-level scheduling. A processing unit assigned to execute the system processing sends a notification of acceptability of the system processing to a main processing unit, by halting an application task at an appropriate time or when processing of the current task is completed. When the notification is received within the time limit for execution, the system-level management unit has the processing unit start the system processing.

163 citations


Patent
07 Jan 2009
TL;DR: In this article, a characterization module within a controller executes a characterization procedure by performing page program and block erase operations on one or more NVM devices in an array and storing execution time data of the operations in a calibration table.
Abstract: Disclosed herein are systems and methods that recognize and recapture potentially unused processing time in typical page program and block erase operations in non-volatile memory (NVM) devices. In one embodiment, a characterization module within a controller executes a characterization procedure by performing page program and block erase operations on one or more NVM devices in an array and storing execution time data of the operations in a calibration table. The procedure may be executed at start-up and/or periodically so that the time values are reflective of the actual physical condition of the individual NVM devices. A task manager uses the stored time values to estimate the time needed for completing certain memory operations in its task table. Based on the estimated time for completion, the task manager assigns tasks to be executed during page program and/or block erase cycles, so that otherwise unused processing time can be utilized.

135 citations


Patent
10 Jun 2009
TL;DR: In this article, a signal indicative of an environmental condition is received at the first processor, and code associated with the environmental condition is identified based at least in part on the signal.
Abstract: Servo-related tasks are performed at a first processor in a disk drive. A signal indicative of an environmental condition is received at the first processor, and code associated with the environmental condition is identified based at least in part on the signal. A second processor in the disk drive is caused to execute the code associated with the environmental condition, and a responsive task is performed at the first processor based at least in part on the executed code associated with the environmental condition.

122 citations


Patent
Ronald P. Doyle1, David L. Kaminsky1
18 Dec 2009
TL;DR: In this article, a computing device associated with a cloud computing environment identifies a first worker cloud computing device from a group of worker cloud computing devices with available resources sufficient to meet required resources for a highest-priority task.
Abstract: A computing device associated with a cloud computing environment identifies a first worker cloud computing device from a group of worker cloud computing devices with available resources sufficient to meet required resources for a highest-priority task associated with a computing job including a group of prioritized tasks. A determination is made as to whether an ownership conflict would result from an assignment of the highest-priority task to the first worker cloud computing device based upon ownership information associated with the computing job and ownership information associated with at least one other task assigned to the first worker cloud computing device. The highest-priority task is assigned to the first worker cloud computing device in response to determining that the ownership conflict would not result from the assignment of the highest-priority task to the first worker cloud computing device.

121 citations


Patent
30 Jun 2009
TL;DR: In this paper, a computing task is compiled for concurrent execution on a multiprocessor device, which includes multiple processors (44) that are capable of executing a first number of the PEs simultaneously.
Abstract: A computing method includes accepting a definition of a computing task (68), which includes multiple atomic Processing Elements (PEs - 76) having execution dependencies (80). Each execution dependency specifies that a respective first PE is to be executed before a respective second PE. The computing task is compiled for concurrent execution on a multiprocessor device (32), which includes multiple processors (44) that are capable of executing a first number of the PEs simultaneously, by arranging the PEs, without violating the execution dependencies, in an invocation data structure (90) including a second number of execution sequences (98) that is greater than one but does not exceed the first number. The multiprocessor device is invoked to run software code that executes the execution sequences in parallel responsively to the invocation data structure, so as to produce a result of the computing task.
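The arrangement step in this abstract can be illustrated with a level-based list schedule: compute each PE's topological depth, then spread the PEs of each level across the available execution sequences. This is a simplified stand-in for the patent's invocation data structure (function names and the execute-level-by-level assumption are mine, not the patent's):

```python
from collections import defaultdict, deque

def topo_levels(n, deps):
    """deps: list of (a, b) meaning PE a must execute before PE b.
    Returns each PE's depth (longest dependency chain ending at it)."""
    succ = defaultdict(list)
    indeg = [0] * n
    for a, b in deps:
        succ[a].append(b)
        indeg[b] += 1
    level = [0] * n
    q = deque(i for i in range(n) if indeg[i] == 0)
    seen = 0
    while q:
        u = q.popleft()
        seen += 1
        for v in succ[u]:
            level[v] = max(level[v], level[u] + 1)
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    assert seen == n, "dependency cycle"
    return level

def arrange(n, deps, n_seqs):
    """Spread the independent PEs of each level round-robin over
    n_seqs sequences; running all PEs of level k before level k+1
    (a barrier per level) then respects every dependency."""
    level = topo_levels(n, deps)
    by_level = defaultdict(list)
    for pe in range(n):
        by_level[level[pe]].append(pe)
    seqs = [[] for _ in range(n_seqs)]
    for lv in sorted(by_level):
        for k, pe in enumerate(by_level[lv]):
            seqs[k % n_seqs].append(pe)
    return seqs, level

# Diamond dependency graph on two sequences:
seqs, level = arrange(4, [(0, 1), (0, 2), (1, 3), (2, 3)], 2)
print(seqs)   # → [[0, 1, 3], [2]]
```

The number of sequences plays the role of the abstract's "second number": greater than one but never exceeding the processor count.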

105 citations


Proceedings ArticleDOI
01 Oct 2009
TL;DR: The obtained results suggest that the wall-following task, formulated as a pattern classification problem, is nonlinearly separable, a result that favors the MLP network if no memory of input patterns is taken into account.
Abstract: This paper reports results of an investigation on the degree of influence of short-term memory mechanisms on the performance of neural classifiers when applied to robot navigation tasks. In particular, we deal with the well-known strategy of navigating by “wall-following”. For this purpose, four standard neural architectures (Logistic Perceptron, Multilayer Perceptron, Mixture of Experts and Elman network) are used to associate different spatiotemporal sensory input patterns with four predetermined action categories. All stages of the experiments (data acquisition, selection and training of the architectures in a simulator, and their execution on a real mobile robot) are described. The obtained results suggest that the wall-following task, formulated as a pattern classification problem, is nonlinearly separable, a result that favors the MLP network if no memory of input patterns is taken into account. If short-term memory mechanisms are used, then even a linear network is able to perform the same task successfully.

90 citations


Patent
30 Sep 2009
TL;DR: In this paper, a workload request, requesting the execution of a software task on a virtual machine, is received, and the I/O requirements of the software task are matched to an optimal computer, in the computer environment, that has an I/O bandwidth capability best matching those requirements.
Abstract: Virtual machines are provisioned on computers in a computer environment based on input/output (I/O) requirements of software tasks. A workload request, requesting the execution of a software task on a virtual machine, is received. The I/O requirements of the software task are matched to an optimal computer, in the computer environment, that has an I/O bandwidth capability that best matches the I/O requirements of the software task. The software task is then routed to a virtual machine, on the optimal computer, for execution of the software task.
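The "best matches" selection reads like a best-fit policy: among hosts whose available I/O bandwidth covers the task's requirement, pick the one with the least surplus. A minimal sketch under that assumption (host names and bandwidth figures are invented):

```python
def best_fit_host(task_io_mbps, hosts):
    """Return the host whose available I/O bandwidth covers the
    task's requirement with the smallest surplus, or None if no host
    can satisfy it. `hosts` maps host name -> available MB/s.
    A best-fit reading of the patent's 'best matches' criterion."""
    feasible = {h: bw for h, bw in hosts.items() if bw >= task_io_mbps}
    if not feasible:
        return None
    return min(feasible, key=lambda h: feasible[h] - task_io_mbps)

hosts = {"hostA": 400.0, "hostB": 120.0, "hostC": 900.0}
print(best_fit_host(100.0, hosts))   # hostB: smallest surplus that fits
```

Best-fit keeps high-bandwidth hosts free for I/O-heavy tasks, which is presumably why "optimal" here is not simply "largest".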

Journal ArticleDOI
TL;DR: This paper proposes algorithms for computing batches of medium grained tasks with deadlines in pull-style volunteer computing environments, and develops models of unreliable workers based on analysis of trace data from an actual volunteer computing project.
Abstract: Internet based volunteer computing projects such as SETI@home are currently restricted to performing coarse grained, embarrassingly parallel master-worker style tasks. This is partly due to the “pull” nature of task distribution in volunteer computing environments, where workers request tasks from the master rather than the master assigning tasks to arbitrary workers. In this paper we propose algorithms for computing batches of medium grained tasks with deadlines in pull-style volunteer computing environments. We develop models of unreliable workers based on analysis of trace data from an actual volunteer computing project. These models are used to develop algorithms for task distribution in volunteer computing systems with a high probability of meeting batch deadlines. We develop algorithms for perfectly reliable workers, computation-reliable workers and unreliable workers. Finally, we demonstrate the effectiveness of the algorithms through simulations using traces from actual volunteer computing environments.
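One building block such deadline-aware algorithms need is a redundancy estimate: if a worker completes a task by the deadline with probability p, how many independent replicas reach a target success probability? The standard bound below is a textbook calculation, not the paper's trace-driven model:

```python
import math

def replicas_needed(p_complete, target):
    """Smallest r with 1 - (1 - p)^r >= target, assuming replicas are
    assigned to independent workers that each finish by the deadline
    with probability p_complete."""
    if not 0.0 < p_complete <= 1.0 or not 0.0 < target < 1.0:
        raise ValueError("probabilities must lie in (0, 1]")
    if p_complete == 1.0:
        return 1
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_complete))

# With 70%-reliable volunteers, 4 replicas give >= 99% per-task success:
print(replicas_needed(0.7, 0.99))   # → 4
```

In a pull-style system this only bounds per-task risk; meeting a whole batch deadline further requires that enough workers actually request tasks in time, which is what the paper's worker models capture.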

Journal ArticleDOI
TL;DR: A metric along with a mixed integer linear model and a heuristic decomposition method are proposed to solve this new job rotation problem in the assembly line worker assignment and balancing problem.

Journal ArticleDOI
TL;DR: This paper presents some preliminary results on a small light weight user level task management library called Wool, which is based on work stealing and yields performance that is comparable to that of the Intel TBB.
Abstract: This paper presents some preliminary results on a small light weight user level task management library called Wool. The Wool task scheduler is based on work stealing. The objective of the library is to provide a reasonably convenient programming interface (in particular by not forcing the programmer to write in continuation passing style) in ordinary C while still having a very low task creation overhead. Several task scheduling systems based on work stealing exist, but they are typically either programming languages like Cilk-5 or based on C++ like the Intel TBB or C# as in the Microsoft TPL. Our main conclusions are that such a direct style interface is indeed possible and yields performance that is comparable to that of the Intel TBB.

Patent
14 Oct 2009
TL;DR: In this article, a server-based files and tasks brokerage system and method is described, in which the server pushes the first notification to the requested mobile telephonic device upon confirming a connection.
Abstract: A server-based files and tasks brokerage system and method are disclosed. In response to receiving a request from a requesting computing device, the server posts the request to a request queue. The request is for a requested mobile telephonic device to perform a task. The server posts a first notification to a first notification queue, in response to receiving the request. The server pushes the first notification to the requested device upon confirming a connection. Upon detecting a first condition of the task being for the requested device to receive a file, the server transfers the file from a file repository. Upon detecting a second condition of the task being for the requested device to send a file, the server transfers the file to the file repository. In response to confirming task completion, the server posts a second notification to a second notification queue associated with the requesting device.

Journal ArticleDOI
TL;DR: Results showed that the participants were able to adjust the degree of parallel processing as instructed in a flexible manner, and a modified version of the central capacity sharing (CCS) model was proposed that accounts also for crosstalk effects in dual tasks.
Abstract: The goal of the present study was to investigate the costs and benefits of different degrees of strategic parallel processing between two tasks. In a series of experiments with the dual-task flanker paradigm, participants were either instructed to process the tasks serially or in parallel, or, in a control condition, they received no specific instruction. Results showed that the participants were able to adjust the degree of parallel processing as instructed in a flexible manner. Parallel processing of the two tasks repeatedly led to large costs in performance and to high crosstalk effects compared to more serial processing. In spite of the costs, a moderate degree of parallel processing was preferred in the condition with no specific instruction. This pattern of results was observed if the same task set was used for the two tasks, but also if different ones were applied. Furthermore, a modified version of the central capacity sharing (CCS) model (Tombu and Jolicoeur in J Exp Psychol Hum Percept Perform 29:3–18, 2003) was proposed that accounts also for crosstalk effects in dual tasks. The modified CCS model was then evaluated by fitting it successfully to the present data.

Journal ArticleDOI
TL;DR: In this article, the authors examined preschoolers' ability to follow instructions in the presence or absence of a real dog while executing a variety of motor skills tasks, including modeling, competition, and tandem tasks.
Abstract: The purpose of this study was to examine preschoolers' (n = 11) ability to follow instructions in the presence or absence of a real dog while executing a variety of motor skills tasks. These tasks were divided into one of three general classifications: 1) Modeling Tasks: the children were asked to emulate the behavior of a model, 2) Competition Tasks: the children were asked to do the task faster than a competitor, and 3) Tandem Tasks: the children were asked to do the tasks at the same time as a co-performer. Typical and Identified (language impaired) preschool children were randomly assigned to perform five tasks of each general classification alone, with a human, with a real dog, and with a stuffed dog that was similar in size and appearance to the live dog. Two independent raters rated each child's adherence to instructions (interrater reliability = 0.99) on a 7-point scale. A significant interaction between task classification and type of co-performer revealed that in the Modeling Tasks the p...

Proceedings Article
10 May 2009
TL;DR: A novel approach is devised that learns a stochastic mapping between tasks and extends existing work on MDP homomorphisms to present theoretical guarantees for the quality of a transferred value function.
Abstract: The field of transfer learning aims to speed up learning across multiple related tasks by transferring knowledge between source and target tasks. Past work has shown that when the tasks are specified as Markov Decision Processes (MDPs), a function that maps states in the target task to similar states in the source task can be used to transfer many types of knowledge. Current approaches for autonomously learning such functions are inefficient or require domain knowledge and lack theoretical guarantees of performance. We devise a novel approach that learns a stochastic mapping between tasks. Using this mapping, we present two algorithms for autonomous transfer learning -- one that has strong convergence guarantees and another approximate method that learns online from experience. Extending existing work on MDP homomorphisms, we present theoretical guarantees for the quality of a transferred value function.

Journal ArticleDOI
TL;DR: In this paper, the authors present distributed and adaptive algorithms for motion coordination of a group of m autonomous vehicles in a convex environment with bounded velocity and must service demands whose time of arrival, location and on-site service are stochastic; the objective is to minimize the expected system time (wait plus service) of the demands.
Abstract: In this paper we present distributed and adaptive algorithms for motion coordination of a group of m autonomous vehicles. The vehicles operate in a convex environment with bounded velocity and must service demands whose time of arrival, location and on-site service are stochastic; the objective is to minimize the expected system time (wait plus service) of the demands. The general problem is known as the m-vehicle Dynamic Traveling Repairman Problem (m-DTRP). The best previously known control algorithms rely on centralized a priori task assignment and are not robust against changes in the environment, e.g. changes in load conditions; therefore, they are of limited applicability in scenarios involving ad-hoc networks of autonomous vehicles operating in a time-varying environment. First, we present a new class of policies for the 1-DTRP problem that: (i) are provably optimal both in light- and heavy-load conditions, and (ii) are adaptive, in particular, they are robust against changes in load conditions. Second, we show that partitioning policies, whereby the environment is partitioned among the vehicles and each vehicle follows a certain set of rules in its own region, are optimal in heavy-load conditions. Finally, by combining the new class of algorithms for the 1-DTRP with suitable partitioning policies, we design distributed algorithms for the m-DTRP problem that (i) are spatially distributed, scalable to large networks, and adaptive to network changes, and (ii) are within a constant factor of optimal in heavy-load conditions and stabilize the system in any load condition. Simulation results are presented and discussed.

Patent
04 Mar 2009
TL;DR: In this paper, a system and method of allocating a job submission for a computational task to a set of distributed server farms, each having at least one processing entity, is presented.
Abstract: A system and method of allocating a job submission for a computational task to a set of distributed server farms, each having at least one processing entity, comprising: receiving a workload request from at least one processing entity for submission to at least one of the set of distributed server farms; using at least one or more conditions associated with the computational task for accepting or rejecting at least one of the server farms to which the job submission is to be allocated; determining a server farm that can optimize the one or more conditions; and dispatching the job submission to the server farm which optimizes the at least one of the one or more conditions associated with the computational task and used for selecting the at least one of the server farms.

Proceedings ArticleDOI
31 May 2009
TL;DR: The Borowsky-Gafni (BG) simulation is amended to result in the Extended-BG-simulation, an extension that yields a full characterization of t-resilient solvability, demonstrating the convenience that the characterization provides and proving a new equivalence result.
Abstract: A distributed task T on n processors is an input/output relation between a collection of processors' inputs and outputs. While all tasks are solvable if no processor may ever crash, the FLP result revealed that the possibility of a failure of just a single processor precludes a solution to the task of consensus. That is, consensus is not solvable 1-resiliently. Yet, some nontrivial tasks are wait-free solvable, i.e. (n-1)-resiliently. What tasks are solvable if at most t processors, 0 < t < n, may crash? We amend the Borowsky-Gafni (BG) simulation to obtain the Extended-BG-simulation, which yields a full characterization of t-resilient solvability. Using this characterization, we show that t-resilient solvability, for t > 1 and n > 2, is undecidable, by a simple reduction to the undecidability of the wait-free solvability of 3-processor tasks.

Patent
29 Jun 2009
TL;DR: In this article, the authors present a system for automated execution of manual tasks executed on an application server by generating a manual action request at the application server, including at least one parameter.
Abstract: Implementations of the present disclosure provide for automation of manual tasks executed on an application server. Implementations include generating a manual action request at the application server, the manual action request including at least one parameter, transmitting the manual action request to an administrator computer, determining that an automation module corresponding to the manual action request exists within a database based on the at least one parameter, providing the automation module to the application server, and executing the automation module on the application server to resolve a task corresponding to the manual action request.

Patent
Jocelyn Luke Martin1
09 Jun 2009
TL;DR: In this article, a method and system are described for a distributed technical computing environment that distributes technical computing tasks from a technical computing client to technical computing workers for execution of the tasks on one or more computer systems.
Abstract: A method and system is disclosed for providing a distributed technical computing environment for distributing technical computing tasks from a technical computing client to technical computing workers for execution of the tasks on one or more computer systems. Tasks can be defined on a technical computing client, and the tasks organized into jobs. The technical computing client can directly distribute tasks to one or more technical computing workers. Furthermore, the technical computing client can submit tasks, or jobs comprising tasks, to an automatic task distribution mechanism that distributes the tasks automatically to one or more technical computing workers providing technical computing services. The technical computing worker performs technical computing of tasks, and the results of the execution of tasks may be provided to the technical computing client. Data associated with the tasks is managed by a programmable interface associated with a data storage repository. The interface allows the various entities of the distributed technical computing environment to access data services performable by the interface or by a file system or a database and database management system associated with the data.

Patent
30 Oct 2009
TL;DR: In this article, a local agent polls a server for a task request at a polling interval scheduled by a schedule timer in accordance with a set of local agent and remote client preferences.
Abstract: Systems and methods for remote file access are disclosed. According to an embodiment, a local agent polls a server for a task request at a polling interval scheduled by a schedule timer in accordance with a set of local agent and remote client preferences. The local agent is responsible for executing a task from the task request and causing a file to be uploaded to the server. The local agent uses a task processor for polling a server, a schedule timer for controlling polling, and one or more protocol stacks, such as TCP/IP and SOAP, for communicating with the server. The local agent can also interface with a MAPI database for message delivery.

Proceedings ArticleDOI
27 Aug 2009
TL;DR: A conservative dataflow model for a task scheduled by PBS, which is a priority-based budget scheduler that additionally associates a priority with every task, is constructed, and experiments confirmed that a significantly higher guaranteed minimum throughput can be obtained with PBS instead of TDM schedulers and that a conservative bound on the guaranteed throughput of the task graph can be computed with a dataflow model.
Abstract: Currently, the guaranteed throughput of a stream processing application, mapped on a multi-processor system, can be computed with a conservative dataflow model, if only time division multiplex (TDM) schedulers are applied. A TDM scheduler is a budget scheduler. Budget schedulers can be characterized by two parameters: budget and replenishment interval. This paper introduces a priority-based budget scheduler (PBS), which is a budget scheduler that additionally associates a priority with every task. PBS improves the guaranteed minimum throughput of a stream processing application compared to TDM, given the same amount of resources. We construct a conservative dataflow model for a task scheduled by PBS. This dataflow model generalizes previous work, because it is valid for a sequence of execution times instead of one execution time per task which results in an improved accuracy of the model. Given this dataflow model, we can compute the guaranteed minimum throughput of the task graph that implements the stream processing application. Experiments confirm that a significantly higher guaranteed minimum throughput of the task graph can be obtained with PBS instead of TDM schedulers and that a conservative bound on the guaranteed throughput of the task graph can be computed with a dataflow model. Furthermore, our bound on the guaranteed throughput of the task graph is accurate, if the buffer capacities in the task graph do not affect the guaranteed throughput.
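The two budget-scheduler parameters (budget, replenishment interval) plus PBS's per-task priority can be made concrete with a slot-based simulation. The sketch below is my own simplification (one shared resource, unit-time slots, always-ready tasks), not the paper's dataflow model:

```python
def pbs_schedule(tasks, horizon):
    """tasks: list of dicts with 'name', 'budget' (slots per interval),
    'interval' (replenishment period), and 'priority' (lower number =
    higher priority). Returns the task served in each slot, or None.
    Illustrative sketch of a priority-based budget scheduler (PBS)."""
    remaining = {t["name"]: 0 for t in tasks}
    timeline = []
    for slot in range(horizon):
        for t in tasks:                      # replenish at interval start
            if slot % t["interval"] == 0:
                remaining[t["name"]] = t["budget"]
        ready = [t for t in tasks if remaining[t["name"]] > 0]
        if ready:
            chosen = min(ready, key=lambda t: t["priority"])
            remaining[chosen["name"]] -= 1
            timeline.append(chosen["name"])
        else:
            timeline.append(None)
    return timeline

tasks = [
    {"name": "audio", "budget": 2, "interval": 4, "priority": 0},
    {"name": "video", "budget": 1, "interval": 4, "priority": 1},
]
print(pbs_schedule(tasks, 8))
# → ['audio', 'audio', 'video', None, 'audio', 'audio', 'video', None]
```

Each task still gets its guaranteed budget every interval (as under TDM), but the high-priority task receives its slots as early as possible within the interval, which is what tightens the throughput guarantee.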

Proceedings ArticleDOI
23 May 2009
TL;DR: Experimental results show that the proposed polynomial-time algorithms for energy-aware task partitioning and processing unit allocation are effective for the minimization of the overall energy consumption.
Abstract: Adopting multiple processing units to enhance the computing capability or reduce the power consumption has been widely accepted for designing modern computing systems. Such configurations impose challenges on energy efficiency in hardware and software implementations. This work targets power-aware and energy-efficient task partitioning and processing unit allocation for periodic real-time tasks on a platform with a library of applicable processing unit types. Each processing unit type has its own power consumption characteristics for maintaining its activeness and executing jobs. This paper proposes polynomial-time algorithms for energy-aware task partitioning and processing unit allocation. The proposed algorithms first decide how to assign tasks onto processing unit types to minimize the energy consumption, and then allocate processing units to fit the demands. The proposed algorithms for systems without limitation on the allocated processing units are shown with an (m+1)-approximation factor, where m is the number of the available processing unit types. For systems with limitation on the number of the allocated processing units, the proposed algorithm is shown with bounded resource augmentation on the limited number of allocated units. Experimental results show that the proposed algorithms are effective for the minimization of the overall energy consumption.
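The two-phase structure (assign tasks to unit types for energy, then allocate enough units to fit the load) can be sketched greedily. Everything below, including the task names, the per-type (utilization, energy) table, and the ceiling-of-utilization allocation, is a hypothetical simplification of the paper's algorithms:

```python
import math

def assign_and_allocate(tasks):
    """tasks: list of (name, {unit_type: (utilization, energy)}).
    Phase 1: place each task on the unit type where it costs the
    least energy. Phase 2: allocate ceil(total utilization) units of
    each used type, since one unit can host utilization at most 1.0.
    A greedy sketch, not the paper's approximation algorithm."""
    assignment = {}
    load = {}
    for name, options in tasks:
        best = min(options, key=lambda t: options[t][1])
        assignment[name] = best
        load[best] = load.get(best, 0.0) + options[best][0]
    units = {t: math.ceil(u) for t, u in load.items()}
    return assignment, units

# Hypothetical big/little platform: each entry is (utilization, energy).
tasks = [
    ("fft", {"big": (0.3, 5.0), "little": (0.9, 3.0)}),
    ("net", {"big": (0.2, 2.0), "little": (0.4, 4.0)}),
    ("log", {"big": (0.1, 1.5), "little": (0.2, 1.0)}),
]
print(assign_and_allocate(tasks))
```

Note the tension the paper's approximation factor addresses: minimizing per-task energy in isolation can inflate the unit count (and hence static power), so the two phases cannot be optimized independently.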

Patent
30 Apr 2009
TL;DR: In this paper, a user interface for simultaneously representing tasks and notifications in a computing device is presented, which allows a user to bring a selected task to the foreground or close the task, both by interacting with the representations of the tasks.
Abstract: A user interface for simultaneously representing tasks and notifications in a computing device. The user interface presents the tasks as reduced size representations of the output of the corresponding tasks which are continually updated. The user interface allows a user to bring a selected task to the foreground or to close the task, both by interacting with the representations of the tasks. The user interface further associates notifications with corresponding tasks by superimposing an icon of the notification on the representation of the corresponding task. The user interface orders and arranges the task representations and icons of the notifications according to certain layout rules.

Proceedings ArticleDOI
20 Jun 2009
TL;DR: This paper proposes dynamic performance tuning mechanisms that determine where and how to create speculative threads at runtime and describes the design, implementation, and evaluation of hardware and software support that takes advantage of runtime performance profiles to extract efficient speculative threads.
Abstract: In response to the emergence of multicore processors, various novel and sophisticated execution models have been introduced to fully utilize these processors. One such execution model is Thread-Level Speculation (TLS), which allows potentially dependent threads to execute speculatively in parallel. While TLS offers significant performance potential for applications that are otherwise non-parallel, extracting efficient speculative threads in the presence of complex control flow and ambiguous data dependences is a real challenge. This task is further complicated by the fact that the performance of speculative threads is often architecture-dependent, input-sensitive, and exhibits phase behaviors. We therefore propose dynamic performance tuning mechanisms that determine where and how to create speculative threads at runtime. This paper describes the design, implementation, and evaluation of hardware and software support that takes advantage of runtime performance profiles to extract efficient speculative threads. In the proposed framework, speculative threads are monitored by hardware-based performance counters and their performance impact is estimated. The creation of speculative threads is then adjusted based on this estimation. The paper proposes techniques for estimating the performance of speculative threads that correctly determine whether speculation can improve performance for loops corresponding to 83.8% of total loop execution time across all benchmarks. It also examines several dynamic performance tuning policies and finds that the best tuning policy achieves an overall speedup of 36.8% on a set of benchmarks from the SPEC2000 suite, outperforming static thread management by 9.5%.
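The feedback loop the abstract describes, i.e. monitoring speculative threads with performance counters and adjusting thread creation based on an estimated benefit, can be illustrated with a toy tuner. Everything here (the class name, the saved-cycles/overhead profile, the threshold policy) is a hypothetical simplification of such a mechanism, not the paper's hardware design.

```python
class SpeculationTuner:
    """Per-loop decision of whether to keep spawning speculative threads,
    driven by an estimated benefit from (simulated) performance counters."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold  # minimum estimated benefit ratio to keep speculating
        self.stats = {}             # loop id -> (cycles_saved, overhead_cycles)

    def record(self, loop_id, cycles_saved, overhead):
        # Accumulate profile data reported by the performance counters.
        saved, over = self.stats.get(loop_id, (0, 0))
        self.stats[loop_id] = (saved + cycles_saved, over + overhead)

    def estimated_benefit(self, loop_id):
        # Ratio of cycles saved by parallel overlap to speculation overhead.
        saved, over = self.stats.get(loop_id, (0, 1))
        return saved / max(over, 1)

    def should_speculate(self, loop_id):
        if loop_id not in self.stats:
            return True  # no profile yet: try speculating to gather data
        return self.estimated_benefit(loop_id) >= self.threshold
```

The point of the sketch is the policy shape: speculation stays enabled by default, and a loop whose measured overhead dominates its savings is demoted to sequential execution.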

Journal ArticleDOI
TL;DR: An evaluation framework is adopted in a set of experiments showing how the performance of the motion system can be affected by the use of different kinds of environment representations.
Abstract: The evaluation of the performance of robot motion methods and systems remains an open challenge, although substantial progress has been made in the field over the years. On the one hand, these techniques cannot be evaluated off-line; on the other hand, they are deeply influenced by the task, the environment, and the specific representation chosen for it. In this paper we concentrate on "pure-motion tasks": tasks that require moving the robot from one configuration to another, either as an independent sub-task of a more complex plan or as a goal in itself. After characterizing the goals and the tasks, we describe the commonly used problem decomposition and the different kinds of modeling that can be used, from accurate metric maps to minimalistic representations. The contribution of this paper is an evaluation framework that we adopt in a set of experiments showing how the performance of the motion system can be affected by the use of different kinds of environment representations.

Journal ArticleDOI
01 Feb 2009
TL;DR: It is shown that, in several situations, the oblivious algorithm Dynamic Clustering has scalability performance comparable to non-oblivious algorithms, which is remarkable considering that the authors' oblivious algorithm uses much less information to schedule tasks.
Abstract: Bag-of-Tasks applications are parallel applications composed of independent tasks. Examples of Bag-of-Tasks (BoT) applications include Monte Carlo simulations, massive searches (such as key breaking), image manipulation applications and data mining algorithms. This paper analyzes the scalability of Bag-of-Tasks applications running on master-slave platforms and proposes a scalability-related measure dubbed input file affinity. In this work, we also illustrate how the input file affinity, which is a characteristic of an application, can be used to improve the scalability of Bag-of-Tasks applications running on master-slave platforms. The input file affinity was considered in a new scheduling algorithm dubbed Dynamic Clustering, which is oblivious to task execution times. We compare the scalability of the Dynamic Clustering algorithm to several other algorithms, oblivious and non-oblivious to task execution times, proposed in the literature. We show in this paper that, in several situations, the oblivious algorithm Dynamic Clustering has scalability performance comparable to non-oblivious algorithms, which is remarkable considering that our oblivious algorithm uses much less information to schedule tasks.
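The two ideas in this abstract, a measure of how much input data tasks share and a scheduler that groups tasks by shared files, can be sketched as below. Both functions are illustrative guesses: the affinity formula (fraction of transferred input bytes attributable to files read by more than one task) and the greedy one-pass clustering are assumptions made here, not the paper's exact definitions.

```python
from collections import defaultdict

def input_file_affinity(tasks):
    """Hypothetical affinity measure: the fraction of total input-byte
    traffic that comes from files read by more than one task.
    tasks maps a task id to {input file name: size in bytes}."""
    usage = defaultdict(int)  # file -> number of tasks reading it
    size = {}                 # file -> size in bytes
    for files in tasks.values():
        for name, nbytes in files.items():
            usage[name] += 1
            size[name] = nbytes
    total = sum(size[f] * usage[f] for f in size)
    shared = sum(size[f] * usage[f] for f in size if usage[f] > 1)
    return shared / total if total else 0.0

def cluster_by_shared_files(tasks):
    """Greedy clustering in the spirit of Dynamic Clustering: tasks that
    share an input file land in the same cluster, so a shared file is
    transferred to a slave only once. No task execution times are used."""
    cluster_of_file = {}
    clusters = []
    for task, files in tasks.items():
        hit = next((cluster_of_file[f] for f in files if f in cluster_of_file), None)
        if hit is None:
            hit = len(clusters)
            clusters.append([])
        clusters[hit].append(task)
        for f in files:
            cluster_of_file[f] = hit
    return clusters
```

Note how the scheduler is oblivious to task execution times, matching the abstract: it only exploits which tasks read which files, which is exactly the information the input file affinity captures.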