
Showing papers on "Task (computing) published in 2012"


Proceedings ArticleDOI
06 Nov 2012
TL;DR: This paper introduces a taxonomy for spatial crowdsourcing, and focuses on one class of this taxonomy, in which workers send their locations to a centralized server and thereafter the server assigns to every worker his nearby tasks with the objective of maximizing the overall number of assigned tasks.
Abstract: With the ubiquity of mobile devices, spatial crowdsourcing is emerging as a new platform, enabling spatial tasks (i.e., tasks related to a location) to be assigned to and performed by human workers. In this paper, for the first time we introduce a taxonomy for spatial crowdsourcing. Subsequently, we focus on one class of this taxonomy, in which workers send their locations to a centralized server, and the server then assigns to every worker his nearby tasks with the objective of maximizing the overall number of assigned tasks. We formally define this maximum task assignment (or MTA) problem in spatial crowdsourcing and identify its challenges. We propose alternative solutions to address these challenges by exploiting the spatial properties of the problem space. Finally, our experimental evaluations on both real-world and synthetic data verify the applicability of our proposed approaches and compare them by measuring both the number of assigned tasks and the travel cost of the workers.
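The server-side assignment step can be illustrated with a tiny greedy baseline. This is not the paper's matching-based MTA algorithm, just an illustrative sketch; the `max_dist` spatial region and per-worker `capacity` are assumed parameters:

```python
def assign_tasks(workers, tasks, max_dist, capacity):
    """Greedy sketch: give each worker its nearest unassigned tasks.

    workers and tasks map an id to an (x, y) location; max_dist models the
    worker's spatial region and capacity the maximum tasks per worker.
    (Names and the greedy rule are illustrative, not the paper's method.)
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    assigned = {}  # task id -> worker id
    for w, wloc in workers.items():
        # candidate tasks still unassigned and within range, nearest first
        nearby = sorted(
            (t for t, tloc in tasks.items()
             if t not in assigned and dist(wloc, tloc) <= max_dist),
            key=lambda t: dist(wloc, tasks[t]))
        for t in nearby[:capacity]:
            assigned[t] = w
    return assigned
```

A real solver would optimize globally (e.g., via matching or network flow) rather than worker by worker, which is exactly the gap the paper's solutions address.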

484 citations


Proceedings ArticleDOI
24 Sep 2012
TL;DR: This paper comprehensively characterizes the job/task load and host load in a real-world production data center at Google Inc., using a detailed trace of over 25 million tasks across over 12,500 hosts.
Abstract: A new era of Cloud Computing has emerged, but the characteristics of Cloud load in data centers are not yet well understood. Yet this characterization is critical for the design of novel Cloud job and resource management systems. In this paper, we comprehensively characterize the job/task load and host load in a real-world production data center at Google Inc. We use a detailed trace of over 25 million tasks across over 12,500 hosts. We study the differences between a Google data center and other Grid/HPC systems, from the perspective of both workload (w.r.t. jobs and tasks) and host load (w.r.t. machines). In particular, we study the job length, job submission frequency, and the resource utilization of jobs in the different systems, and also investigate valuable statistics of machines' maximum load, queue state, and relative usage levels, across different job priorities and resource attributes. We find that the Google data center exhibits finer-grained resource allocation with respect to CPU and memory than Grid/HPC systems. Google jobs are submitted at a much higher frequency and are much shorter than Grid jobs. As such, Google host load exhibits higher variance and noise.
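The kind of per-host aggregation such a characterization involves can be sketched on a toy trace. The `(host, cpu_usage)` sample format below is illustrative; the real Google trace has a far richer schema:

```python
from statistics import mean, pstdev

def host_load_stats(trace):
    """Aggregate a list of (host, cpu_usage) samples into per-host
    load statistics: mean, population standard deviation, and maximum.
    A minimal sketch, assuming a flattened sample format."""
    per_host = {}
    for host, cpu in trace:
        per_host.setdefault(host, []).append(cpu)
    return {h: {"mean": mean(xs), "std": pstdev(xs), "max": max(xs)}
            for h, xs in per_host.items()}
```

The "std" column is the per-host variability the paper highlights when contrasting Google host load with Grid/HPC host load.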

167 citations


Patent
21 Dec 2012
TL;DR: In this article, the authors provide a method for bank card transactions, including: reading the token information at the point of swipe for traditional and non-traditional POS platforms; performing a low-security task on the token information using a first microprocessor, where the task includes one or more of encryption determination, encryption-decryption request, key management, token information delivery, or transactional data delivery; and performing a security-related task on the token information using a second microprocessor based on a request from the first microprocessor, where the security-related task includes token information authentication, decryption, or encryption.
Abstract: Systems and methods for performing financial transactions are provided. In one embodiment, the invention provides a method for bank card transactions, including: reading the token information at the point of swipe for traditional and non-traditional POS platforms; performing a low-security task on the token information using a first microprocessor, wherein the low-security task includes one or more tasks from the group of encryption determination, encryption-decryption request, key management, token information delivery, or transactional data delivery; and performing a security-related task on the token information using a second microprocessor based on a request from the first microprocessor, wherein the security-related task includes one or more tasks from the group of token information authentication, token information decryption, or token information encryption. The encrypted information is formatted to be compatible with the format of the current POS system.

164 citations


Journal ArticleDOI
TL;DR: A computational framework for automatic synthesis of control and communication strategies for a robotic team from task specifications that are given as regular expressions about servicing requests in an environment by using a technique inspired by linear temporal logic model checking.
Abstract: We present a computational framework for automatic synthesis of control and communication strategies for a robotic team from task specifications that are given as regular expressions about servicing requests in an environment. We assume that the location of the requests in the environment and the robot capacities and cooperation requirements to service the requests are known. Our approach is based on two main ideas. First, we extend recent results from formal synthesis of distributed systems to check for the distributability of the task specification and to generate local specifications, while accounting for the service and communication capabilities of the robots. Second, by using a technique that is inspired by linear temporal logic model checking, we generate individual control and communication strategies. We illustrate the method with experimental results in our robotic urban-like environment.

149 citations


Proceedings Article
01 Jan 2012
TL;DR: This work proposes a novel method which builds on a prior multitask methodology by favoring a shared low dimensional representation within each group of tasks, and imposes a penalty on tasks from different groups which encourages the two representations to be orthogonal.
Abstract: We study the problem of learning a group of principal tasks using a group of auxiliary tasks, unrelated to the principal ones. In many applications, joint learning of unrelated tasks which use the same input data can be beneficial. The reason is that prior knowledge about which tasks are unrelated can lead to sparser and more informative representations for each task, essentially screening out idiosyncrasies of the data distribution. We propose a novel method which builds on a prior multitask methodology by favoring a shared low dimensional representation within each group of tasks. In addition, we impose a penalty on tasks from different groups which encourages the two representations to be orthogonal. We further discuss a condition which ensures convexity of the optimization problem and argue that it can be solved by alternating minimization. We present experiments on synthetic and real data, which indicate that incorporating unrelated tasks can improve significantly over standard multi-task learning methods.
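The penalty encouraging the two group representations to be orthogonal can be sketched as the squared Frobenius norm of the product of the two representation matrices, a standard choice consistent with the idea described above (this particular form is an assumption here, not taken verbatim from the paper):

```python
def frob_sq_of_product(U, V):
    """Orthogonality penalty sketch: ||U^T V||_F^2 for two representation
    matrices stored as lists of rows (d x kU and d x kV). The penalty is
    zero exactly when every column of U is orthogonal to every column of
    V, the condition encouraged between the two task groups."""
    d, kU, kV = len(U), len(U[0]), len(V[0])
    total = 0.0
    for i in range(kU):
        for j in range(kV):
            # (i, j) entry of U^T V is the dot product of column i of U
            # with column j of V; accumulate its square.
            dot = sum(U[r][i] * V[r][j] for r in range(d))
            total += dot * dot
    return total
```

Adding this term to each group's multitask objective penalizes shared directions between the principal and auxiliary representations.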

125 citations


Patent
28 Dec 2012
TL;DR: In this paper, a multitasking method and apparatus of a user device is provided for intuitively and swiftly switching between background and foreground tasks running on the user device, including receiving an interaction requesting task-switching while an execution screen of a certain application is displayed, displaying a stack of the currently running tasks, switching a task selected from the stack to a foreground task, and presenting an execution window of the foreground task.
Abstract: A multitasking method and apparatus of a user device is provided for intuitively and swiftly switching between background and foreground tasks running on the user device. The multitasking method includes receiving an interaction requesting task-switching while an execution screen of a certain application is displayed, displaying a stack of the tasks that are currently running, switching a task selected from the stack to a foreground task, and presenting an execution window of the foreground task.

89 citations


Proceedings ArticleDOI
TL;DR: This work reviews the human factors-related literature on the task performance implications of stereoscopic 3D displays, in order to point out the specific performance benefits (or lack thereof) one might reasonably expect to observe when utilizing these displays.
Abstract: This work reviews the human factors-related literature on the task performance implications of stereoscopic 3D displays, in order to point out the specific performance benefits (or lack thereof) one might reasonably expect to observe when utilizing these displays. What exactly is 3D good for? Relative to traditional 2D displays, stereoscopic displays have been shown to enhance performance on a variety of depth-related tasks. These tasks include judging absolute and relative distances, finding and identifying objects (by breaking camouflage and eliciting perceptual "pop-out"), performing spatial manipulations of objects (object positioning, orienting, and tracking), and navigating. More cognitively, stereoscopic displays can improve the spatial understanding of 3D scenes or objects, improve memory/recall of scenes or objects, and improve learning of spatial relationships and environments. However, for tasks that are relatively simple, that do not strictly require depth information for good performance, where other strong cues to depth can be utilized, or for depth tasks that lie outside the effective viewing volume of the display, the purported performance benefits of 3D may be small or altogether absent. Stereoscopic 3D displays come with a host of unique human factors problems, including the simulator-sickness-type symptoms of eyestrain, headache, fatigue, disorientation, nausea, and malaise, which appear to affect large numbers of viewers (perhaps as many as 25% to 50% of the general population). Thus, 3D technology should be wielded delicately and applied carefully, and perhaps used only where necessary to ensure good performance.

85 citations


Proceedings ArticleDOI
28 Feb 2012
TL;DR: CITA provides a task execution framework to automatically distribute and coordinate tasks, energy-efficient modules to infer user activities and compose them, and a push communication service for mobile devices that overcomes some shortcomings in existing push services.
Abstract: A growing class of smartphone applications are tasking applications that run continuously, process data from sensors to determine the user's context (such as location) and activity, and optionally trigger certain actions when the right conditions occur. Many such tasking applications also involve coordination between multiple users or devices. Example tasking applications include location-based reminders, changing the ring-mode of a phone automatically depending on location, notifying when friends are nearby, disabling WiFi in favor of cellular data when moving at more than a certain speed outdoors, automatically tracking and storing movement tracks when driving, and inferring the number of steps walked each day. Today, these applications are non-trivial to develop, although they are often trivial for end users to state. Additionally, simple implementations can consume excessive amounts of energy. This paper proposes Code in the Air (CITA), a system which simplifies the rapid development of tasking applications. It enables non-expert end users to easily express simple tasks on their phone, and more sophisticated developers to write code for complex tasks by writing purely server-side scripts. CITA provides a task execution framework to automatically distribute and coordinate tasks, energy-efficient modules to infer user activities and compose them, and a push communication service for mobile devices that overcomes some shortcomings in existing push services.

78 citations


Journal ArticleDOI
TL;DR: By comparing standard change-detection tasks with tasks in which the objects varied in size or position between successive arrays, this study demonstrates that the visual working memory system can detect changes in object identity across spatial transformations.
Abstract: Many recent studies of visual working memory have used change-detection tasks in which subjects view sequential displays and are asked to report whether they are identical or whether one object has changed. A key question is whether the memory system used to perform this task is sufficiently flexible to detect changes in object identity independent of spatial transformations, but previous research has yielded contradictory results. To address this issue, the present study compared standard change-detection tasks with tasks in which the objects varied in size or position between successive arrays. Performance was nearly identical across the standard and transformed tasks unless the task implicitly encouraged spatial encoding. These results resolve the discrepancies in previous studies and demonstrate that the visual working memory system can detect changes in object identity across spatial transformations.

78 citations


Journal ArticleDOI
TL;DR: The results show that the heuristics are fast, obtain good results as a stand-alone method, and are efficient when used as an initial solution generator or as a solution decoder within more elaborate approaches.
Abstract: We propose simple heuristics for the assembly line worker assignment and balancing problem. This problem typically occurs in assembly lines in sheltered work centers for the disabled. Unlike the well-known simple assembly line balancing problem, the task execution times vary according to the assigned worker. We develop a constructive heuristic framework based on task and worker priority rules defining the order in which the tasks and workers should be assigned to the workstations. We present a number of such rules and compare their performance across three possible uses: as a stand-alone method, as an initial solution generator for meta-heuristics, and as a decoder for a hybrid genetic algorithm. Our results show that the heuristics are fast, obtain good results as a stand-alone method, and are efficient when used as an initial solution generator or as a solution decoder within more elaborate approaches.
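One priority rule of the kind such a constructive framework is built from can be sketched as follows. The specific rule here, assigning each task to the worker whose load grows the least, is an illustrative choice, not one of the paper's rules:

```python
def greedy_assign(tasks, workers, time):
    """Constructive heuristic sketch with worker-dependent task times.

    time[(w, t)] is worker w's execution time for task t. Tasks are
    processed in the given priority order; each goes to the worker whose
    current load increases the least (an illustrative priority rule)."""
    load = {w: 0.0 for w in workers}
    assignment = {}
    for t in tasks:
        best = min(workers, key=lambda w: load[w] + time[(w, t)])
        assignment[t] = best
        load[best] += time[(best, t)]
    return assignment, load
```

Because `time` varies per worker, the same task list can yield very different station loads than in the classical balancing problem with fixed task times.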

75 citations


Proceedings Article
26 Jun 2012
TL;DR: In this paper, the joint problem of recommending items to a user with respect to a given query, which is a surprisingly common task, has been studied and a factorized model is proposed to optimize the top-ranked items returned for the given query and user.
Abstract: Retrieval tasks typically require a ranking of items given a query. Collaborative filtering tasks, on the other hand, learn to model user's preferences over items. In this paper we study the joint problem of recommending items to a user with respect to a given query, which is a surprisingly common task. This setup differs from the standard collaborative filtering one in that we are given a query × user × item tensor for training instead of the more traditional user × item matrix. Compared to document retrieval we do have a query, but we may or may not have content features (we will consider both cases) and we can also take account of the user's profile. We introduce a factorized model for this new task that optimizes the top-ranked items returned for the given query and user. We report empirical results where it outperforms several baselines.
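A minimal sketch of a factorized query-conditioned scoring function, assuming an elementwise combination of query and user factors followed by a dot product with the item factors (an illustrative form, not necessarily the paper's exact parameterization):

```python
def score(query_vec, user_vec, item_vec):
    """Score one item for a (query, user) pair: combine query and user
    factors elementwise, then dot with the item factors. All three
    vectors live in the same latent space (an assumed design)."""
    return sum(q * u * i for q, u, i in zip(query_vec, user_vec, item_vec))

def rank_items(query_vec, user_vec, items):
    """Return item ids ordered by descending score for (query, user);
    items maps item id -> latent factor vector."""
    return sorted(items, key=lambda i: -score(query_vec, user_vec, items[i]))
```

Training would fit the factor vectors from the query x user x item tensor so that top-ranked items match observed preferences; only the scoring and ranking step is sketched here.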

Proceedings ArticleDOI
21 Aug 2012
TL;DR: Preliminary results of a study designed to evaluate a set of search tasks that were developed for use in IIR studies suggest that behaviors and ratings are fairly consistent with the differences one might expect among the search tasks and provide initial evidence of the usefulness of these tasks in IIR studies.
Abstract: One of the most challenging aspects of designing an interactive information retrieval (IIR) study is the development of search tasks. In this paper, we present preliminary results of a study designed to evaluate a set of search tasks that were developed for use in IIR studies. We created 20 search tasks using five levels of cognitive complexity and four domains, and conducted a laboratory evaluation of these tasks with 48 undergraduate subjects. We describe preliminary results from an analysis of data from 24 subjects for 10 search tasks. Initial results show that, in general, as cognitive complexity increased, subjects issued more queries, clicked on more search results, viewed more URLs and took more time to complete the task. Subjects' expected and experienced difficulty ratings of tasks generally increased as cognitive complexity increased with some exceptions. When subjects were asked to rank tasks according to difficulty and engagement, tasks with higher cognitive complexity were rated as more difficult than tasks with lower cognitive complexity, but not necessarily as more engaging. These preliminary results suggest that behaviors and ratings are fairly consistent with the differences one might expect among the search tasks and provide initial evidence of the usefulness of these tasks in IIR studies.

Proceedings ArticleDOI
11 Jul 2012
TL;DR: This paper presents the first real-time multiprocessor locking protocol that supports fine-grained nested resource requests and relies on a novel technique for ordering the satisfaction of resource requests to ensure a bounded duration of priority inversions for nested requests.
Abstract: This paper presents the first real-time multiprocessor locking protocol that supports fine-grained nested resource requests. This locking protocol relies on a novel technique for ordering the satisfaction of resource requests to ensure a bounded duration of priority inversions for nested requests. This technique can be applied on partitioned, clustered, and globally scheduled systems in which waiting is realized by either spinning or suspending. Furthermore, this technique can be used to construct fine-grained nested locking protocols that are efficient under spin-based, suspension-oblivious or suspension-aware analysis of priority inversions. Locking protocols built upon this technique perform no worse than coarse-grained locking mechanisms, while allowing for increased parallelism in the average case (and, depending upon the task set, better worst-case performance).

Patent
Nicolas Bruno, Jingren Zhou, Srikanth Kandula, Sameer Agarwal, Ming-Chuan Wu
22 Jun 2012
TL;DR: In this paper, statistics collected during the parallel distributed execution of a job's tasks are used to optimize the performance of that job or similar recurring jobs: an execution plan generated for a recurring job that shares at least one task with the original job is optimized based at least on the collected statistics to produce an optimized execution plan.
Abstract: The use of statistics collected during the parallel distributed execution of the tasks of a job may be used to optimize the performance of the task or similar recurring tasks. An execution plan for a job is initially generated, in which the execution plan includes tasks. Statistics regarding operations performed in the tasks are collected while the tasks are executed via parallel distributed execution. Another execution plan is then generated for another recurring job, in which the additional execution plan has at least one task in common with the execution plan for the job. The additional execution plan is subsequently optimized based at least on the statistics to produce an optimized execution plan.

Proceedings Article
01 Jan 2012
TL;DR: The authors propose LAZYSUSAN, a decision-theoretic controller that dynamically requests responses as necessary in order to infer answers to crowdsourced tasks, jointly inferring the answers to multiple tasks at a time in live experiments on Amazon Mechanical Turk.
Abstract: To ensure quality results from crowdsourced tasks, requesters often aggregate worker responses and use one of a plethora of strategies to infer the correct answer from the set of noisy responses. However, all current models assume prior knowledge of all possible outcomes of the task. While not an unreasonable assumption for tasks that can be posited as multiple-choice questions (e.g. n-ary classification), we observe that many tasks do not naturally fit this paradigm, but instead demand a free-response formulation where the outcome space is of infinite size (e.g. audio transcription). We model such tasks with a novel probabilistic graphical model, and design and implement LAZYSUSAN, a decision-theoretic controller that dynamically requests responses as necessary in order to infer answers to these tasks. We also design an EM algorithm to jointly learn the parameters of our model while inferring the correct answers to multiple tasks at a time. Live experiments on Amazon Mechanical Turk demonstrate the superiority of LAZYSUSAN at solving SAT Math questions, eliminating 83.2% of the error and achieving greater net utility compared to the state-of-the-art strategy, majority-voting. We also show in live experiments that our EM algorithm outperforms majority-voting on a visualization task that we design.
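The majority-voting baseline that LAZYSUSAN is compared against is easy to sketch; with a free-response outcome space every worker response may be unique, which is exactly where this baseline degrades:

```python
from collections import Counter

def majority_vote(responses):
    """Baseline aggregation: return the most common worker response and
    the fraction of workers who gave it. With an open-ended (free
    response) outcome space the winning fraction can collapse toward
    1/n, the failure mode motivating a decision-theoretic controller."""
    counts = Counter(responses)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(responses)
```

A controller like the paper's would instead keep requesting responses only while the expected value of another response exceeds its cost; that policy is not sketched here.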

Patent
15 Oct 2012
TL;DR: In this paper, a method and system for performing processing tasks is disclosed: a restriction limiting which processing task is to be assigned to a resource is identified, and a query identifying the restriction is transmitted to a remote queue in communication with a plurality of independent resources.
Abstract: A method and system for performing processing tasks is disclosed. At a resource, a detection is made as to when the resource is available to perform a processing task. Usage of the resource for performing processing tasks associated with each client of a set of clients is monitored. A restriction limiting which processing task is to be assigned to the resource is identified. The restriction identifies a hierarchy amongst at least two clients of the set of clients. The hierarchy is based on the monitored usage. A query identifying the restriction is generated. The query is transmitted to a remote queue in communication with a plurality of independent resources. The plurality of independent resources includes the resource. A response is received from the queue. The response identifies a processing task.

Patent
26 Jun 2012
TL;DR: In this paper, a workload associated with a task is assessed with respect to each of a plurality of computing paradigms offered by a cloud computing environment, and the workload is then assigned to available computing paradigms to be performed with improved utilization of resources.
Abstract: A workload associated with a task is assessed with respect to each of a plurality of computing paradigms offered by a cloud computing environment. Adaptive learning is employed by maintaining a table of Q-values corresponding to the computing paradigms and the workload is distributed according to a ratio of Q-values. The Q-values may be adjusted responsive to a performance metric and/or a value, reward, and/or decay function. The workload is then assigned to available computing paradigms to be performed with improved utilization of resources.
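A minimal sketch of Q-value-proportional workload distribution; the exponential-moving-average update toward an observed reward is one common adaptive-learning choice and is an assumption here, not the patent's specific rule:

```python
def distribute(workload, q_values):
    """Split a workload across computing paradigms in proportion to
    their Q-values: share(p) = workload * Q(p) / sum(Q)."""
    total = sum(q_values.values())
    return {p: workload * q / total for p, q in q_values.items()}

def update_q(q_values, paradigm, reward, alpha=0.5):
    """Nudge one paradigm's Q-value toward an observed reward with a
    simple exponential moving average (assumed update rule)."""
    q_values[paradigm] = (1 - alpha) * q_values[paradigm] + alpha * reward
    return q_values
```

Over time, paradigms that deliver better performance metrics accumulate higher Q-values and therefore receive a larger share of each workload.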

Journal ArticleDOI
TL;DR: This research examined the full range of tasks and activities that design engineers perform, how their working time is distributed among these, and how these issues influence their satisfaction with their work.

Patent
08 Oct 2012
TL;DR: In this paper, a load balancer coupled between a network and a pool of processing resources utilizes a method for performing automatic, dynamic load balancing with respect to allocating the resources to handle processing requests received over the network.
Abstract: A load balancer coupled between a network and a pool of processing resources utilizes a method for performing automatic, dynamic load balancing with respect to allocating the resources to handle processing requests received over the network. According to the method, the load balancer electronically receives registration requests from the processing resources and registers each resource from which a registration request was received. After the resources have been registered but before they are allocated to handle processing requests, the load balancer receives, from each registered resource, information relating to utilization of the resource. The utilization information may include operational metrics for the resource. Some time thereafter, the load balancer receives a request over the network to perform a server-related processing task. Responsive to the request, the load balancer allocates at least one of the registered resources to perform the requested processing task based at least on the previously received utilization information.

Proceedings ArticleDOI
05 May 2012
TL;DR: A bezel-based text entry application is designed to gain insights into how bezel menus perform in a real-world application and it is found that the participants achieved 9.2 words per minute in situations requiring minimal visual attention to the screen.
Abstract: Touchscreen phones tend to require constant visual attention, thus not allowing eyes-free interaction. For users with visual impairment, or when occupied with another task that requires a user's visual attention, these phones can be difficult to use. Recently, marks initiating from the bezel, the physical touch-insensitive frame surrounding a touchscreen display, have been proposed as a method for eyes-free interaction. Due to the physical form factor of the mobile device, it is possible to access different parts of the bezel eyes-free. In this paper, we first studied the performance of different bezel menu layouts. Based on the results, we designed a bezel-based text entry application to gain insights into how bezel menus perform in a real-world application. From a longitudinal study, we found that the participants achieved 9.2 words per minute in situations requiring minimal visual attention to the screen. After only one hour of practice, the participants transitioned from novice to expert users. This shows that bezel menus can be adopted for realistic applications.

Journal ArticleDOI
TL;DR: This paper presents a design and evaluates the performance of a power consumption scheduler for smart grid homes or buildings, aiming at reducing the peak load in them as well as in the system-wide power transmission network.
Abstract: This paper presents a design and evaluates the performance of a power consumption scheduler in smart grid homes or buildings, aiming at reducing the peak load in them as well as in the system-wide power transmission network. Following a task model consisting of actuation time, operation length, deadline, and a consumption profile, the scheduler linearly copies the profile entry or maps a combinatory vector to the allocation table one by one according to the task type, which can be either preemptive or nonpreemptive. The proposed scheme expands the search space recursively to traverse all the feasible allocations for a task set. A pilot implementation of this scheduling method reduces the peak load by up to 23.1% for the given task set. The execution time, basically approximated by (the equation is abbreviated), where M, N_NP, and N_P are the number of time slots, nonpreemptive tasks, and preemptive tasks, respectively, is reduced almost to 2% by taking advantage of an efficient constraint-processing mechanism which prunes a search branch when the partial peak value already exceeds the current best. In addition, local peak reduction brings global peak reduction of up to 16% for the home-scale scheduling units without any global coordination, avoiding uncontrollable peak resonance.
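The core of the search, placing consumption profiles into an allocation table and pruning a branch once its partial peak reaches the current best, can be sketched for nonpreemptive tasks only (deadlines and preemptive tasks are omitted in this minimal sketch):

```python
def peak(table):
    """Peak load of an allocation table of per-slot totals."""
    return max(table)

def place(table, profile, start):
    """Copy a task's consumption profile into the table at slot start."""
    new = table[:]
    for k, load in enumerate(profile):
        new[start + k] += load
    return new

def min_peak(table, profiles, best=float("inf")):
    """Recursively try every start slot for each nonpreemptive task,
    pruning a branch as soon as its partial peak is no better than the
    current best allocation found so far."""
    if not profiles:
        return peak(table)
    prof, rest = profiles[0], profiles[1:]
    for start in range(len(table) - len(prof) + 1):
        cand = place(table, prof, start)
        if peak(cand) >= best:  # prune: partial peak already too high
            continue
        best = min(best, min_peak(cand, rest, best))
    return best
```

The pruning test is the sketch's analogue of the paper's constraint-processing mechanism: a branch whose partial peak already matches the best complete allocation cannot improve it.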

Patent
Ned M. Smith1
29 Sep 2012
TL;DR: In this paper, a client device initiates a trusted task for execution within a trusted execution environment of a remote service provider, while enforcing security and/or compartmentalization context on the data/code.
Abstract: Devices, systems, and methods for conducting trusted computing tasks on a distributed computer system are described. In some embodiments, a client device initiates a trusted task for execution within a trusted execution environment of a remote service provider. The devices, systems, and methods may permit the client to evaluate the trusted execution capabilities of the service provider via a planning and attestation process, prior to sending data/code associated with the trusted task to the service provider for execution. Execution of the trusted task may be performed while enforcing security and/or compartmentalization context on the data/code. Systems and methods for managing and exchanging encryption keys are also described. Such systems and methods may be used to maintain the security of the data/code before, during, and/or after the execution of the trusted task.

Proceedings ArticleDOI
24 Dec 2012
TL;DR: An efficient method for motion control of redundant robots performing multiple prioritized tasks in the presence of hard bounds on joint range, velocity, and acceleration/torque is presented.
Abstract: We present an efficient method for motion control of redundant robots performing multiple prioritized tasks in the presence of hard bounds on joint range, velocity, and acceleration/torque. This is an extension of our recently proposed SNS (Saturation in the Null Space) algorithm developed for single tasks. The method is defined at the level of acceleration commands and proceeds by successively discarding, one at a time, the commands that would exceed their bounds for a task of given priority, and reintroducing them at their saturated levels by projection in the null space of a suitable Jacobian associated to the already considered tasks. When processing all tasks in their priority order, a correct preemptive strategy is realized in this way, i.e., a task of higher priority uses in the best way the feasible robot capabilities it needs, while lower priority tasks are accommodated with the residual capability and do not interfere with the execution of higher priority tasks. The algorithm automatically integrates a multi-task least possible scaling strategy, when some ordered set of original tasks is found to be unfeasible. Simulation and experimental results on a 7-dof lightweight KUKA LWR IV robot illustrate the good performance of the method.

Patent
20 Jul 2012
TL;DR: In this article, a system and computer-implemented method for generating an optimized allocation of a plurality of tasks across a number of processors or slots for processing or execution in a distributed computing environment is presented.
Abstract: A system and computer-implemented method for generating an optimized allocation of a plurality of tasks across a plurality of processors or slots for processing or execution in a distributed computing environment. In a cloud computing environment implementing a MapReduce framework, the system and computer-implemented method may be used to schedule map or reduce tasks to processors or slots on the network such that the tasks are matched to processors or slots in a data-locality-aware fashion, wherein the suitability of a node and the characteristics of the task are accounted for using a minimum cost flow function.

Proceedings Article
03 Dec 2012
TL;DR: This work considers a setting with a very large number of related tasks and few examples from each individual task, and proposes learning a small pool of shared hypotheses; VC-dimension generalization bounds for the model are derived based on the number of tasks, the number of shared hypotheses, and the VC dimension of the hypothesis class.
Abstract: In this work we consider a setting where we have a very large number of related tasks with few examples from each individual task. Rather than either learning each task individually (and having a large generalization error) or learning all the tasks together using a single hypothesis (and suffering a potentially large inherent error), we consider learning a small pool of shared hypotheses. Each task is then mapped to a single hypothesis in the pool (hard association). We derive VC-dimension generalization bounds for our model, based on the number of tasks, the number of shared hypotheses, and the VC dimension of the hypothesis class. We conducted experiments with both synthetic problems and sentiment of reviews, which strongly support our approach.
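The hard-association step, mapping each task to the pooled hypothesis with the lowest error on that task's examples, can be sketched with constant predictors and squared loss, a deliberately tiny stand-in for the real hypothesis class:

```python
def hard_assign(task_data, hypotheses):
    """Map every task to the pooled hypothesis with lowest error on its
    examples. Here a 'hypothesis' is just a constant predictor (a float)
    and error is squared loss -- an illustrative simplification."""
    def err(h, examples):
        return sum((y - h) ** 2 for y in examples)
    return {t: min(range(len(hypotheses)),
                   key=lambda j: err(hypotheses[j], ys))
            for t, ys in task_data.items()}

def refit(task_data, assignment, k):
    """Refit each pooled hypothesis to the mean of the examples of its
    assigned tasks (the other half of an alternating scheme)."""
    pools = {j: [] for j in range(k)}
    for t, ys in task_data.items():
        pools[assignment[t]].extend(ys)
    return [sum(ys) / len(ys) if ys else 0.0
            for ys in (pools[j] for j in range(k))]
```

Alternating `hard_assign` and `refit` is one natural way to train such a pool; the paper's bounds concern what any such pool can generalize, not this particular training loop.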

Proceedings ArticleDOI
19 Sep 2012
TL;DR: The semantics of these core concepts of Task-Oriented Programming are defined and the iTask3 framework, which embeds TOP in the functional programming language Clean, is presented.
Abstract: Task-Oriented Programming (TOP) is a novel programming paradigm for the construction of distributed systems where users work together on the internet. When multiple users collaborate, they need to interact with each other frequently. TOP supports the definition of tasks that react to the progress made by others. With TOP, complex multi-user interactions can be programmed in a declarative style just by defining the tasks that have to be accomplished, thus eliminating the need to worry about the implementation detail that commonly frustrates the development of applications for this domain. TOP builds on four core concepts: tasks that represent computations or work to do which have an observable value that may change over time, data sharing enabling tasks to observe each other while the work is in progress, generic type driven generation of user interaction, and special combinators for sequential and parallel task composition. The semantics of these core concepts is defined in this paper. As an example we present the iTask3 framework, which embeds TOP in the functional programming language Clean.
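The four core concepts can be caricatured outside Clean. The toy Python below (all names hypothetical, not the iTask3 API) models a task as a computation with an observable current value, plus sequential and parallel combinators in the spirit of TOP's composition operators.

```python
class Task:
    """Minimal sketch of a TOP-style task: a computation with an
    observable current value (None until a value is produced)."""
    def __init__(self, fn):
        self.fn = fn          # step function: input -> value
        self.value = None     # observable task value

    def run(self, inp=None):
        self.value = self.fn(inp)
        return self.value

def seq(t1, t2):
    """Sequential combinator: feed t1's value into t2 (cf. TOP's step)."""
    return Task(lambda inp: t2.run(t1.run(inp)))

def par(t1, t2):
    """Parallel combinator: run both tasks and observe both values."""
    return Task(lambda inp: (t1.run(inp), t2.run(inp)))

double = Task(lambda x: x * 2)
inc = Task(lambda x: x + 1)
both = par(seq(double, inc), inc)
result = both.run(3)   # ((3*2)+1, 3+1)
```

Because every task exposes `value`, another task could observe `double.value` while work is in progress, which is the data-sharing ingredient the abstract describes; the real framework adds type-driven user-interface generation on top.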

Proceedings ArticleDOI
01 Nov 2012
TL;DR: This paper solves manipulation tasks from the point of view of the object, rather than in the context of the robot, introducing action templates within the object context to resolve object-specific task constraints.
Abstract: Solving arbitrary manipulation tasks is a key feature for humanoid service robots. However, especially when tasks involve handling complex mechanisms or using tools, a generic action description is hard to define. Different objects require different handling methods. Therefore, we try to solve manipulation tasks from the point of view of the object, rather than in the context of the robot. Action templates within the object context are introduced to resolve object-specific task constraints. As part of a centralized world representation, the action templates are integrated into the planning process. This results in an intuitive way of solving manipulation tasks. The underlying architecture as well as the mechanisms are discussed within this paper. The proposed methods are evaluated in two experiments.
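The object-centric idea can be sketched as a registry that stores handling methods with each object class instead of in a robot-wide action library, so the planner queries the object for how it wants to be manipulated. Everything below (the decorator, the template contents, the constraint fields) is an illustrative assumption, not the paper's implementation.

```python
# Hypothetical registry of object-centric action templates.
ACTION_TEMPLATES = {}

def action_template(object_type, action):
    """Register a handling method inside the object's context."""
    def register(fn):
        ACTION_TEMPLATES[(object_type, action)] = fn
        return fn
    return register

@action_template("drawer", "open")
def open_drawer(pose):
    # object-specific constraint: drawers open along one linear axis
    return {"grasp": pose, "motion": "linear_pull", "distance_m": 0.3}

@action_template("door", "open")
def open_door(pose):
    # doors constrain the end effector to an arc around the hinge
    return {"grasp": pose, "motion": "arc_about_hinge", "angle_deg": 90}

def plan(object_type, action, pose):
    """Planner resolves the task through the object's own template."""
    return ACTION_TEMPLATES[(object_type, action)](pose)

step = plan("drawer", "open", pose=(0.5, 0.0, 0.8))
```

The same `plan` call yields different, object-appropriate motion constraints for a drawer and a door, which is the point of moving the action description out of the robot and into the object.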

Patent
13 Jun 2012
TL;DR: In this paper, the authors present techniques for generating a distributed stream processing application from a declarative description of one or more data stream processing tasks obtained from a graph of operators.
Abstract: Techniques for generating a distributed stream processing application are provided. The techniques include obtaining a declarative description of one or more data stream processing tasks from a graph of operators, wherein the declarative description expresses at least one stream processing task, generating one or more containers that encompass a combination of one or more stream processing operators, and generating one or more execution units from the declarative description of one or more data stream processing tasks, wherein the one or more execution units are deployable across one or more distributed computing nodes, and comprise a distributed data stream processing application binary.
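One way to picture generating containers and execution units from an operator graph is the fusion sketch below: linear chains where each operator has a single producer and a single consumer are fused into one unit, while fan-in operators stay separate. The fusion rule and all names are assumptions for illustration, not the patent's algorithm.

```python
def build_execution_units(edges, operators):
    """Group a declarative operator graph into deployable units by
    fusing single-producer/single-consumer chains (toy heuristic)."""
    downstream = {}
    indegree = {op: 0 for op in operators}
    for src, dst in edges:
        downstream.setdefault(src, []).append(dst)
        indegree[dst] += 1

    sources = [op for op in operators if indegree[op] == 0]
    units, seen = [], set()
    for op in sources:
        chain = [op]
        # fuse while there is exactly one consumer with one producer
        while len(downstream.get(chain[-1], [])) == 1:
            nxt = downstream[chain[-1]][0]
            if indegree[nxt] != 1:
                break
            chain.append(nxt)
        units.append(chain)
        seen.update(chain)
    # remaining operators (joins, fan-in points) become their own units
    for op in operators:
        if op not in seen:
            units.append([op])
    return units

ops = ["source", "parse", "filter", "sink"]
edges = [("source", "parse"), ("parse", "filter"), ("filter", "sink")]
units = build_execution_units(edges, ops)
```

A purely linear pipeline collapses into a single execution unit that could be deployed to one computing node; a graph with joins would produce several units, each independently deployable across the distributed nodes.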

Patent
Huan Liu1
13 Jun 2012
TL;DR: In this paper, an alternative MapReduce implementation is provided which monitors for impending termination notices, and allows dynamic checkpointing and storing of processed portions of a map task, such that any processing which is interrupted by large-scale terminations of a plurality of computing devices, such as those resulting from spot market rate fluctuations, is preserved.
Abstract: As a result of the systems and methods described herein, an alternative MapReduce implementation is provided which monitors for impending termination notices, and allows dynamic checkpointing and storing of processed portions of a map task, such that any processing which is interrupted by large-scale terminations of a plurality of computing devices, such as those resulting from spot market rate fluctuations, is preserved.
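A minimal sketch of the checkpoint-and-resume idea: a map task records how far it has processed its input split, so after a spot-market termination a new worker resumes from the checkpoint instead of restarting. Here a plain dict stands in for durable checkpoint storage and a simple index threshold stands in for the impending-termination notice; both are illustrative assumptions.

```python
def run_map_task(records, mapper, checkpoint, terminate_after=None):
    """Process records, persisting progress after each one.
    Returns (done, checkpoint); done is False if terminated early."""
    start = checkpoint["offset"]
    for i in range(start, len(records)):
        if terminate_after is not None and i >= terminate_after:
            return False, checkpoint    # impending-termination notice received
        checkpoint["output"].append(mapper(records[i]))
        checkpoint["offset"] = i + 1    # persist progress (stand-in for durable storage)
    return True, checkpoint

records = [1, 2, 3, 4]
ckpt = {"offset": 0, "output": []}
# first attempt is interrupted after two records (spot instance reclaimed)
done, ckpt = run_map_task(records, lambda x: x * x, ckpt, terminate_after=2)
interrupted = not done
# a replacement worker picks up the stored checkpoint and finishes the split
done, ckpt = run_map_task(records, lambda x: x * x, ckpt)
```

The interrupted run preserves its two squared outputs, and the resumed run processes only the remaining two records, which is the work-preservation property the patent describes.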

Posted Content
TL;DR: LazySusan is a decision-theoretic controller that dynamically requests worker responses as necessary to infer answers to free-response crowdsourced tasks on Amazon Mechanical Turk; an accompanying EM algorithm jointly learns model parameters while inferring answers to multiple tasks at a time and outperforms majority voting on a visualization task.
Abstract: To ensure quality results from crowdsourced tasks, requesters often aggregate worker responses and use one of a plethora of strategies to infer the correct answer from the set of noisy responses. However, all current models assume prior knowledge of all possible outcomes of the task. While not an unreasonable assumption for tasks that can be posited as multiple-choice questions (e.g. n-ary classification), we observe that many tasks do not naturally fit this paradigm, but instead demand a free-response formulation where the outcome space is of infinite size (e.g. audio transcription). We model such tasks with a novel probabilistic graphical model, and design and implement LazySusan, a decision-theoretic controller that dynamically requests responses as necessary in order to infer answers to these tasks. We also design an EM algorithm to jointly learn the parameters of our model while inferring the correct answers to multiple tasks at a time. Live experiments on Amazon Mechanical Turk demonstrate the superiority of LazySusan at solving SAT Math questions, eliminating 83.2% of the error and achieving greater net utility compared to the state-of-the-art strategy, majority-voting. We also show in live experiments that our EM algorithm outperforms majority-voting on a visualization task that we design.
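The joint-inference idea can be sketched with a simplified EM over only the answers actually observed, a finite stand-in for the paper's infinite outcome space: the E-step computes a posterior over observed answers per task, and the M-step re-estimates each worker's accuracy from those posteriors. All details below are illustrative simplifications, not the paper's graphical model.

```python
def em_aggregate(responses, iters=10):
    """responses: list of tasks, each a list of (worker, answer) pairs.
    Returns (most likely answer per task, estimated worker accuracy)."""
    workers = {w for task in responses for w, _ in task}
    acc = {w: 0.8 for w in workers}            # initial worker accuracy
    posteriors = []
    for _ in range(iters):
        posteriors = []
        for task in responses:
            # E-step: unnormalised posterior over the observed answers
            post = {}
            for a in {ans for _, ans in task}:
                p = 1.0
                for w, resp in task:
                    p *= acc[w] if resp == a else (1 - acc[w])
                post[a] = p
            z = sum(post.values()) or 1.0
            posteriors.append({a: p / z for a, p in post.items()})
        # M-step: re-estimate each worker's accuracy
        hits = {w: 0.0 for w in workers}
        counts = {w: 0 for w in workers}
        for task, post in zip(responses, posteriors):
            for w, resp in task:
                hits[w] += post.get(resp, 0.0)
                counts[w] += 1
        for w in workers:
            acc[w] = min(max(hits[w] / counts[w], 0.01), 0.99)
    return [max(p, key=p.get) for p in posteriors], acc

# worker w3 is unreliable; EM should still recover "42" for both tasks
responses = [
    [("w1", "42"), ("w2", "42"), ("w3", "41")],
    [("w1", "42"), ("w3", "40"), ("w2", "42")],
]
answers, acc = em_aggregate(responses)
```

Unlike majority voting, which weights every worker equally, the EM loop learns that `w3` is unreliable and discounts its free-form answers, which is the mechanism behind the reported improvement over majority-voting.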