
Showing papers on "Task (computing) published in 2011"


Book
26 Aug 2011
TL;DR: "Also published as a working paper by the Intelligent Systems Laboratory, Xerox Palo Alto Research Center, October 1983."
Abstract: "Also published as a working paper by the Intelligent Systems Laboratory, Xerox Palo Alto Research Center, October 1983."

206 citations


01 Jul 2011
TL;DR: The Multi-Attribute Task Battery (MAT Battery), a computer-based task designed to evaluate operator performance and workload, has been redeveloped to operate in Windows XP Service Pack 3, Windows Vista and Windows 7 operating systems.
Abstract: The Multi-Attribute Task Battery (MAT Battery) is a computer-based task designed to evaluate operator performance and workload. It has been redeveloped, as MATB-II, to operate under the Windows XP Service Pack 3, Windows Vista, and Windows 7 operating systems. MATB-II includes essentially the same tasks as the original MAT Battery, plus new configuration options, including a graphical user interface for controlling modes of operation. MATB-II can be executed in either training or testing mode, as defined by the MATB-II configuration file. The configuration file also allows setup of the default timeouts for the tasks, as well as the flow rates of the pumps and the tank levels of the Resource Management (RESMAN) task. MATB-II comes with a default event file that an experimenter can modify and adapt.

136 citations


Patent
22 Feb 2011
TL;DR: In this paper, a system, method, and computer program product for processing data are disclosed. The system includes a data processing framework, a plurality of database systems coupled to the framework, and a storage component, in communication with the framework and the database systems, configured to store information about each partition of the data processing task being processed.
Abstract: A system, method, and computer program product for processing data are disclosed. The system includes a data processing framework configured to receive a data processing task for processing; a plurality of database systems coupled to the data processing framework, wherein the database systems are configured to perform a data processing task; and a storage component, in communication with the data processing framework and the plurality of database systems, configured to store information about each partition of the data processing task being processed by each database system and the data processing framework. The data processing task is configured to be partitioned into a plurality of partitions, and each database system is configured to process the partition of the data processing task assigned to it. Each database system is configured to perform processing of its assigned partition of the data processing task in parallel with another database system processing another partition assigned to that other database system. The data processing framework is configured to perform at least one partition of the data processing task itself.
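The partitioned, parallel processing this abstract describes can be illustrated with a small sketch (not the patent's implementation: the partitioning scheme, the per-partition work, and the status map standing in for the storage component are all invented here):

```python
# Illustrative sketch: a coordinator partitions a data processing task,
# records per-partition status (the "storage component"), and lets workers
# (the "database systems") process partitions in parallel.
from concurrent.futures import ThreadPoolExecutor

def partition(data, n_parts):
    """Split the task's input into n_parts roughly equal partitions."""
    k, m = divmod(len(data), n_parts)
    return [data[i * k + min(i, m):(i + 1) * k + min(i + 1, m)]
            for i in range(n_parts)]

def process_task(data, n_systems):
    """Each 'database system' processes its assigned partition in parallel;
    a status map stands in for the patent's storage component."""
    parts = partition(data, n_systems)
    status = {i: "assigned" for i in range(len(parts))}

    def run(i):
        result = sum(parts[i])  # stand-in for real per-partition work
        status[i] = "done"
        return result

    with ThreadPoolExecutor(max_workers=n_systems) as pool:
        results = list(pool.map(run, range(len(parts))))
    return sum(results), status
```

For example, `process_task(list(range(10)), 3)` splits the input across three workers, combines the partial sums, and reports every partition as done.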

130 citations


Patent
18 Jul 2011
TL;DR: In this paper, a system for parallel execution of database queries over one or more Central Processing Units (CPUs) and one or multiple Multi-Core Processors (MCPs) is described.
Abstract: The invention relates to a system for parallel execution of database queries over one or more Central Processing Units (CPUs) and one or more Multi-Core Processors (MCPs). The system comprises: (a) a query analyzer for dividing the query into a plurality of sub-queries, and for computing and assigning to each sub-query a target address of either a CPU or an MCP; (b) a query compiler for creating an Abstract Syntax Tree (AST) and OpenCL primitives only for those sub-queries that are targeted to an MCP, and for conveying the remaining sub-queries, the AST, and the OpenCL code to a virtual machine; and (c) a Virtual Machine (VM) comprising a task bank, buffers, and a scheduler. The virtual machine combines the sub-query results from the CPUs and the primitive results from the MCPs into a final query result.

127 citations


Journal ArticleDOI
TL;DR: Simulation results show how the proposed algorithm allows the agents and tasks to self-organize into independent coalitions, while improving the performance, in terms of average player (agent or task) payoff, by at least 30.26 percent relative to a scheme that allocates nearby tasks equally among agents.
Abstract: Autonomous wireless agents such as unmanned aerial vehicles, mobile base stations, cognitive devices, or self-operating wireless nodes present a great potential for deployment in next-generation wireless networks. While current literature has been mainly focused on the use of agents within robotics or software engineering applications, this paper proposes a novel usage model for self-organizing agents suitable for wireless communication networks. In the proposed model, a number of agents are required to collect data from several arbitrarily located tasks. Each task represents a queue of packets that require collection and subsequent wireless transmission by the agents to a central receiver. The problem is modeled as a hedonic coalition formation game between the agents and the tasks that interact in order to form disjoint coalitions. Each formed coalition is modeled as a polling system consisting of a number of agents, designated as collectors, which move between the different tasks present in the coalition, collect and transmit the packets. Within each coalition, some agents might also take the role of a relay for improving the packet success rate of the transmission. The proposed hedonic coalition formation algorithm allows the tasks and the agents to take distributed decisions to join or leave a coalition, based on the achieved benefit in terms of effective throughput, and the cost in terms of polling system delay. As a result of these decisions, the agents and tasks structure themselves into independent disjoint coalitions which constitute a Nash-stable network partition. Moreover, the proposed coalition formation algorithm allows the agents and tasks to adapt the topology to environmental changes, such as the arrival of new tasks, the removal of existing tasks, or the mobility of the tasks. 
Simulation results show how the proposed algorithm allows the agents and tasks to self-organize into independent coalitions, while improving the performance, in terms of average player (agent or task) payoff, by at least 30.26 percent (for a network of five agents with up to 25 tasks) relative to a scheme that allocates nearby tasks equally among agents.
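The join/leave dynamics described above can be sketched as a toy best-response loop (all names and the payoff function are invented for illustration; the paper's utilities involve effective throughput and polling-system delay):

```python
# Toy hedonic coalition formation: each agent greedily switches to the
# task-coalition that maximizes its payoff until no agent wants to move
# (a Nash-stable partition). Payoff = equal share of the task's value
# minus a congestion/delay cost growing with coalition size.
def form_coalitions(agents, task_values, delay_cost=1.0):
    assign = {a: 0 for a in agents}  # start: everyone on task 0

    def payoff(task, size):
        return task_values[task] / size - delay_cost * size

    changed = True
    while changed:
        changed = False
        for a in agents:
            cur = assign[a]
            sizes = {t: sum(1 for x in assign.values() if x == t)
                     for t in range(len(task_values))}
            best, best_p = cur, payoff(cur, sizes[cur])
            for t in range(len(task_values)):
                if t != cur:
                    p = payoff(t, sizes[t] + 1)  # payoff if a joins t
                    if p > best_p:
                        best, best_p = t, p
            if best != cur:
                assign[a] = best
                changed = True
    return assign
```

With four agents and two equally valued tasks, the loop settles into two coalitions of two agents each; this singleton-congestion form is a potential game, so the best-response loop terminates.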

126 citations


Proceedings ArticleDOI
29 Nov 2011
TL;DR: The experiments demonstrate that the response times of high-priority GPGPU tasks can be protected under RGEM, whereas their response times increase in an unbounded fashion without RGEM support, as the data sizes of competing workload increase.
Abstract: General-purpose computing on graphics processing units, also known as GPGPU, is a burgeoning technique to enhance the computation of parallel programs. Applying this technique to real-time applications, however, requires additional support for timeliness of execution. In particular, the non-preemptive nature of GPGPU, associated with copying data to/from the device memory and launching code onto the device, needs to be managed in a timely manner. In this paper, we present a responsive GPGPU execution model (RGEM), which is a user-space runtime solution to protect the response times of high-priority GPGPU tasks from competing workload. RGEM splits a memory-copy transaction into multiple chunks so that preemption points appear at chunk boundaries. It also ensures that only the highest-priority GPGPU task launches code onto the device at any given time, to avoid performance interference caused by concurrent launches. A prototype implementation of an RGEM-based CUDA runtime engine is provided to evaluate the real-world impact of RGEM. Our experiments demonstrate that the response times of high-priority GPGPU tasks can be protected under RGEM, whereas their response times increase in an unbounded fashion without RGEM support, as the data sizes of competing workload increase.
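RGEM's central idea, splitting a long memory-copy so preemption points appear at chunk boundaries, can be sketched in plain Python (the function names and the callback are illustrative, not the RGEM or CUDA API):

```python
# Sketch of the chunking idea: a long "memory copy" is split into chunks,
# and at each chunk boundary a scheduler hook may run a higher-priority
# task before the copy resumes.
def chunked_copy(src, dst, chunk_size, preempt_check=None):
    """Copy src into dst chunk by chunk; call preempt_check() at every
    chunk boundary so a higher-priority task can run in between."""
    boundaries = 0
    for off in range(0, len(src), chunk_size):
        dst[off:off + chunk_size] = src[off:off + chunk_size]
        boundaries += 1
        if preempt_check is not None:
            preempt_check()  # preemption point
    return boundaries
```

A real runtime would issue each chunk as a separate host-device transfer and let the scheduler dispatch the highest-priority task inside the boundary hook.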

125 citations


Journal ArticleDOI
TL;DR: An algorithm based on beam search is introduced for solving the ALWABP with the objective of minimizing the cycle time given a fixed number of workstations and workers.

108 citations


Patent
Navendu Jain1
28 Jun 2011
TL;DR: An elastic scaling cloud-hosted batch application system and method that performs automated elastic scaling of the number of compute instances used to process batch applications in a cloud computing environment is presented in this paper.
Abstract: An elastic scaling cloud-hosted batch application system and method that performs automated elastic scaling of the number of compute instances used to process batch applications in a cloud computing environment. The system and method use automated elastic scaling to minimize job completion time and monetary cost of resources. Embodiments of the system and method use a workload-driven approach to estimate a work volume to be performed. This is based on task arrivals and job execution times. Given the work volume estimate, an adaptive controller dynamically adapts the number of compute instances to minimize the cost and completion time. Embodiments of the system and method also mitigate startup delays by computing a work volume in the near future and gradually starting up additional compute instances before they are needed. Embodiments of the system and method also ensure fairness among batch applications and concurrently executing jobs.
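The workload-driven sizing step can be sketched as follows (the parameter names and the linear work-volume model are assumptions for illustration, not the patent's controller):

```python
# Illustrative elastic-scaling step: estimate the work volume from task
# arrivals and mean job execution time, then size the instance pool to
# finish within a target completion time.
import math

def required_instances(arrival_rate, mean_exec_time, horizon,
                       target_completion, max_instances=100):
    """Work volume = expected compute-seconds arriving over the horizon;
    divide by the completion-time budget to size the instance pool."""
    work_volume = arrival_rate * horizon * mean_exec_time
    n = math.ceil(work_volume / target_completion)
    return max(1, min(n, max_instances))
```

An adaptive controller would re-run this estimate periodically and, to mitigate startup delays, begin booting the extra instances before the work actually arrives.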

90 citations


Journal ArticleDOI
TL;DR: Preliminary results on a noise-free dataset of ten surgical procedures show that it is possible to recognize surgical high-level tasks with detection accuracies up to 90%.

87 citations


Proceedings ArticleDOI
07 May 2011
TL;DR: Deep Shot, a framework for capturing the work state needed for a task and resuming it on a different device, is created; it provides a concise API for developers to leverage its services and make their application states migratable.
Abstract: A user task often spans multiple heterogeneous devices, e.g., working on a PC in the office and continuing the work on a laptop or a mobile phone while commuting on a shuttle. However, there is a lack of support for users to easily migrate their tasks across devices. To address this problem, we created Deep Shot, a framework for capturing the user's work state that is needed for a task (e.g., the specific part of a webpage being viewed) and resuming it on a different device. In particular, Deep Shot supports two novel and intuitive interaction techniques, deep shooting and deep posting, for pulling and pushing work states, respectively, using a mobile phone camera. In addition, Deep Shot provides a concise API for developers to leverage its services and make their application states migratable. We demonstrated that Deep Shot can be used to support a range of everyday tasks migrating across devices. An evaluation consisting of a series of experiments showed that our framework and techniques are feasible.

84 citations


Proceedings ArticleDOI
16 Nov 2011
TL;DR: A method to model the memory access patterns of a task is described and applied to analyze the worst-case response time for a set of tasks; compared against an existing approach, it provides a tighter upper bound on the number of bus requests generated by a task.
Abstract: The current industry trend is towards using Commercially available Off-The-Shelf (COTS) based multicores for developing real-time embedded systems, as opposed to the usage of custom-made hardware. In typical implementations of such COTS-based multicores, multiple cores access the main memory via a shared bus. This often leads to contention on this shared channel, which results in an increase of the response time of the tasks. Analyzing this increased response time, considering the contention on the shared bus, is challenging on COTS-based systems, mainly because bus arbitration protocols are often undocumented and the exact instants at which the shared bus is accessed by tasks are not explicitly controlled by the operating system scheduler; they are instead a result of cache misses. This paper makes three contributions towards analyzing tasks scheduled on COTS-based multicores. Firstly, we describe a method to model the memory access patterns of a task. Secondly, we apply this model to analyze the worst-case response time for a set of tasks. Although the required parameters to obtain the request profile can be obtained by static analysis, we provide an alternative method to experimentally obtain them by using performance monitoring counters (PMCs). We also compare our work against an existing approach and show that our approach outperforms it by providing a tighter upper bound on the number of bus requests generated by a task.
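The style of analysis the abstract refers to can be illustrated with a textbook response-time recurrence in which each job's WCET is inflated by a bound on its bus requests (a simplified sketch, not the paper's exact model):

```python
# Simplified fixed-point response-time analysis: the classic recurrence
# R = C + sum_j ceil(R / T_j) * C_j, with each job's cost inflated by an
# upper bound on its memory (bus) requests times the per-request delay.
import math

def response_time(task, higher_prio, bus_delay_per_request):
    """task and each entry of higher_prio are dicts with 'C' (WCET),
    'T' (period) and 'M' (max bus requests per job)."""
    def inflate(t):  # WCET plus the stall time of its own bus requests
        return t["C"] + t["M"] * bus_delay_per_request

    r = inflate(task)
    while True:
        interference = sum(math.ceil(r / h["T"]) * inflate(h)
                           for h in higher_prio)
        r_new = inflate(task) + interference
        if r_new == r:
            return r  # fixed point: worst-case response time
        if r_new > 10 * task["T"]:
            return None  # analysis bound exceeded (treat as unschedulable)
        r = r_new
```

A tighter bound on `M`, which is what the paper's request profiles provide, directly tightens the computed response time.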

Journal ArticleDOI
TL;DR: A real-time software architecture description language, named Prelude, is introduced, which is built upon the synchronous languages and which provides a high level of abstraction for describing the functional and the real-time architecture of a multi-periodic control system.
Abstract: This article presents a complete scheme for the integration and the development of multi-periodic critical embedded systems. A system is formally specified as a modular and hierarchical assembly of several locally mono-periodic synchronous functions into a globally multi-periodic synchronous system. To support this, we introduce a real-time software architecture description language, named Prelude, which is built upon the synchronous languages and which provides a high level of abstraction for describing the functional and the real-time architecture of a multi-periodic control system. A program is translated into a set of real-time tasks that can be executed on a monoprocessor real-time platform with an on-line priority-based scheduler such as Deadline-Monotonic or Earliest-Deadline-First. The compilation is formally proved correct, meaning that the generated code respects the real-time semantics of the original program (respect of periods, deadlines, release dates and precedences) as well as its functional semantics (respect of variable consumption).

Patent
22 Nov 2011
TL;DR: In this paper, the authors present a distributed data analytics method that collects application-specific information in a processing node assigned to perform a task to identify data necessary to perform the task and prioritizes the request relative to other data requests associated with the job.
Abstract: Methods, systems, and computer executable instructions for performing distributed data analytics are provided. In one exemplary embodiment, a method of performing a distributed data analytics job includes collecting application-specific information in a processing node assigned to perform a task to identify data necessary to perform the task. The method also includes requesting a chunk of the necessary data from a storage server based on location information indicating one or more locations of the data chunk and prioritizing the request relative to other data requests associated with the job. The method also includes receiving the data chunk from the storage server in response to the request and storing the data chunk in a memory cache of the processing node which uses a same file system as the storage server.

Proceedings Article
07 Aug 2011
TL;DR: This work introduces a novel distributed algorithm for multi-agent task allocation problems where the sets of tasks and agents constantly change over time, and builds on an existing anytime algorithm, and gives it significant new capabilities: namely, an online pruning procedure that simplifies the problem, and a branch-and-bound technique that reduces the search space.
Abstract: We introduce a novel distributed algorithm for multi-agent task allocation problems where the sets of tasks and agents constantly change over time. We build on an existing anytime algorithm (fast-max-sum), and give it significant new capabilities: namely, an online pruning procedure that simplifies the problem, and a branch-and-bound technique that reduces the search space. This allows us to scale to problems with hundreds of tasks and agents. We empirically evaluate our algorithm against established benchmarks and find that, even in such large environments, a solution is found up to 31% faster, and with up to 23% more utility, than state-of-the-art approximation algorithms. In addition, our algorithm sends up to 30% fewer messages than current approaches when the set of agents or tasks changes.

Journal ArticleDOI
TL;DR: This paper defines multitasking based on the principles of task independence and performance concurrency and develops a set of metrics for computer-based multitasking, which range from a lean dichotomous variable to a richer measure based on switches.
Abstract: Multitasking is the result of time allocation decisions made by individuals faced with multiple tasks. Multitasking research is important in order to improve the design of systems and applications. Since people typically use computers to perform multiple tasks at the same time, insights into this type of behavior can help develop better systems and ideal types of computer environments for modern multitasking users. In this paper, we define multitasking based on the principles of task independence and performance concurrency and develop a set of metrics for computer-based multitasking. The theoretical foundation of this metric development effort stems from an application of key principles of Activity Theory and a systematic analysis of computer usage from the perspective of the user, the task and the technology. The proposed metrics, which range from a lean dichotomous variable to a richer measure based on switches, were validated with data from a sample of users who self-reported their activities during a computer usage session. This set of metrics can be used to establish a conceptual and methodological foundation for future multitasking studies.
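The lean and switch-based ends of the proposed metric range can be illustrated on a self-reported activity log (the log format here is invented for illustration):

```python
# Two metric styles on a chronological activity log: a lean dichotomous
# indicator (did the user work on more than one independent task at all?)
# and a richer measure based on task switches.
def multitasking_metrics(log):
    """log is a chronological list of task labels, one per activity record."""
    switches = sum(1 for a, b in zip(log, log[1:]) if a != b)
    return {
        "dichotomous": len(set(log)) > 1,  # more than one independent task
        "switches": switches,              # richer, switch-based metric
    }
```

For a session log like `["mail", "doc", "mail", "mail"]`, the user multitasked and switched tasks twice.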

Patent
25 Mar 2011
TL;DR: In this article, execution of a plurality of tasks by a processor system is monitored, tasks requiring adjustment of performance resources are identified by calculating at least one of a progress error or a progress limit error for each task, and the performance resources of the processor system allocated to each identified task are adjusted.
Abstract: Execution of a plurality of tasks by a processor system are monitored. Based on this monitoring, tasks requiring adjustment of performance resources are identified by calculating at least one of a progress error or a progress limit error for each task. Thereafter, performance resources of the processor system allocated to each identified task are adjusted. Such adjustment can comprise: adjusting a clock rate of at least one processor in the processor system executing the task, adjusting an amount of cache and/or buffers to be utilized by the task, and/or adjusting an amount of input/output (I/O) bandwidth to be utilized by the task. Related systems, apparatus, methods and articles are also described.
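The monitor-and-adjust loop in the abstract can be sketched as follows (the field names and the proportional adjustment policy are illustrative, not the patent's method):

```python
# Sketch: compute a progress error per task and nudge one allocated
# resource (here, a clock-rate multiplier) up or down accordingly.
def adjust_resources(tasks, step=0.1):
    """Each task dict has 'expected' and 'actual' progress plus a current
    'clock' allocation; tasks behind schedule get a faster clock."""
    for t in tasks:
        progress_error = t["expected"] - t["actual"]
        if progress_error > 0:      # behind schedule: raise the clock rate
            t["clock"] *= 1 + step
        elif progress_error < 0:    # ahead of schedule: reclaim resources
            t["clock"] *= 1 - step
    return tasks
```

The same pattern applies to the other adjustable resources the patent lists, such as cache/buffer allocation or I/O bandwidth.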

Patent
17 Oct 2011
TL;DR: Systems and methods for evaluating a worker in performing crowd-sourced tasks and providing in-task training are disclosed, in which jobs with known correct results are paired with manipulated, known-incorrect results to generate test tasks for assessing work quality.
Abstract: Systems and methods for evaluating a worker in performing crowd sourced tasks and providing in-task training are disclosed. In one aspect, embodiments of the present disclosure include a method, which may be implemented on a system, for selecting a job distributed through a job distribution platform for workers to work on, for use to generate a test task, the job being associated with a known correct result, associating a manipulated result, known to be an incorrect result for the job, with the job to generate the test task, and/or presenting the job with the manipulated result as the test task to a worker for evaluation of work quality of the worker. The job distribution platform crowd sources tasks online to workers to work on via their respective computing devices.
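The test-task construction can be sketched like this (the data shapes and the scoring rule are invented for illustration):

```python
# Sketch: build a test task by attaching a manipulated (known-incorrect)
# result to a job with a known correct result, then score a worker on
# whether they reject the manipulated results shown to them.
def make_test_task(job):
    """Pair a job that has a known correct result with a manipulated,
    known-incorrect result to show to the worker."""
    return {"job": job["job"], "shown": job["wrong"], "correct": job["right"]}

def score_worker(test_tasks, judgements):
    """judgements[i] is True if the worker rejected test task i's shown
    result; every rejection is correct, since every shown result is wrong."""
    return sum(1 for j in judgements if j) / len(test_tasks)
```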

Proceedings Article
01 Jan 2011
TL;DR: This work presents a new method for automating task and workflow design for high-level, complex tasks, which is recursive, recruiting workers from the crowd to help plan out how problems can be solved most effectively.
Abstract: On today's human computation systems, designing tasks and workflows is a difficult and labor-intensive process. Can workers from the crowd be used to help plan workflows? We explore this question with Turkomatic, a new interface to microwork platforms that uses crowd workers to help plan workflows for complex tasks. Turkomatic uses a general-purpose divide-and-conquer algorithm to solve arbitrary natural-language requests posed by end users. The interface includes a novel real-time visual workflow editor that enables requesters to observe and edit workflows while the tasks are being completed. Crowd verification of work and the division of labor among members of the crowd can be handled automatically by Turkomatic, which substantially simplifies the process of using human computation systems. These features enable a novel means of interaction with crowds of online workers to support successful execution of complex work.

Patent
11 Jul 2011
TL;DR: In this paper, a user's demands on execution of a number of tasks, each task including a demand trace, can be defined as Service Level Agreement (SLA) information, including one or more Class of Service (CoS) levels defined by a Base Resource Entitlement (BRE) criteria and a Reserved Resource Entitlement (RRE) criteria.
Abstract: Methods, apparatus, and computer readable media with executable instructions stored thereon for virtual machine placement are provided. A user's demands on execution of a number of tasks, each task including a demand trace, can be defined as Service Level Agreement (SLA) information, including one or more Class of Service (CoS) levels defined by a Base Resource Entitlement (BRE) criteria and a Reserved Resource Entitlement (RRE) criteria (222). A highest CoS level of the one or more CoS levels can be selected (224) and the tasks within the CoS level can be load-balanced across a pool of servers (226). At least a portion of the RRE criteria can be removed from the demand trace of the selected CoS level (228). The selecting, load-balancing, and removing steps can be repeated until there are no more CoS levels (230).
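The claimed select/load-balance/remove loop can be rendered as a sketch (the data shapes and the least-loaded placement rule are assumptions made here, not the patent's method):

```python
# Sketch: walk CoS levels from highest to lowest, load-balance each
# level's tasks across the server pool, and account for the reserved
# entitlement (RRE) consumed by each placement.
def place_by_cos(cos_levels, n_servers):
    """cos_levels: list ordered highest CoS first; each level is a list of
    (task_name, rre_demand) pairs. Returns placement and per-server load."""
    load = [0.0] * n_servers
    placement = {}
    for level in cos_levels:                    # highest CoS level first
        for task, rre in sorted(level, key=lambda t: -t[1]):
            s = min(range(n_servers), key=load.__getitem__)  # least loaded
            placement[task] = s
            load[s] += rre                      # RRE now consumed
    return placement, load
```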

Proceedings ArticleDOI
12 Feb 2011
TL;DR: OoOJava is a compiler-assisted approach that leverages developer annotations along with static analysis to provide an easy-to-use deterministic parallel programming model that executes tasks as soon as their data dependences are resolved.
Abstract: Developing parallel software using current tools can be challenging. Even experts find it difficult to reason about the use of locks and often accidentally introduce race conditions and deadlocks into parallel software. OoOJava is a compiler-assisted approach that leverages developer annotations along with static analysis to provide an easy-to-use deterministic parallel programming model. OoOJava extends Java with a task annotation that instructs the compiler to consider a code block for out-of-order execution. OoOJava executes tasks as soon as their data dependences are resolved and guarantees that the execution of an annotated program preserves the exact semantics of the original sequential program. We have implemented OoOJava and achieved an average speedup of 16.6x on our ten benchmarks.

Patent
10 Feb 2011
TL;DR: In this article, a method of dynamically allocating a task or a signal on a statically allocated and embedded software architecture of a vehicle includes identifying a faulty component, which may include a software component, a hardware component or a communication link between components.
Abstract: A method of dynamically allocating a task or a signal on a statically allocated and embedded software architecture of a vehicle includes identifying a faulty component. The faulty component may include a software component, a hardware component or a signal or communications link between components. Once the faulty component is identified, any tasks performed by or signals associated with the faulty component are identified, and the tasks performed by or the signals associated with the faulty component are re-allocated to an embedded standby component so that performance of the re-allocated task and/or signal for future system operations is performed by the standby component.
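The re-allocation step can be sketched as a small function (the allocation map and component names are illustrative, not the patent's architecture):

```python
# Sketch: given a statically allocated task map and a component diagnosed
# as faulty, re-allocate its tasks to the embedded standby component.
def reallocate(allocation, faulty, standby):
    """allocation maps task -> component; tasks on the faulty component
    are re-allocated to the standby for all future system operations."""
    moved = [t for t, c in allocation.items() if c == faulty]
    for t in moved:
        allocation[t] = standby
    return moved
```

The same map-and-move pattern applies whether the faulty element is a software component, a hardware component, or a communications link whose signals must be re-routed.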

Journal ArticleDOI
TL;DR: Rather than writing explicitly threaded code, developers can put multiple cores to work by handing the responsibility to other software: web or database servers, pipelines of communicating processes, GNU parallel, make -j, or map-reduce and filter-reduce techniques.
Abstract: CPUs are no longer getting faster. Instead, CPU manufacturers now package multiple cores in each CPU and ask us developers to put them to good use. Writing parallel code using multiple threads or even a higher-level API is a fiendishly difficult task. An alternative approach involves using a programming language that can easily exploit multiple cores, but it requires substantial effort. A third way involves faking your application's multicore-handling dexterity by handing this responsibility over to other software. At the highest level, it's easy to put multiple cores to work if your application serves Web requests (pass them to a Web application server) or dishes out SQL statements (via a commercial RDBMS). Another high-level way to utilize multiple cores is to let the operating system do it for you by splitting your processing among independent processes. You can do this by using pipelines of communicating processes, by splitting the work among many instances of the same process with GNU parallel, or by parallelizing independent work items with make -j. At the application level, you can do the same by employing the map-reduce and filter-reduce techniques.
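The map-reduce technique mentioned at the application level can be sketched in a few lines (threads are used here so the example stays self-contained; for CPU-bound Python work you would hand the work to independent processes, as the article suggests, since Python threads share the interpreter lock):

```python
# Minimal map-reduce: map independent work items in parallel workers,
# then reduce the partial results sequentially in the parent.
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def square(x):                       # the "map" step
    return x * x

def map_reduce(values, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        mapped = list(pool.map(square, values))     # map in parallel
    return reduce(lambda a, b: a + b, mapped, 0)    # reduce sequentially
```

For example, `map_reduce(range(10))` computes the sum of squares 0² + 1² + … + 9² = 285.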

Journal ArticleDOI
TL;DR: A method to calculate tight upper bounds on the maximum number of possible preemptions for each job of a task and, considering the worst-case placement of these preemption points, derive a much tighter bound on its WCET, showing significant improvements in the bounds derived.
Abstract: Data caches are an increasingly important architectural feature in most modern computer systems. They help bridge the gap between processor speeds and memory access times. One inherent difficulty of using data caches in a real-time system is the unpredictability of memory accesses, which makes it difficult to calculate worst-case execution times (WCETs) of real-time tasks. While cache analysis for single real-time tasks has been the focus of much research in the past, bounding the preemption delay in a multitask preemptive environment is a challenging problem, particularly for data caches. This article makes multiple contributions in the context of independent, periodic tasks with deadlines less than or equal to their periods executing on a single processor. 1) For every task, we derive data cache reference patterns for all scalar and nonscalar references. These patterns are used to derive an upper bound on the WCET of real-time tasks. 2) We show that, when considering cache preemption effects, the critical instant does not occur upon simultaneous release of all tasks. We provide results for task sets with phase differences to prove our claim. 3) We develop a method to calculate tight upper bounds on the maximum number of possible preemptions for each job of a task and, considering the worst-case placement of these preemption points, derive a much tighter bound on its WCET. We provide results using both static- and dynamic-priority schemes. Our results show significant improvements in the bounds derived. We achieve up to an order of magnitude improvement over two prior methods and up to half an order of magnitude over a third prior method for the number of preemptions, the WCET, and the response time of a task. Consideration of the best-case and worst-case execution times of higher-priority jobs enables these improvements.

Patent
15 Aug 2011
TL;DR: In this paper, techniques for integrating cloud applications and remote jobs are presented, where a request message may be transmitted from the first computing system to the second computing system, which may be controlled by the second entity.
Abstract: Disclosed herein are techniques for integrating cloud applications and remote jobs. In some implementations, a request to initiate a remote execution procedure may be received at a first computing system. The first computing system may be controlled by a first entity and may be configured to provide on-demand computing services to a plurality of entities including a second entity. The remote execution procedure may include an instruction to perform a remote computing task capable of being performed by a second computing system. A request message may be transmitted from the first computing system to the second computing system, which may be controlled by the second entity. The request message may include an instruction to perform the remote computing task. A response message indicating a result of performing the remote computing task may be received from the second computing system.

Book ChapterDOI
20 Sep 2011
TL;DR: This paper studies notions of locality that are inherent to the specification of a distributed task and independent of the computing environment, in a shared-memory wait-free system, and identifies a locality property called projection-closed that completely characterizes tasks that are wait-free checkable.
Abstract: This paper studies notions of locality that are inherent to the specification of a distributed task and independent of the computing environment, in a shared memory wait-free system. A locality property called projection-closed is identified, that completely characterizes tasks that are wait-free checkable. A task T = (I,O,Δ) is checkable if there exists a wait-free distributed algorithm that, given s ∈ I and t ∈ O, determines whether t ∈ Δ(s), i.e., if t is a valid output for s according to the specification of T. Moreover, determining whether a projection-closed task is wait-free solvable remains undecidable, and hence this is a rich class of tasks. A stronger notion of locality considers tasks where the outputs look identical to the inputs at every vertex (input value of a process). A task T = (I,O,Δ) is said to be locality-preserving if O is a covering complex of I. This topological property yields obstacles for wait-free solvability different in nature from the classical agreement impossibility results. On the other hand, locality-preserving tasks are projection-closed and therefore always wait-free checkable. A classification of locality-preserving tasks in terms of their relative computational power is provided. A correspondence between locality-preserving tasks and subgroups of the edge-path group of an input complex shows the existence of hierarchies of locality-preserving tasks, each one containing at the top the universal task (induced by the universal covering complex), and at the bottom the trivial identity task.

Journal ArticleDOI
TL;DR: This article presents a dataflow model that allows tasks to have loops with an unbounded number of iterations, making it possible to guarantee satisfaction of a throughput constraint over different modes of a stream-processing application, such as the synchronization and synchronized modes of a digital radio receiver.
Abstract: Increasingly, stream-processing applications include complex control structures to better adapt to changing conditions in their environment. This adaptivity often results in task execution rates that are dependent on the processed stream. Current approaches to compute buffer capacities that are sufficient to satisfy a throughput constraint have limited applicability in case of data-dependent task execution rates. In this article, we present a dataflow model that allows tasks to have loops with an unbounded number of iterations. For instances of this dataflow model, we present efficient checks on their validity. Furthermore, we present an efficient algorithm to compute buffer capacities that are sufficient to satisfy a throughput constraint. This makes it possible to guarantee satisfaction of a throughput constraint over different modes of a stream-processing application, such as the synchronization and synchronized modes of a digital radio receiver.
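The article's model handles data-dependent rates, which is well beyond a quick sketch. As a much simpler point of reference, for a single fixed-rate synchronous-dataflow edge with production rate p and consumption rate c, a classical result gives p + c - gcd(p, c) as the minimal buffer capacity that admits a deadlock-free schedule; this is the baseline that data-dependent rates complicate.

```python
from math import gcd

def min_buffer_capacity(p: int, c: int) -> int:
    """Classical minimal deadlock-free buffer capacity for one SDF edge
    with fixed production rate p and consumption rate c. This is a
    baseline for fixed rates only, not the article's algorithm."""
    return p + c - gcd(p, c)

print(min_buffer_capacity(3, 2))  # 4: producer writes 3, consumer reads 2
```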

Patent
08 Nov 2011
TL;DR: In this paper, a method and apparatus for virtualizing industrial vehicles to automate task execution in a physical environment is described, which includes determining input parameters for controlling vehicle hardware components and correlating the mappings with vehicle commands to produce abstraction information and executing at least one task comprising various ones of the vehicle commands using the abstraction information.
Abstract: A method and apparatus for virtualizing industrial vehicles to automate task execution in a physical environment is described. In one embodiment, the method includes determining input parameters for controlling vehicle hardware components, wherein the vehicle hardware components comprise actuators that are used to control hardware component operations, generating mappings between the input parameters and the hardware component operations, wherein each of the input parameters is applied to an actuator to perform a corresponding hardware component operation, correlating the mappings with vehicle commands to produce abstraction information and executing at least one task comprising various ones of the vehicle commands using the abstraction information.
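The layering in this abstract (input parameters → actuator operations → vehicle commands → tasks) can be sketched as two lookup tables. All parameter, actuator, and command names below are invented for illustration; the patent does not enumerate them.

```python
# Step 1: input parameters mapped to hardware component operations.
# Each entry applies a parameter value to a (hypothetical) actuator.
parameter_map = {
    "steer_angle": lambda deg: f"steering_actuator.set({deg})",
    "throttle":    lambda pct: f"drive_actuator.set({pct})",
    "fork_height": lambda mm:  f"lift_actuator.set({mm})",
}

# Step 2: vehicle commands correlated with those mappings — the
# "abstraction information" of the abstract.
command_abstraction = {
    "TURN_LEFT":  [("steer_angle", -30), ("throttle", 20)],
    "RAISE_FORK": [("fork_height", 500)],
}

def execute_task(task):
    """Execute a task (a sequence of vehicle commands) by expanding each
    command through the abstraction into actuator operations."""
    ops = []
    for command in task:
        for param, value in command_abstraction[command]:
            ops.append(parameter_map[param](value))
    return ops

for op in execute_task(["TURN_LEFT", "RAISE_FORK"]):
    print(op)
```

The point of the indirection is that tasks are written against stable vehicle commands, while the parameter map absorbs differences between vehicle hardware.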

Journal ArticleDOI
TL;DR: This book argues that putting a Conceptual Model at the center of the design and development process can pay rich dividends: designs that are simpler and mesh better with users' tasks, avoidance of unnecessary features, easier documentation, faster development, improved customer uptake, and decreased need for training and customer support.
Abstract: People make use of software applications in their activities, applying them as tools in carrying out tasks. That this use should be good for people--easy, effective, efficient, and enjoyable--is a principal goal of design. In this book, we present the notion of Conceptual Models, and argue that Conceptual Models are core to achieving good design. From years of helping companies create software applications, we have come to believe that building applications without Conceptual Models is just asking for designs that will be confusing and difficult to learn, remember, and use. We show how Conceptual Models are the central link between the elements involved in application use: people's tasks (task domains), the use of tools to perform the tasks, the conceptual structure of those tools, the presentation of the conceptual model (i.e., the user interface), the language used to describe it, its implementation, and the learning that people must do to use the application. We further show that putting a Conceptual Model at the center of the design and development process can pay rich dividends: designs that are simpler and mesh better with users' tasks, avoidance of unnecessary features, easier documentation, faster development, improved customer uptake, and decreased need for training and customer support. Table of Contents: Using Tools / Start with the Conceptual Model / Definition / Structure / Example / Essential Modeling / Optional Modeling / Process / Value / Epilogue

Patent
14 Dec 2011
TL;DR: In this article, an action alignment system for event planning and execution searches out web sites relating to event planning, and constructs a database of various tasks that might be desired for different events (tasks can also be manually added to the database).
Abstract: An action alignment system for event planning and execution searches out web sites relating to event planning and, based on web site content, constructs a database of various tasks that might be desired for different events (tasks can also be manually added to the database). The tasks have associated tags which allow a task search engine to match a user query representing a proposed event to potential tasks. This list of potential tasks is presented to the user who may then select the tasks as desired to customize the event plan. Vendors can provide pre-packaged deals for the tasks, and this information can be included with the task database, selected by the user, and added to the event plan. A scheduler and alert engine then inserts appropriate entries into the user's calendar, and sends timely alerts to the user which include links that simplify event management.
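The tag-based matching between a user query and the task database can be sketched as a scored set intersection. The tasks and tags below are invented for illustration; the patent builds its database by crawling event-planning web sites.

```python
# Hypothetical task database with associated tags, as described in the
# abstract; entries and tags are illustrative only.
task_db = [
    {"task": "book caterer",      "tags": {"wedding", "party", "food"}},
    {"task": "rent sound system", "tags": {"wedding", "concert", "music"}},
    {"task": "order flowers",     "tags": {"wedding", "funeral"}},
]

def match_tasks(query_tags):
    """Rank tasks by how many query tags they share, best match first.
    The sort is stable, so ties keep database order."""
    scored = [(len(t["tags"] & query_tags), t["task"]) for t in task_db]
    scored.sort(key=lambda st: -st[0])
    return [task for score, task in scored if score > 0]

print(match_tasks({"wedding", "music"}))
# -> ['rent sound system', 'book caterer', 'order flowers']
```

The ranked list corresponds to the "list of potential tasks" presented to the user, who then selects entries to customize the event plan.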