scispace - formally typeset

Showing papers on "Task (computing) published in 2013"


Journal ArticleDOI
TL;DR: A multi-level typology of visualization tasks is contributed to address the gap between low-level and high-level task classifications, distinguishing why and how a visualization task is performed, as well as what the task inputs and outputs are.
Abstract: The considerable previous work characterizing visualization usage has focused on low-level tasks or interactions and high-level tasks, leaving a gap between them that is not addressed. This gap leads to a lack of distinction between the ends and means of a task, limiting the potential for rigorous analysis. We contribute a multi-level typology of visualization tasks to address this gap, distinguishing why and how a visualization task is performed, as well as what the task inputs and outputs are. Our typology allows complex tasks to be expressed as sequences of interdependent simpler tasks, resulting in concise and flexible descriptions for tasks of varying complexity and scope. It provides abstract rather than domain-specific descriptions of tasks, so that useful comparisons can be made between visualization systems targeted at different application domains. This descriptive power supports a level of analysis required for the generation of new designs, by guiding the translation of domain-specific problems into abstract tasks, and for the qualitative evaluation of visualization usage. We demonstrate the benefits of our approach in a detailed case study, comparing task descriptions from our typology to those derived from related work. We also discuss the similarities and differences between our typology and over two dozen extant classification systems and theoretical frameworks from the literatures of visualization, human-computer interaction, information retrieval, communications, and cartography.

655 citations


Journal ArticleDOI
TL;DR: Across 2 experiments, heavy media multitaskers were better able to switch between tasks in the task-switching paradigm; while media multitasking was not associated with an increased ability to process 2 tasks in parallel, it was associated with an increased ability to shift between discrete tasks.
Abstract: The recent rise in media use has prompted researchers to investigate its influence on users' basic cognitive processes, such as attention and cognitive control. However, most of these investigations have failed to consider that the rise in media use has been accompanied by an even more dramatic rise in media multitasking (engaging with multiple forms of media simultaneously). Here we investigate how one's ability to switch between 2 tasks and to perform 2 tasks simultaneously is associated with media multitasking experience. Participants saw displays comprised of a number-letter pair and classified the number as odd or even and/or the letter as a consonant or vowel. In task-switching blocks, a cue indicated which classification to perform on each trial. In dual-task blocks, participants performed both classifications. Heavy and light media multitaskers showed comparable performance in the dual-task. Across 2 experiments, heavy media multitaskers were better able to switch between tasks in the task-switching paradigm. Thus, while media multitasking was not associated with increased ability to process 2 tasks in parallel, it was associated with an increased ability to shift between discrete tasks.
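The number-letter paradigm described above is simple enough to sketch. The following is a minimal illustration of a single trial's classification rule, with hypothetical cue names; it is not the authors' experimental code:

```python
def classify(stimulus, cue):
    """One trial of the number-letter paradigm: the cue selects which
    classification to perform on the (number, letter) pair."""
    number, letter = stimulus
    if cue == "number":
        return "odd" if number % 2 else "even"
    # otherwise the letter classification is cued
    return "vowel" if letter.lower() in "aeiou" else "consonant"

# In task-switching blocks the cue changes across trials;
# in dual-task blocks both classifications are performed on each trial.
print(classify((7, "A"), "number"))  # odd
print(classify((7, "A"), "letter"))  # vowel
```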

157 citations


Patent
10 Sep 2013
TL;DR: In this article, a fleet of multiple redundant mobile robots managed by a task coordinator is deployed to track solar panels in a solar farm in alignment with the sun, each robot has a control unit for engaging with a coupler connected to one or multiple solar panels and adjusting their orientation, as well as communicating with the task coordinator to receive tasks.
Abstract: The present invention relates to a highly-available and fault-tolerant solar tracking system and the process required to manage such a system. A fleet of multiple, redundant mobile robots managed by a task coordinator is deployed to track solar panels in a solar farm in alignment with the sun. Each robot has a control unit for engaging with a coupler connected to one or multiple solar panels and adjusting their orientation, as well as communicating with the task coordinator to receive tasks. The task coordinator senses various events such as robot failure/deterioration, as well as various environmental conditions, and sends tasks reconciled with event types. The system is highly-available and fault-tolerant as it remains operational as long as there is one operational robot. The task coordinator assigns tasks to the mobile robots so as to optimize battery life or other factors, such as, e.g., overall maintenance costs across the fleet.

138 citations


Proceedings ArticleDOI
03 Dec 2013
TL;DR: GPUSync, a framework for managing graphics processing units (GPUs) in multi-GPU multicore real-time systems, is described; it provides budget policing to the extent possible, given that GPU access is non-preemptive.
Abstract: This paper describes GPUSync, a framework for managing graphics processing units (GPUs) in multi-GPU multicore real-time systems. GPUSync was designed with flexibility, predictability, and parallelism in mind. Specifically, it can be applied under either static- or dynamic-priority CPU scheduling; can allocate CPUs/GPUs on a partitioned, clustered, or global basis; provides flexible mechanisms for allocating GPUs to tasks; enables task state to be migrated among different GPUs, with the potential of breaking such state into smaller "chunks"; provides migration cost predictors that determine when migrations can be effective; enables a single GPU's different engines to be accessed in parallel; properly supports GPU-related interrupt and worker threads according to the sporadic task model, even when GPU drivers are closed-source; and provides budget policing to the extent possible, given that GPU access is non-preemptive. No prior real-time GPU management framework provides a comparable range of features.

122 citations


Journal ArticleDOI
TL;DR: This paper employs both analytical and simulation modeling to address the complexity of cloud computing systems and to obtain important performance metrics such as task blocking probability and total waiting time incurred on user requests.
Abstract: Accurate performance evaluation of cloud computing resources is a necessary prerequisite for ensuring that quality of service parameters remain within agreed limits. In this paper, we employ both analytical and simulation modeling to address the complexity of cloud computing systems. The analytical model comprises distinct functional submodels, the results of which are combined in an iterative manner to obtain the solution with the required accuracy. Our models incorporate the important features of cloud centers, such as batch arrival of user requests, resource virtualization, and realistic servicing steps, to obtain important performance metrics such as task blocking probability and total waiting time incurred on user requests. Our results also reveal important insights for capacity planning to control the delay in servicing user requests.

111 citations


Proceedings Article
13 May 2013
TL;DR: It is argued that data-parallel jobs in compute clusters should be broken into tiny tasks that each complete in hundreds of milliseconds, and a 5.2× improvement in response times due to the use of smaller tasks is demonstrated.
Abstract: We argue for breaking data-parallel jobs in compute clusters into tiny tasks that each complete in hundreds of milliseconds. Tiny tasks avoid the need for complex skew mitigation techniques: by breaking a large job into millions of tiny tasks, work will be evenly spread over available resources by the scheduler. Furthermore, tiny tasks alleviate long wait times seen in today's clusters for interactive jobs: even large batch jobs can be split into small tasks that finish quickly. We demonstrate a 5.2× improvement in response times due to the use of smaller tasks. In current data-parallel computing frameworks, high task launch overheads and scalability limitations prevent users from running short tasks. Recent research has addressed many of these bottlenecks; we discuss remaining challenges and propose a task execution framework that can efficiently support tiny tasks.
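The skew-mitigation argument can be illustrated with a toy greedy scheduler: splitting the same total work into many tiny tasks removes the straggler that dominates the coarse-grained makespan. This is a sketch with made-up durations, not the paper's execution framework:

```python
import heapq

def makespan(durations, workers):
    """Greedy list scheduling: each task goes to the least-loaded worker."""
    loads = [0.0] * workers
    heapq.heapify(loads)
    for d in durations:
        heapq.heappush(loads, heapq.heappop(loads) + d)
    return max(loads)

# A skewed job: one 100 s straggler task plus nine 10 s tasks, on 10 workers.
coarse = [100] + [10] * 9
# The same total work split into 1 s tiny tasks.
tiny = [1] * sum(coarse)

print(makespan(coarse, 10))  # 100.0 -- bounded by the straggler
print(makespan(tiny, 10))    # 19.0  -- work spreads evenly
```

With coarse tasks the job cannot finish before its longest task; with tiny tasks the scheduler balances the same work almost perfectly.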

101 citations


Journal ArticleDOI
Kris Hauser1
TL;DR: Simulations suggest that the notion of freeform tasks that encode an infinite number of possible goals within a finite set of types enables the robot to reach intended targets faster and to track intended trajectories more closely than comparable techniques.
Abstract: The approach of inferring user's intended task and optimizing low-level robot motions has promise for making robot teleoperation interfaces more intuitive and responsive. But most existing methods assume a finite set of candidate tasks, which limits a robot's functionality. This paper proposes the notion of freeform tasks that encode an infinite number of possible goals (e.g., desired target positions) within a finite set of types (e.g., reach, orient, pick up). It also presents two technical contributions to help make freeform UIs possible. First, an intent predictor estimates the user's desired task, and accepts freeform tasks that include both discrete types and continuous parameters. Second, a cooperative motion planner continuously updates the robot's trajectories to achieve the inferred tasks by repeatedly solving optimal control problems. The planner is designed to respond interactively to changes in the indicated task, avoid collisions in cluttered environments, handle time-varying objective functions, and achieve high-quality motions using a hybrid of numerical and sampling-based techniques. The system is applied to the problem of controlling a 6D robot manipulator using 2D mouse input in the context of two tasks: static target reaching and dynamic trajectory tracking. Simulations suggest that it enables the robot to reach intended targets faster and to track intended trajectories more closely than comparable techniques.

96 citations


Journal ArticleDOI
TL;DR: This paper addresses the issue of how to optimally reconfigure and map an existing VN when the VN request changes, and models this problem as a mathematical optimization problem with the objective of minimizing the reconfiguration cost, using mixed integer linear programming.

86 citations


Journal ArticleDOI
TL;DR: An analytical performance model is proposed that addresses the complexity of cloud centers through distinct stochastic submodels, the results of which are integrated to obtain the overall solution.
Abstract: In this paper, we propose an analytical performance model that addresses the complexity of cloud centers through distinct stochastic submodels, the results of which are integrated to obtain the overall solution. Our model incorporates the important aspects of cloud centers such as pool management, compound requests (i.e., a set of requests submitted by one user simultaneously), resource virtualization and realistic servicing steps. In this manner, we obtain not only a detailed assessment of cloud center performance, but also clear insights into equilibrium arrangement and capacity planning that allows servicing delays, task rejection probability, and power consumption to be kept under control.

82 citations


Proceedings ArticleDOI
04 Nov 2013
TL;DR: This paper proposes a set of techniques to mine the memory accesses made by an operating system and its applications to locate useful places to deploy active monitoring, which the authors call tap points.
Abstract: The ability to introspect into the behavior of software at runtime is crucial for many security-related tasks, such as virtual machine-based intrusion detection and low-artifact malware analysis. Although some progress has been made in this task by automatically creating programs that can passively retrieve kernel-level information, two key challenges remain. First, it is currently difficult to extract useful information from user-level applications, such as web browsers. Second, discovering points within the OS and applications to hook for active monitoring is still an entirely manual process. In this paper we propose a set of techniques to mine the memory accesses made by an operating system and its applications to locate useful places to deploy active monitoring, which we call tap points. We demonstrate the efficacy of our techniques by finding tap points for useful introspection tasks such as finding SSL keys and monitoring web browser activity on five different operating systems (Windows 7, Linux, FreeBSD, Minix and Haiku) and two processor architectures (ARM and x86).

73 citations


Journal ArticleDOI
TL;DR: A model of this process of job design is developed by drawing on a multisite qualitative study of task allocation following the installation of a DNA sequencer, and it is shown that this overall process is far-reaching and incorporates many elements, not all of which are explicitly intended for job designs.
Abstract: How are tasks bundled into and across jobs within organizations? In this paper, I develop a model of this process of job design by drawing on a multisite qualitative study of task allocation following the installation of a DNA sequencer. The model that emerges is one of the assembly of tasks through multiple subassembly processes with multiple assemblers. Four activities produced requirements and requests for job designs and propositions about how to meet these: actively searching, passively receiving, doing work, and invoking preexisting ideas. The ideas that emerge from these processes are further transformed through reconciliation, interpretation, and performance. My observations show that this overall process is far-reaching and incorporates many elements, not all of which are explicitly intended for job designs. The arrangements that emerge from this process are not the product of a deliberate and controlled job design process within the boundaries of a single organization.

Journal ArticleDOI
TL;DR: This work proposes a two-step methodology for discovering tasks that users try to perform through search engines, and presents a set of query similarity functions based on unsupervised and supervised learning approaches that exploit these functions in order to detect user tasks.
Abstract: Although Web search engines still answer user queries with lists of ten blue links to webpages, people are increasingly issuing queries to accomplish their daily tasks (e.g., finding a recipe, booking a flight, reading online news, etc.). In this work, we propose a two-step methodology for discovering tasks that users try to perform through search engines. First, we identify user tasks from individual user sessions stored in search engine query logs. In our vision, a user task is a set of possibly noncontiguous queries (within a user search session), which refer to the same need. Second, we discover collective tasks by aggregating similar user tasks, possibly performed by distinct users. To discover user tasks, we propose query similarity functions based on unsupervised and supervised learning approaches. We present a set of query clustering methods that exploit these functions in order to detect user tasks. All the proposed solutions were evaluated on a manually-built ground truth, and two of them performed better than state-of-the-art approaches. To detect collective tasks, we propose four methods that cluster previously discovered user tasks, which in turn are represented by the bag-of-words extracted from their composing queries. These solutions were also evaluated on another manually-built ground truth.
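A rough flavor of the first step, discovering user tasks within a session, can be given with an unsupervised sketch: Jaccard similarity over query terms plus greedy clustering. The threshold and the example queries are illustrative; the paper's actual similarity functions are learned:

```python
def jaccard(q1, q2):
    """Similarity of two queries as word-set overlap."""
    a, b = set(q1.split()), set(q2.split())
    return len(a & b) / len(a | b)

def discover_tasks(session, threshold=0.2):
    """Greedily attach each query to the first task that already contains
    a sufficiently similar query; otherwise start a new task."""
    tasks = []
    for q in session:
        for task in tasks:
            if any(jaccard(q, seen) >= threshold for seen in task):
                task.append(q)
                break
        else:
            tasks.append([q])
    return tasks

session = ["cheap flights rome", "rome flight deals",
           "pasta carbonara recipe", "best carbonara recipe"]
print(discover_tasks(session))
# [['cheap flights rome', 'rome flight deals'],
#  ['pasta carbonara recipe', 'best carbonara recipe']]
```

Note that the two tasks need not be contiguous in the session, matching the paper's definition of a user task as possibly noncontiguous queries for the same need.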

Journal ArticleDOI
TL;DR: This paper develops an analytical model, validates it with an independent simulation model, and shows that the performance of a cloud center may be improved if incoming requests are partitioned on the basis of the coefficient of variation of service time and batch size.
Abstract: In this paper, we evaluate the performance of cloud centers with high degree of virtualization and Poisson batch task arrivals. To this end, we develop an analytical model and validate it with an independent simulation model. Task service times are modeled with a general probability distribution, but the model also accounts for the deterioration of performance due to the workload at each node. The model allows for calculation of important performance indicators such as mean response time, waiting time in the queue, queue length, blocking probability, probability of immediate service, and probability distribution of the number of tasks in the system. Furthermore, we show that the performance of a cloud center may be improved if incoming requests are partitioned on the basis of the coefficient of variation of service time and batch size.
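The kind of system being modeled, Poisson batch arrivals served by multiple nodes, can be mimicked with a toy discrete-event simulation. This sketch uses exponential service times and made-up parameters; it stands in for, and is far simpler than, the paper's analytical model with general service distributions:

```python
import random

def simulate(lam, batch_size, mu, servers, horizon, seed=1):
    """Toy simulation: Poisson batch arrivals (rate lam), exponential
    service (rate mu), each task dispatched to the server that frees up
    earliest. Returns the mean task response time (waiting + service)."""
    random.seed(seed)
    t, free_at, responses = 0.0, [0.0] * servers, []
    while t < horizon:
        t += random.expovariate(lam)          # next batch arrival
        for _ in range(batch_size):           # dispatch every task in the batch
            k = min(range(servers), key=lambda i: free_at[i])
            start = max(t, free_at[k])
            free_at[k] = start + random.expovariate(mu)
            responses.append(free_at[k] - t)
    return sum(responses) / len(responses)

# Utilization = lam * batch_size / (servers * mu) = 0.5 here.
print(simulate(lam=1.0, batch_size=4, mu=1.0, servers=8, horizon=2000))
```

Even at moderate utilization, batching pushes the mean response time above the mean service time, since tasks in the same batch queue behind their batch-mates.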

Patent
19 Nov 2013
TL;DR: In one embodiment, a wearable computing device includes one or more processors and a memory coupled to the processors, the memory including instructions executable by the processors; when executing the instructions, the processors determine whether an application is running on the wearable computing device.
Abstract: In one embodiment, an apparatus includes a wearable computing device that includes one or more processors and a memory. The memory is coupled to the processors and includes instructions executable by the processors. When executing the instructions, the processors determine whether an application is running on the wearable computing device. The application controls one or more functions of a remote computing device. The processors determine to delegate a task associated with the application; delegate the task to be processed by a local computing device; and receive from the local computing device results from processing the delegated task.

Patent
Liang You1, Nan Qiao1, Jun Jin1
19 Sep 2013
TL;DR: In this paper, the authors present an apparatus to assign processor component cores to perform task portions of a task among individual cores of one or more processor components of each processing device of a distributed processing system.
Abstract: Various embodiments are generally directed to techniques for assigning portions of a task among individual cores of one or more processor components of each processing device of a distributed processing system. An apparatus to assign processor component cores to perform task portions includes a processor component; an interface to couple the processor component to a network to receive data that indicates available cores of base and subsystem processor components of processing devices of a distributed processing system, the subsystem processor components made accessible on the network through the base processor components; and a core selection component for execution by the processor component to select cores from among the available cores to execute instances of task portion routines of a task based on a selected balance point between compute time and power consumption needed to execute the instances of the task portion routines. Other embodiments are described and claimed.

Proceedings ArticleDOI
01 Jan 2013
TL;DR: The mobile device is modeled as a semi-Markov decision process (SMDP) and the optimization problem to set the DVFS level and the transmission rate is effectively solved by linear programming combined with a one-dimensional heuristic search.
Abstract: The finite and rather small battery energy capacity in today's mobile devices has limited the functionality that can be integrated into these platforms or the performance and quality of applications that can be delivered to the users. In the last few years, there is a trend toward offloading certain computation-intensive and latency-tolerant local applications and service requests to a mobile cloud computing (MCC) system so as to save the precious battery life while providing the services requested by the users. Each mobile application can be thought of as a sequence of tasks that are executed locally or remotely. In this paper, the problem of optimal task dispatch, transmission, and execution onto the MCC system is considered. To achieve a good balance between the application execution time and power consumption, dynamic voltage and frequency scaling (DVFS) is applied to the local processor in the mobile device, while the transmitter can choose among multiple modulation schemes and bit rates. The rate capacity effect of a battery and power conversion losses in the mobile device are also accounted for so as to have a more realistic model of the remaining battery life. The mobile device is modeled as a semi-Markov decision process (SMDP) and the optimization problem to set the DVFS level and the transmission rate is effectively solved by linear programming combined with a one-dimensional heuristic search. Experimental results show that the proposed algorithm consistently outperforms some baseline algorithms.
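The underlying local-versus-offload trade-off can be sketched by enumerating a few DVFS levels and radio rates and picking the minimum-energy option that meets a deadline. The numbers and the exhaustive enumeration are illustrative; the paper solves a much richer SMDP that also models battery rate-capacity effects and conversion losses:

```python
def best_action(cycles, bits, deadline, dvfs, radio, cloud_time):
    """Enumerate local DVFS settings and radio rates; return the
    minimum-energy (joules, action) pair that meets the deadline."""
    options = []
    for freq, power in dvfs:                   # run locally at (Hz, W)
        t = cycles / freq
        if t <= deadline:
            options.append((power * t, ("local", freq)))
    for rate, power in radio:                  # offload over a (bit/s, W) link
        t = bits / rate + cloud_time
        if t <= deadline:
            options.append((power * (bits / rate), ("offload", rate)))
    return min(options)

dvfs = [(1e9, 0.9), (2e9, 2.0)]                # two CPU frequency/power levels
radio = [(1e6, 0.5), (5e6, 1.2)]               # two modulation/rate choices
print(best_action(cycles=2e9, bits=4e6, deadline=2.0,
                  dvfs=dvfs, radio=radio, cloud_time=0.5))
# the fast radio meets the deadline at the lowest energy, so offloading wins
```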

Journal ArticleDOI
TL;DR: A heuristic procedure providing a compromise between the objective function and the suggested stability measure is developed and evaluated on benchmark data sets.

Journal ArticleDOI
TL;DR: This paper presents a framework that automatically learns and adapts execution models for arbitrary algorithms on any (co-)processor, and uses the execution models to distribute a workload of database operators across the available (co-)processing devices.

Patent
08 May 2013
TL;DR: In this paper, a system for cloud computing application automatic deployment is presented, comprising a client-side for submitting job requests, a submittal module for generating and submitting job description information, and a clustering processing system.
Abstract: The invention belongs to the technical field of cloud computing applications, and particularly relates to a system and a method for automatic deployment of cloud computing applications. The system comprises a client-side, a submittal module, and a clustering processing system. The client-side is used for submitting job requests. The submittal module is used for generating job description information according to the job requests and submitting the job description information. The clustering processing system comprises a task node and a master control node. The task node is used for submitting task node information. The master control node is used for receiving the job description information and the task node information, adding the job description information to a corresponding job description information queue according to the job requests, and deploying a task to the task node according to the job requests and the task node information. Because the job description information is stored in different queues and deployed automatically according to the job requests, the time spent searching for job description information is saved, task dispatching is facilitated, and the dispatching performance of cloud computing is improved.

Journal ArticleDOI
TL;DR: An extension of an algorithm proposed in the recent literature is developed; after analyzing the convergence and communication requirements of the algorithm, a set of numerical simulations is provided to confirm the effectiveness of the proposed approach.

Proceedings ArticleDOI
18 Mar 2013
TL;DR: This work presents Matador, a framework to embed context-awareness in the presentation and execution of crowd-sensing tasks, and presents the design and prototype implementation of the platform, including an energy-efficient context sampling algorithm.
Abstract: The ubiquity of internet-connected, media- and sensor-equipped portable devices is enabling a new class of applications that exploit the power of crowds to perform sensing tasks in the real world. This paradigm is referred to as crowd-sensing, and lies at the intersection of crowd-sourcing and participatory sensing. It has a wide range of potential applications, such as the direct involvement of citizens in public decision making. In this work we present Matador, a framework to embed context-awareness in the presentation and execution of crowd-sensing tasks. This makes it possible to present the right tasks to the right users in the right circumstances, and to preserve normal device functioning. We present the design and prototype implementation of the platform, including an energy-efficient context sampling algorithm. We validate the proposed approach through a numerical study and a small pilot, and demonstrate the ability of the proposed system to efficiently deliver crowd-sensing tasks while minimizing the consumption of mobile device resources.

Patent
17 Sep 2013
TL;DR: An automated method for managing a work flow in a process plant includes determining the steps of performing a work item and generating associated displays for an operator or other personnel to perform the steps as discussed by the authors.
Abstract: An automated method for managing a work flow in a process plant includes determining the steps of performing a work item and generating associated displays for an operator or other personnel to perform the steps of the work item. A work item is created specifying a task to be performed in the process plant and determining from the specified task a set of procedures for execution of the work item. For each procedure, an associated display is generated and the associated displays are displayed on a mobile user interface device sequentially in an order in which the set of procedures are to be performed.

Proceedings ArticleDOI
01 Aug 2013
TL;DR: Extensive simulations confirm that the proposed schemes outperform the state-of-the-art Global EDF-VD scheduler in improving the service levels of low-criticality tasks.
Abstract: The Elastic Mixed-Criticality (E-MC) task model and an Early-Release EDF (ER-EDF) scheduling algorithm have been studied to address the service interruption problem for low-criticality tasks in uniprocessor systems. In this paper, focusing on multicore systems, we first investigate the schedulability of E-MC tasks under partitioned-EDF (P-EDF) by considering various task-to-core mapping heuristics. Then, with and without task migrations being considered, we study both global and local early-release schemes. Compared to the state-of-the-art Global EDF-VD scheduler, the superior performance of the proposed schemes in terms of improving the service levels of low-criticality tasks is confirmed through extensive simulations.

Proceedings ArticleDOI
01 Mar 2013
TL;DR: An algorithm is proposed that considers preemptable task execution and multiple SLA parameters such as memory, network bandwidth, and required CPU time; experimental results show that in situations where resource contention is fierce, the algorithm provides better utilization of resources.
Abstract: Today cloud computing is in demand as it offers dynamic, flexible resource allocation for reliable and guaranteed services, in a pay-as-you-use manner, to cloud service users. There must therefore be a provision that all resources are made available to requesting users in an efficient manner to satisfy their needs. This resource provisioning is done by considering the Service Level Agreements (SLA) and with the help of parallel processing. Recent work considers various strategies with a single SLA parameter. Considering multiple SLA parameters, together with resource allocation by a preemption mechanism for high-priority task execution, can improve resource utilization in the cloud. In this paper we propose an algorithm that considers preemptable task execution and multiple SLA parameters such as memory, network bandwidth, and required CPU time. The obtained experimental results show that in situations where resource contention is fierce, our algorithm provides better utilization of resources.
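The preemption idea can be sketched as follows: a new high-priority task is admitted by evicting lower-priority tasks until its CPU, memory, and bandwidth demands fit. The data layout and policy details are assumptions for illustration, not the paper's exact algorithm:

```python
def admit(running, new, capacity):
    """Admit `new` if its CPU/memory/bandwidth demands fit alongside the
    running tasks; otherwise preempt the lowest-priority running tasks
    until it fits. Returns (accepted task list, preempted tasks or None)."""
    def used(tasks):
        return {k: sum(t[k] for t in tasks) for k in ("cpu", "mem", "bw")}
    def fits(tasks):
        u = used(tasks)
        return all(u[k] <= capacity[k] for k in u)

    kept = sorted(running, key=lambda t: t["prio"], reverse=True)
    preempted = []
    while kept and not fits(kept + [new]):
        victim = kept[-1]                      # lowest priority at the end
        if victim["prio"] >= new["prio"]:
            return running, None               # cannot preempt equal/higher
        preempted.append(kept.pop())
    if fits(kept + [new]):
        return kept + [new], preempted
    return running, None

cap = {"cpu": 4, "mem": 8, "bw": 100}
running = [{"prio": 1, "cpu": 2, "mem": 4, "bw": 50},
           {"prio": 3, "cpu": 2, "mem": 4, "bw": 50}]
new = {"prio": 5, "cpu": 2, "mem": 4, "bw": 50}
print(admit(running, new, cap))
# the prio-1 task is preempted; the new prio-5 task is admitted
```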

Journal ArticleDOI
TL;DR: A History-based Auto-Tuning (HAT) MapReduce scheduler is proposed, which calculates the progress of tasks accurately, adapts to the continuously varying environment automatically, and can significantly improve the performance of MapReduce applications.
Abstract: In MapReduce model, a job is divided into a series of map tasks and reduce tasks. The execution time of the job is prolonged by some slow tasks seriously, especially in heterogeneous environments. To finish the slow tasks as soon as possible, current MapReduce schedulers launch a backup task on other nodes for each of the slow tasks. However, traditional MapReduce schedulers cannot detect slow tasks correctly since they cannot estimate the progress of tasks accurately (Hadoop home page http://hadoop.apache.org/ , 2011; Zaharia et al. in 8th USENIX symposium on operating systems design and implementation, ACM, New York, pp. 29---42, 2008). To solve this problem, this paper proposes a History-based Auto-Tuning (HAT) MapReduce scheduler, which calculates the progress of tasks accurately and adapts to the continuously varying environment automatically. HAT tunes the weight of each phase of a map task and a reduce task according to the value of them in history tasks and uses the accurate weights of the phases to calculate the progress of current tasks. Based on the accurate-calculated progress of tasks, HAT estimates the remaining time of tasks accurately and further launches backup tasks for the tasks that have the longest remaining time. Experimental results show that HAT can significantly improve the performance of MapReduce applications up to 37% compared with Hadoop and up to 16% compared with LATE scheduler.
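HAT's core idea, phase-weighted progress plus remaining-time extrapolation, can be sketched in a few lines. The phase weights here are fixed example values; in HAT they are tuned from the history of completed tasks:

```python
def progress(phase_fracs, weights):
    """Weighted task progress: each phase contributes its completed
    fraction scaled by that phase's weight (weights sum to 1)."""
    return sum(w * f for w, f in zip(weights, phase_fracs))

def backup_candidate(tasks, weights):
    """Pick the task with the longest estimated remaining time."""
    def remaining(t):
        p = progress(t["phases"], weights)
        return t["elapsed"] * (1 - p) / p     # extrapolate from elapsed time
    return max(tasks, key=remaining)["id"]

# Example weights for a three-phase (copy/sort/reduce-style) task.
weights = [0.6, 0.2, 0.2]
tasks = [
    {"id": "t1", "elapsed": 40, "phases": [1.0, 1.0, 0.5]},  # nearly done
    {"id": "t2", "elapsed": 40, "phases": [0.5, 0.0, 0.0]},  # straggler
]
print(backup_candidate(tasks, weights))  # t2
```

With uniform phase weights, t1 and t2 would look closer in progress than they are; accurate weights are what let the scheduler single out t2 as the task worth backing up.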

Proceedings ArticleDOI
29 Sep 2013
TL;DR: It is shown that switching between monitor configurations makes it possible to dynamically reassign timing slack between tasks, thereby achieving better resource utilization while still providing the same timing guarantees.
Abstract: We present a scheme for monitoring activation patterns of multiple tasks in mixed-criticality real-time systems. Unlike previous approaches, which enforce a single pre-defined activation pattern bound per task, we propose a multi-mode approach, where monitors can dynamically switch between different configurations, depending on the observed activation pattern at other tasks. The required configurations are based on real-time interfaces which we determine through sensitivity analysis. In an evaluation we show that switching between monitor configurations makes it possible to dynamically reassign timing slack between tasks, thereby achieving better resource utilization while still providing the same timing guarantees.

Patent
16 Aug 2013
TL;DR: In this paper, the instant invention includes a computer system that includes at least the following components: a first computer that divides a computer file into a plurality of segments, compressing segments, and sending the compressed segments to a second computer over a network.
Abstract: In one embodiment, the instant invention includes a computer system that includes at least the following components: a) a first computer that performs, in concurrent manner, at least the following tasks: dividing a computer file into a plurality of segments, compressing segments, and sending the compressed segments to a second computer over a network; b) the second computer that performs, in concurrent manner, at least the following tasks: decompressing the compressed segments and assembling the decompressed segment to reconstruct the computer file, where the compressing task performed by the first computer and the decompressing task performed by the second computer are synchronized and performed concurrently.

Proceedings ArticleDOI
01 Nov 2013
TL;DR: This paper describes an extension of TSP to model the problem of finding an optimal sequence of tasks with an extra degree of freedom, and proposes a new, efficient heuristic to solve such problems and shows its applicability.
Abstract: Production speed and energy efficiency are crucial factors for any application scenario in industrial robotics. The most important factor for this is planning an optimized sequence of atomic subtasks. In a welding scenario, an atomic subtask could be understood as a single welding seam/spot, while the sequence could be the ordering of these atomic tasks. Optimization of a task sequence is normally modeled as the Traveling Salesman Problem (TSP). This works well for simple scenarios with atomic tasks without execution freedom, like spot welding. However, many types of tasks allow a certain freedom of execution. A simple example is seam welding of a closed contour, where the starting/ending point is typically not specified by the application. This extra degree of freedom allows for much more efficient task sequencing. In this paper, we describe an extension of TSP to model the problem of finding an optimal sequence of tasks with such an extra degree of freedom. We propose a new, efficient heuristic to solve such problems and show its applicability. The obtained computational results are close to the optimum on small instances and outperform state-of-the-art approaches on benchmarks available in the literature.
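The effect of the extra degree of freedom can be illustrated with a simple nearest-neighbor heuristic in which the entry point on each closed contour is chosen freely. This is an illustrative heuristic with made-up coordinates, not the paper's algorithm:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def sequence_contours(contours, start=(0.0, 0.0)):
    """Nearest-neighbor sequencing of closed welding contours: the free
    choice of starting point on each contour is exploited by always
    entering at the candidate point closest to the current position."""
    pos, order, todo, total = start, [], list(range(len(contours))), 0.0
    while todo:
        # Best (contour, entry point) pair from the current position.
        i, entry = min(((i, p) for i in todo for p in contours[i]),
                       key=lambda ip: dist(pos, ip[1]))
        total += dist(pos, entry)
        order.append(i)
        todo.remove(i)
        pos = entry  # after welding the closed loop, the robot is back here
    return order, round(total, 2)

squares = [[(2, 0), (3, 0), (3, 1), (2, 1)],     # contour 0
           [(0, 5), (1, 5), (1, 6), (0, 6)]]     # contour 1
print(sequence_contours(squares))  # ([0, 1], 7.1)
```

Fixing the entry points in advance (plain TSP over one vertex per contour) can only lengthen the travel distance, which is the efficiency gain the paper exploits.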

Patent
18 Mar 2013
TL;DR: In this article, the authors present a system and methods for building an intelligent assistant that can take in human requests/commands in simple text form, especially in natural language format, and perform tasks for users.
Abstract: The systems and methods disclosed herein relate to building an intelligent assistant that can take in human requests/commands in simple text form, especially in natural language format, and perform tasks for users. Methods are disclosed in which knowledge of how to interpret users' requests and carry out tasks, including how to find and manipulate information on the Internet, can be learned from users by the designed assistant, and this knowledge can subsequently be used by the assistant to perform tasks for users. Using the disclosed methods, a user can teach the assistant by actually performing a task manually through the provided user interface, and/or by referring to knowledge that the assistant already has; the assistant may generate more generic knowledge based on what it learns, can apply that more generic knowledge to serve requests it has never seen and never directly learned, and can revise and improve its knowledge according to execution results and feedback. The methods and systems disclosed here are useful for building an intelligent assistant, especially a universal personal assistant or an intelligent search assistant.

Patent
10 Jul 2013
TL;DR: A thread pool processing method and system capable of fusing synchronous and asynchronous features is presented: internet task requests are processed asynchronously by the thread pool, single-user operations are processed synchronously so that outputs follow the service sequence, and an optimization mechanism provides priority processing for task requests of high importance.
Abstract: The invention provides a thread pool processing method and a thread pool processing system capable of fusing synchronous and asynchronous features. A large number of task requests on the internet are subjected to asynchronous processing by the thread pool, so that mutual influence is avoided and the waiting time is short. Synchronous processing is realized for single-user operation, the output requirement according to the service sequence is met, and an optimization mechanism capable of performing priority processing is provided for the task request with high importance.
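One way to realize the fused behavior in ordinary code is to serialize same-user requests while letting different users proceed in parallel. The sketch below uses one single-threaded executor per user; the patent's priority mechanism is omitted, and the class name is ours:

```python
from concurrent.futures import ThreadPoolExecutor

class KeyedPool:
    """Tasks from different users run asynchronously in parallel, while
    tasks from the same user are serialized in submission order -- a
    sketch of the fused synchronous/asynchronous idea."""
    def __init__(self):
        self.user_pools = {}

    def submit(self, user, fn, *args):
        # One single-threaded executor per user: same-user requests keep
        # their service order; distinct users do not block each other.
        pool = self.user_pools.setdefault(user, ThreadPoolExecutor(max_workers=1))
        return pool.submit(fn, *args)

kp = KeyedPool()
results = []
futs = [kp.submit("alice", results.append, "a1"),
        kp.submit("alice", results.append, "a2"),
        kp.submit("bob", results.append, "b1")]
for f in futs:
    f.result()
print(results)  # "a1" always precedes "a2"; "b1" may interleave anywhere
```

A production version would add a bounded shared worker pool and a priority queue in front of it; the per-user serialization is the part that captures the "synchronous output in service sequence" requirement.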