
Showing papers on "Task (computing) published in 2016"


Proceedings ArticleDOI
01 Aug 2016
TL;DR: It is consistently better to have POS supervision at the innermost rather than the outermost layer, and it is argued that “low-level” tasks are better kept at the lower layers, enabling the higher-level tasks to make use of the shared representation of the lower-level tasks.
Abstract: In all previous work on deep multi-task learning we are aware of, all task supervisions are on the same (outermost) layer. We present a multi-task learning architecture with deep bi-directional RNNs, where supervision for different tasks can happen at different layers. We present experiments in syntactic chunking and CCG supertagging, coupled with the additional task of POS-tagging. We show that it is consistently better to have POS supervision at the innermost rather than the outermost layer. We argue that this is because “low-level” tasks are better kept at the lower layers, enabling the higher-level tasks to make use of the shared representation of the lower-level tasks. Finally, we also show how this architecture can be used for domain adaptation.
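
For intuition, here is a minimal PyTorch sketch of the key idea, with sizes and names that are illustrative rather than taken from the paper: the low-level POS task is supervised on the inner bi-directional RNN layer, while chunking is supervised on the outer layer that consumes the shared lower-level representation.

```python
import torch
import torch.nn as nn

class HierarchicalTagger(nn.Module):
    """Sketch: POS supervised at the inner layer, chunking at the outer."""
    def __init__(self, vocab_size, emb_dim, hidden, n_pos, n_chunk):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.inner = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.outer = nn.LSTM(2 * hidden, hidden, bidirectional=True, batch_first=True)
        self.pos_head = nn.Linear(2 * hidden, n_pos)      # low-level task, inner layer
        self.chunk_head = nn.Linear(2 * hidden, n_chunk)  # high-level task, outer layer

    def forward(self, tokens):
        h1, _ = self.inner(self.embed(tokens))   # shared low-level representation
        h2, _ = self.outer(h1)                   # builds on the POS-informed layer
        return self.pos_head(h1), self.chunk_head(h2)

model = HierarchicalTagger(10000, 64, 128, n_pos=45, n_chunk=23)
pos_logits, chunk_logits = model(torch.randint(0, 10000, (2, 7)))
# Training would apply a cross-entropy loss to pos_logits (inner layer)
# and to chunk_logits (outer layer), summed into one joint objective.
```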

476 citations


Posted Content
TL;DR: This paper uses the multi-task learning framework to jointly learn across multiple related tasks and, based on recurrent neural networks, proposes three different mechanisms of sharing information to model text with task-specific and shared layers.
Abstract: Neural network based methods have obtained great progress on a variety of natural language processing tasks. However, in most previous works, the models are learned based on single-task supervised objectives, which often suffer from insufficient training data. In this paper, we use the multi-task learning framework to jointly learn across multiple related tasks. Based on recurrent neural networks, we propose three different mechanisms of sharing information to model text with task-specific and shared layers. The entire network is trained jointly on all these tasks. Experiments on four benchmark text classification tasks show that our proposed models can improve the performance of a task with the help of other related tasks.

372 citations


Proceedings ArticleDOI
19 Oct 2016
TL;DR: This work describes Chain: a new model for programming intermittent devices that fundamentally differs from state-of-the-art checkpointing approaches and does not incur the associated overhead, and is used to implement four applications: machine learning, encryption, compression, and sensing.
Abstract: Energy harvesting computers enable general-purpose computing using energy collected from their environment. Energy-autonomy of such devices has great potential, but their intermittent power supply poses a challenge. Intermittent program execution compromises progress and leaves state inconsistent. This work describes Chain: a new model for programming intermittent devices. A Chain program is a set of programmer-defined tasks that compute and exchange data through channels. Chain guarantees forward progress at task granularity. A task is restartable and never sees inconsistent state, because its input and output channels are separated. Our system supports language features for expressing advanced data exchange patterns and for encapsulating reusable functionality. Chain fundamentally differs from state-of-the-art checkpointing approaches and does not incur the associated overhead. We implement Chain as C language extensions and a runtime library. We used Chain to implement four applications: machine learning, encryption, compression, and sensing. In experiments, Chain ensured consistency where prior approaches failed and improved throughput by 2-7x over the leading state-of-the-art system.
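
As a rough illustration of the channel idea, here is a hypothetical Python simulation (not Chain's actual C API): each channel keeps the committed value that consumers read separate from the pending value that a producer writes, so a task re-executed after a power failure never observes partially written state.

```python
class PowerFailure(Exception):
    """Simulated intermittent power loss."""

class Channel:
    def __init__(self, initial=None):
        self.committed = initial   # value visible to consumer tasks
        self.pending = None        # value staged by the producer task

    def write(self, value):
        self.pending = value       # never touches the committed copy

    def commit(self):
        # Done atomically at task completion in real Chain; simulated here.
        self.committed, self.pending = self.pending, None

def run_task(task, in_ch, out_ch):
    """Restart the task until it completes; restarting is always safe."""
    while True:
        try:
            out_ch.write(task(in_ch.committed))  # reads only committed input
            out_ch.commit()                      # single commit at task exit
            return
        except PowerFailure:
            continue                             # committed state untouched

raw, cooked = Channel(initial=[3, 1, 2]), Channel()
run_task(sorted, raw, cooked)
print(cooked.committed)                          # [1, 2, 3]
```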

209 citations


Proceedings Article
04 Nov 2016
TL;DR: The Recurrent Entity Network (EntNet) uses a dynamic long-term memory which allows it to maintain and update a representation of the state of the world as it receives new data.
Abstract: We introduce a new model, the Recurrent Entity Network (EntNet). It is equipped with a dynamic long-term memory which allows it to maintain and update a representation of the state of the world as it receives new data. For language understanding tasks, it can reason on-the-fly as it reads text, not just when it is required to answer a question or respond as is the case for a Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However, unlike those models it has a simple parallel architecture in which several memory locations can be updated simultaneously. The EntNet sets a new state-of-the-art on the bAbI tasks, and is the first method to solve all the tasks in the 10k training examples setting. We also demonstrate that it can solve a reasoning task which requires a large number of supporting facts, which other methods are not able to solve, and can generalize past its training horizon. It can also be practically used on large scale datasets such as Children's Book Test, where it obtains competitive performance, reading the story in a single pass.
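
A numpy sketch of the parallel gated write the abstract describes, following the paper's update equations (a gate from content and key match, a candidate from trained matrices U, V, W, and normalization acting as a forgetting mechanism); the dimensions below are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def entnet_step(s, H, W_keys, U, V, W):
    """One EntNet write step. s: (d,) encoded input;
    H: (m, d) memory contents; W_keys: (m, d) slot keys."""
    g = sigmoid(H @ s + W_keys @ s)                       # (m,) gate: content + key match
    H_tilde = np.tanh(H @ U.T + W_keys @ V.T + s @ W.T)   # candidate update per slot
    H = H + g[:, None] * H_tilde                          # gated write, all slots in parallel
    return H / np.linalg.norm(H, axis=1, keepdims=True)   # normalization = forgetting

d, m = 8, 4
rng = np.random.default_rng(0)
H = entnet_step(rng.normal(size=d), rng.normal(size=(m, d)),
                rng.normal(size=(m, d)), *(rng.normal(size=(d, d)) for _ in range(3)))
```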

203 citations


Book ChapterDOI
Donggeun Yoo, Namil Kim, Sunggyun Park, Anthony S. Paek, In So Kweon
08 Oct 2016
TL;DR: The model transfers an input domain to a target domain at the semantic level and generates the target image at the pixel level, employing the real/fake-discriminator as in Generative Adversarial Nets to generate realistic target images.
Abstract: We present an image-conditional image generation model. The model transfers an input domain to a target domain at the semantic level, and generates the target image at the pixel level. To generate realistic target images, we employ the real/fake-discriminator as in Generative Adversarial Nets [6], but also introduce a novel domain-discriminator to make the generated image relevant to the input image. We verify our model through the challenging task of generating a piece of clothing from an input image of a dressed person. We present a high-quality clothing dataset containing the two domains, and succeed in demonstrating decent results.
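
To make the two-discriminator objective concrete, here is a hedged PyTorch sketch. The module shapes and names are invented, and the real model uses convolutional encoder-decoders rather than these stand-in linear layers; the point is only that the generator is trained against both a real/fake discriminator and a domain discriminator scoring whether the (input, output) pair is relevant.

```python
import torch
import torch.nn as nn

# Stand-in modules (hypothetical shapes): D_rf scores realism of an image,
# D_dom scores whether an (input, output) pair belongs together.
G = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 3 * 64 * 64), nn.Tanh())
D_rf = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))
D_dom = nn.Sequential(nn.Flatten(), nn.Linear(2 * 3 * 64 * 64, 1))
bce = nn.BCEWithLogitsLoss()

person = torch.rand(4, 3, 64, 64)                  # input domain image
fake = G(person).view(4, 3, 64, 64)                # generated clothing image
ones = torch.ones(4, 1)
# Generator wants the fake judged real AND judged relevant to its input:
g_loss = bce(D_rf(fake), ones) + bce(D_dom(torch.cat([person, fake], dim=1)), ones)
g_loss.backward()
```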

189 citations


Patent
16 Aug 2016
TL;DR: Systems and processes are described for operating a digital assistant in a media environment, where a user can interact with the digital assistant of a media device while content is displayed by the media device.
Abstract: Systems and processes are disclosed for operating a digital assistant in a media environment. In an exemplary embodiment, a user can interact with a digital assistant of a media device while content is displayed by the media device. In one approach, a plurality of exemplary natural language requests can be displayed in response to detecting a user input of a first input type. The plurality of exemplary natural language requests can be contextually-related to the displayed content. In another approach, a user request can be received in response to detecting a user input of a second input type. A task that at least partially satisfies the user request can be performed. The performed task can depend on the nature of the user request and the content being displayed by the media device. In particular, the user request can be satisfied while reducing disruption to user consumption of media content.

139 citations


Posted Content
TL;DR: A joint many-task model is introduced, together with a strategy for successively growing its depth to solve increasingly complex tasks; a simple regularization term allows optimizing all model weights to improve one task’s loss without catastrophic interference with the other tasks.
Abstract: Transfer and multi-task learning have traditionally focused on either a single source-target pair or very few, similar tasks. Ideally, the linguistic levels of morphology, syntax and semantics would benefit each other by being trained in a single model. We introduce a joint many-task model together with a strategy for successively growing its depth to solve increasingly complex tasks. Higher layers include shortcut connections to lower-level task predictions to reflect linguistic hierarchies. We use a simple regularization term to allow for optimizing all model weights to improve one task's loss without exhibiting catastrophic interference of the other tasks. Our single end-to-end model obtains state-of-the-art or competitive results on five different tasks from tagging, parsing, relatedness, and entailment tasks.
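
A minimal PyTorch sketch of the successive-regularization idea, assuming the strategy of penalizing squared distance to a parameter snapshot taken before the current epoch; the coefficient name `delta` is illustrative.

```python
import torch

def successive_reg(params, prev_params, delta=1e-2):
    """Penalize drift from the previous epoch's weights, discouraging
    catastrophic interference when optimizing a new task's loss."""
    return delta * sum(((p - q.detach()) ** 2).sum()
                       for p, q in zip(params, prev_params))

w = torch.randn(5, requires_grad=True)
w_prev = w.detach().clone()        # snapshot taken before the current epoch
loss = (w ** 2).sum() + successive_reg([w], [w_prev])  # task loss + regularizer
loss.backward()
```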

117 citations


Journal Article
TL;DR: Mapping cognitive technologies by how autonomously they work and the tasks they perform shows the current state of smart machines and anticipates how future technologies might unfold.
Abstract: A simple framework plots cognitive technologies along two dimensions. (See "What Today's Cognitive Technologies Can and Can't Do," p. 23.) First, it recognizes that these tools differ according to how autonomously they can apply their intelligence. On the low end, they simply respond to human queries and instructions; at the (still theoretical) high end, they formulate their own objectives. Second, it reflects the type of tasks smart machines are being used to perform, moving from conventional numerical analysis to performance of digital and physical tasks in the real world. The breadth of inputs and data types in real-world tasks makes them more complex for machines to accomplish. Depending on the type of task a manager is targeting for redesigned performance, this framework reveals the various extents to which it might be performed autonomously and by what kinds of machines. The most capable machine learning systems have the ability to learn: their decisions get better with more data, and they remember previously ingested information. Mapping cognitive technologies by how autonomously they work and the tasks they perform shows the current state of smart machines and anticipates how future technologies might unfold.

93 citations


Journal ArticleDOI
TL;DR: The proposed method is proven to ensure asymptotic convergence of the equality task errors and the satisfaction of all high-priority set-based tasks.
Abstract: Inverse kinematics algorithms are commonly used in robotic systems to transform tasks to joint references, and several methods exist to ensure the achievement of several tasks simultaneously. The multiple task-priority inverse kinematics framework allows tasks to be considered in a prioritized order by projecting task velocities through the nullspaces of higher-priority tasks. This paper extends this framework to handle set-based tasks, i.e. tasks with a range of valid values, in addition to equality tasks, which have a specific desired value. Examples of set-based tasks are joint limit and obstacle avoidance. The proposed method is proven to ensure asymptotic convergence of the equality task errors and the satisfaction of all high-priority set-based tasks. The practical implementation of the proposed algorithm is discussed, and experimental results are presented where a number of both set-based and equality tasks have been implemented on a 6-degree-of-freedom UR5, an industrial robotic arm from Universal Robots. The experiments validate the theoretical results and confirm the effectiveness of the proposed approach.
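
For context, a numpy sketch of the classic two-task priority resolution this framework builds on: the secondary task velocity is projected through the nullspace of the primary task's Jacobian so it cannot disturb the higher-priority task. The set-based extension in the paper activates and deactivates such tasks as their bounds are approached; that logic is not reproduced here.

```python
import numpy as np

def prioritized_joint_velocity(J1, v1, J2, v2):
    """Two-task priority resolution: task 2 acts only in task 1's nullspace."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1            # nullspace projector of task 1
    q_dot = J1_pinv @ v1 + N1 @ np.linalg.pinv(J2 @ N1) @ (v2 - J2 @ J1_pinv @ v1)
    return q_dot

rng = np.random.default_rng(1)
q_dot = prioritized_joint_velocity(rng.normal(size=(3, 6)), np.ones(3),   # primary task
                                   rng.normal(size=(2, 6)), np.ones(2))   # secondary task
```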

87 citations


Proceedings Article
05 Dec 2016
TL;DR: In this paper, the authors propose a new architecture, which is called multinet, in which not only deep image features are shared between tasks, but tasks can interact in a recurrent manner by encoding the results of their analysis in a common shared representation of the data.
Abstract: Modern discriminative predictors have been shown to match natural intelligences in specific perceptual tasks in image classification, object and part detection, boundary extraction, etc. However, a major advantage that natural intelligences still have is that they work well for all perceptual problems together, solving them efficiently and coherently in an integrated manner. In order to capture some of these advantages in machine perception, we ask two questions: whether deep neural networks can learn universal image representations, useful not only for a single task but for all of them, and how the solutions to the different tasks can be integrated in this framework. We answer by proposing a new architecture, which we call multinet, in which not only deep image features are shared between tasks, but where tasks can interact in a recurrent manner by encoding the results of their analysis in a common shared representation of the data. In this manner, we show that the performance of individual tasks in standard benchmarks can be improved first by sharing features between them and then, more significantly, by integrating their solutions in the common representation.

Journal ArticleDOI
TL;DR: This work introduces AutoMan, the first fully automatic crowdprogramming system that integrates human-based computations into a standard programming language as ordinary function calls that can be intermixed freely with traditional functions.
Abstract: Humans can perform many tasks with ease that remain difficult or impossible for computers. Crowdsourcing platforms like Amazon's Mechanical Turk make it possible to harness human-based computational power at an unprecedented scale. However, their utility as a general-purpose computational platform remains limited. The lack of complete automation makes it difficult to orchestrate complex or interrelated tasks. Scheduling more human workers to reduce latency costs real money, and jobs must be monitored and rescheduled when workers fail to complete their tasks. Furthermore, it is often difficult to predict the length of time and payment that should be budgeted for a given task. Finally, the results of human-based computations are not necessarily reliable, both because human skills and accuracy vary widely, and because workers have a financial incentive to minimize their effort. This paper introduces AutoMan, the first fully automatic crowdprogramming system. AutoMan integrates human-based computations into a standard programming language as ordinary function calls, which can be intermixed freely with traditional functions. This abstraction lets AutoMan programmers focus on their programming logic. An AutoMan program specifies a confidence level for the overall computation and a budget. The AutoMan runtime system then transparently manages all details necessary for scheduling, pricing, and quality control. AutoMan automatically schedules human tasks for each computation until it achieves the desired confidence level; monitors, reprices, and restarts human tasks as necessary; and maximizes parallelism across human workers while staying under budget.

Journal ArticleDOI
TL;DR: A multi-round version of the well-known principal-agent model is studied, whereby in each round a worker makes a strategic choice of an effort level that is not directly observable by the requester; the formulation significantly generalizes the budget-free online task pricing problems studied in prior work.
Abstract: Crowdsourcing markets have emerged as a popular platform for matching available workers with tasks to complete. The payment for a particular task is typically set by the task's requester, and may be adjusted based on the quality of the completed work, for example, through the use of "bonus" payments. In this paper, we study the requester's problem of dynamically adjusting quality-contingent payments for tasks. We consider a multi-round version of the well-known principal-agent model, whereby in each round a worker makes a strategic choice of the effort level which is not directly observable by the requester. In particular, our formulation significantly generalizes the budget-free online task pricing problems studied in prior work. We treat this problem as a multi-armed bandit problem, with each "arm" representing a potential contract. To cope with the large (and in fact, infinite) number of arms, we propose a new algorithm, AgnosticZooming, which discretizes the contract space into a finite number of regions, effectively treating each region as a single arm. This discretization is adaptively refined, so that more promising regions of the contract space are eventually discretized more finely. We analyze this algorithm, showing that it achieves regret sublinear in the time horizon and substantially improves over non-adaptive discretization (which is the only competing approach in the literature). Our results advance the state of the art on several different topics: the theory of crowdsourcing markets, principal-agent problems, multi-armed bandits, and dynamic pricing.
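
To make the setting concrete, here is a toy Python sketch of the non-adaptive baseline the paper improves on: discretize the contract (bonus) space into fixed arms and run UCB1 over them. AgnosticZooming instead refines promising regions adaptively; the worker model below is invented purely for illustration.

```python
import math
import random

def worker_payoff(bonus):
    """Hypothetical environment: higher bonus raises the chance of good work;
    requester's payoff is realized quality (0/1) minus the bonus paid."""
    effort = min(1.0, bonus / 0.8)
    return (1.0 if random.random() < effort else 0.0) - bonus

arms = [b / 10 for b in range(1, 10)]          # fixed contracts: bonus 0.1 .. 0.9
counts, sums = [0] * len(arms), [0.0] * len(arms)
for t in range(1, 5001):
    ucb = [float("inf") if counts[i] == 0 else
           sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
           for i in range(len(arms))]
    i = max(range(len(arms)), key=ucb.__getitem__)   # optimistic arm choice
    r = worker_payoff(arms[i])
    counts[i] += 1
    sums[i] += r
best = max(range(len(arms)), key=lambda i: sums[i] / counts[i])
print("empirically best contract:", arms[best])
```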

Posted Content
TL;DR: The effectiveness of the transfer method for enabling zero-shot generalization with a variety of robots and tasks in simulation for both visual and non-visual tasks is demonstrated.
Abstract: Reinforcement learning (RL) can automate a wide variety of robotic skills, but learning each new skill requires considerable real-world data collection and manual representation engineering to design policy classes or features. Using deep reinforcement learning to train general purpose neural network policies alleviates some of the burden of manual representation engineering by using expressive policy classes, but exacerbates the challenge of data collection, since such methods tend to be less efficient than RL with low-dimensional, hand-designed representations. Transfer learning can mitigate this problem by enabling us to transfer information from one skill to another and even from one robot to another. We show that neural network policies can be decomposed into "task-specific" and "robot-specific" modules, where the task-specific modules are shared across robots, and the robot-specific modules are shared across all tasks on that robot. This allows for sharing task information, such as perception, between robots and sharing robot information, such as dynamics and kinematics, between tasks. We exploit this decomposition to train mix-and-match modules that can solve new robot-task combinations that were not seen during training. Using a novel neural network architecture, we demonstrate the effectiveness of our transfer method for enabling zero-shot generalization with a variety of robots and tasks in simulation for both visual and non-visual tasks.
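
A toy PyTorch sketch of the module decomposition (names and sizes are hypothetical): task modules output a shared interface vector, robot modules consume it, and an unseen robot-task pair is handled zero-shot by recomposing already-trained modules.

```python
import torch
import torch.nn as nn

# Task modules map observations to a shared interface vector;
# robot modules map that vector to this robot's action space.
task_modules = {t: nn.Sequential(nn.Linear(16, 32), nn.Tanh()) for t in ("reach", "push")}
robot_modules = {r: nn.Sequential(nn.Linear(32, 7), nn.Tanh()) for r in ("arm3", "arm4")}

def policy(task, robot, obs):
    return robot_modules[robot](task_modules[task](obs))

# Train on (reach, arm3), (push, arm3), (reach, arm4), then evaluate the
# held-out combination zero-shot by mixing and matching the trained modules:
action = policy("push", "arm4", torch.randn(1, 16))
```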

Journal ArticleDOI
TL;DR: This paper develops an algorithm based on a multi-agent Markov decision process representation of the task allocation problem, shows that it outperforms standard baseline solutions, and integrates it into a planning agent that responds to requests for tasks from participants in a mixed-reality location-based game simulating disaster response settings in the real world.
Abstract: In the aftermath of major disasters, first responders are typically overwhelmed with large numbers of spatially distributed search and rescue tasks, each with its own requirements. Moreover, responders have to operate in highly uncertain and dynamic environments where new tasks may appear and hazards may be spreading across the disaster space. Hence, rescue missions may need to be re-planned as new information comes in, tasks are completed, or new hazards are discovered. Finding an optimal allocation of resources to complete all the tasks is a major computational challenge. In this paper, we use decision-theoretic techniques to solve the task allocation problem posed by emergency response planning and then deploy our solution as part of an agent-based planning tool in real-world field trials. By so doing, we are able to study the interactional issues that arise when humans are guided by an agent. Specifically, we develop an algorithm, based on a multi-agent Markov decision process representation of the task allocation problem, and show that it outperforms standard baseline solutions. We then integrate the algorithm into a planning agent that responds to requests for tasks from participants in a mixed-reality location-based game, called AtomicOrchid, that simulates disaster response settings in the real world. We then run a number of trials of our planning agent and compare it against a purely human-driven system. Our analysis of these trials shows that human commanders adapt to the planning agent by taking on a more supervisory role, and that giving humans the flexibility of requesting plans from the agent allows them to perform more tasks more efficiently than using purely human interactions to allocate tasks. We also discuss how such flexibility could lead to poor performance if left unchecked.

Patent
Stefan Nusser, Ethan Rublee, Troy Straszheim, Kevin William Watts, John Zevenbergen
05 Oct 2016
TL;DR: In this paper, a priority queue of requests for remote assistance associated with the identified tasks may be determined based on expected times at which the robotic manipulator will perform the identified tasks, and at least one remote assistor device may then be requested, according to the priority queue, to provide remote assistance with those tasks.
Abstract: Methods and systems for distributing remote assistance to facilitate robotic object manipulation are provided herein. Regions of a model of objects in an environment of a robotic manipulator may be determined, where each region corresponds to a different subset of objects with which the robotic manipulator is configured to perform a respective task. Certain tasks may be identified, and a priority queue of requests for remote assistance associated with the identified tasks may be determined based on expected times at which the robotic manipulator will perform the identified tasks. At least one remote assistor device may then be requested, according to the priority queue, to provide remote assistance with the identified tasks. The robotic manipulator may then be caused to perform the identified tasks based on responses to the requesting, received from the at least one remote assistor device, that indicate how to perform the identified tasks.
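
A small Python sketch of the queueing idea, assuming requests are ordered by the expected time at which the robot will perform each task; the task names and timestamps below are invented for illustration.

```python
import heapq
import time

# Priority queue of remote-assistance requests, keyed by the expected time
# at which the robotic manipulator will perform each identified task.
requests = []
now = time.time()
for task_id, expected_ts in [("bin_A_grasp", now + 40),
                             ("bin_C_grasp", now + 5),
                             ("bin_B_grasp", now + 90)]:
    heapq.heappush(requests, (expected_ts, task_id))

while requests:
    expected_ts, task_id = heapq.heappop(requests)   # most imminent task first
    print(f"request remote assistance for {task_id} (due in {expected_ts - now:.0f}s)")
```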

Posted Content
TL;DR: Evaluation on person attributes classification tasks involving facial and clothing attributes suggests that the models produced by the proposed method are fast, compact and can closely match or exceed the state-of-the-art accuracy from strong baselines by much more expensive models.
Abstract: Multi-task learning aims to improve generalization performance of multiple prediction tasks by appropriately sharing relevant information across them. In the context of deep neural networks, this idea is often realized by hand-designed network architectures with layers that are shared across tasks and branches that encode task-specific features. However, the space of possible multi-task deep architectures is combinatorially large and often the final architecture is arrived at by manual exploration of this space subject to designer's bias, which can be both error-prone and tedious. In this work, we propose a principled approach for designing compact multi-task deep learning architectures. Our approach starts with a thin network and dynamically widens it in a greedy manner during training using a novel criterion that promotes grouping of similar tasks together. Our Extensive evaluation on person attributes classification tasks involving facial and clothing attributes suggests that the models produced by the proposed method are fast, compact and can closely match or exceed the state-of-the-art accuracy from strong baselines by much more expensive models.

Journal ArticleDOI
TL;DR: A task scheduling algorithm based on a Genetic Algorithm is introduced for allocating and executing an application’s tasks, aiming to minimize the completion time and cost of tasks and maximize resource utilization.
Abstract: Nowadays, Cloud computing is widely used in companies and enterprises. However, there are some challenges in using Cloud computing. The main challenge is resource management, where Cloud computing provides IT resources (e.g., CPU, Memory, Network, Storage, etc.) based on the virtualization concept and the pay-as-you-go principle. The management of these resources has been a topic of much research. In this paper, a task scheduling algorithm based on a Genetic Algorithm (GA) is introduced for allocating and executing an application’s tasks. The aim of this proposed algorithm is to minimize the completion time and cost of tasks, and maximize resource utilization. The performance of this proposed algorithm has been evaluated using the CloudSim toolkit.
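
A toy Python sketch of GA-based task scheduling in this spirit: chromosomes map tasks to VMs, selection keeps the fittest half, and one-point crossover plus mutation produce offspring. For brevity the fitness here is makespan alone, whereas the paper's objective also folds in cost and resource utilization.

```python
import random

def makespan(assign, task_len, vm_speed):
    """Completion time of the most loaded VM for a task->VM assignment."""
    load = [0.0] * len(vm_speed)
    for t, vm in enumerate(assign):
        load[vm] += task_len[t] / vm_speed[vm]
    return max(load)

def ga_schedule(task_len, vm_speed, pop=30, gens=200, mut=0.1):
    n, m = len(task_len), len(vm_speed)
    P = [[random.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda c: makespan(c, task_len, vm_speed))
        survivors = P[:pop // 2]                      # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)              # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:                 # mutation: reassign one task
                child[random.randrange(n)] = random.randrange(m)
            children.append(child)
        P = survivors + children
    return min(P, key=lambda c: makespan(c, task_len, vm_speed))

tasks, vms = [4, 8, 2, 6, 5, 9, 3], [1.0, 2.0, 1.5]
best = ga_schedule(tasks, vms)
print(best, makespan(best, tasks, vms))
```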

Journal ArticleDOI
TL;DR: This study of the impact of queue design on worker productivity in service systems that involve human servers finds that the single-queue structure slows down the servers, illustrating a drawback of pooling, and that poor visibility of the queue length slows down the servers; however, this effect may be mitigated, or even reversed, by pay schemes that incentivize the servers for fast performance.
Abstract: Using behavioral experiments, we study the impact of queue design on worker productivity in service systems that involve human servers. Specifically, we consider two queue design features: queue structure, which can either be parallel queues (multiple queues with a dedicated server per queue) or a single queue (a pooled queue served by multiple servers); and queue-length visibility, which can provide either full or blocked visibility. We find that 1) the single-queue structure slows down the servers, illustrating a drawback of pooling; and 2) poor visibility of the queue length slows down the servers; however, this effect may be mitigated, or even reversed, by pay schemes that incentivize the servers for fast performance. We provide additional managerial insights by isolating two behavioral drivers behind these results – task interdependence and saliency of feedback.

Proceedings ArticleDOI
11 Apr 2016
TL;DR: An extension of the busy-window analysis suitable for such task chains in static-priority preemptive systems is presented and evaluated in a compositional performance analysis using synthetic test cases and a realistic automotive use case showing far tighter response-time bounds than current approaches.
Abstract: When modelling software components for timing analysis, we typically encounter functional chains of tasks that lead to precedence relations. As these task chains represent a functionally-dependent sequence of operations, in real-time systems, there is usually a requirement for their end-to-end latency. When mapped to software components, functional chains often result in communicating threads. Since threads are scheduled rather than tasks, specific task chain properties arise that can be exploited for response-time analysis. As a core contribution, this paper presents an extension of the busy-window analysis suitable for such task chains in static-priority preemptive systems. We evaluated the extended busy-window analysis in a compositional performance analysis using synthetic test cases and a realistic automotive use case showing far tighter response-time bounds than current approaches.
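
For background, a Python sketch of the classical busy-window (response-time) recurrence for independent static-priority preemptive tasks, which is the base analysis the paper extends to task chains; the chain extension itself is not reproduced here.

```python
import math

def response_times(C, T, prio):
    """Fixed-point iteration R_i = C_i + sum_{j in hp(i)} ceil(R_i / T_j) * C_j.
    C: worst-case execution times, T: periods (= deadlines here),
    prio: lower number means higher priority."""
    R = []
    for i in range(len(C)):
        hp = [j for j in range(len(C)) if prio[j] < prio[i]]
        r, r_next = 0, C[i]
        while r_next != r and r_next <= T[i]:
            r = r_next
            r_next = C[i] + sum(math.ceil(r / T[j]) * C[j] for j in hp)
        R.append(r_next if r_next == r else float("inf"))  # inf = unschedulable
    return R

print(response_times(C=[1, 2, 3], T=[5, 10, 20], prio=[0, 1, 2]))  # [1, 3, 7]
```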

Proceedings ArticleDOI
10 Apr 2016
TL;DR: This paper establishes a theoretical foundation by formally defining a task allocation problem for distributed stream processing, which is proved to be NP-hard, and proposes an approximation algorithm for the class of series-parallel decomposable graphs, which captures a broad range of common stream processing applications.
Abstract: There is a growing demand for live, on-the-fly processing of increasingly large amounts of data. In order to ensure the timely and reliable processing of streaming data, a variety of distributed stream processing architectures and platforms have been developed, which handle the fundamental tasks of (dynamically) assigning processing tasks to the currently available physical resources and routing streaming data between these resources. However, while there are plenty of platforms offering such functionality, the theory behind it is not well understood. In particular, it is unclear how to best allocate the processing tasks to the given resources. In this paper, we establish a theoretical foundation by formally defining a task allocation problem for distributed stream processing, which we prove to be NP-hard. Furthermore, we propose an approximation algorithm for the class of series-parallel decomposable graphs, which captures a broad range of common stream processing applications. The algorithm achieves a constant-factor approximation under the assumptions that the number of resources scales at least logarithmically with the number of computational tasks and the computational cost of the tasks dominates the cost of communication.

Journal ArticleDOI
TL;DR: This paper studies a version of the spatial crowdsourcing problem in which the workers autonomously select their tasks, called the worker selected tasks (WST) mode, and proposes two exact algorithms based on dynamic programming and branch-and-bound strategies for small numbers of tasks.
Abstract: With the progress of mobile devices and wireless broadband, a new eMarket platform, termed spatial crowdsourcing, is emerging, which enables workers (aka crowd) to perform a set of spatial tasks (i.e., tasks related to a geographical location and time) posted by a requester. In this paper, we study a version of the spatial crowdsourcing problem in which the workers autonomously select their tasks, called the worker selected tasks (WST) mode. Towards this end, given a worker, and a set of tasks each of which is associated with a location and an expiration time, we aim to find a schedule for the worker that maximizes the number of performed tasks. We first prove that this problem is NP-hard. Subsequently, for a small number of tasks, we propose two exact algorithms based on dynamic programming and branch-and-bound strategies. Since the exact algorithms cannot scale to large numbers of tasks and/or the limited amount of resources on mobile platforms, we propose different approximation algorithms. Finally, to strike a compromise between efficiency and accuracy, we present a progressive algorithm. We conducted a thorough experimental evaluation with both real-world and synthetic data on desktop and mobile platforms to compare the performance and accuracy of our proposed approaches.
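
A toy Python dynamic program for the WST setting described here: maximize the number of tasks a single worker can perform before each task expires, with travel time taken as Euclidean distance. The data is invented, and the exponential state space of the memoized recursion is viable only for small task counts, which matches the paper's motivation for approximation algorithms.

```python
from functools import lru_cache
import math

tasks = [((2, 1), 6.0), ((5, 1), 9.0), ((1, 4), 8.0), ((6, 5), 15.0)]  # (location, deadline)
start = (0, 0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

@lru_cache(maxsize=None)
def best(pos_idx, t, done_mask):
    """Max number of tasks still completable from the current state.
    pos_idx = -1 means the worker is at the start location."""
    pos = start if pos_idx < 0 else tasks[pos_idx][0]
    result = 0
    for i, (loc, deadline) in enumerate(tasks):
        if not done_mask & (1 << i):
            arrival = t + dist(pos, loc)
            if arrival <= deadline:                       # task still valid on arrival
                result = max(result, 1 + best(i, arrival, done_mask | (1 << i)))
    return result

print(best(-1, 0.0, 0))   # tasks a worker starting at `start` can perform
```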

Proceedings ArticleDOI
02 Nov 2016
TL;DR: In this paper, the authors introduce an online popularity prediction and tracking task for reinforcement learning with a combinatorial, natural language action space, where a specified number of discussion threads predicted to be popular are recommended, chosen from a fixed window of recent comments to track.
Abstract: We introduce an online popularity prediction and tracking task as a benchmark task for reinforcement learning with a combinatorial, natural language action space. A specified number of discussion threads predicted to be popular are recommended, chosen from a fixed window of recent comments to track. Novel deep reinforcement learning architectures are studied for effective modeling of the value function associated with actions comprised of interdependent sub-actions. The proposed model, which represents dependence between sub-actions through a bi-directional LSTM, gives the best performance across different experimental configurations and domains, and it also generalizes well with varying numbers of recommendation requests.

Journal ArticleDOI
TL;DR: A novel approach and algorithm with a mathematical formula are presented for obtaining the exact optimal number of task resources for any workload running on Hadoop MapReduce; it is shown that the currently well-known rules of thumb for calculating the required number of reduce tasks are inaccurate and could lead to significant waste of computing resources and energy with no further improvement in execution time.

Book ChapterDOI
19 Jul 2016
TL;DR: This work considers the task of embedding multiple service requests in Software-Defined Networks, i.e. computing (combined) mappings of network functions on physical nodes and finding routes to connect the mapped network functions.
Abstract: We consider the task of embedding multiple service requests in Software-Defined Networks (SDNs), i.e. computing (combined) mappings of network functions on physical nodes and finding routes to connect the mapped network functions. A single service request may either be fully embedded or be rejected. The objective is to maximize the sum of benefits of the served requests, while the solution must abide by node and edge capacities.

Proceedings ArticleDOI
01 Dec 2016
TL;DR: In this paper, the authors proposed a principled multi-task learning (MTL) framework for distributed and asynchronous optimization to address the problem of high data volume and privacy in real-world machine learning applications.
Abstract: Many real-world machine learning applications involve several learning tasks which are inter-related. For example, in the healthcare domain, we need to learn a predictive model of a certain disease for many hospitals. The models for each hospital may be different because of the inherent differences in the distributions of the patient populations. However, the models are also closely related because of the nature of the learning tasks modeling the same disease. By simultaneously learning all the tasks, the multi-task learning (MTL) paradigm performs inductive knowledge transfer among tasks to improve the generalization performance. When datasets for the learning tasks are stored at different locations, it may not always be feasible to transfer the data to provide a data-centralized computing environment, due to various practical issues such as high data volume and privacy. In this paper, we propose a principled MTL framework for distributed and asynchronous optimization to address the aforementioned challenges. In our framework, the gradient update does not wait for collecting the gradient information from all the tasks. Therefore, the proposed method is very efficient when the communication delay is too high for some task nodes. We show that many regularized MTL formulations can benefit from this framework, including the low-rank MTL for shared-subspace learning. Empirical studies on both synthetic and real-world datasets demonstrate the efficiency and effectiveness of the proposed framework.

Patent
29 Sep 2016
TL;DR: In this article, a plurality of services to be accessed at the computing device is determined based on the intended task and the obtained contextual information, which is then transmitted to the device for execution at the device.
Abstract: Aspects of the subject technology relate to systems and methods for processing voice input data. Voice input data is received from a computing. An intended task is determined based on the received voice input data. Contextual information related to the intended task is obtained. A plurality of services to be accessed at the computing device is determined based on the intended task and the obtained contextual information. Instructions associated with the plurality of services are provided for transmission to the computing device for execution at the computing device.

Journal ArticleDOI
TL;DR: This work proposes a framework for classification from data with small numbers of samples, and extends the model to online multi-task learning where the model parameters are incrementally updated given new data or new tasks.
Abstract: Prognosis, such as predicting mortality, is common in medicine. When confronted with small numbers of samples, as in rare medical conditions, the task is challenging. We propose a framework for classification from data with small numbers of samples. Conceptually, our solution is a hybrid of multi-task and transfer learning, employing data samples from source tasks as in transfer learning, but considering all tasks together as in multi-task learning. Each task is modelled jointly with other related tasks by directly augmenting the data from other tasks. The degree of augmentation depends on the task relatedness and is estimated directly from the data. We apply the model on three diverse real-world data sets (healthcare data, handwritten digit data and face data) and show that our method outperforms several state-of-the-art multi-task learning baselines. We extend the model for online multi-task learning where the model parameters are incrementally updated given new data or new tasks. The novelty of our method lies in offering a hybrid multi-task/transfer learning model to exploit sharing across tasks at the data-level and joint parameter learning.

Journal ArticleDOI
TL;DR: This paper proposes an incentive framework based on a Stackelberg game to model the interaction between the server and users, and demonstrates that the proposed mechanisms have good performance and high computational efficiency in real-world applications.
Abstract: In this paper, we tackle the problem of stimulating users to join mobile crowdsourcing applications with personal devices such as smartphones and tablets. Wireless personal networks make it possible to exploit communication opportunities and to utilize the diverse spare resources of personal devices. However, it is a challenge to motivate sufficient users to provide the resources of their personal devices for achieving good quality of service. To address this problem, we propose an incentive framework based on a Stackelberg game to model the interaction between the server and users. Traditional incentive mechanisms are applied for either a single task or multiple independent tasks, which fails to consider the interrelation among various tasks. In this paper, we focus on the common realistic scenario with multiple collaborative tasks, where each task requires a group of users to perform collaboratively. Specifically, participants would consider task priority and the server would design suitable reward functions to allocate the total payment. Considering the information of users' costs and the types of tasks, four incentive mechanisms are presented for various cases of the above problem, which are proved to have Nash equilibrium solutions in all cases for maximizing the utility of the server. Moreover, online incentive mechanisms are further proposed for real-time tasks. Through both rigorous theoretical analysis and extensive simulations, we demonstrate that the proposed mechanisms have good performance and high computational efficiency in real-world applications.

Book ChapterDOI
20 May 2016
TL;DR: A coordination control system with a decision support system is developed for a sugar refinery, using adaptive tuning to find the optimal controller parameters for every technological mode of the plant.
Abstract: Complex technological plants operate in different technological modes over their service life, which causes their parameters to change. Changing plant parameters may lead to variable efficiency of automatic control systems, or even to abnormal functioning of the plant, if those systems use static controller parameters. All-mode controller tuning is usually used to avoid abnormal functioning of the automatic control system in different modes, but such a strategy reduces profit. There are many ways to improve the quality of automatic control. The most common is adaptive tuning, which finds the optimal controller parameters for every technological mode of the plant. Another approach to increasing efficiency is to solve a coordination task; this approach can be used for plants with significant nonlinear interconnections. As an example, the article describes the developed coordination control system with a decision support system for a sugar refinery.