scispace - formally typeset
Topic

Task (computing)

About: Task (computing) is a research topic. Over the lifetime, 9718 publications have been published within this topic receiving 129364 citations.


Papers
Journal ArticleDOI
TL;DR: An efficient, recursive, backtracking-based way of implementing OOPS on realistic computers with limited storage is introduced, and experiments illustrate how OOPS can greatly profit from metalearning or metasearching, that is, searching for faster search procedures.
Abstract: We present a novel, general, optimally fast, incremental way of searching for a universal algorithm that solves each task in a sequence of tasks. The Optimal Ordered Problem Solver (OOPS) continually organizes and exploits previously found solutions to earlier tasks, efficiently searching not only the space of domain-specific algorithms, but also the space of search algorithms. Essentially we extend the principles of optimal nonincremental universal search to build an incremental universal learner that is able to improve itself through experience. The initial bias is embodied by a task-dependent probability distribution on possible program prefixes. Prefixes are self-delimiting and executed in online fashion while being generated. They compute the probabilities of their own possible continuations. Let p^n denote a found prefix solving the first n tasks. It may exploit previously stored solutions p^i, i < n, by calling them as subprograms, or by copying them and editing the copies before applying them. We provide equal resources for two searches that run in parallel until p^{n+1} is discovered and stored. The first search is exhaustive; it systematically tests all possible prefixes on all tasks up to n+1. The second search is much more focused; it only searches for prefixes that start with p^n, and only tests them on task n+1, which is safe, because we already know that such prefixes solve all tasks up to n. Both searches are depth-first and bias-optimal: the branches of the search trees are program prefixes, and backtracking is triggered once the sum of the runtimes of the current prefix on all current tasks exceeds the prefix probability multiplied by the total search time so far. In illustrative experiments, our self-improver becomes the first general system that learns to solve all n-disk Towers of Hanoi tasks (solution size 2^n - 1) for n up to 30, profiting from previously solved, simpler tasks involving samples of a simple context-free language.
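The backtracking criterion stated in the abstract admits a very compact rendering. The sketch below is a hypothetical illustration of just that test (the function name and numbers are invented, not from the paper): a program prefix is abandoned once the time it has consumed across all current tasks exceeds its probability times the total search time so far, which is what keeps the search bias-optimal.

```python
def should_backtrack(time_spent_on_prefix, prefix_probability, total_search_time):
    """OOPS bias-optimality test (hypothetical helper): abandon the current
    program prefix once the runtime it has consumed across all current tasks
    exceeds its probability times the total search time so far."""
    return time_spent_on_prefix > prefix_probability * total_search_time

# A prefix with probability 0.25, in a search that has run 100 time units,
# may consume up to 25 units before backtracking is triggered.
print(should_backtrack(30.0, 0.25, 100.0))  # True:  30 > 0.25 * 100
print(should_backtrack(20.0, 0.25, 100.0))  # False: 20 < 0.25 * 100
```

Under this rule, resources are shared among candidate prefixes in proportion to their prior probability, so no candidate can monopolize the search.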

173 citations

Proceedings ArticleDOI
05 Jul 2008
TL;DR: A novel algorithm is introduced that transfers samples from the source tasks that are most similar to the target task; it is empirically shown that, following the proposed approach, the transfer of samples is effective in reducing the learning complexity.
Abstract: The main objective of transfer in reinforcement learning is to reduce the complexity of learning the solution of a target task by effectively reusing the knowledge retained from solving a set of source tasks. In this paper, we introduce a novel algorithm that transfers samples (i.e., tuples 〈s, a, s', r〉) from source to target tasks. Under the assumption that tasks have similar transition models and reward functions, we propose a method to select samples from the source tasks that are mostly similar to the target task, and, then, to use them as input for batch reinforcement-learning algorithms. As a result, the number of samples an agent needs to collect from the target task to learn its solution is reduced. We empirically show that, following the proposed approach, the transfer of samples is effective in reducing the learning complexity, even when some source tasks are significantly different from the target task.
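The sample-transfer idea above can be illustrated with a toy sketch. Everything below is an assumption for illustration only: the paper does not prescribe this similarity measure, so a naive Euclidean distance over numeric 〈s, a, s', r〉 tuples stands in for whatever task-similarity criterion is actually used, and the data are invented.

```python
import math

def transfer_samples(source_samples, target_samples, k):
    """Hypothetical sketch: rank source (s, a, s', r) tuples by their distance
    to the nearest target sample and keep the k most similar ones, which would
    then be fed to a batch RL algorithm alongside the target samples."""
    def dist(x, y):
        # Naive Euclidean distance over (s, a, s', r), assuming numeric entries.
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

    ranked = sorted(source_samples,
                    key=lambda smp: min(dist(smp, t) for t in target_samples))
    return ranked[:k]

source = [(0.0, 0, 0.1, 1.0), (5.0, 1, 5.2, -1.0), (0.2, 0, 0.3, 1.0)]
target = [(0.1, 0, 0.2, 1.0)]
print(transfer_samples(source, target, 2))
# The two samples near state 0 are kept; the distant one is discarded.
```

The point the paper makes empirically is that an agent supplied with such transferred samples needs fewer samples of its own from the target task.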

173 citations

Book
01 Oct 1997
TL;DR: In this article, the authors describe the evolution of the design process in a factory environment, including the use of technology to accelerate information flow and reduce delays in the process of designing a product.
Abstract: (table of contents)

INTRODUCTION: Revolution in the Factory / Into the Witch Doctor's Tent / There Are No Best Practices / Where Ideas Come From / The Organization of This Book

PART ONE: THE DESIGN FACTORY
1. INTO THE DESIGN FACTORY: Our Goals Are Economic / Products vs. Designs / Design-in-Process Inventory / Rising Cost of Change / Late-Breaking News / One-Time Processes / Expanding Work / Summary

PART TWO: THINKING TOOLS
2. MAKING PROFITS NOT PRODUCTS: Project Models / Application Models / Models of Process Economics / Tactical vs. Strategic Decisions / Some Practical Tips / Summary
3. ENTERING THE LAND OF QUEUES: An Introduction to Queueing Theory / The Economics of Queues / Depicting Queues / Implications of Queueing Theory / Dealing with Queues (Increasing Capacity / Managing Demand / Reducing Variability / Using Control Systems) / The Location of Batch Queues / Little's Law / Typical Queues / Summary
4. IT'S ALL ABOUT INFORMATION: Information Theory / Efficient Generation of Information / Maximizing Information: The Magic Number 50 Percent / Information Differs in Value (Timing: Earlier Is Better / Batch Size Affects Timing / Iterations Generate Early Information / The Potential Profit Impact) / Do It Right the First Time? / Communicating Failures / Protecting Against Failure / Task Sequencing / Monitoring / Summary
5. JUST ADD FEEDBACK: Systems Theory / Systems with Feedback / Properties of Systems with Feedback (Difficulty in Troubleshooting / Instability and Chaos / Accuracy and Feedback / Variability Within a System) / More Complex Control Systems / Summary

PART THREE: ACTION TOOLS
6. CHOOSE THE RIGHT ORGANIZATION: The Organization as a System / Assessing Organizational Forms / Efficiency: The Functional Organization / Speed: The Autonomous Team / Performance and Cost: Hybrid Organizations / Dividing Responsibilities / Communications (Old Communications Tools / New Communications Technologies) / Colocation / Summary
7. DESIGN THE DESIGN PROCESS: Combining Structure and Freedom (One-Time Processes / Modular Processes / A Pattern Language) / Designing Process Stages (Input Subprocesses / Technology vs. Product Development / Controlling Queues / Subprocess Design / Output Processes) / Key Design Principles (Sequential vs. Concurrent Processes / Managing Information Profiles / Decentralizing Control and Feedback / Location of Batch Queues) / Specific Process Implementations / Evolving the Process / Summary
8. PRODUCT ARCHITECTURE: THE INVISIBLE DESIGN: Underlying Principles / Modularity / Segregating Variability / Interface Management / Specific Architectural Implementations (Low-Expense Architectures / Low-Cost Architectures / High-Performance Architectures / Fast-Development Architectures) / Who Does It? / Summary
9. GET THE PRODUCT SPECIFICATION RIGHT: It Starts with Strategy / Selecting the Customer / Understanding the Customer (Customer Interviews / Meticulous Observation / Focus Groups) / Creating a Good Specification (The Minimalist Specification / A Product Mission / The Specification Process) / Using the Specification / Specific Implementations / Summary
10. USE THE RIGHT TOOLS: The Use of Technology (Accelerated Information Flow / Improved Productivity / Reduced Delays) / Implementation Principles (Technology Changes Process / Pay Attention to Economics) / Technologies (Design Automation / Prototyping and Testing / Communications / Information Storage and Retrieval) / Summary
11. MEASURE THE RIGHT THINGS: General Principles (Drive Metrics from Economics / The Control Triangle / Decentralizing Control / Selecting Metrics) / Project-Level Controls (Expense-Focused / Cost-Focused / Performance-Focused / Speed-Focused Controls) / Business-Level Controls (Expense-Focused / Cost-Focused / Performance-Focused / Speed-Focused Controls) / Summary
12. MANAGE UNCERTAINTY AND RISK: Market and Technical Risk / Managing Market Risk (Use a Substitute Product / Simulate the Risky Attribute / Make the Design Flexible / Move Fast) / Managing Technical Risk (Controlling Subsystem Risk / Controlling System Integration Risk / Back-up Plans) / World-Class Testing (Cheap Testing / Low Unit-Cost Impact / Maximizing Performance / Fast Testing / Continuous Improvement) / Summary

PART FOUR: NEXT STEPS
13. NOW WHAT DO I DO? Do Your Math / Use Decision Rules / Pay Attention to Capacity Utilization / Pay Attention to Batch Size / Respect Variability / Think Clearly About Risk / Think Systems / Respect the People / Design the Process Thoughtfully / Pay Attention to Architecture / Deeply Understand the Customer / Eliminate Useless Controls / Get to the Front Lines / Avoid Slogans

Selected Bibliography / Index / About the Author
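Chapter 3's queueing toolkit rests on Little's Law, which relates average work-in-process to arrival rate and time in system (L = λW). A minimal numeric illustration (the scenario below is invented for this sketch):

```python
def littles_law_wip(arrival_rate, cycle_time):
    """Little's Law: average work-in-process L equals the arrival rate
    (lambda) times the average time each item spends in the system W,
    i.e. L = lambda * W."""
    return arrival_rate * cycle_time

# A design team receives 4 change requests per week and each request
# spends 3 weeks in process, so on average 12 requests are in the system.
print(littles_law_wip(4, 3))  # 12
```

The law holds regardless of arrival distribution or service discipline, which is why it is a staple of the kind of queue analysis the book advocates.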

172 citations

Patent
24 Dec 2002
TL;DR: A disk drive is disclosed for executing a preemptive multitasking operating system comprising tasks of varying priority, including a disk task for processing disk commands by initiating seek operations and configuring parameters of a read/write channel, a host task for initiating disk commands in response to host commands received from a host computer, a background task for initiating disk commands to perform background operations including a defect scan of the disk, and an execution task for arbitrating the disk commands generated by the host task and the background task and for transmitting the arbitrated disk commands to the disk task.
Abstract: A disk drive is disclosed for executing a preemptive multitasking operating system comprising tasks of varying priority, including a disk task for processing disk commands by initiating seek operations and configuring parameters of a read/write channel, a host task for initiating disk commands in response to host commands received from a host computer, a background task for initiating disk commands to perform background operations including a defect scan of the disk, and an execution task for arbitrating the disk commands generated by the host task and the background task and for transmitting the arbitrated disk commands to the disk task.
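The arbitration described in the claim (host-initiated commands outranking background commands such as the defect scan) can be sketched with an ordinary priority queue. This is a hypothetical illustration, not the patented firmware; the priority values, names, and commands are all invented.

```python
import heapq

# Hypothetical sketch of the execution task's arbitration: disk commands
# from the host task outrank those from the background task, so background
# work (e.g. a defect scan) runs only when no host command is pending.
HOST, BACKGROUND = 0, 1  # lower number = higher priority

def arbitrate(commands):
    """commands: list of (priority, seq, description) tuples; returns the
    descriptions in the order the execution task would forward them to
    the disk task. The seq field keeps same-priority commands in FIFO order."""
    heapq.heapify(commands)  # note: consumes the caller's list
    order = []
    while commands:
        _, _, desc = heapq.heappop(commands)
        order.append(desc)
    return order

pending = [(BACKGROUND, 0, "defect-scan block 7"),
           (HOST, 1, "read LBA 100"),
           (HOST, 2, "write LBA 200")]
print(arbitrate(pending))
# ['read LBA 100', 'write LBA 200', 'defect-scan block 7']
```

The background command was queued first but is served last, which mirrors how the execution task defers background operations behind host traffic.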

171 citations

Patent
02 Jul 1996
TL;DR: A Lock Manager is described that decomposes the single spin lock traditionally employed to protect shared, global Lock Manager structures into multiple spin locks, each protecting individual hash buckets or groups of hash buckets which index into particular members of those structures.
Abstract: Database system and methods are described for improving scalability of multi-user database systems by improving management of locks used in the system. The system provides multiple server engines, with each engine having a Parallel Lock Manager. More particularly, the Lock Manager decomposes the single spin lock traditionally employed to protect shared, global Lock Manager structures into multiple spin locks, each protecting individual hash buckets or groups of hash buckets which index into particular members of those structures. In this manner, contention for shared, global Lock Manager data structures is reduced, thereby improving the system's scalability. Further, improved "deadlock" searching methodology is provided. Specifically, the system provides a "deferred" mode of deadlock detection. Here, a task simply goes to sleep on a lock; it does not initiate a deadlock search. At a later point in time, the task is awakened to carry out the deadlock search. Often, however, a task can be awakened with the requested lock being granted. In this manner, the "deferred" mode of deadlock detection allows the system to avoid deadlock detection for locks which are soon granted.
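The decomposition described above, one lock per hash bucket (or stripe of buckets) instead of a single global spin lock over the whole lock table, can be sketched as follows. This is a hypothetical illustration, not the patented implementation: it uses Python threading locks rather than spin locks, and the class and method names are invented.

```python
import threading

class StripedLockTable:
    """Sketch of lock striping: each hash bucket group ("stripe") has its
    own lock, so tasks operating on keys in different stripes no longer
    contend on one global lock."""
    def __init__(self, n_stripes=16):
        self.n_stripes = n_stripes
        self.stripes = [threading.Lock() for _ in range(n_stripes)]
        self.buckets = [{} for _ in range(n_stripes)]  # key -> set of owners

    def _stripe(self, key):
        # Hash the key into one of the stripes.
        return hash(key) % self.n_stripes

    def acquire(self, key, owner):
        i = self._stripe(key)
        with self.stripes[i]:  # only this stripe is serialized
            self.buckets[i].setdefault(key, set()).add(owner)

    def release(self, key, owner):
        i = self._stripe(key)
        with self.stripes[i]:
            self.buckets[i].get(key, set()).discard(owner)

table = StripedLockTable()
table.acquire("row:42", owner="task-A")
table.acquire("row:99", owner="task-B")  # likely a different stripe: no contention
table.release("row:42", owner="task-A")
```

The "deferred" deadlock detection in the abstract is complementary: a task blocked on a lock simply sleeps, and the (often unnecessary) deadlock search is postponed until it is woken, by which time the lock has frequently been granted.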

169 citations


Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2022    10
2021    695
2020    712
2019    784
2018    721
2017    565