Journal Article•DOI•

Rete: a fast algorithm for the many pattern/many object pattern match problem

01 Sep 1982 - Artificial Intelligence (Elsevier) - Vol. 19, Iss. 1, pp. 17-37
TL;DR: The Rete Match Algorithm is an efficient method for comparing a large collection of patterns to a large collection of objects; it finds all the objects that match each pattern.
About: This article was published in Artificial Intelligence on 1982-09-01 and has received 2,562 citations to date. The article focuses on the topics: Algorithm design & Rete algorithm.
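As a rough illustration of the idea only (Python; the class names and data layout below are invented for this sketch and are not the paper's), the key move is to keep a per-pattern memory of currently matching objects and to update those memories incrementally as working memory changes, rather than rematching every pattern against every object on each cycle. A full Rete network additionally shares common tests between patterns and joins partial matches for multi-condition rules, which this sketch omits.

    # Hedged sketch of the incremental-matching idea behind Rete (illustrative
    # names only). Each pattern keeps an "alpha" memory of matching object ids;
    # adding or removing an object touches only those memories, so unchanged
    # objects are never re-examined.
    class AlphaMemory:
        def __init__(self, tests):
            self.tests = tests          # e.g. {"type": "block", "color": "red"}
            self.matches = set()        # ids of objects currently passing the tests

    class Matcher:
        def __init__(self, patterns):
            self.memories = [AlphaMemory(t) for t in patterns]
            self.objects = {}           # object id -> attribute dict

        def _passes(self, obj, tests):
            return all(obj.get(k) == v for k, v in tests.items())

        def add(self, oid, obj):        # process only the change, not all of memory
            self.objects[oid] = obj
            for m in self.memories:
                if self._passes(obj, m.tests):
                    m.matches.add(oid)

        def remove(self, oid):
            self.objects.pop(oid, None)
            for m in self.memories:
                m.matches.discard(oid)

    m = Matcher([{"type": "block", "color": "red"},
                 {"type": "block", "on": "table"}])
    m.add(1, {"type": "block", "color": "red", "on": "table"})
    m.add(2, {"type": "block", "color": "blue"})
    print([mem.matches for mem in m.memories])   # -> [{1}, {1}]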
Citations
Journal Article•DOI•
TL;DR: This work gives efficient algorithms for the discovery of all frequent episodes from a given class of episodes and presents detailed experimental results; the methods are in use in telecommunication alarm management.
Abstract: Sequences of events describing the behavior and actions of users or systems can be collected in several domains. An episode is a collection of events that occur relatively close to each other in a given partial order. We consider the problem of discovering frequently occurring episodes in a sequence. Once such episodes are known, one can produce rules for describing or predicting the behavior of the sequence. We give efficient algorithms for the discovery of all frequent episodes from a given class of episodes, and present detailed experimental results. The methods are in use in telecommunication alarm management.
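One common way to make "frequently occurring" precise in this line of work is to count how many sliding windows of a fixed width contain all of an episode's events. The brute-force Python sketch below (invented names; it ignores the episode's partial order and recomputes each window from scratch) only illustrates that count; the paper's algorithms compute such frequencies far more efficiently and handle ordering constraints.

    # Hedged sketch: in how many windows of width `win` do all event types of a
    # candidate (unordered) episode occur?
    def episode_frequency(events, episode, win):
        """events: list of (time, event_type); episode: set of event types."""
        if not events:
            return 0
        events = sorted(events)
        t_min, t_max = events[0][0], events[-1][0]
        count = 0
        for start in range(t_min - win + 1, t_max + 1):    # every window position
            window = {e for t, e in events if start <= t < start + win}
            if episode <= window:                           # all episode events present
                count += 1
        return count

    seq = [(1, "A"), (2, "B"), (5, "A"), (6, "C"), (7, "B")]
    print(episode_frequency(seq, {"A", "B"}, win=3))        # -> 3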

1,593 citations

Journal Article•DOI•
TL;DR: The paper presents an online model-free learning algorithm, MAXQ-Q, and proves that it converges with probability 1 to a kind of locally-optimal policy known as a recursively optimal policy, even in the presence of the five kinds of state abstraction.
Abstract: This paper presents a new approach to hierarchical reinforcement learning based on decomposing the target Markov decision process (MDP) into a hierarchy of smaller MDPs and decomposing the value function of the target MDP into an additive combination of the value functions of the smaller MDPs. The decomposition, known as the MAXQ decomposition, has both a procedural semantics--as a subroutine hierarchy--and a declarative semantics--as a representation of the value function of a hierarchical policy. MAXQ unifies and extends previous work on hierarchical reinforcement learning by Singh, Kaelbling, and Dayan and Hinton. It is based on the assumption that the programmer can identify useful subgoals and define subtasks that achieve these subgoals. By defining such subgoals, the programmer constrains the set of policies that need to be considered during reinforcement learning. The MAXQ value function decomposition can represent the value function of any policy that is consistent with the given hierarchy. The decomposition also creates opportunities to exploit state abstractions, so that individual MDPs within the hierarchy can ignore large parts of the state space. This is important for the practical application of the method. This paper defines the MAXQ hierarchy, proves formal results on its representational power, and establishes five conditions for the safe use of state abstractions. The paper presents an online model-free learning algorithm, MAXQ-Q, and proves that it converges with probability 1 to a kind of locally-optimal policy known as a recursively optimal policy, even in the presence of the five kinds of state abstraction. The paper evaluates the MAXQ representation and MAXQ-Q through a series of experiments in three domains and shows experimentally that MAXQ-Q (with state abstractions) converges to a recursively optimal policy much faster than flat Q learning. The fact that MAXQ learns a representation of the value function has an important benefit: it makes it possible to compute and execute an improved, non-hierarchical policy via a procedure similar to the policy improvement step of policy iteration. The paper demonstrates the effectiveness of this nonhierarchical execution experimentally. Finally, the paper concludes with a comparison to related work and a discussion of the design tradeoffs in hierarchical reinforcement learning.
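As a hedged illustration of the decomposition only (the names below are invented and the completion function C is simply assumed to be given), the value of invoking a child task from a parent is split as Q(parent, s, child) = V(child, s) + C(parent, s, child), where V of a composite task is the maximum of its children's Q-values and V of a primitive action is its expected one-step reward:

    # Minimal sketch of the MAXQ value decomposition (illustrative only).
    def V(task, s, children, primitive_reward, C):
        if task not in children:                     # primitive action
            return primitive_reward[(task, s)]
        return max(Q(task, s, a, children, primitive_reward, C)
                   for a in children[task])

    def Q(parent, s, child, children, primitive_reward, C):
        # value of finishing `child` from s, plus value of completing `parent` afterwards
        return V(child, s, children, primitive_reward, C) + C[(parent, s, child)]

    # Tiny hand-made example: Root chooses between two primitive actions.
    children = {"Root": ["left", "right"]}
    primitive_reward = {("left", "s0"): 0.0, ("right", "s0"): 1.0}
    C = {("Root", "s0", "left"): 5.0, ("Root", "s0", "right"): 2.0}
    print(V("Root", "s0", children, primitive_reward, C))   # -> 5.0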

1,486 citations


Cites methods from "Rete: a fast algorithm for the many..."

  • ...It should be possible to develop an efficient bottom-up method similar to the RETE algorithm (and its successors) that is used in the SOAR architecture (Forgy, 1982; Tambe & Rosenbloom, 1994)....


Book•
17 Aug 2007
TL;DR: This book discusses cognitive architecture, the structural organization of the mind, and the role that language plays in the development of thought.
Abstract: 1. Cognitive Architecture 2. The Modular Organization of the Mind 3. Human Associative Memory 4. The Adaptive Control of Thought 5. What Does It Take to Be Human? Lessons From High School Algebra 6. How Can the Human Mind Occur?

1,210 citations

Book•
Nils J. Nilsson
15 Aug 1997
TL;DR: Intelligent agents are employed as the central characters in this new introductory text and Nilsson gradually increases their cognitive horsepower to illustrate the most important and lasting ideas in AI.
Abstract: Intelligent agents are employed as the central characters in this new introductory text. Beginning with elementary reactive agents, Nilsson gradually increases their cognitive horsepower to illustrate the most important and lasting ideas in AI. Neural networks, genetic programming, computer vision, heuristic search, knowledge representation and reasoning, Bayes networks, planning, and language understanding are each revealed through the growing capabilities of these agents. The book provides a refreshing and motivating new synthesis of the field by one of AI's master expositors and leading researchers. Artificial Intelligence: A New Synthesis takes the reader on a complete tour of this intriguing new world of AI.
  * An evolutionary approach provides a unifying theme
  * Thorough coverage of important AI ideas, old and new
  * Frequent use of examples and illustrative diagrams
  * Extensive coverage of machine learning methods throughout the text
  * Citations to over 500 references
  * Comprehensive index
Table of Contents: 1 Introduction; 2 Stimulus-Response Agents; 3 Neural Networks; 4 Machine Evolution; 5 State Machines; 6 Robot Vision; 7 Agents that Plan; 8 Uninformed Search; 9 Heuristic Search; 10 Planning, Acting, and Learning; 11 Alternative Search Formulations and Applications; 12 Adversarial Search; 13 The Propositional Calculus; 14 Resolution in the Propositional Calculus; 15 The Predicate Calculus; 16 Resolution in the Predicate Calculus; 17 Knowledge-Based Systems; 18 Representing Commonsense Knowledge; 19 Reasoning with Uncertain Information; 20 Learning and Acting with Bayes Nets; 21 The Situation Calculus; 22 Planning; 23 Multiple Agents; 24 Communication Among Agents; 25 Agent Architectures

1,090 citations

Journal Article•DOI•
TL;DR: R1 is a program that configures VAX-11/780 computer systems and uses Match as its principal problem-solving method; it has sufficient knowledge of the configuration domain and of the peculiarities of the various configuration constraints that, at each step in the configuration process, it simply recognizes what to do.

1,001 citations

References
Book•
01 Jan 1974
TL;DR: This text introduces the basic data structures and programming techniques often used in efficient algorithms, and covers use of lists, push-down stacks, queues, trees, and graphs.
Abstract: From the Publisher: With this text, you gain an understanding of the fundamental concepts of algorithms, the very heart of computer science. It introduces the basic data structures and programming techniques often used in efficient algorithms. Covers use of lists, push-down stacks, queues, trees, and graphs. Later chapters go into sorting, searching and graphing algorithms, the string-matching algorithms, and the Schonhage-Strassen integer-multiplication algorithm. Provides numerous graded exercises at the end of each chapter.

9,262 citations

Report•DOI•
01 Jul 1981
TL;DR: This is a combination introductory and reference manual for OPS5, a programming language for production systems used primarily for applications in the areas of artificial intelligence, cognitive psychology, and expert systems.
Abstract: This is a combination introductory and reference manual for OPS5, a programming language for production systems. OPS5 is used primarily for applications in the areas of artificial intelligence, cognitive psychology, and expert systems. OPS5 interpreters have been implemented in LISP and BLISS.

454 citations

01 Jan 1979

250 citations


"Rete: a fast algorithm for the many..." refers background in this paper

  • ...) A 1979 paper [5] discussed simple but very fast interpreters for the networks....


  • ...It should be noted that all the complexity results in Table 1 are sharp; production systems achieving the bounds are described in [5]....


  • ...The proofs and detailed results of some empirical studies can be found in [5]....


  • ...Some systems have been observed to spend more than nine-tenths of their total run time performing this kind of pattern matching [5]....


01 Jan 1976
TL;DR: This chapter describes a production system for EPAM, featuring the automatic addition of productions by the basic system to represent incremental learning of three-letter nonsense syllables.
Abstract: EPAM is a simple model of verbal learning that was developed to simulate certain features of human learning, but it has also turned out to be useful for certain kinds of discriminations in AI programs. This chapter describes a production system for EPAM, featuring the automatic addition of productions by the basic system to represent incremental learning of three-letter nonsense syllables. The design of the network represented by the added productions is discussed and its growth described. Details of the EPAM production system raise several issues with respect to general EPAM variations and with respect to production system issues such as the right set of production-building primitives. A comparison of the present program to a similar one by Waterman, using a radically different production system architecture, is carried out, highlighting the advantages of the present one.

52 citations


"Rete: a fast algorithm for the many..." refers methods in this paper

  • ...Interpreters using this scheme, in some cases combined with other efficiency measures, have been described by McCracken [8], McDermott, Newell, and Moore [9], and Rychener [10]....


Book Chapter•DOI•
01 Jan 1978
TL;DR: In this article, a production system architecture is augmented with a mechanism that enables knowledge of the degree to which each production is currently satisfied to be maintained across cycles, so that the dependency on the size of working memory can be eliminated as well.
Abstract: The obvious method of determining which productions are satisfied on a given cycle involves matching productions, one at a time, against the contents of working memory. The cost of this processing is essentially linear in the product of the number of productions in production memory and the number of assertions in working memory. By augmenting a production system architecture with a mechanism that enables knowledge of similarities among productions to be precomputed and then exploited during a run, it is possible to eliminate the dependency on the size of production memory. If in addition, the architecture is augmented with a mechanism that enables knowledge of the degree to which each production is currently satisfied to be maintained across cycles, then the dependency on the size of working memory can be eliminated as well. After a particular production system architecture, PSG, is described, two sets of mechanisms that increase its efficiency are presented. To determine their effectiveness, two augmented versions of PSG are compared experimentally with each other and with the original version.
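For a rough sense of the scale involved (illustrative numbers, not the chapter's): with 1,000 productions and 100 working-memory assertions, the obvious method performs on the order of 1,000 × 100 = 100,000 production-to-assertion match attempts on every cycle, whereas the augmented architectures redo only the small portion of that work affected by the few assertions added or removed since the previous cycle.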

43 citations