Author

David Poole

Bio: David Poole is an academic researcher from the University of British Columbia. The author has contributed to research in topics including probabilistic logic and Bayesian networks. The author has an h-index of 45, has co-authored 228 publications, and has received 11,736 citations. Previous affiliations of David Poole include the Canadian Institute for Advanced Research and the University of Waterloo.


Papers
Journal ArticleDOI
TL;DR: This paper proposes a qualitative graphical representation of preferences that reflects conditional dependence and independence of preference statements under a ceteris paribus (all else being equal) interpretation, and provides a formal semantics for this model.
Abstract: Information about user preferences plays a key role in automated decision making. In many domains it is desirable to assess such preferences in a qualitative rather than quantitative way. In this paper, we propose a qualitative graphical representation of preferences that reflects conditional dependence and independence of preference statements under a ceteris paribus (all else being equal) interpretation. Such a representation is often compact and arguably quite natural in many circumstances. We provide a formal semantics for this model, and describe how the structure of the network can be exploited in several inference tasks, such as determining whether one outcome dominates (is preferred to) another, ordering a set of outcomes according to the preference relation, and constructing the best outcome subject to available evidence.
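The dominance query described in the abstract can be illustrated with a minimal sketch of a ceteris-paribus preference network. The variables, values, and preference tables below (dinner/wine) are hypothetical examples, not taken from the paper; dominance is checked by searching for a chain of single-variable improving flips, which is one standard reading of ceteris paribus semantics rather than the paper's own algorithm.

```python
# Hypothetical CP-net: dinner in {meat, fish}, wine in {red, white}.
# Preferences: meat > fish unconditionally; given meat, red > white;
# given fish, white > red (all else being equal).

# Conditional preference tables: parent assignment -> values ordered best-first.
cpt = {
    "dinner": {(): ["meat", "fish"]},
    "wine": {("meat",): ["red", "white"], ("fish",): ["white", "red"]},
}
parents = {"dinner": [], "wine": ["dinner"]}

def improving_flips(outcome):
    """Outcomes reachable by improving one variable, all else held equal."""
    for var, tables in cpt.items():
        key = tuple(outcome[p] for p in parents[var])
        order = tables[key]
        i = order.index(outcome[var])
        for better in order[:i]:  # values ranked above the current one
            yield dict(outcome, **{var: better})

def dominates(a, b):
    """True if outcome a is reachable from b via a chain of improving flips."""
    frontier, seen = [b], set()
    while frontier:
        o = frontier.pop()
        if o == a:
            return True
        key = tuple(sorted(o.items()))
        if key in seen:
            continue
        seen.add(key)
        frontier.extend(improving_flips(o))
    return False

best = {"dinner": "meat", "wine": "red"}
worst = {"dinner": "fish", "wine": "red"}
print(dominates(best, worst))  # True: an improving-flip chain exists
print(dominates(worst, best))  # False: no chain worsens into "worst"
```

The flip search makes the "ceteris paribus" reading concrete: each step changes exactly one variable to a value its conditional table ranks higher, holding everything else fixed.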

958 citations

Journal ArticleDOI
TL;DR: A simple logical framework for default reasoning by treating defaults as predefined possible hypotheses is presented, and it is shown how this idea subsumes the intuition behind Reiter's default logic.

790 citations

Journal ArticleDOI
TL;DR: It is shown how any probabilistic knowledge representable in a discrete Bayesian belief network can be represented in this framework, and it is argued that it is better to invent new hypotheses to explain dependence rather than having to worry about dependence in the language.

648 citations

Book
08 Jan 1998
TL;DR: This chapter discusses representation and reasoning systems, robotic systems, and the uses of agent models, as well as some further implemented systems.
Abstract: Preface 1.1 What is Computational Intelligence? 1.2 Agents in the World 1.3 Representation and Reasoning 1.4 Applications 1.5 Overview 1.6 References and Further Reading 1.7 Exercises 2.1 Introduction 2.2 Representation and Reasoning Systems 2.3 Simplifying assumptions of the initial RRS 2.4 Datalog 2.5 Semantics 2.6 Questions and Answers 2.7 Proofs 2.8 Extending the Language with Functional Symbols 2.9 References and Further Reading 2.10 Exercises 3.1 Introduction 3.2 Case Study: House Wiring 3.3 Discussion 3.5 Case-Study: Representing Abstract Concepts 3.6 Applications in Natural Language Processing 3.7 References and Further Reading 3.8 Exercises 4.1 Why Search? 4.2 Graph Searching 4.3 A Generic Searching Algorithm 4.4 Blind Search Strategies 4.5 Heuristic Search 4.6 Refinements to Search Strategies 4.7 Constraint Satisfaction Problems 4.8 References and Further Reading 4.9 Exercises 5.1 Introduction 5.2 Defining a solution 5.3 Choosing a Representation Language 5.4 Mapping a problem to representation 5.5 Choosing an inference procedure 5.6 References and Further Reading 5.7 Exercises 6.1 Introduction 6.2 Knowledge-Based System Architecture 6.3 Meta-Interpreters 6.4 Querying the User 6.5 Explanation 6.6 Debugging Knowledge Bases 6.7 A Meta-Interpreter with Search 6.8 Unification 6.9 References and Further Reading 6.10 Exercises 7.1 Equality 7.2 Integrity Constraints 7.3 Complete Knowledge Assumption 7.4 Disjunctive Knowledge 7.5 Explicit Quantification 7.6 First-order predicate calculus 7.7 Modal Logic 7.8 References and Further Reading 7.9 Exercises 8.1 Introduction 8.2 Representations of Actions and Change 8.3 Reasoning with World Representations 8.4 References and Further Reading 8.5 Exercises 9.1 Introduction 9.2 An Assumption-Based Reasoning Framework 9.3 Default Reasoning 9.4 Abduction 9.5 Evidential and Causal Reasoning 9.6 Algorithms for Assumption-based Reasoning 9.7 References and Further Reading 9.8 Exercises 10.1 Introduction 10.2 
Probability 10.3 Independence Assumptions 10.4 Making Decisions Under Uncertainty 10.5 References and Further Reading 10.6 Exercises 11.1 Introduction 11.2 Learning as choosing the best representation 11.3 Case-based reasoning 11.4 Learning as refining the hypothesis space 11.5 Learning Under Uncertainty 11.6 Explanation-based Learning 11.7 References and Further Reading 11.8 Exercises 12.1 Introduction 12.2 Robotic Systems 12.3 The Agent function 12.4 Designing Robots 12.5 Uses of Agent models 12.6 Robot Architectures 12.7 Implementing a Controller 12.8 Robots Modelling the World 12.9 Reasoning in Situated Robots 12.10 References and Further Reading 12.11 Exercises Appendices A Glossary B The Prolog Programming Language B.1 Introduction B.2 Interacting with Prolog B.3 Syntax B.5 Database Relations B.6 Returning All Answers B.7 Input and Output B.8 Controlling Search C. Some More Implemented Systems C.1 Bottom-Up Interpreters C.2 Top-down Interpreters C.3 A Constraint Satisfaction Problem Solver C.4 Neural Network Learner C.5 Partial-Order Planner C.6 Implementing Belief Networks C.7 Robot Controller

604 citations

Proceedings Article
09 Aug 2003
TL;DR: This paper presents an algorithm to reason about multiple individuals, where the authors may know particular facts about some of them, but want to treat the others as a group.
Abstract: There have been many proposals for first-order belief networks (i.e., where we quantify over individuals) but these typically only let us reason about the individuals that we know about. There are many instances where we have to quantify over all of the individuals in a population. When we do this the population size often matters and we need to reason about all of the members of the population (but not necessarily individually). This paper presents an algorithm to reason about multiple individuals, where we may know particular facts about some of them, but want to treat the others as a group. Combining unification with variable elimination lets us reason about classes of individuals without needing to ground out the theory.
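Why population size matters without grounding can be shown with a toy query. The model below (each of n exchangeable individuals is independently "sick" with probability p, and we ask for the probability that at least one is) is a hypothetical illustration of the lifted-inference idea, not the paper's unification-plus-variable-elimination algorithm: because the individuals are interchangeable, the answer is a closed-form function of n, whereas naive grounding enumerates 2^n joint assignments.

```python
from itertools import product

def p_any_sick_lifted(n, p):
    # Lifted view: the individuals are exchangeable, so only the
    # population size n matters, not which individual is which.
    return 1.0 - (1.0 - p) ** n

def p_any_sick_grounded(n, p):
    # Naive grounding: enumerate all 2^n joint assignments over individuals.
    total = 0.0
    for assignment in product([False, True], repeat=n):
        prob = 1.0
        for sick in assignment:
            prob *= p if sick else (1.0 - p)
        if any(assignment):
            total += prob
    return total

n, p = 10, 0.1
lifted, grounded = p_any_sick_lifted(n, p), p_any_sick_grounded(n, p)
print(abs(lifted - grounded) < 1e-9)  # same answer, O(1) vs O(2^n) work
```

The gap between the two costs is what motivates reasoning about a population as a group, with known individuals (the abstract's "particular facts about some of them") handled separately.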

514 citations


Cited by
Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium; reviews deep supervised learning, unsupervised learning, reinforcement learning, and evolutionary computation; and surveys indirect search for short programs encoding deep and large networks.

14,635 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories.

First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules.

Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs.

Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.

Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically.

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit the submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood. Subsequently, very little is known, especially about mud-dominated calciclastic submarine fan systems. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) calciturbidites, composed mostly of high- to low-density, wavy-laminated bioclast-rich facies; 2) low-density densite mudstones, which are characterised by planar laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones. These

9,929 citations

Book
24 Aug 2012
TL;DR: This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
Abstract: Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package--PMTK (probabilistic modeling toolkit)--that is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.

8,059 citations

Journal ArticleDOI
TL;DR: By showing that argumentation can be viewed as a special form of logic programming with negation as failure, this paper introduces a general logic-programming-based method for generating meta-interpreters for argumentation systems, a method very much similar to the compiler-compiler idea in conventional programming.

4,386 citations