Author

Peter M. Todd

Bio: Peter M. Todd is an academic researcher from Indiana University. The author has contributed to research in topics including heuristics and ecological rationality. The author has an h-index of 52, has co-authored 252 publications, and has received 15,769 citations. Previous affiliations of Peter M. Todd include the Rowland Institute for Science and the Max Planck Society.


Papers
Book
01 Jan 1999
TL;DR: Fast and frugal heuristics as discussed by the authors are simple rules for making decisions with realistic mental resources and can enable both living organisms and artificial systems to make smart choices, classifications, and predictions by employing bounded rationality.
Abstract: Fast and frugal heuristics - simple rules for making decisions with realistic mental resources - are presented here. These heuristics can enable both living organisms and artificial systems to make smart choices, classifications, and predictions by employing bounded rationality. But when and how can such fast and frugal heuristics work? What heuristics are in the mind's adaptive toolbox, and what building blocks compose them? Can judgments based simply on a single reason be as accurate as those based on many reasons? Could less knowledge even lead to systematically better predictions than more knowledge? This book explores these questions by developing computational models of heuristics and testing them through experiments and analysis. It shows how fast and frugal heuristics can yield adaptive decisions in situations as varied as choosing a mate, dividing resources among offspring, predicting high school drop-out rates, and playing the stock market.
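One of the simplest members of the adaptive toolbox described here, the recognition heuristic, illustrates how a single reason (and even missing knowledge) can drive a choice. The following Python sketch is illustrative only; the function name, the random-guess fallback, and the city example are assumptions, not code or data from the book.

```python
import random

def recognition_heuristic(option_a, option_b, recognized):
    """Choose between two options using recognition alone.

    recognized: a set of option names the decision maker has heard of.
    If only one option is recognized, choose it (one-reason decision);
    if recognition does not discriminate, guess as a simple fallback.
    """
    a_known = option_a in recognized
    b_known = option_b in recognized
    if a_known and not b_known:
        return option_a          # single reason: recognition favours A
    if b_known and not a_known:
        return option_b          # single reason: recognition favours B
    return random.choice([option_a, option_b])  # recognition is silent: guess

# Hypothetical example: judging which city is larger, knowing only some names.
recognized_cities = {"Berlin", "Munich"}
print(recognition_heuristic("Berlin", "Bielefeld", recognized_cities))  # -> Berlin
```

Because recognition often correlates with the criterion (larger cities are more often heard of, for example), a partially ignorant decision maker using this rule can sometimes outperform a more knowledgeable one, which is the sense in which less knowledge can lead to better predictions.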

4,384 citations

Proceedings Article
01 Jun 1989
TL;DR: The RoboCup Keepaway Machine Learning testbed provided an excellent environment in which to train the authors' agents, but the problem still needed to be scaled down in order to do a feasibility study.
Abstract: RoboCup has come a long way since its creation in 1997 [1] and is a respected place for machine learning researchers to try out new algorithms in a competitive fashion. RoboCup is now an international competition that draws many teams and respected researchers looking for a chance to create the best team. Originally we set out to create a team to compete in RoboCup. This was an ambitious project, and we had hopes to finish within the next year. For this semester, we chose to scale the RoboCup team down to a smaller research area in which to try our learning algorithm. The scaled-down version of the RoboCup soccer environment is known as the "Keepaway Testbed" and was started by Peter Stone, University of Texas [2]. Here the task is simple: there are two teams on the field, each with the same number of players. Instead of trying to score a goal on the opponent, the teams are given tasks; one team is labeled the keepers and the other is labeled the takers. It is the task of the keepers to maintain possession of the ball, and it is the task of the takers to take the ball. The longer the keepers are able to maintain possession of the ball, the better the team. There are several advantages to this environment. First, it provides some of the essential characteristics of a real soccer game. Typically it is believed that if a team is able to maintain possession of the ball for long periods of time, it will win the match. Secondly, it provides realistic behavior much the same as the original RoboCup server. This is accomplished by introducing noise into the system similar to the original RoboCup, and similar to what would be received by real robots. Finally, for the learning process the environment is capable of stopping play once the takers have touched the ball and of starting a new trial based on that occurrence. Although the RoboCup Keepaway Machine Learning testbed provided an excellent environment to train our agents, we still needed to scale down the problem in order to do a feasibility study. Based on the Keepaway testbed, we created a simulation world with one simple task. One agent is placed into the world and has to locate the position of the goal. This can be thought of as an agent in a soccer environment needing to locate either the ball or another teammate. It was in this environment that we tested our methods for learning autonomous agents.
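The scaled-down feasibility task described above, a single agent that must locate a goal in a noisy world with trials that end on a terminating event, can be sketched roughly as follows. This is a hypothetical Python toy environment written for illustration; the class name, grid size, noise model, and reward values are assumptions and are not taken from the paper or from the Keepaway testbed code.

```python
import random

class GoalFindingWorld:
    """Toy episodic environment: one agent must locate a fixed goal cell.

    Mirrors the structure described above: noisy observations, and a trial
    ends (so a new one can start) once the terminating event occurs.
    """
    def __init__(self, size=10, noise=0.5):
        self.size, self.noise = size, noise
        self.reset()

    def reset(self):
        self.agent = (0, 0)
        self.goal = (random.randrange(self.size), random.randrange(self.size))
        return self.observe()

    def observe(self):
        # Noisy reading of the goal's position, standing in for sensor noise.
        gx, gy = self.goal
        return (gx + random.gauss(0, self.noise), gy + random.gauss(0, self.noise))

    def step(self, action):
        dx, dy = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}[action]
        x, y = self.agent
        self.agent = (min(max(x + dx, 0), self.size - 1),
                      min(max(y + dy, 0), self.size - 1))
        done = self.agent == self.goal           # trial ends when goal is located
        reward = 1.0 if done else -0.01          # small step cost, goal bonus
        return self.observe(), reward, done

# A hand-coded policy that greedily moves toward the noisy observation.
env = GoalFindingWorld()
obs, done, steps = env.observe(), False, 0
while not done and steps < 200:
    ax, ay = env.agent
    action = ("E" if obs[0] > ax else "W") if abs(obs[0] - ax) > abs(obs[1] - ay) \
             else ("N" if obs[1] > ay else "S")
    obs, reward, done = env.step(action)
    steps += 1
print("located goal:", done, "in", steps, "steps")
```

A learning agent would replace the hand-coded greedy policy with one improved from experience, using the episodic resets the environment provides.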

861 citations

Journal ArticleDOI
TL;DR: The choice overload hypothesis states that an increase in the number of options to choose from may lead to adverse consequences such as a decrease in the motivation to choose or the satisfaction with the finally chosen option as discussed by the authors.
Abstract: The choice overload hypothesis states that an increase in the number of options to choose from may lead to adverse consequences such as a decrease in the motivation to choose or the satisfaction with the finally chosen option. A number of studies found strong instances of choice overload in the lab and in the field, but others found no such effects or found that more choices may instead facilitate choice and increase satisfaction. In a meta‐analysis of 63 conditions from 50 published and unpublished experiments (N = 5,036), we found a mean effect size of virtually zero but considerable variance between studies. While further analyses indicated several potentially important preconditions for choice overload, no sufficient conditions could be identified. However, some idiosyncratic moderators proposed in single studies may still explain when and why choice overload reliably occurs; we review these studies and identify possible directions for future research.
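The headline result above, a mean effect size of virtually zero across 63 conditions together with considerable between-study variance, is the kind of quantity produced by standard random-effects meta-analytic pooling. The sketch below shows DerSimonian-Laird pooling in Python; the effect sizes and standard errors in the example are made-up placeholders, not data from the meta-analysis.

```python
def random_effects_mean(effects, ses):
    """DerSimonian-Laird random-effects pooled mean of study effect sizes.

    effects: per-study effect sizes (e.g., Cohen's d); ses: their standard errors.
    Returns (pooled_mean, tau_squared), where tau^2 is the between-study variance.
    """
    w = [1.0 / se**2 for se in ses]                      # fixed-effect weights
    fixed_mean = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - fixed_mean) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-study heterogeneity
    w_star = [1.0 / (se**2 + tau2) for se in ses]        # random-effects weights
    pooled = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

# Hypothetical placeholder values: some conditions show overload (d < 0, i.e.
# less motivation/satisfaction with more options), others the opposite.
effects = [-0.45, -0.30, 0.02, 0.25, 0.40, -0.05]
ses     = [ 0.15,  0.20, 0.10, 0.18, 0.22,  0.12]
mean_d, tau2 = random_effects_mean(effects, ses)
print(f"pooled d = {mean_d:+.2f}, tau^2 = {tau2:.3f}")
```

A pooled mean near zero alongside a sizable tau^2 is exactly the pattern the abstract describes: no overall effect, but real variability that moderator analyses then try to explain.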

800 citations

Journal ArticleDOI
TL;DR: It is shown how simple building blocks that control information search, stop search, and make decisions can be put together to form classes of heuristics, including ignorance-based and one-reason decision making for choice, elimination models for categorization, and satisficing heuristics for sequential search.
Abstract: How can anyone be rational in a world where knowledge is limited, time is pressing, and deep thought is often an unattainable luxury? Traditional models of unbounded rationality and optimization in cognitive science, economics, and animal behavior have tended to view decision-makers as possessing supernatural powers of reason, limitless knowledge, and endless time. But understanding decisions in the real world requires a more psychologically plausible notion of bounded rationality. In Simple heuristics that make us smart (Gigerenzer et al. 1999), we explore fast and frugal heuristics - simple rules in the mind's adaptive toolbox for making decisions with realistic mental resources. These heuristics can enable both living organisms and artificial systems to make smart choices quickly and with a minimum of information by exploiting the way that information is structured in particular environments. In this precis, we show how simple building blocks that control information search, stop search, and make decisions can be put together to form classes of heuristics, including ignorance-based and one-reason decision making for choice, elimination models for categorization, and satisficing heuristics for sequential search. These simple heuristics perform comparably to more complex algorithms, particularly when generalizing to new data - that is, simplicity leads to robustness. We present evidence regarding when people use simple heuristics and describe the challenges to be addressed by this research program.
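The building-block framing, a search rule, a stopping rule, and a decision rule composed into a heuristic, can be made concrete with a small sketch of one-reason decision making in the style of Take The Best. The Python below is illustrative only: the cue representation, the fallback when no cue discriminates, and the toy city data are assumptions rather than code from the research program.

```python
def take_the_best(option_a, option_b, cues):
    """One-reason choice composed from three building blocks.

    cues: list of functions ordered by (assumed) validity; each maps an
    option to 1 (cue present), 0 (cue absent), or None (unknown).
    """
    for cue in cues:                      # search rule: try cues in validity order
        a, b = cue(option_a), cue(option_b)
        if a is not None and b is not None and a != b:
            # stopping rule: stop at the first cue that discriminates
            # decision rule: choose the option favoured by that single cue
            return option_a if a > b else option_b
    return None                           # no cue discriminates: fall back (e.g., guess)

# Illustrative use: which of two (made-up) cities is larger, using binary cues.
cities = {
    "Aville": {"capital": 1, "has_airport": 1, "university": 0},
    "Bton":   {"capital": 0, "has_airport": 1, "university": 1},
}
cues = [lambda c, k=k: cities[c].get(k) for k in ("capital", "has_airport", "university")]
print(take_the_best("Aville", "Bton", cues))  # the 'capital' cue decides: Aville
```

Swapping in a different search, stopping, or decision rule yields a different member of the same family, which is what the abstract means by composing classes of heuristics from shared building blocks.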

604 citations


Cited by
Journal ArticleDOI
18 Jun 2018
TL;DR: This work proposes a novel architectural unit, termed the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels, and finds that SE blocks produce significant performance improvements for existing state-of-the-art deep architectures at minimal additional computational cost.
Abstract: The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the “Squeeze-and-Excitation” (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at slight additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251 percent, surpassing the winning entry of 2016 by a relative improvement of ~25 percent. Models and code are available at https://github.com/hujie-frank/SENet .
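The squeeze-and-excitation operation described above, global average pooling followed by a two-layer bottleneck with a sigmoid gate and channel-wise rescaling, is compact enough to sketch directly. Below is a minimal PyTorch-style sketch; the reduction ratio of 16 matches the paper's default, but the class and variable names are ours, and the authors' repository linked above should be treated as the reference implementation.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation block: recalibrates channel-wise responses.

    squeeze: global average pooling over H x W -> one descriptor per channel
    excitation: FC -> ReLU -> FC -> sigmoid bottleneck (reduction ratio r)
    scale: multiply each input channel by its learned gate in [0, 1]
    """
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))              # squeeze: (B, C) channel descriptors
        gate = self.fc(s).view(b, c, 1, 1)  # excitation: per-channel weights
        return x * gate                     # scale: channel-wise recalibration

# Example: recalibrate a batch of 64-channel feature maps.
features = torch.randn(8, 64, 32, 32)
print(SEBlock(64)(features).shape)  # torch.Size([8, 64, 32, 32])
```

Because the block only adds two small fully connected layers per stage, it can be dropped into existing architectures (ResNets, Inception variants) with little extra computation, which is the trade-off the abstract emphasizes.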

14,807 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations

Journal ArticleDOI
TL;DR: The abstract consists of the book's table of contents (Princeton Landmarks in Biology edition).
Abstract: Preface to the Princeton Landmarks in Biology Edition; Preface; Symbols Used; 1. The Importance of Islands; 2. Area and Number of Species; 3. Further Explanations of the Area-Diversity Pattern; 4. The Strategy of Colonization; 5. Invasibility and the Variable Niche; 6. Stepping Stones and Biotic Exchange; 7. Evolutionary Changes Following Colonization; 8. Prospect; Glossary; References; Index

14,171 citations

Book
John R. Koza
01 Jan 1992
TL;DR: This book discusses the evolution of architecture, primitive functions, terminals, sufficiency, and closure, and the role of representation and the lens effect in genetic programming.
Abstract: Background on genetic algorithms, LISP, and genetic programming; hierarchical problem-solving; introduction to automatically-defined functions - the two-boxes problem; problems that straddle the breakeven point for computational effort; Boolean parity functions; determining the architecture of the program; the lawnmower problem; the bumblebee problem; the increasing benefits of ADFs as problems are scaled up; finding an impulse response function; artificial ant on the San Mateo trail; obstacle-avoiding robot; the minesweeper problem; automatic discovery of detectors for letter recognition; flushes and four-of-a-kinds in a pinochle deck; introduction to biochemistry and molecular biology; prediction of transmembrane domains in proteins; prediction of omega loops in proteins; lookahead version of the transmembrane problem; evolutionary selection of the architecture of the program; evolution of primitives and sufficiency; evolutionary selection of terminals; evolution of closure; simultaneous evolution of architecture, primitive functions, terminals, sufficiency, and closure; the role of representation and the lens effect. Appendices: list of special symbols; list of special functions; list of type fonts; default parameters; computer implementation; annotated bibliography of genetic programming; electronic mailing list and public repository.

13,487 citations

Book
01 Jan 1996
TL;DR: An Introduction to Genetic Algorithms focuses in depth on a small set of important and interesting topics -- particularly in machine learning, scientific modeling, and artificial life -- and reviews a broad span of research, including the work of Mitchell and her colleagues.
Abstract: From the Publisher: "This is the best general book on Genetic Algorithms written to date. It covers background, history, and motivation; it selects important, informative examples of applications and discusses the use of Genetic Algorithms in scientific models; and it gives a good account of the status of the theory of Genetic Algorithms. Best of all, the book presents its material in clear, straightforward, felicitous prose, accessible to anyone with a college-level scientific background. If you want a broad, solid understanding of Genetic Algorithms -- where they came from, what's being done with them, and where they are going -- this is the book." -- John H. Holland, Professor, Computer Science and Engineering, and Professor of Psychology, The University of Michigan; External Professor, the Santa Fe Institute. Genetic algorithms have been used in science and engineering as adaptive algorithms for solving practical problems and as computational models of natural evolutionary systems. This brief, accessible introduction describes some of the most interesting research in the field and also enables readers to implement and experiment with genetic algorithms on their own. It focuses in depth on a small set of important and interesting topics -- particularly in machine learning, scientific modeling, and artificial life -- and reviews a broad span of research, including the work of Mitchell and her colleagues. The descriptions of applications and modeling projects stretch beyond the strict boundaries of computer science to include dynamical systems theory, game theory, molecular biology, ecology, evolutionary biology, and population genetics, underscoring the exciting "general purpose" nature of genetic algorithms as search methods that can be employed across disciplines. An Introduction to Genetic Algorithms is accessible to students and researchers in any scientific discipline. It includes many thought and computer exercises that build on and reinforce the reader's understanding of the text. The first chapter introduces genetic algorithms and their terminology and describes two provocative applications in detail. The second and third chapters look at the use of genetic algorithms in machine learning (computer programs, data analysis and prediction, neural networks) and in scientific models (interactions among learning, evolution, and culture; sexual selection; ecosystems; evolutionary activity). Several approaches to the theory of genetic algorithms are discussed in depth in the fourth chapter. The fifth chapter takes up implementation, and the last chapter poses some currently unanswered questions and surveys prospects for the future of evolutionary computation.
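As a concrete illustration of the "adaptive algorithms for solving practical problems" that the blurb describes, here is a minimal genetic algorithm in Python on the toy one-max problem (maximize the number of 1s in a bit string). The problem, parameter values, and operator choices (tournament selection, single-point crossover, bit-flip mutation, light elitism) are generic textbook defaults chosen for illustration, not an example taken from the book.

```python
import random

def one_max(bits):
    """Fitness: number of 1s in the bit string (maximum = len(bits))."""
    return sum(bits)

def evolve(pop_size=50, length=40, generations=60,
           crossover_rate=0.7, mutation_rate=0.01):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=one_max, reverse=True)
        next_pop = scored[:2]                      # elitism: carry over the two best
        while len(next_pop) < pop_size:
            # tournament selection of two parents
            p1 = max(random.sample(pop, 3), key=one_max)
            p2 = max(random.sample(pop, 3), key=one_max)
            if random.random() < crossover_rate:   # single-point crossover
                point = random.randrange(1, length)
                child = p1[:point] + p2[point:]
            else:
                child = p1[:]
            child = [b ^ 1 if random.random() < mutation_rate else b
                     for b in child]               # bit-flip mutation
            next_pop.append(child)
        pop = next_pop
    best = max(pop, key=one_max)
    return best, one_max(best)

best, fitness = evolve()
print("best fitness:", fitness, "of", 40)
```

The same loop structure (evaluate, select, recombine, mutate) underlies the machine learning and scientific-modeling applications the book surveys; only the representation and fitness function change.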

9,933 citations