Author

Amy Perfors

Bio: Amy Perfors is an academic researcher from the University of Melbourne. The author has contributed to research in topics including Language acquisition and Categorization. The author has an h-index of 25 and has co-authored 100 publications receiving 2,885 citations. Previous affiliations of Amy Perfors include the University of Adelaide and Stanford University.


Papers
Journal ArticleDOI
TL;DR: Speed and accuracy in spoken word recognition at 25 months were correlated with measures of lexical and grammatical development from 12 to 25 months, and children who were faster and more accurate in online comprehension showed faster, more accelerated growth in expressive vocabulary across the 2nd year.
Abstract: To explore how online speech processing efficiency relates to vocabulary growth in the 2nd year, the authors longitudinally observed 59 English-learning children at 15, 18, 21, and 25 months as they looked at pictures while listening to speech naming one of the pictures. The time course of eye movements in response to speech revealed significant increases in the efficiency of comprehension over this period. Further, speed and accuracy in spoken word recognition at 25 months were correlated with measures of lexical and grammatical development from 12 to 25 months. Analyses of growth curves showed that children who were faster and more accurate in online comprehension at 25 months were those who showed faster and more accelerated growth in expressive vocabulary across the 2nd year.

506 citations

Journal ArticleDOI
TL;DR: It is argued that the top-down approach to modeling cognition yields greater flexibility for exploring the representations and inductive biases that underlie human cognition.

464 citations

Journal ArticleDOI
TL;DR: This work presents an introduction to Bayesian inference as it is used in probabilistic models of cognitive development, and discusses some important interpretation issues that often arise when evaluating Bayesian models in cognitive science.

211 citations
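
As a rough illustration of the kind of Bayesian inference such an introduction covers (a sketch, not code from the paper), the snippet below computes a posterior over two toy hypotheses about a word's meaning using an assumed size-principle likelihood; the hypotheses, extensions, and uniform prior are invented for illustration.

```python
# A minimal sketch of Bayes' rule over candidate hypotheses, as used in
# probabilistic models of cognitive development. The hypotheses, their
# extensions and sizes, and the uniform prior are illustrative assumptions.

def posterior(hypotheses, prior, likelihood, data):
    """Return P(h | data) for each hypothesis h via Bayes' rule."""
    unnormalized = {h: prior[h] * likelihood(h, data) for h in hypotheses}
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

# Toy question: does the novel word "fep" mean "dog" or "animal"?
extension = {"only dogs": {"dog"}, "all animals": {"dog", "cat", "bird"}}
size = {"only dogs": 1, "all animals": 3}
prior = {"only dogs": 0.5, "all animals": 0.5}

def likelihood(h, data):
    # Size principle: each observation consistent with h has probability 1/|h|.
    if not all(x in extension[h] for x in data):
        return 0.0
    return (1.0 / size[h]) ** len(data)

# Three examples that are all dogs favour the narrower hypothesis.
print(posterior(extension.keys(), prior, likelihood, ["dog", "dog", "dog"]))
```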

Journal ArticleDOI
TL;DR: This work describes the collection of word associations for over 12,000 cue words, currently the largest such English-language resource in the world, and shows that measures based on a mechanism of spreading activation derived from this new resource are highly predictive of direct judgments of similarity.
Abstract: Word associations have been used widely in psychology, but the validity of their application strongly depends on the number of cues included in the study and the extent to which they probe all associations known by an individual. In this work, we address both issues by introducing a new English word association dataset. We describe the collection of word associations for over 12,000 cue words, currently the largest such English-language resource in the world. Our procedure allowed subjects to provide multiple responses for each cue, which permits us to measure weak associations. We evaluate the utility of the dataset in several different contexts, including lexical decision and semantic categorization. We also show that measures based on a mechanism of spreading activation derived from this new resource are highly predictive of direct judgments of similarity. Finally, a comparison with existing English word association sets further highlights systematic improvements provided through these new norms.

192 citations
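
As an illustration of the spreading-activation measure the abstract mentions (a sketch under assumed parameters, not the authors' implementation), the snippet below spreads activation from a cue word over a toy association graph and scores the similarity of two words as the overlap of their activation vectors.

```python
# A minimal sketch of similarity via spreading activation over word-association
# norms. The toy graph, the number of steps, and the decay factor are
# illustrative assumptions.
from collections import defaultdict

# assoc[cue][response] = association strength (e.g., response proportion)
assoc = {
    "cat": {"dog": 0.4, "pet": 0.3, "fur": 0.3},
    "dog": {"cat": 0.35, "pet": 0.35, "bone": 0.3},
    "pet": {"dog": 0.5, "cat": 0.5},
}

def spread(start, steps=2, decay=0.5):
    """Spread activation outward from `start` for a fixed number of steps."""
    activation = defaultdict(float)
    activation[start] = 1.0
    frontier = {start: 1.0}
    for _ in range(steps):
        nxt = defaultdict(float)
        for node, act in frontier.items():
            for neigh, weight in assoc.get(node, {}).items():
                nxt[neigh] += act * weight * decay
        for node, act in nxt.items():
            activation[node] += act
        frontier = nxt
    return activation

def similarity(a, b):
    """Cosine overlap of the two activation vectors as a similarity proxy."""
    va, vb = spread(a), spread(b)
    keys = set(va) | set(vb)
    dot = sum(va[k] * vb[k] for k in keys)
    na = sum(v * v for v in va.values()) ** 0.5
    nb = sum(v * v for v in vb.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

print(similarity("cat", "dog"))
```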

Journal Article
TL;DR: This article uses a Bayesian framework for grammar induction and shows that an ideal learner could recognize the hierarchical phrase structure of language without having this knowledge innately specified as part of the language faculty.

191 citations


Cited by
Journal ArticleDOI
TL;DR: This target article critically examines this "hierarchical prediction machine" approach, concluding that it offers the best clue yet to the shape of a unified science of mind and action.
Abstract: Brains, it has recently been argued, are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions. This is achieved using a hierarchical generative model that aims to minimize prediction error within a bidirectional cascade of cortical processing. Such accounts offer a unifying model of perception and action, illuminate the functional role of attention, and may neatly capture the special contribution of cortical processing to adaptive success. This target article critically examines this "hierarchical prediction machine" approach, concluding that it offers the best clue yet to the shape of a unified science of mind and action. Sections 1 and 2 lay out the key elements and implications of the approach. Section 3 explores a variety of pitfalls and challenges, spanning the evidential, the methodological, and the more properly conceptual. The paper ends (sections 4 and 5) by asking how such approaches might impact our more general vision of mind, experience, and agency.

3,640 citations
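
To make the core mechanism concrete, here is a minimal sketch of prediction-error minimization in a single two-level hierarchy; the linear generative model, random data, and learning rate are illustrative assumptions rather than anything from the target article.

```python
# A minimal sketch of prediction-error minimization: a higher-level latent
# estimate is adjusted until its top-down prediction matches the sensory input.
# The linear model, data, and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 2))          # top-down generative weights: cause -> prediction
sensory = rng.normal(size=4)         # incoming sensory input
cause = np.zeros(2)                  # latent estimate at the higher level

for _ in range(200):
    prediction = W @ cause           # top-down prediction of the input
    error = sensory - prediction     # bottom-up prediction error
    cause += 0.05 * (W.T @ error)    # adjust the latent estimate to reduce error

print("remaining prediction error:", np.linalg.norm(sensory - W @ cause))
```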

Journal ArticleDOI
01 Jun 1959

3,442 citations

Posted Content
TL;DR: It is argued that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective.
Abstract: Artificial intelligence (AI) has undergone a renaissance recently, making major progress in key domains such as vision, language, control, and decision-making. This has been due, in part, to cheap data and cheap compute resources, which have fit the natural strengths of deep learning. However, many defining characteristics of human intelligence, which developed under much different pressures, remain out of reach for current approaches. In particular, generalizing beyond one's experiences--a hallmark of human intelligence from infancy--remains a formidable challenge for modern AI. The following is part position paper, part review, and part unification. We argue that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective. Just as biology uses nature and nurture cooperatively, we reject the false choice between "hand-engineering" and "end-to-end" learning, and instead advocate for an approach which benefits from their complementary strengths. We explore how using relational inductive biases within deep learning architectures can facilitate learning about entities, relations, and rules for composing them. We present a new building block for the AI toolkit with a strong relational inductive bias--the graph network--which generalizes and extends various approaches for neural networks that operate on graphs, and provides a straightforward interface for manipulating structured knowledge and producing structured behaviors. We discuss how graph networks can support relational reasoning and combinatorial generalization, laying the foundation for more sophisticated, interpretable, and flexible patterns of reasoning. As a companion to this paper, we have released an open-source software library for building graph networks, with demonstrations of how to use them in practice.

2,170 citations
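
The graph network described in the abstract can be sketched as a single block that updates edge, node, and global features in turn; the version below is an assumed toy implementation in plain NumPy (random linear maps standing in for trained networks), not the released open-source library.

```python
# A minimal sketch of one graph-network block: edges are updated from their
# endpoints and the global feature, nodes from their aggregated incoming
# edges, and the global feature from aggregated edges and nodes. The random
# "mlp" weights and toy graph are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def mlp(in_dim, out_dim):
    W = rng.normal(size=(in_dim, out_dim)) * 0.1
    return lambda x: np.tanh(x @ W)

def graph_network_block(nodes, edges, senders, receivers, globals_):
    n_feat, e_feat, g_feat = nodes.shape[1], edges.shape[1], globals_.shape[0]
    phi_e = mlp(e_feat + 2 * n_feat + g_feat, e_feat)   # edge update
    phi_v = mlp(n_feat + e_feat + g_feat, n_feat)       # node update
    phi_g = mlp(g_feat + e_feat + n_feat, g_feat)       # global update

    # 1. Update each edge from its features, its endpoints, and the globals.
    new_edges = phi_e(np.concatenate(
        [edges, nodes[senders], nodes[receivers],
         np.tile(globals_, (edges.shape[0], 1))], axis=1))

    # 2. Aggregate incoming edges per receiving node, then update nodes.
    agg = np.zeros((nodes.shape[0], e_feat))
    np.add.at(agg, receivers, new_edges)
    new_nodes = phi_v(np.concatenate(
        [nodes, agg, np.tile(globals_, (nodes.shape[0], 1))], axis=1))

    # 3. Update the global feature from aggregated edges and nodes.
    new_globals = phi_g(np.concatenate(
        [globals_, new_edges.mean(0), new_nodes.mean(0)]))
    return new_nodes, new_edges, new_globals

# Toy graph: 3 nodes, 2 directed edges.
nodes = rng.normal(size=(3, 4))
edges = rng.normal(size=(2, 3))
senders, receivers = np.array([0, 1]), np.array([1, 2])
globals_ = rng.normal(size=(2,))
print([a.shape for a in graph_network_block(nodes, edges, senders, receivers, globals_)])
```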

Journal ArticleDOI
TL;DR: This article reviews recent progress in cognitive science, suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it.
Abstract: Recent progress in artificial intelligence has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats that of humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it. Specifically, we argue that these machines should (1) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (2) ground learning in intuitive theories of physics and psychology to support and enrich the knowledge that is learned; and (3) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes toward these goals that can combine the strengths of recent neural network advances with more structured cognitive models.

2,010 citations