
Proceedings Article

Quantification and the language of thought

07 Dec 2009, Vol. 22, pp. 943-951

TL;DR: It is proposed that the language of thought allows first-order quantification more readily than second-order quantification, and behavioral results from a concept learning study inspired by the work of Shepard, Hovland and Jenkins are presented in support of this proposal.

Abstract: Many researchers have suggested that the psychological complexity of a concept is related to the length of its representation in a language of thought. As yet, however, there are few concrete proposals about the nature of this language. This paper makes one such proposal: the language of thought allows first-order quantification (quantification over objects) more readily than second-order quantification (quantification over features). To support this proposal we present behavioral results from a concept learning study inspired by the work of Shepard, Hovland and Jenkins.
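The first-order/second-order contrast at the heart of the abstract can be made concrete with a toy example (my own illustration, not code from the paper): a first-order concept quantifies over objects while treating features as fixed constants, whereas a second-order concept quantifies over the features themselves.

```python
# Toy domain: each object is a dict of Boolean features.
objects = [
    {"red": True,  "large": True},
    {"red": True,  "large": False},
    {"red": False, "large": True},
]

FEATURES = ["red", "large"]

# First-order concept: "some object is red and large".
# Quantifies over objects; the features named are constants.
def exists_red_and_large(objs):
    return any(o["red"] and o["large"] for o in objs)

# Second-order concept: "there is a feature that every object has".
# Quantifies over the features themselves.
def exists_shared_feature(objs):
    return any(all(o[f] for o in objs) for f in FEATURES)

print(exists_red_and_large(objects))   # True: the first object is both
print(exists_shared_feature(objects))  # False: no feature holds of all three
```

The paper's proposal, on this reading, is that concepts of the first kind are easier to learn than concepts of the second kind.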



Citations

Journal ArticleDOI
TL;DR: This work shows how different sets of LOT primitives, embedded in a psychologically realistic approximate Bayesian inference framework, systematically predict distinct learning curves in rule-based concept learning experiments, and shows how specific LOT theories can be distinguished empirically.
Abstract: The notion of a compositional language of thought (LOT) has been central in computational accounts of cognition from earliest attempts (Boole, 1854; Fodor, 1975) to the present day (Feldman, 2000; Penn, Holyoak, & Povinelli, 2008; Fodor, 2008; Kemp, 2012; Goodman, Tenenbaum, & Gerstenberg, 2015). Recent modeling work shows how statistical inferences over compositionally structured hypothesis spaces might explain learning and development across a variety of domains. However, the primitive components of such representations are typically assumed a priori by modelers and theoreticians rather than determined empirically. We show how different sets of LOT primitives, embedded in a psychologically realistic approximate Bayesian inference framework, systematically predict distinct learning curves in rule-based concept learning experiments. We use this feature of LOT models to design a set of large-scale concept learning experiments that can determine the most likely primitives for psychological concepts involving Boolean connectives and quantification. Subjects' inferences are most consistent with a rich (nonminimal) set of Boolean operations, including first-order, but not second-order, quantification. Our results more generally show how specific LOT theories can be distinguished empirically.
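One way to see why the choice of LOT primitives predicts distinct learning curves (an illustrative sketch under my own assumptions, not the authors' model): if the prior on a concept decays with its description length in the grammar, then a grammar that includes XOR as a primitive assigns a much higher prior to an XOR concept than a grammar that must expand it from AND/OR/NOT.

```python
# Expressions are nested tuples: ("op", arg1, arg2, ...) or feature names.
def size(expr):
    # Number of nodes in the expression tree.
    if isinstance(expr, str):
        return 1
    return 1 + sum(size(arg) for arg in expr[1:])

# The concept "red XOR large", written directly in a grammar with an
# XOR primitive, versus expanded in a grammar that lacks one.
with_xor    = ("xor", "red", "large")
without_xor = ("or", ("and", "red", ("not", "large")),
                     ("and", ("not", "red"), "large"))

# A simple description-length prior: P(expr) proportional to 2^(-size).
prior = lambda e: 2.0 ** -size(e)

print(size(with_xor), size(without_xor))  # 3 vs 9
# The XOR-primitive grammar gives this concept a far higher prior,
# so a Bayesian learner using it should acquire the concept faster.
```

Comparing human learning curves against the curves each candidate grammar predicts is, roughly, how such experiments adjudicate between primitive sets.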

87 citations


Cites background or methods or result from "Quantification and the language of ..."

  • ...We build on recently developed computational tools and empirical techniques that have allowed detailed modeling of how people learn symbolically structured concepts across a variety of domains (e.g., Nosofsky, Palmeri, & McKinley, 1994; Kemp, Goodman, & Tenenbaum, 2008a; Kemp, 2009; Piantadosi, 2011; Kemp, 2012; Ullman et al., 2012)....

    [...]

  • ...This provides strong evidence for quantification in the LOT, in line with Kemp (2009); the superiority of a grammar with multiple types of quantifiers indicates that, like the Boolean results, quantificational operations in the LOT do not make use of a “minimal” basis of operations (such as just…...

    [...]

  • ...We find it compelling that, despite these differences, we find qualitatively similar results, including a tendency for quantification over objects but not features (Kemp, 2009, 2012)....

    [...]

  • ...Building on Kemp (2009, 2012), and motivated by both classic work in formal semantics (Montague, 1973), AI (Levesque et al., 1998; Muggleton & De Raedt, 1994; Milch et al., 2004; Russell & Norvig, 2009; Domingos & Richardson, 2007; Richardson & Domingos, 2006; Goodman, Mansinghka, et al. 2008;…...

    [...]


Dissertation
01 Jan 2011
TL;DR: An inductive statistical model is presented over a compositionally structured representation system, a language of thought (LOT) (Fodor, 1975), that formalizes an optimal Bayesian trade-off between representational complexity and fit to the observed data.
Abstract: Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2011.

24 citations


Cites result from "Quantification and the language of ..."

  • ...Building off of previous experimental and computational studies (Kemp, 2009; Piantadosi et al., 2009), we extend the modeling results to the wider range of concepts from our experiment that involve quantification and relational terms....

    [...]



Journal Article
TL;DR: A data analysis technique is developed for a family of compositional “Language of Thought” (LOT) models which permits discovery of subjects’ prior probability of mental operations in this domain and reveals high correlations between model mean predictions and subject generalizations.
Abstract: We apply Bayesian data analysis to a structured cognitive model in order to determine the priors that support human generalizations in a simple concept learning task. We modeled 250,000 ratings in a “number game” experiment where subjects took examples of numbers produced by a program (e.g. 4, 16, 32) and rated how likely other numbers (e.g. 8 vs. 9) would be to be generated. This paper develops a data analysis technique for a family of compositional “Language of Thought” (LOT) models which permits discovery of subjects’ prior probability of mental operations (e.g. addition, multiplication, etc.) in this domain. Our results reveal high correlations between model mean predictions and subject generalizations, but with some qualitative mismatch for a strongly compositional prior.
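The “number game” setup this abstract describes can be sketched as a minimal Bayesian model (a hedged illustration in the spirit of the task, not the authors' code; the hypothesis sets and the strong-sampling likelihood are my assumptions):

```python
# Candidate number concepts over 1..100.
hypotheses = {
    "powers of 2":    {2, 4, 8, 16, 32, 64},
    "multiples of 4": set(range(4, 101, 4)),
    "even numbers":   set(range(2, 101, 2)),
}

def posterior(data, hypotheses):
    # Size-principle likelihood under strong sampling:
    # (1/|h|)^n if every example is in h, else 0.
    scores = {}
    for name, h in hypotheses.items():
        if all(x in h for x in data):
            scores[name] = (1.0 / len(h)) ** len(data)
        else:
            scores[name] = 0.0
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

def p_in_concept(y, post, hypotheses):
    # Generalization rating: probability that y belongs to the concept.
    return sum(p for name, p in post.items() if y in hypotheses[name])

post = posterior([4, 16, 32], hypotheses)
# "powers of 2" dominates: it is the smallest consistent hypothesis,
# so 8 gets a high rating and 9 (in none of these sets) gets zero.
```

Fitting the prior over the mental operations that generate such hypotheses, rather than assuming it, is the data-analysis contribution the abstract describes.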

2 citations


Cites methods from "Quantification and the language of ..."

  • ...To test these assumptions, different particular LOTs have been tested to compare, for instance, LOT theories with distinct types of quantification or varying sets of boolean operations (Kemp, 2009, 2012; S. Piantadosi, 2011; S. T. Piantadosi, Tenenbaum, & Goodman, under review)....

    [...]


References

Book
01 Jan 1975
Abstract: In a compelling defense of the speculative approach to the philosophy of mind, Jerry Fodor argues that, while our best current theories of cognitive psychology view many higher processes as computational, computation itself presupposes an internal medium of representation. Fodor's prime concerns are to buttress the notion of internal representation from a philosophical viewpoint, and to determine those characteristics of this conceptual construct using the empirical data available from linguistics and cognitive psychology.

4,287 citations


Book
01 Jan 1972
TL;DR: A logic textbook that progresses from sentential logic through first-order logic and undecidability to second-order logic.
Abstract: Useful facts about sets. Sentential logic. First-order logic. Undecidability. Second-order logic.

2,134 citations


"Quantification and the language of ..." refers background in this paper

  • ...If partitions that differ only up to a permutation of the features (Domain 3) or objects (Domain 4) are grouped into equivalence classes, there are ten of these classes, and a representative of each is shown in Figure 1b. Previous researchers [6] have pointed out that the stimuli in Domain 1 can be…...

    [...]


Proceedings ArticleDOI
25 Oct 2008
TL;DR: This work explores the use of Amazon's Mechanical Turk system, a significantly cheaper and faster method for collecting annotations from a broad base of paid non-expert contributors over the Web, and proposes a technique for bias correction that significantly improves annotation quality on two tasks.
Abstract: Human linguistic annotation is crucial for many natural language processing tasks but can be expensive and time-consuming. We explore the use of Amazon's Mechanical Turk system, a significantly cheaper and faster method for collecting annotations from a broad base of paid non-expert contributors over the Web. We investigate five tasks: affect recognition, word similarity, recognizing textual entailment, event temporal ordering, and word sense disambiguation. For all five, we show high agreement between Mechanical Turk non-expert annotations and existing gold standard labels provided by expert labelers. For the task of affect recognition, we also show that using non-expert labels for training machine learning algorithms can be as effective as using gold standard annotations from experts. We propose a technique for bias correction that significantly improves annotation quality on two tasks. We conclude that many large labeling tasks can be effectively designed and carried out in this method at a fraction of the usual expense.

2,086 citations


Journal ArticleDOI
TL;DR: "Games with a purpose" have a vast range of applications in areas as diverse as security, computer vision, Internet accessibility, adult content filtering, and Internet search, and any game designed to address these and other problems must ensure that game play results in a correct solution and, at the same time, is enjoyable.
Abstract: Through online games, people can collectively solve large-scale computational problems. Such games constitute a general mechanism for using brain power to solve open problems. In fact, designing such a game is much like designing an algorithm - it must be proven correct, its efficiency can be analyzed, a more efficient version can supersede a less efficient one, and so on. "Games with a purpose" have a vast range of applications in areas as diverse as security, computer vision, Internet accessibility, adult content filtering, and Internet search. Any game designed to address these and other problems must ensure that game play results in a correct solution and, at the same time, is enjoyable. People will play such games to be entertained, not to solve a problem - no matter how laudable the objective.

1,014 citations


Book
17 Jan 2007
Abstract: Self-taught mathematician and father of Boolean algebra, George Boole (1815–1864) published An Investigation of the Laws of Thought in 1854. In this highly original investigation of the fundamental laws of human reasoning, a sequel to ideas he had explored in earlier writings, Boole uses the symbolic language of mathematics to establish a method to examine the nature of the human mind using logic and the theory of probabilities. Boole considers language not just as a mode of expression, but as a system one can use to understand the human mind. In the first 12 chapters, he sets down the rules necessary to represent logic in this unique way. Then he analyses a variety of arguments and propositions of various writers from Aristotle to Spinoza. One of history's most insightful mathematicians, Boole is compelling reading for today's student of intellectual history and the science of the mind.

846 citations