Open Access Proceedings Article

Quantification and the language of thought

Charles Kemp
Vol. 22, pp. 943–951
TLDR
To support this proposal, behavioral results are presented from a concept learning study inspired by the work of Shepard, Hovland and Jenkins, and it is shown that the language of thought allows first-order quantification more readily than second-order quantification.
Abstract
Many researchers have suggested that the psychological complexity of a concept is related to the length of its representation in a language of thought. As yet, however, there are few concrete proposals about the nature of this language. This paper makes one such proposal: the language of thought allows first-order quantification (quantification over objects) more readily than second-order quantification (quantification over features). To support this proposal we present behavioral results from a concept learning study inspired by the work of Shepard, Hovland and Jenkins.
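As a rough illustration of the proposal (not taken from the paper), the Python sketch below contrasts a concept that quantifies only over objects with one that quantifies over features; the feature names and example objects are hypothetical. Under a length-based complexity measure, the second-order formula (exists f . forall x . f(x)) needs an extra quantifier, so it should be harder to learn than the first-order formula (forall x . f1(x)).

```python
# Illustrative sketch only: objects are dicts of binary features.
# A first-order concept quantifies over objects; a second-order
# concept quantifies over features as well.

FEATURES = ["f1", "f2", "f3"]  # hypothetical feature names

def first_order_concept(objects):
    """'Every object has feature f1' -- quantifies over objects only."""
    return all(obj["f1"] for obj in objects)

def second_order_concept(objects):
    """'Some feature is shared by every object' -- quantifies over features."""
    return any(all(obj[f] for obj in objects) for f in FEATURES)

example = [{"f1": 1, "f2": 0, "f3": 1},
           {"f1": 1, "f2": 1, "f3": 0}]
print(first_order_concept(example))   # True
print(second_order_concept(example))  # True (f1 is shared by both objects)
```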


Citations
Journal Article

The logical primitives of thought: Empirical foundations for compositional cognitive models.

TL;DR: This work shows how different sets of LOT primitives, embedded in a psychologically realistic approximate Bayesian inference framework, systematically predict distinct learning curves in rule-based concept learning experiments, and thus how specific LOT theories can be distinguished empirically.
Dissertation

Learning and the language of thought

TL;DR: An inductive statistical model is presented over a compositionally structured representation system, a language of thought (LOT) (Fodor, 1975); the model formalizes an optimal Bayesian trade-off between representational complexity and fit to the observed data.
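The following Python sketch illustrates the kind of complexity-versus-fit trade-off described above, under simple assumptions of my own choosing: a prior that decays exponentially with rule length and a noisy-label likelihood. The rules, lengths, and data are hypothetical and are not drawn from the dissertation.

```python
import math

def log_prior(rule_length, alpha=0.5):
    # Longer rules (more LOT symbols) get exponentially smaller prior probability.
    return rule_length * math.log(alpha)

def log_likelihood(rule, labeled_items, noise=0.05):
    # Each observed label matches the rule's prediction with probability 1 - noise.
    total = 0.0
    for item, label in labeled_items:
        p = (1 - noise) if rule(item) == label else noise
        total += math.log(p)
    return total

def posterior_score(rule, rule_length, labeled_items):
    # Unnormalized log posterior: complexity penalty plus fit to the data.
    return log_prior(rule_length) + log_likelihood(rule, labeled_items)

data = [({"f1": 1, "f2": 0}, True),
        ({"f1": 0, "f2": 0}, False),
        ({"f1": 1, "f2": 1}, False)]
short_rule = lambda x: bool(x["f1"])                  # short, misclassifies one item
long_rule = lambda x: bool(x["f1"]) and not x["f2"]   # longer, fits every item
print(posterior_score(short_rule, 1, data))
print(posterior_score(long_rule, 3, data))
```

Comparing the two scores shows how a longer rule can still win when its better fit outweighs the length penalty, and how the balance shifts with the prior and noise parameters.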
Journal Article

Inferring priors in compositional cognitive models.

TL;DR: A data analysis technique is developed for a family of compositional “Language of Thought” (LOT) models; it permits discovery of subjects’ prior probabilities over mental operations in this domain and reveals high correlations between the models’ mean predictions and subjects’ generalizations.
References
Journal Article

Schematic influences on category learning and recognition memory.

TL;DR: The authors propose a clustering account in which deviant items are better remembered because they are differentiated from clusters that capture regularities, a role akin to that of schemas.
Journal Article

Extending the ALCOVE model of category learning to featural stimulus domains.

TL;DR: A featural version of the ALCOVE model is developed, replacing the spatial stimulus representations usually generated by multidimensional scaling with featural representations generated by additive clustering, and the featural model is shown to capture human performance where the spatial model failed.
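A minimal sketch of the contrast described above, assuming an exemplar similarity of the exp(-c·d) form used by ALCOVE-style models: the spatial version measures city-block distance over MDS-style coordinates, while the featural version counts mismatched binary features. The representations and parameter values are invented for illustration.

```python
import math

def spatial_distance(x, y):
    # City-block distance between continuous MDS-style coordinates.
    return sum(abs(a - b) for a, b in zip(x, y))

def featural_distance(x, y):
    # Number of binary features on which the two items mismatch
    # (an additive-clustering-style representation).
    return sum(a != b for a, b in zip(x, y))

def similarity(x, y, distance, c=1.0):
    # Exemplar similarity decays exponentially with distance.
    return math.exp(-c * distance(x, y))

# Two representations of the same (made-up) pair of stimuli:
spatial_a, spatial_b = (0.2, 0.9), (0.6, 0.4)    # continuous coordinates
featural_a, featural_b = (1, 0, 1), (1, 1, 1)    # binary features

print(similarity(spatial_a, spatial_b, spatial_distance))     # exp(-0.9)
print(similarity(featural_a, featural_b, featural_distance))  # exp(-1)
```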
Book Chapter

Balance, Agreement, and Positivity in the Cognition of Small Social Structures

TL;DR: In this paper, a conceptual rule is defined as an a priori principle that can be used to organize the relations among a set of objects, and its application is examined for three broad classes of relations: sentiments, attitudes, and unit relations.
Journal Article

A note on the complexity of Boolean concepts

TL;DR: A heuristic procedure for reducing Boolean formulae is introduced, based in part on the well-established minimization technique from Boolean algebra known as the Quine–McCluskey (QM) method, which, when applied to the SHJ Boolean concept types, reveals that some of their complexity values are notably different from the approximate values obtained by Feldman.
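For readers who want to experiment with Boolean minimization of this kind, the sketch below uses SymPy's SOPform, which computes a minimal sum-of-products in the spirit of the Quine–McCluskey method; it is not the heuristic procedure introduced in the paper. The concept (its minterms) is a hypothetical stand-in for an SHJ-style Boolean concept.

```python
from sympy import symbols
from sympy.logic import SOPform

a, b, c = symbols("a b c")

# Positive examples of a made-up concept, written as minterms over three binary features.
minterms = [[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]]

minimal = SOPform([a, b, c], minterms)
print(minimal)  # a -- the four positive examples reduce to a single literal

# A length-based complexity measure could then count the literals in `minimal`.
```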
Proceedings Article

Learning and using relational theories

TL;DR: It is proposed that intuitive theories are mentally represented in a logical language, and that the subjective complexity of a theory is determined by the length of its representation in this language.