Posted Content

The Generalized Universal Law of Generalization

TL;DR: In this paper, the authors show that the Universal Law of Generalization holds with probability going to one, provided the confusion probabilities are computable. They also give a mathematically more appealing form of the law.
Abstract: It has been argued by Shepard that there is a robust psychological law relating the distance between a pair of items in psychological space and the probability that they will be confused with each other. Specifically, the probability of confusion is a negative exponential function of the distance between the pair of items. In experimental contexts, distance is typically defined in terms of a multidimensional Euclidean space, but this assumption seems unlikely to hold for complex stimuli. We show that, nonetheless, the Universal Law of Generalization can be derived in the more complex setting of arbitrary stimuli, using a much more universal measure of distance. This universal distance is defined as the length of the shortest program that transforms the representations of the two items of interest into one another: the algorithmic information distance. It is universal in the sense that it minorizes every computable distance: it is the smallest computable distance. We show that the Universal Law of Generalization holds with probability going to one, provided the confusion probabilities are computable. We also give a mathematically more appealing form of the law.
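As a rough illustration (not from the paper itself): the algorithmic information distance is uncomputable, so in practice it is often approximated by compression, as in the Normalized Compression Distance of Cilibrasi and Vitányi. The sketch below uses Python's zlib as the compressor and an arbitrary decay constant `c`; both choices, and the example stimuli, are illustrative assumptions rather than values fixed by the paper.

```python
# A minimal sketch, assuming compressed length as a practical stand-in for
# the (uncomputable) Kolmogorov complexity K(x). The decay constant `c` and
# the example stimuli are illustrative, not taken from the paper.
import math
import zlib

def c_len(data: bytes) -> int:
    """Compressed length: an upper-bound proxy for Kolmogorov complexity K(x)."""
    return len(zlib.compress(data, level=9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: a computable approximation of the
    normalized information distance between x and y."""
    cx, cy, cxy = c_len(x), c_len(y), c_len(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def confusion_probability(x: bytes, y: bytes, c: float = 1.0) -> float:
    """Shepard-style generalization: confusion probability decays as a
    negative exponential of the (approximated) universal distance."""
    return math.exp(-c * ncd(x, y))

# Similar descriptions compress well together, yielding a small distance
# and hence a high predicted probability of confusion.
a = b"a striped quadruped with a long tail"
b = b"a striped quadruped with a short tail"
print(f"NCD = {ncd(a, b):.3f}, P(confuse) = {confusion_probability(a, b):.3f}")
```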
Citations
Journal Article
TL;DR: It is argued that the top-down approach to modeling cognition yields greater flexibility for exploring the representations and inductive biases that underlie human cognition.

464 citations

Journal Article
TL;DR: A novel, principled, and unified technique for pattern analysis and generation is presented; it ensures computational efficiency, enables a straightforward incorporation of domain knowledge, and has the potential to reduce computational time significantly.
Abstract: The advent of multiple-point geostatistics (MPS) gave rise to the integration of complex subsurface geological structures and features into the model by the concept of training images. Initial algorithms generate geologically realistic realizations by using these training images to obtain conditional probabilities needed in a stochastic simulation framework. More recent pattern-based geostatistical algorithms attempt to improve the accuracy of the training image pattern reproduction. In these approaches, the training image is used to construct a pattern database. Consequently, sequential simulation will be carried out by selecting a pattern from the database and pasting it onto the simulation grid. One of the shortcomings of the present algorithms is the lack of a unifying framework for classifying and modeling the patterns from the training image. In this paper, an entirely different approach will be taken toward geostatistical modeling. A novel, principled and unified technique for pattern analysis and generation that ensures computational efficiency and enables a straightforward incorporation of domain knowledge will be presented. In the developed methodology, patterns scanned from the training image are represented as points in a Cartesian space using multidimensional scaling. The idea behind this mapping is to use distance functions as a tool for analyzing variability between all the patterns in a training image. These distance functions can be tailored to the application at hand. Next, by significantly reducing the dimensionality of the problem and using kernel space mapping, an improved pattern classification algorithm is obtained. This paper discusses the various implementation details to accomplish these ideas. Several examples are presented and a qualitative comparison is made with previous methods. An improved pattern continuity and data-conditioning capability is observed in the generated realizations for both continuous and categorical variables. We show how the proposed methodology is much less sensitive to the user-provided parameters, and at the same time has the potential to reduce computational time significantly.
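To make the mapping step concrete, here is a minimal sketch of representing scanned patterns as points in a Cartesian space via multidimensional scaling, assuming scikit-learn's MDS and a plain Euclidean distance between flattened patterns. The abstract notes that the distance function can be tailored to the application, so both the function names and the distance choice below are illustrative.

```python
# A minimal sketch of the MDS embedding step, assuming patterns are small
# 2-D arrays scanned from a training image. `embed_patterns` and the
# Euclidean distance are illustrative choices, not the paper's exact method.
import numpy as np
from sklearn.manifold import MDS

def embed_patterns(patterns: np.ndarray, n_dims: int = 2) -> np.ndarray:
    """Map training-image patterns to points in a low-dimensional Cartesian
    space via multidimensional scaling on pairwise distances."""
    flat = patterns.reshape(len(patterns), -1)
    # Pairwise Euclidean distances between flattened patterns.
    dists = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
    mds = MDS(n_components=n_dims, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(dists)

# Example: 100 random 8x8 "patterns" standing in for a scanned pattern database.
coords = embed_patterns(np.random.rand(100, 8, 8))
print(coords.shape)  # (100, 2)
```

The resulting low-dimensional coordinates could then feed the dimensionality-reduction and kernel-based classification steps the abstract describes.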

287 citations

Journal Article
TL;DR: It is explained how and why the P-Cognition thesis may be overly restrictive, risking the exclusion of veridical computational-level theories from scientific investigation, and an argument is made for replacing it with the FPT-Cognition thesis as an alternative formalization of the Tractable Cognition thesis.

215 citations

Journal Article
TL;DR: This paper presents a series of corpus analyses of phonological and distributional cues and their potential for distinguishing grammatical categories of words, and indicates that the two types of cue contribute differentially towards grammatical categorisation.

205 citations

Journal Article
TL;DR: It is suggested that, for the vast majority of classic findings in cognitive science, embodied cognition offers no scientifically valuable insight and is also unable to adequately address the basic experiences of cognitive life.
Abstract: In recent years, there has been rapidly growing interest in embodied cognition, a multifaceted theoretical proposition that (1) cognitive processes are influenced by the body, (2) cognition exists in the service of action, (3) cognition is situated in the environment, and (4) cognition may occur without internal representations. Many proponents view embodied cognition as the next great paradigm shift for cognitive science. In this article, we critically examine the core ideas from embodied cognition, taking a "thought exercise" approach. We first note that the basic principles from embodiment theory are either unacceptably vague (e.g., the premise that perception is influenced by the body) or they offer nothing new (e.g., cognition evolved to optimize survival, emotions affect cognition, perception-action couplings are important). We next suggest that, for the vast majority of classic findings in cognitive science, embodied cognition offers no scientifically valuable insight. In most cases, the theory has no logical connections to the phenomena, other than some trivially true ideas. Beyond classic laboratory findings, embodiment theory is also unable to adequately address the basic experiences of cognitive life.

125 citations

References
Journal Article
TL;DR: The metric and dimensional assumptions that underlie the geometric representation of similarity are questioned on both theoretical and empirical grounds, and a set of qualitative assumptions is shown to imply the contrast model, which expresses the similarity between objects as a linear combination of the measures of their common and distinctive features.
Abstract: The metric and dimensional assumptions that underlie the geometric representation of similarity are questioned on both theoretical and empirical grounds. A new set-theoretical approach to similarity is developed in which objects are represented as collections of features, and similarity is described as a feature-matching process. Specifically, a set of qualitative assumptions is shown to imply the contrast model, which expresses the similarity between objects as a linear combination of the measures of their common and distinctive features. Several predictions of the contrast model are tested in studies of similarity with both semantic and perceptual stimuli. The model is used to uncover, analyze, and explain a variety of empirical phenomena such as the role of common and distinctive features, the relations between judgments of similarity and difference, the presence of asymmetric similarities, and the effects of context on judgments of similarity. The contrast model generalizes standard representations of similarity data in terms of clusters and trees. It is also used to analyze the relations of prototypicality and family resemblance.
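For concreteness, the contrast model is standardly written as S(a, b) = θ·f(A ∩ B) − α·f(A − B) − β·f(B − A), where A and B are the feature sets of objects a and b and f is a measure over feature sets. The sketch below is a minimal illustration assuming discrete features and f = set cardinality; the weights and example features are made up.

```python
# A minimal sketch of Tversky's contrast model with sets of discrete
# features and f = set cardinality (an illustrative choice of measure).
# theta, alpha, and beta are free weights; asymmetric similarity falls
# out when alpha != beta.
def contrast_similarity(a: set, b: set,
                        theta: float = 1.0,
                        alpha: float = 0.5,
                        beta: float = 0.5) -> float:
    """S(a, b) = theta*f(A & B) - alpha*f(A - B) - beta*f(B - A)."""
    return (theta * len(a & b)
            - alpha * len(a - b)
            - beta * len(b - a))

# With alpha != beta, judging a variant against a prototype differs from
# judging the prototype against the variant, one of the asymmetries the
# abstract mentions.
variant = {"red", "round", "small", "sweet"}
prototype = {"red", "round", "large"}
print(contrast_similarity(variant, prototype, alpha=0.8, beta=0.2))
print(contrast_similarity(prototype, variant, alpha=0.8, beta=0.2))
```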

7,251 citations

01 Jan 1964
TL;DR: This classic work presents experimental studies of perceiving, imaging, and remembering, develops a theory of remembering, and treats remembering as a study in social psychology, including the notion of a collective unconscious.
Abstract: Part I. Experimental Studies: 1. Experiment in psychology; 2. Experiments on perceiving; 3. Experiments on imaging; 4-8. Experiments on remembering: (a) The method of description, (b) The method of repeated reproduction, (c) The method of picture writing, (d) The method of serial reproduction, (e) The method of serial reproduction: picture material; 9. Perceiving, recognizing, remembering; 10. A theory of remembering; 11. Images and their functions; 12. Meaning. Part II. Remembering as a Study in Social Psychology: 13. Social psychology; 14. Social psychology and the matter of recall; 15. Social psychology and the manner of recall; 16. Conventionalism; 17. The notion of a collective unconscious; 18. The basis of social recall; 19. A summary and some conclusions.

5,690 citations

Journal Article
TL;DR: Recognition-by-components (RBC) provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition.
Abstract: The perceptual recognition of objects is conceptualized to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components, such as blocks, cylinders, wedges, and cones. The fundamental assumption of the proposed theory, recognition-by-components (RBC), is that a modest set of generalized-cone components, called geons (N ≤ 36), can be derived from contrasts of five readily detectable properties of edges in a two-dimensional image: curvature, collinearity, symmetry, parallelism, and cotermination. The detection of these properties is generally invariant over viewing position and image quality and consequently allows robust object perception when the image is projected from a novel viewpoint or is degraded. RBC thus provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition: The constraints toward regularization (Prägnanz) characterize not the complete object but the object's components. Representational power derives from an allowance of free combinations of the geons. A Principle of Componential Recovery can account for the major phenomena of object recognition: If an arrangement of two or three geons can be recovered from the input, objects can be quickly recognized even when they are occluded, novel, rotated in depth, or extensively degraded. The results from experiments on the perception of briefly presented pictures by human observers provide empirical support for the theory. Any single object can project an infinity of image configurations to the retina. The orientation of the object to the viewer can vary continuously, each giving rise to a different two-dimensional projection. The object can be occluded by other objects or texture fields, as when viewed behind foliage. The object need not be presented as a full-colored textured image but instead can be a simplified line drawing. Moreover, the object can even be missing some of its parts or be a novel exemplar of its particular category. But it is only with rare exceptions that an image fails to be rapidly and readily classified, either as an instance of a familiar object category or as an instance that cannot be so classified (itself a form of classification).

5,464 citations

Book Chapter
01 Jan 1988

4,707 citations

Journal Article
TL;DR: This paper explores differences between Connectionist proposals for cognitive architecture and the sorts of models that have traditionally been assumed in cognitive science, and considers the possibility that Connectionism may provide an account of the neural structures in which Classical cognitive architecture is implemented.

3,454 citations