
Showing papers in "Psychological Review in 1991"


Journal ArticleDOI
TL;DR: Theories of the self from both psychology and anthropology are integrated to define in detail the difference between a construal of the self as independent and a construal of the self as interdependent, and it is proposed that these divergent construals should have specific consequences for cognition, emotion, and motivation.
Abstract: People in different cultures have strikingly different construals of the self, of others, and of the interdependence of the 2. These construals can influence, and in many cases determine, the very nature of individual experience, including cognition, emotion, and motivation. Many Asian cultures have distinct conceptions of individuality that insist on the fundamental relatedness of individuals to each other. The emphasis is on attending to others, fitting in, and harmonious interdependence with them. American culture neither assumes nor values such an overt connectedness among individuals. In contrast, individuals seek to maintain their independence from others by attending to the self and by discovering and expressing their unique inner attributes. As proposed herein, these construals are even more powerful than previously imagined. Theories of the self from both psychology and anthropology are integrated to define in detail the difference between a construal of the self as independent and a construal of the self as interdependent. Each of these divergent construals should have a set of specific consequences for cognition, emotion, and motivation; these consequences are proposed and relevant empirical literature is reviewed. Focusing on differences in self-construals enables apparently inconsistent empirical findings to be reconciled, and raises questions about what have been thought to be culture-free aspects of cognition, emotion, and motivation.

18,178 citations


Journal ArticleDOI
TL;DR: A comprehensive framework for the theory of probabilistic mental models (PMM theory) is proposed, which explains both the overconfidence effect and the hard-easy effect and predicts conditions under which both effects appear, disappear, or invert.
Abstract: Research on people's confidence in their general knowledge has to date produced two fairly stable effects, many inconsistent results, and no comprehensive theory. We propose such a comprehensive framework, the theory of probabilistic mental models (PMM theory). The theory (a) explains both the overconfidence effect (mean confidence is higher than percentage of answers correct) and the hard-easy effect (overconfidence increases with item difficulty) reported in the literature and (b) predicts conditions under which both effects appear, disappear, or invert. In addition, (c) it predicts a new phenomenon, the confidence-frequency effect, a systematic difference between a judgment of confidence in a single event (i.e., that any given answer is correct) and a judgment of the frequency of correct answers in the long run. Two experiments are reported that support PMM theory by confirming these predictions, and several apparent anomalies reported in the literature are explained and integrated into the present framework. Do people think they know more than they really do? In the last 15 years, cognitive psychologists have amassed a large and apparently damning body of experimental evidence on overconfidence in knowledge, evidence that is in turn part of an even larger and more damning literature on so-called cognitive biases. The cognitive bias research claims that people are naturally prone to making mistakes in reasoning and memory, including the mistake of overestimating their knowledge. In this article, we propose a new theoretical model for confidence in knowledge based on the more charitable assumption that people are good judges of the reliability of their knowledge, provided that the knowledge is representatively sampled from a specified reference class. We claim that this model both predicts new experimental results (that we have tested) and explains a wide range of extant experimental findings on confidence, including some perplexing inconsistencies.

1,422 citations


Journal ArticleDOI
TL;DR: Evidence from newborns leads to the conclusion that infants are born with some information about the structure of faces, which guides the preference for facelike patterns found in newborn infants, and a distinction between these 2 independent mechanisms allows a reconciliation of the conflicting data on the development of face recognition in human infants.
Abstract: Evidence from newborns leads to the conclusion that infants are born with some information about the structure of faces. This structural information, termed CONSPEC, guides the preference for facelike patterns found in newborn infants. CONSPEC is contrasted with a device termed CONLERN, which is responsible for learning about the visual characteristics of conspecifics. In the human infant, CONLERN does not influence looking behavior until 2 months of age. The distinction between these 2 independent mechanisms allows a reconciliation of the conflicting data on the development of face recognition in human infants. Finally, evidence from another species, the domestic chick, for which a similar 2-process theory has already been put forward, is discussed. The new nomenclature is applied to the chick and used as a basis for comparison with the infant.

1,002 citations


Journal ArticleDOI
TL;DR: The basic theory of categorization developed in Anderson (1990) is presented; since then the theory has been greatly extended and applied to many new phenomena, and these new developments and applications are described.
Abstract: A rational model of human categorization behavior is presented that assumes that categorization reflects the derivation of optimal estimates of the probability of unseen features of objects. A Bayesian analysis is performed of what optimal estimations would be if categories formed a disjoint partitioning of the object space and if features were independently displayed within a category. This Bayesian analysis is placed within an incremental categorization algorithm. The resulting rational model accounts for effects of central tendency of categories, effects of specific instances, learning of linearly nonseparable categories, effects of category labels, extraction of basic level categories, base-rate effects, probability matching in categorization, and trial-by-trial learning functions. Although the rational model considers just 1 level of categorization, it is shown how predictions can be enhanced by considering higher and lower levels. Considering prediction at the lower, individual level allows integration of this rational analysis of categorization with the earlier rational analysis of memory (Anderson & Milson, 1989). Anderson (1990) presented a rational analysis of human cognition. The term rational derives from similar "rational-man" analyses in economics. Rational analyses in other fields are sometimes called adaptationist analyses. Basically, they are efforts to explain the behavior in some domain on the assumption that the behavior is optimized with respect to some criteria of adaptive importance. This article begins with a general characterization of how one develops a rational theory of a particular cognitive phenomenon. Then I present the basic theory of categorization developed in Anderson (1990) and review the applications from that book. Since the writing of the book, the theory has been greatly extended and applied to many new phenomena. Most of this article describes these new developments and applications.
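The core Bayesian prediction step the abstract describes (averaging a feature's probability over categories, weighted by each category's posterior) can be sketched in a few lines. The category names and numbers below are hypothetical, and this is an illustration of the idea rather than Anderson's incremental algorithm:

```python
# Sketch of rational feature prediction: the probability that an object
# displays an unseen feature is averaged over candidate categories,
# weighted by the posterior probability of each category.

def predict_feature(posterior, feature_prob):
    """posterior: dict category -> P(category | observed features).
    feature_prob: dict category -> P(unseen feature | category).
    Returns P(unseen feature | observed features)."""
    return sum(posterior[k] * feature_prob[k] for k in posterior)

# Hypothetical numbers: two candidate categories for a new object.
posterior = {"bird": 0.8, "mammal": 0.2}
p_flies = {"bird": 0.9, "mammal": 0.05}
print(predict_feature(posterior, p_flies))  # 0.8*0.9 + 0.2*0.05, about 0.73
```

Because the prediction mixes over categories rather than committing to the single most probable one, uncertain category assignments still contribute to the estimate, which is what gives the model its base-rate and probability-matching behavior.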

914 citations


Journal ArticleDOI
TL;DR: A formal 2-dimensional conception of autonomic space is proposed, and a quantitative model for its translation into a functional output surface is derived and has fundamental implications for the direction and interpretation of a wide array of psychophysiological studies.
Abstract: Contemporary findings reveal that the multiple modes of autonomic control do not lie along a single continuum extending from parasympathetic to sympathetic dominance but rather distribute within a 2-dimensional space. The physiological origins and empirical documentation for the multiple modes of autonomic control are considered. Then a formal 2-dimensional conception of autonomic space is proposed, and a quantitative model for its translation into a functional output surface is derived. It is shown that this model (a) accounts for much of the error variance that has traditionally plagued psychophysiological studies, (b) subsumes psychophysiological principles such as the law of initial values, (c) gives rise to formal laws of autonomic constraint, and (d) has fundamental implications for the direction and interpretation of a wide array of psychophysiological studies.

752 citations


Journal ArticleDOI
TL;DR: The proposed model has broad implications; notably, it has the potential to explain biases of the sort described in psychophysics as well as symmetries in similarity judgments, without positing distorted representations of physical scales.
Abstract: A model of category effects on reports from memory is presented. The model holds that stimuli are represented at 2 levels of detail: a fine-grain value and a category. When memory is inexact but people must report an exact value, they use estimation processes that combine the remembered stimulus value with category information. The proposed estimation processes include truncation at category boundaries and weighting with a central (prototypic) category value. These processes introduce bias in reporting even when memory is unbiased, but nevertheless may improve overall accuracy (by decreasing the variability of reports). Four experiments are presented in which people report the location of a dot in a circle. Subjects spontaneously impose horizontal and vertical boundaries that divide the circle into quadrants. They misplace dots toward a central (prototypic) location in each quadrant, as predicted by the model. The proposed model has broad implications; notably, it has the potential to explain biases of the sort described in psychophysics (contraction bias and the bias captured by Weber's law) as well as symmetries in similarity judgments, without positing distorted representations of physical scales.
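The estimation process the model proposes (weighting an inexact fine-grain memory with a central category value, then truncating at category boundaries) can be sketched as follows. The weight and boundary values are hypothetical illustrations, not parameters fitted in the experiments:

```python
def report(memory_value, prototype, weight, lower, upper):
    """Combine an inexact fine-grain memory with the category prototype
    (weight in [0, 1], larger when memory is more uncertain), then
    truncate the report at the category boundaries."""
    estimate = weight * prototype + (1 - weight) * memory_value
    return min(max(estimate, lower), upper)

# A remembered dot angle of 80 degrees in a quadrant spanning 0-90,
# with the prototype near the quadrant center at 45, is pulled toward
# the prototype:
print(report(80, 45, 0.25, 0, 90))  # 0.25*45 + 0.75*80 = 71.25

# A memory that strays past the boundary is truncated back into it:
print(report(100, 45, 0.0, 0, 90))  # 90
```

The bias toward the prototype and the truncation at boundaries are exactly the two mechanisms the abstract says can improve overall accuracy despite biasing individual reports: both shrink the variability of responses.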

720 citations


Journal ArticleDOI
TL;DR: In this paper, a recurrent connectionist network was trained to output semantic feature vectors when presented with letter strings, and when damaged, the network exhibited characteristics that resembled several of the phenomena found in deep dyslexia and semantic access dyslexias.
Abstract: A recurrent connectionist network was trained to output semantic feature vectors when presented with letter strings. When damaged, the network exhibited characteristics that resembled several of the phenomena found in deep dyslexia and semantic-access dyslexia. Damaged networks sometimes settled to the semantic vectors for semantically similar but visually dissimilar words. With severe damage, a forced-choice decision between categories was possible even when the choice of the particular semantic vector within the category was not possible. The damaged networks typically exhibited many mixed visual and semantic errors in which the output corresponded to a word that was both visually and semantically similar. Surprisingly, damage near the output sometimes caused pure visual errors. Indeed, the characteristic error pattern of deep dyslexia occurred with damage to virtually any part of the network.

635 citations


Journal ArticleDOI
TL;DR: Whether the selection of an item and its phonological encoding can be considered to occur in two successive, nonoverlapping stages is examined.
Abstract: Lexical access in object naming involves the activation of a set of lexical candidates, the selection of the appropriate (or target) item, and the phonological encoding of that item. Two views of lexical access in naming are compared. From one view, the 2-stage theory, phonological activation follows selection of the target item and is restricted to that item. From the other view, which is most explicit in activation-spreading theories, all activated lexical candidates are phonologically activated to some extent. A series of experiments is reported in which subjects performed acoustic lexical decision during object naming at different stimulus-onset asynchronies. The experiments show semantic activation of lexical candidates and phonological activation of the target item, but no phonological activation of other semantically activated items. This supports the 2-stage view. Moreover, a mathematical model embodying the 2-stage view is fully compatible with the lexical decision data obtained at different stimulus-onset asynchronies. One of a speaker's core skills is to lexicalize the concepts intended for expression. Lexicalization proceeds at a rate of two to three words per second in normal spontaneous speech, but doubling this rate is possible and not exceptional. The skill of lexicalizing a content word involves two components. The first is to select the appropriate lexical item from among some tens of thousands of alternatives in the mental lexicon. The second is to phonologically encode the selected item, that is, to retrieve its sound form, to create a phonological representation for the item in its context, and to prepare its articulatory program. An extensive review of the literature on lexicalization can be found in Levelt (1989). This article addresses only one aspect of lexicalization, namely its time course. In particular, we examine whether the selection of an item and its phonological encoding can be considered to occur in two successive, nonoverlapping stages.
This is by no means a novel concept. One should rather say that it is the received view in the psycholinguistic literature (see especially Butterworth, 1980, 1989; Fromkin, 1971; Garrett, 1975, 1976, 1980; Kempen, 1977, 1978; Kempen & Huijbers, 1983; Levelt, 1983, 1989; Levelt & Maassen, 1981; Morton, 1969; Schriefers, Meyer, & Levelt, 1990). The first stage, lexical selection, makes available a semantically specified lexical item with its syntactic constraints. Kempen (1977, 1978) called this a lemma. Lemmas figure in grammatical encoding, specifically in the creation of syntactic frames. During the second stage, phonological encoding, phonological information is retrieved for each lemma. These phonological codes are used to create the articulatory plan for the utterance as a whole. Both Garrett (1976) and Kempen (1978), following Fry (1969), have stressed that the grammatical encoding and phonological encoding of an utterance normally run in parallel. Grammatical encoding, of which lexical selection is a proper part, is just slightly ahead of phonological encoding. The phonological encoding of a given item overlaps in time with the selection of a subsequent item. Only at the level of individual lexical items can one speak of successive stages. An item's semantic-syntactic makeup is accessed and used before its phonological makeup becomes available. Garrett (1975, 1976) argued for this separation of stages on the basis of speech error data. He distinguished between two classes of errors, word exchanges and sound exchanges, and could show that these classes differ in distributional properties. Word exchanges occur between phrases and involve words of the same syntactic category (as in this spring has a seat in it). Sound exchanges typically involve different category words in the same phrase (as in heft lemisphere). Word exchanges are

603 citations


Journal ArticleDOI
TL;DR: The tools-to-theories heuristic can be used to detect both limitations and new lines of development in cognitive theories that investigate the mind as an "intuitive statistician" as discussed by the authors.
Abstract: The study of scientific discovery (where do new ideas come from?) has long been denigrated by philosophers as irrelevant to analyzing the growth of scientific knowledge. In particular, little is known about how cognitive theories are discovered, and neither the classical accounts of discovery as either probabilistic induction (e.g., Reichenbach, 1938) or lucky guesses (e.g., Popper, 1959), nor the stock anecdotes about sudden "eureka" moments deepen the insight into discovery. A heuristics approach is taken in this review, where heuristics are understood as strategies of discovery less general than a supposed unique logic of discovery but more general than lucky guesses. This article deals with how scientists' tools shape theories of mind, in particular with how methods of statistical inference have turned into metaphors of mind. The tools-to-theories heuristic explains the emergence of a broad range of cognitive theories, from the cognitive revolution of the 1960s up to the present, and it can be used to detect both limitations and new lines of development in current cognitive theories that investigate the mind as an "intuitive statistician."

541 citations


Journal ArticleDOI
TL;DR: In this article, a conceptual framework is sketched to define cognitive growth, including language growth, as a process of growth under limited resources, and the model is transformed into a dynamic systems model based on the logistic growth equation.
Abstract: In the first part of the article, a conceptual framework is sketched to define cognitive growth, including language growth, as a process of growth under limited resources. Important concepts are the process, level, and rate of growth; minimal structural growth level; carrying capacity and unutilized capacity for growth; and feedback delay. Second, a mathematical model of cognitive growth under limited resources is presented, with the conclusion that the most plausible model is a model of logistic growth with delayed feedback. Third, the model is transformed into a dynamic systems model based on the logistic growth equation. This model describes cognitive growth as a system of supportive and competitive interactions between growers. Models of normal logistic growth, U-shaped growth, bootstrap growth, and competitive growth are also presented. An overview is presented of forms of adaptation of resources (e.g., parental and tutorial assistance and support) to the growth characteristics of a cognitive or linguistic competence. Finally, the question of how the model can account for stages of growth is discussed.
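The logistic-growth-with-delayed-feedback model at the heart of the article can be simulated in a few lines. The parameter values below are hypothetical, chosen only to show how a feedback delay lets the growth level overshoot its carrying capacity:

```python
def logistic_delayed(r, K, L0, delay, steps):
    """Discrete logistic growth in which the crowding feedback L/K
    arrives 'delay' steps late. With delay > 0 the level can overshoot
    the carrying capacity K and oscillate before settling, which is the
    dynamic-systems behavior the model exploits."""
    levels = [L0]
    for t in range(steps):
        fed_back = levels[max(0, t - delay)]  # delayed feedback term
        L = levels[-1]
        levels.append(L + r * L * (1 - fed_back / K))
    return levels

traj = logistic_delayed(r=0.5, K=1.0, L0=0.01, delay=3, steps=60)
print(round(max(traj), 3))  # exceeds the carrying capacity K = 1.0
```

With delay=0 the same equation produces the familiar smooth S-shaped curve; the delayed variant is what generates the temporary regressions and oscillations the article relates to developmental data.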

454 citations


Journal ArticleDOI
Lee Jussim1
TL;DR: In this article, a reflection-construction model of the relations between social perception and social reality is presented.
Abstract: This article presents a reflection-construction model of relations between social perception and social reality.

Journal ArticleDOI
TL;DR: The origins of cerebral lateralization in humans are traced to the asymmetric prenatal development of the ear and labyrinth, whereas the failure to develop clear vestibular asymmetry may underlie the poor motoric lateralization found in several neurodevelopmental disorders.
Abstract: The origins of cerebral lateralization in humans are traced to the asymmetric prenatal development of the ear and labyrinth. Aural lateralization is hypothesized to result from an asymmetry in craniofacial development, whereas vestibular dominance is traced to the position of the fetus during the final trimester. A right-ear sensitivity advantage may contribute to a left-hemispheric advantage in speech perception and language functions, whereas left-otolithic dominance may independently promote right-sided motoric dominance and a right-hemispheric superiority in most visuospatial functions. The emergence of handedness is linked to the assumption of an upright posture in the early hominids, whereas the failure to develop clear vestibular asymmetry may underlie the poor motoric lateralization found in several neurodevelopmental disorders.

Journal ArticleDOI
TL;DR: The weaker form of the linguistic relativity hypothesis, which states that language influences thought, has been held to be so vague that it is unprovable as mentioned in this paper, and the argument presented herein is that the weaker Whorfian hypothesis can be quantified and thus evaluated.
Abstract: The linguistic relativity (Whorfian) hypothesis states that language influences thought. In its strongest form, the hypothesis states that language controls both thought and perception. Several experiments have shown that this is false. The weaker form of the hypothesis, which states that language influences thought, has been held to be so vague that it is unprovable. The argument presented herein is that the weaker Whorfian hypothesis can be quantified and thus evaluated. Models of cognition developed after Whorf's day indicate ways in which thought can be influenced by cultural variations in the lexical, syntactical, semantic, and pragmatic aspects of language. Although much research remains to be done, there appears to be a great deal of truth to the linguistic relativity hypothesis. In many ways the language people speak is a guide to the language in which they think.

Journal ArticleDOI
TL;DR: It is hypothesized that food, which is certainly a necessary commodity with powerful positive reinforcing qualities, also provides a potential threat to organisms, including humans, and that defenses against eating too much may become activated inappropriately and contribute to clinical problems such as reactive hypoglycemia.
Abstract: It is hypothesized that food, which is certainly a necessary commodity with powerful positive reinforcing qualities, also provides a potential threat to organisms, including humans. The act of eating, although necessary for the provision of energy, is a particularly disruptive event in a homeostatic sense. Just as humans learn responses to help them tolerate the administration of dangerous drugs, so do they learn to make anticipatory responses that help minimize the impact of meals on the body, to limit the amount of food consumed within any individual meal, to recruit several parts of the protective stress-response system while meals are being processed, and to limit postprandial behaviors so as to minimize the possibility of disrupting homeostatic systems even more. It is further hypothesized that defenses against eating too much may become activated inappropriately and contribute to clinical problems such as reactive hypoglycemia.

Journal ArticleDOI
TL;DR: The theory rests on a lexical entry that defines the information about if in semantic memory, whose core comprises the inference schemas Modus Ponens and Conditional Proof, together with a set of pragmatic principles that govern how an if sentence is likely to be interpreted in context.
Abstract: The theory has 3 parts: (a) A lexical entry defines the information about if in semantic memory; its core comprises 2 inference schemas, Modus Ponens and a schema for Conditional Proof; the latter operates under a constraint that explains differences between if and the material conditional of standard logic. (b) A propositional-logic reasoning program specifies a routine for reasoning from information as interpreted to a conclusion. (c) A set of pragmatic principles governs how an if sentence is likely to be interpreted in context.

Journal ArticleDOI
TL;DR: In this paper, a general model of interpersonal perception based on Anderson's (1981) weighted average model is developed, which shows that increased acquaintance does not always lead to large changes in consensus.
Abstract: Consensus refers to the extent to which 2 judges agree in their ratings of a common target. A general model of interpersonal perception based on Anderson's (1981) weighted-average model is developed. The model shows that increased acquaintance does not always lead to large changes in consensus. Degree of overlap between the target behaviors observed by the judges and similarity of meaning systems are key but neglected parameters. The model can also be used as a basis for determining the accuracy of person perception. In some cases, accuracy can increase with greater acquaintance, whereas consensus may not.

Journal ArticleDOI
TL;DR: This research shows how the absence of mediated priming coexists with the convergent priming necessary to account for mixed semantic-phonological speech errors and leads to the proposal that the language-production system may best be characterized as globally modular but locally interactive.
Abstract: Levelt et al. (1991) argued that modular semantic and phonological stage theories of lexical access in language production are to be preferred over interactive spreading-activation theories (e.g., Dell, 1986). As evidence, they show no mediated semantic-phonological priming during picture naming: Retrieval of sheep primes goat, but the activation of goat is not transmitted to its phonological relative, goal. This research reconciles this result with spreading-activation theories and shows how the absence of mediated priming coexists with the convergent priming necessary to account for mixed semantic-phonological speech errors. The analysis leads to the proposal that the language-production system may best be characterized as globally modular but locally interactive.

Journal ArticleDOI
TL;DR: A model that is capable of maintaining the identities of individuated elements as they move is described, and it solves a particular problem of underdetermination, the motion correspondence problem, by simultaneously applying 3 constraints: the nearest neighbor principle, the relative velocity principle, and the element integrity principle.
Abstract: A model that is capable of maintaining the identities of individuated elements as they move is described. It solves a particular problem of underdetermination, the motion correspondence problem, by simultaneously applying 3 constraints: the nearest neighbor principle, the relative velocity principle, and the element integrity principle. The model generates the same correspondence solutions as does the human visual system for a variety of displays, and many of its properties are consistent with what is known about the physiological mechanisms underlying human motion perception. The model can also be viewed as a proposal of how the identities of attentional tags are maintained by visual cognition, and thus it can be differentiated from a system that serves merely to detect movement.
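For a toy version of the motion correspondence problem, the nearest-neighbor constraint alone can be expressed as choosing the one-to-one mapping between frames that minimizes total displacement. This brute-force sketch ignores the model's relative-velocity and element-integrity constraints and its physiological detail:

```python
from itertools import permutations

def correspond(frame1, frame2):
    """Toy motion-correspondence solver using only a nearest-neighbor
    (minimal total displacement) criterion, brute-forced over all
    one-to-one mappings, so only suitable for a handful of elements."""
    def cost(perm):
        return sum(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
                   for (x1, y1), (x2, y2) in zip(frame1, perm))
    return min(permutations(frame2), key=cost)

# Two dots shift slightly rightward between frames; the minimal-
# displacement mapping preserves each dot's identity rather than
# swapping them.
f1 = [(0.0, 0.0), (5.0, 0.0)]
f2 = [(0.5, 0.0), (5.5, 0.0)]
print(correspond(f1, f2))  # ((0.5, 0.0), (5.5, 0.0))
```

The interesting cases in the article are exactly the displays where this single constraint underdetermines or mispredicts the percept, which is why the full model combines it with the other two principles.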

Journal ArticleDOI
TL;DR: This sensitivity to line relations suggests that preattentive processes can extract 3-dimensional orientation from line drawings; a computational model is outlined for how this may be accomplished in early human vision.
Abstract: It has generally been assumed that rapid visual search is based on simple features and that spatial relations between features are irrelevant for this task. Seven experiments involving search for line drawings contradict this assumption; a major determinant of search is the presence of line junctions. Arrow- and Y-junctions were detected rapidly in isolation and when they were embedded in drawings of rectangular polyhedra. Search for T-junctions was considerably slower. Drawings containing T-junctions often gave rise to very slow search even when distinguishing arrow- or Y-junctions were present. This sensitivity to line relations suggests that preattentive processes can extract 3-dimensional orientation from line drawings. A computational model is outlined for how this may be accomplished in early human vision.

Journal ArticleDOI
TL;DR: The present formulation integrates contingent, associative, and nonassociative tolerance and drug-opposite withdrawal reactions within a unified theory.
Abstract: At the heart of homeostatic theory is the idea that explicit or implicit behavioral demands placed on physiological systems are required for the biological detection of homeostatic disturbances. The detection of drug-induced disturbances is required to drive the development of all systemic tolerance, both associative and nonassociative (i.e., both forms of tolerance are behaviorally contingent). A wide range of findings ranging from morphine-induced analgesia to ethanol-induced hyposexuality shows that contingent tolerance is pervasive and may be universal. The theory also stipulates that behavioral demands placed on the target system will govern the loss of both associative and nonassociative tolerance (physiological). The present formulation integrates contingent, associative, and nonassociative tolerance and drug-opposite withdrawal reactions within a unified theory.

Journal ArticleDOI
TL;DR: The concepts of nonlocal, or distributed, cortical and cognitive activation are examined for their usefulness in describing the relations between sleep and waking neurocognitive processes; connectionist models are introduced so that neurophysiological and cognitive concepts of distributed and local activation and inhibition can be translated into a common language and used to simulate several processes fundamental to the production of imaginal thought and dreaming.
Abstract: The concepts of nonlocal, or distributed, cortical and cognitive activation are examined for their usefulness in describing the relations between sleep and waking neurocognitive processes. Changes in the pattern of distributed activation and inhibition of selected portions of sensory, cognitive, and motor decision modules account for the differences in imagery and thought across sleep and waking states in comparable environments. The massive inhibition of sensory and proprioceptive input to perceptual modules in Stage 1 REM sleep leaves the perceptual and cognitive modules, by default, with their own output as their sole input. Given this constraint, the activation of portions of the cortical structures that execute waking perceptual, cognitive and motor responses is necessary and sufficient to produce the imagery and thought of dreaming sleep. Connectionist models are introduced so that neurophysiological and cognitive concepts of distributed and local activation and inhibition can be translated into a common language, and in so doing, are used to simulate several processes fundamental to the production of imaginal thought and dreaming.

Journal ArticleDOI
TL;DR: Experimental results on parallel and (equi)distance alleys in a frameless VS were reviewed, and Luneburg's interpretation of the discrepancy between these 2 alleys was sketched with emphasis on the 2 hypotheses involved.
Abstract: Visual space (VS) is a coherent self-organized dynamic complex that is structured into objects, backgrounds, and the self. As a concrete example of geometrical properties in VS, experimental results on parallel and (equi)distance alleys in a frameless VS were reviewed, and Luneburg's interpretation of the discrepancy between these 2 alleys was sketched with emphasis on the 2 hypotheses involved: VS is a Riemannian space of constant curvature (RCC), and there is an a priori assumed correspondence between VS and the physical space in which stimulus points are presented. Dissociating these 2 assumptions, the author tried to see to what extent the global structure of VS under natural conditions is in accordance with the hypothesis of RCC and to make explicit the logic underlying RCC. Several open questions about the geometry of VS per se have been enumerated.

Journal ArticleDOI
TL;DR: In this paper, an associative model of serial learning is described, based on the assumption that the effective stimulus for a serial-list item is generated by adaptation-level coding of the item's ordinal position.
Abstract: An associative model of serial learning is described. The model is based on the assumption that the effective stimulus for a serial-list item is generated by adaptation-level coding of the item's ordinal position.

Journal ArticleDOI
TL;DR: The evidence is not consistent with the view that replete animals can reliably compose a nutritionally adequate diet from an array of food in a cafeteria situation as mentioned in this paper, excepting the case of sodium and, perhaps, of phosphorus.
Abstract: The history of studies of dietary self-selection in cafeteria-feeding situations is reviewed briefly. The evidence is not consistent with the view that replete animals can reliably compose a nutritionally adequate diet from an array of food in a cafeteria situation. Similarly, the evidence is not consistent with the view that when deficient in a nutrient (excepting the case of sodium and, perhaps, of phosphorus), animals can reliably select a nutritionally adequate diet when it is present among several deficient diets. General acceptance of the view that omnivores can easily self-select a nutritionally adequate diet is attributed both to overreliance on theory and to lack of critical attention to data. The adequacy of functional arguments suggesting that omnivores must be able to self-select nutritionally adequate diets is questioned.

Journal ArticleDOI
Janet Metcalfe1
TL;DR: Analysis of CHARM and comparisons to other models indicate that the recognition-failure function depends on both recognition and recall being similar (convolution-correlation) processes such that an interpretable representation is retrieved in both tasks.
Abstract: The relation between recognition and recall, and especially the orderly recognition-failure function relating recognition and the recognizability of recallable words, was investigated using a composite holographic associative recall-recognition memory model (CHARM). Ten series of computer simulations are presented. Analysis of CHARM and comparisons to other models indicate that the recognition-failure function depends on (a) both recognition and recall being similar (convolution-correlation) processes such that an interpretable representation is retrieved in both tasks and (b) the information underlying both recall and recognition being stored in the same composite memory trace. It is of considerable interest that constructs central to the distributed nature of CHARM are responsible for the model's adherence to the recognition-failure function.
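The convolution-correlation scheme at the heart of CHARM can be sketched as a generic holographic associative memory (a minimal pure-Python illustration, not Metcalfe's actual simulations; the dimensionality of 256 and Gaussian item vectors are illustrative assumptions): item pairs are associated by circular convolution, superimposed into one composite trace, and a cue retrieves its partner by circular correlation.

```python
import random

n = 256
rng = random.Random(0)

def item():
    # Random item vector; component variance 1/n makes items roughly unit length.
    return [rng.gauss(0.0, (1.0 / n) ** 0.5) for _ in range(n)]

def convolve(a, b):
    # Circular convolution: binds two items into a single association vector.
    return [sum(a[i] * b[(k - i) % n] for i in range(n)) for k in range(n)]

def correlate(a, t):
    # Circular correlation: unbinds the partner of cue `a` from trace `t`.
    return [sum(a[i] * t[(i + j) % n] for i in range(n)) for j in range(n)]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / ((sum(x * x for x in u) ** 0.5) * (sum(y * y for y in v) ** 0.5))

a, b, c, d = item(), item(), item(), item()
# Composite trace: two associations superimposed in one memory vector.
trace = [x + y for x, y in zip(convolve(a, b), convolve(c, d))]
retrieved = correlate(a, trace)  # noisy reconstruction of b
```

The retrieved vector resembles the cue's partner far more than an unrelated item, even though both associations share one composite trace; this storage-in-superposition is the "composite memory trace" property the abstract points to.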

Journal ArticleDOI
TL;DR: In this paper, the authors argue that this interpretation is false and, in addition, that the model still cannot account for our data, and they adduce semantic rebound to the lemma level, where it is so substantial that it should have shown up in our data.
Abstract: In their comment, Dell and O'Seaghdha (1991) adduced any effect on phonological probes for semantic alternatives to the activation of these probes in the lexical network. We argue that that interpretation is false and, in addition, that the model still cannot account for our data. Furthermore, and different from Dell and O'Seaghdha, we adduce semantic rebound to the lemma level, where it is so substantial that it should have shown up in our data. Finally, we question the function of feedback in a lexical network (other than eliciting speech errors) and discuss Dell's (1988) notion of a unified production-comprehension system. Until recently, models of lexical access in speech production were almost exclusively based on speech error data. This is true both for the modular two-stage models and for the interactive connectionist models of lexical access. Both kinds of models were initially designed to account for the distributions of naturally observed or experimentally elicited speech errors. From the start, however, they were conceived as process models of normal speech production. Therefore, the ultimate test of such models cannot lie in their account of infrequent derailments of the process. Rather, the proof of their efficacy should be sought in the account of the normal process itself. An exclusively error-based approach to lexical access in speech production is as ill-conceived as an exclusively illusion-based approach in vision research. One should, of course, hope that an ultimate theory of the normal process also has the potential of explaining observed error distributions (or visual illusions, for that matter), but it should not be one's main concern.

Journal ArticleDOI
TL;DR: In this article, a theory, the parallel force unit model, is advanced in which the buildup and decline of force in rapid responses of short duration are assumed to reflect variability in timing of several parallel force units.
Abstract: A theory, the parallel force unit model, is advanced in which the buildup and decline of force in rapid responses of short duration are assumed to reflect variability in timing of several parallel force units. Response force is conceived of as being a summation of a large number of force units, each acting independently of one another. Force is controlled by either the number of recruited force units or the duration each unit contributes its force. Several predictions are derived on the basis of this theory and are shown to be in qualitative agreement with empirical findings about both the mean and variability of brief force impulses. The model also has consequences for the temporal properties of a response. For example, under certain circumstances, it predicts a reciprocal relation between reaction time and response force. Although the theory is proposed as a psychological account, relations between the assumptions and basic principles in neurophysiology are considered. Possible future applications and generalizations of the theory are discussed.
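The summation idea in this abstract can be sketched as a toy simulation (all parameter values below — unit force, pulse duration, onset jitter — are illustrative assumptions, not values from the paper): total force is the sum of many identical unit pulses whose onsets vary independently.

```python
import random

def force_profile(n_units, unit_force=1.0, duration=30.0, onset_sd=10.0,
                  t_max=120.0, dt=1.0, rng=None):
    """Sum of n_units rectangular force pulses with Gaussian onset jitter (ms)."""
    rng = rng or random.Random(0)
    onsets = [abs(rng.gauss(40.0, onset_sd)) for _ in range(n_units)]
    times = [i * dt for i in range(int(t_max / dt))]
    # Total force at each time step: how many units are currently active.
    profile = [sum(unit_force for on in onsets if on <= t < on + duration)
               for t in times]
    return times, profile

# Peak force is controlled by the number of recruited units:
_, p10 = force_profile(10, rng=random.Random(1))
_, p100 = force_profile(100, rng=random.Random(1))
```

With more recruited units the summed impulse has a higher peak, and the independent onset jitter produces the gradual buildup and decline of force that the model uses to account for the shape and variability of brief force impulses.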

Journal ArticleDOI
TL;DR: It is shown that impressions of dynamical quantities are not generally correlated with the values that these quantities take in the equations of motion but rather are highly correlated with simple ratios of kinematic quantities or with specific kinematics that do not specify the underlying dynamics.
Abstract: An inquiry into the origins of dynamical awareness is conducted. Particular attention is given to a theory that postulates that impressions of dynamical quantities are derived from and structured by lawful physical relations. It is shown that impressions of dynamical quantities are not generally correlated with the values that these quantities take in the equations of motion but rather are highly correlated with simple ratios of kinematic quantities or with specific kinematic features that do not specify the underlying dynamics. It is argued that kinematic information, to the extent that it is used, is used heuristically, and its availability for dynamical analysis is constrained by general principles of organization. A formal analysis of the physical organization implicit in the specification of dynamical invariants is given and compared with types of perceptual organization that are observed. The development of a framework for understanding what perceptual organization is and how it is achieved is a central issue in perception. Information theory (Shannon, 1948) has provided a language for describing what perceptual organization accomplishes and has focused attention on the sentiment that information is, in some sense, the material of perception. The relationship between perception and information has been particularly stressed in the ecological approach (Gibson, 1979) through the twin notions that information is available in the environment of a perceiving animal and that perception itself is the pickup of useful information. The information-theoretic aspects of perception should, however, not be identified with ecological psychology; all fundamental theories of perception must reckon with the basic observation that perception is meaningful.
Perceptual organization, whether it is mediated by Prägnanz (Köhler, 1947), contingent on intelligent inference (Helmholtz, 1910/1962; Rock, 1983), or concomitant to so-called direct perception (Gibson, 1979), is at root the pickup of information through the reduction of ambiguity in proximal stimulation.

Journal ArticleDOI
TL;DR: Dannemiller's (1989) computational approach to color constancy is discussed and an alternative type of descriptor is available that is not used to recover reflectance spectra that has the advantage of allowing an interpretation that is preferable from a human perceptual point of view.
Abstract: Dannemiller's (1989) computational approach to color constancy is discussed in relation to human color constancy. A reflectance channel that requires a priori information is shown to be less plausible for the human visual system than Dannemiller argued. The resemblance of Dannemiller's hypothetical visual system to the human visual system is misleading because it implies that surface reflectance is the illuminant-invariant object color descriptor that the human visual system uses to achieve color constancy. However, an alternative type of descriptor is available that is not used to recover reflectance spectra. It has the advantage of allowing an interpretation that is preferable from a human perceptual point of view.