
Showing papers in "Behavioral and Brain Sciences in 1980"


Journal ArticleDOI
TL;DR: Only a machine could think, and only very special kinds of machines: brains, and machines with internal causal powers equivalent to those of brains. No program by itself is sufficient for thinking.
Abstract: This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4. "Could a machine think?" On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.

4,111 citations


Journal ArticleDOI
TL;DR: This review provides a critical framework within which two related topics are discussed: Do meaningful sex differences in verbal or spatial cerebral lateralization exist? And, if so, is the brain of one sex more symmetrically organized than the other?
Abstract: Dual functional brain asymmetry refers to the notion that in most individuals the left cerebral hemisphere is specialized for language functions, whereas the right cerebral hemisphere is more important than the left for the perception, construction, and recall of stimuli that are difficult to verbalize. In the last twenty years there have been scattered reports of sex differences in degree of hemispheric specialization. This review provides a critical framework within which two related topics are discussed: Do meaningful sex differences in verbal or spatial cerebral lateralization exist? and, if so, Is the brain of one sex more symmetrically organized than the other? Data gathered on right-handed adults are examined from clinical studies of patients with unilateral brain lesions; from dichotic listening, tachistoscopic, and sensorimotor studies of functional asymmetries in non-brain-damaged subjects; from anatomical and electrophysiological investigations, as well as from the developmental literature. Retrospective and descriptive findings predominate over prospective and experimental methodologies. Nevertheless, there is an impressive accumulation of evidence suggesting that the male brain may be more asymmetrically organized than the female brain, both for verbal and nonverbal functions. These trends are rarely found in childhood but are often significant in the mature organism.

1,338 citations


Journal ArticleDOI
TL;DR: The distinction between representational and computational theories of mind is explored in this article, where it is argued that rational psychologists accept a formality condition on the specification of mental processes; naturalists do not.
Abstract: The paper explores the distinction between two doctrines, both of which inform theory construction in much of modern cognitive psychology: the representational theory of mind and the computational theory of mind. According to the former, propositional attitudes are to be construed as relations that organisms bear to mental representations. According to the latter, mental processes have access only to formal (nonsemantic) properties of the mental representations over which they are defined. The following claims are defended: (1) That the traditional dispute between "rational" and "naturalistic" psychology is plausibly viewed as an argument about the status of the computational theory of mind. Rational psychologists accept a formality condition on the specification of mental processes; naturalists do not. (2) That to accept the formality condition is to endorse a version of methodological solipsism. (3) That the acceptance of some such condition is warranted, at least for that part of psychology which concerns itself with theories of the mental causation of behavior. This is because: (4) such theories require nontransparent taxonomies of mental states; and (5) nontransparent taxonomies individuate mental states without reference to their semantic properties. Equivalently, (6) nontransparent taxonomies respect the way that the organism represents the object of its propositional attitudes to itself, and it is this representation which functions in the causation of behavior. The final section of the paper considers the prospect for a naturalistic psychology: one which defines its generalizations over relations between mental representations and their environmental causes, thus seeking to account for the semantic properties of propositional attitudes. Two related arguments are proposed, both leading to the conclusion that no such research strategy is likely to prove fruitful.

1,156 citations


Journal ArticleDOI
TL;DR: The cognitive impenetrability condition as discussed by the authors states that a function cannot be influenced by such purely cognitive factors as goals, beliefs, inferences, tacit knowledge, and so on.
Abstract: The computational view of mind rests on certain intuitions regarding the fundamental similarity between computation and cognition. We examine some of these intuitions and suggest that they derive from the fact that computers and human organisms are both physical systems whose behavior is correctly described as being governed by rules acting on symbolic representations. Some of the implications of this view are discussed. It is suggested that a fundamental hypothesis of this approach (the "proprietary vocabulary hypothesis") is that there is a natural domain of human functioning (roughly what we intuitively associate with perceiving, reasoning, and acting) that can be addressed exclusively in terms of a formal symbolic or algorithmic vocabulary or level of analysis. Much of the paper elaborates various conditions that need to be met if a literal view of mental activity as computation is to serve as the basis for explanatory theories. The coherence of such a view depends on there being a principled distinction between functions whose explanation requires that we posit internal representations and those that we can appropriately describe as merely instantiating causal physical or biological laws. In this paper the distinction is empirically grounded in a methodological criterion called the "cognitive impenetrability condition." Functions are said to be cognitively impenetrable if they cannot be influenced by such purely cognitive factors as goals, beliefs, inferences, tacit knowledge, and so on. Such a criterion makes it possible to empirically separate the fixed capacities of mind (called its "functional architecture") from the particular representations and algorithms used on specific occasions. In order for computational theories to avoid being ad hoc, they must deal effectively with the "degrees of freedom" problem by constraining the extent to which they can be arbitrarily adjusted post hoc to fit some particular set of observations. This in turn requires that the fixed architectural function and the algorithms be independently validated. It is argued that the architectural assumptions implicit in many contemporary models run afoul of the cognitive impenetrability condition, since the required fixed functions are demonstrably sensitive to tacit knowledge and goals. The paper concludes with some tactical suggestions for the development of computational cognitive theories.

1,030 citations


Journal ArticleDOI
TL;DR: A model of fear and pain is presented in which the two are assumed to activate totally different classes of behavior, and it is assumed that fear triggers the endorphin mechanism, thereby inhibiting pain motivation and recuperative behaviors that might compete with effective defensive behavior.
Abstract: A model of fear and pain is presented in which the two are assumed to activate totally different classes of behavior. Fear, produced by stimuli that are associated with painful events, results in defensive behavior and the inhibition of pain and pain-related behaviors. On the other hand, pain, produced by injurious stimulation, motivates recuperative behaviors that promote healing. In this model injurious stimulation, on the one hand, and the expectation of injurious stimulation, on the other hand, activate entirely different motivational systems which serve entirely different functions. The fear motivation system activates defensive behavior, such as freezing and flight from a frightening situation, and its function is to defend the animal against natural dangers, such as predation. A further effect of fear motivation is to organize the perception of environmental events so as to facilitate the perception of danger and safety. The pain motivation system activates recuperative behaviors, including resting and body-care responses, and its function is to promote the animal's recovery from injury. Pain motivation also selectively facilitates the perception of nociceptive stimulation. Since the two kinds of motivation serve different and competitive functions, it might be expected that they would interact through some kind of mutual inhibition. Recent research is described which indicates that this is the case. The most important connection is the inhibition of pain by fear; fear has the top priority. This inhibition appears to be mediated by an endogenous analgesic mechanism involving the endorphins. The model assumes that fear triggers the endorphin mechanism, thereby inhibiting pain motivation and recuperative behaviors that might compete with effective defensive behavior.

761 citations


Journal ArticleDOI
James C. Lynch1
TL;DR: Evidence from clinical reports and from lesion and behavioral-electrophysiological experiments using monkeys is reviewed and discussed in relation to the overall functional organization of posterior parietal association cortex, and particularly with respect to a proposed posterior parietal mechanism concerned with the initiation and control of certain classes of eye and limb movements.
Abstract: Posterior parietal cortex has traditionally been considered to be a sensory association area in which higher-order processing and intermodal integration of incoming sensory information occurs. In this paper, evidence from clinical reports and from lesion and behavioral-electrophysiological experiments using monkeys is reviewed and discussed in relation to the overall functional organization of posterior parietal association cortex, and particularly with respect to a proposed posterior parietal mechanism concerned with the initiation and control of certain classes of eye and limb movements. Preliminary data from studies of the effects of posterior parietal lesions on oculomotor control in monkeys are reported. The behavioral effects of lesions of posterior parietal cortex in monkeys have been found to be similar to those which follow analogous damage of the minor hemisphere in humans, while behavioral-electrophysiological experiments have disclosed classes of neurons in this area which have functional properties closely related to the behavioral acts that are disrupted by lesions of the area. On the basis of current data from these areas of study, it is proposed that the sensory association model of posterior parietal function is inadequate to account for the complexities of the present evidence. Instead, it now appears that many diverse neural mechanisms are located in part in parietal cortex, that some of these mechanisms are involved in sensory processing and perceptual functions, but that others participate in motor control, and that still others are involved in attentional, motivational, or emotional processes. It is further proposed that the elementary units of these various neural mechanisms are distributed within posterior parietal cortex according to the columnar hypothesis of Mountcastle.

507 citations


Journal ArticleDOI
TL;DR: Men are more likely than women to desire multiple mates; to desire a variety of sexual partners; to experience sexual jealousy of a spouse irrespective of specific circumstances; to be sexually aroused by the sight of a member of the other sex; and to experience an autonomous desire for sexual intercourse as mentioned in this paper.
Abstract: Patterns in the data on human sexuality support the hypothesis that the bases of sexual emotions are products of natural selection. Most generally, the universal existence of laws, rules, and gossip about sex, the pervasive interest in other people's sex lives, the widespread seeking of privacy for sexual intercourse, and the secrecy that normally permeates sexual conduct imply a history of reproductive competition. More specifically, the typical differences between men and women in sexual feelings can be explained most parsimoniously as resulting from the extraordinarily different reproductive opportunities and constraints males and females normally encountered during the course of evolutionary history. Men are more likely than women to desire multiple mates; to desire a variety of sexual partners; to experience sexual jealousy of a spouse irrespective of specific circumstances; to be sexually aroused by the sight of a member of the other sex; to experience an autonomous desire for sexual intercourse; and to evaluate sexual desirability primarily on the bases of physical appearance and youth. The evolutionary causes of human sexuality have been obscured by attempts to find harmony in natural creative processes and human social life and to view sex differences as complementary. The human female's capacity for orgasm and the loss of estrus, for example, have been persistently interpreted as marriage-maintaining adaptations. Available evidence is more consistent with the view that few sex differences in sexuality are complementary, that many aspects of sexuality undermine marriage, and that sexuality is less a unifying than a divisive force in human affairs.

467 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of visual perception of the three-dimensional shape of moving objects is examined, and a case study of the problem is presented to illustrate some of the inherent shortcomings of the immediate perception framework.
Abstract: Central to contemporary cognitive science is the notion that mental processes involve computations defined over internal representations. This view stands in sharp contrast to the “direct approach” to visual perception and cognition, whose most prominent proponent has been J.J. Gibson. In the direct theory, perception does not involve computations of any sort; it is the result of the direct pickup of available information. The publication of Gibson's recent book (Gibson 1979) offers an opportunity to examine his approach, and, more generally, to contrast the theory of direct perception with the computational/representational view. In the first part of the present article (Sections 2–3) the notion of “direct perception” is examined from a theoretical standpoint, and a number of objections are raised. Section 4 is a “case study”: the problem of perceiving the three-dimensional shape of moving objects is examined. This problem, which has been extensively studied within the immediate perception framework, serves to illustrate some of the inherent shortcomings of that approach. Finally, in Section 5, an attempt is made to place the theory of direct perception in perspective by embedding it in a more comprehensive framework.

419 citations


Journal ArticleDOI
TL;DR: A basic strategy that would provide sufficient information for neural modeling would include: identifying and characterizing each element in the CPG network; specifying the synaptic connectivity between the elements; and analyzing nonlinear synaptic properties and interactions by means of the connectivity matrix.
Abstract: Most rhythmic behaviors are produced by a specialized ensemble of neurons found in the central nervous system. These central pattern generators (CPGs) have become a cornerstone of neuronal circuit analysis. Studying simple invertebrate nervous systems may reveal the interactions of the neurons involved in the production of rhythmic motor output. There has recently been progress in this area, but due to certain intrinsic features of CPGs it is unlikely that present techniques will ever yield a complete understanding of any but the simplest of them. The chief impediment seems to be our inability to identify and characterize the total interneuronal pool making up a CPG. In addition, our general analytic strategy relies on a descriptive, reductionist approach, with no analytical constructs beyond phenomenological modeling. Detailed descriptive data are usually not of sufficient depth for specific model testing, giving rise instead to ad hoc explanations of mechanisms which usually turn out to be incorrect. Because they make too many assumptions, modeling studies have not added much to our understanding of CPGs; this is due not so much to inadequate simulations as to the poor quality and incomplete nature of the data provided by experimentalists. A basic strategy that would provide sufficient information for neural modeling would include: (1) identifying and characterizing each element in the CPG network; (2) specifying the synaptic connectivity between the elements; and (3) analyzing nonlinear synaptic properties and interactions by means of the connectivity matrix. Limitations based on our present technical capabilities are also discussed.
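The three-step strategy above can be illustrated with a toy connectivity matrix. The circuit below is a hypothetical three-neuron network (a mutually inhibitory half-center pair driven by a pacemaker); the neuron names, weights, and rate dynamics are illustrative assumptions, not data from any real preparation.

```python
import numpy as np

# Step 1 (hypothetical): the identified elements of the network.
labels = ["A", "B", "P"]

# Step 2: connectivity matrix. W[i, j] = synaptic weight from neuron j
# onto neuron i (positive = excitatory, negative = inhibitory).
W = np.array([
    [ 0.0, -1.2,  0.8],   # A: inhibited by B, excited by pacemaker P
    [-1.2,  0.0,  0.8],   # B: inhibited by A, excited by pacemaker P
    [ 0.0,  0.0,  0.0],   # P: receives no input within this circuit
])

# Enumerate the connections implied by the matrix.
for i, post in enumerate(labels):
    for j, pre in enumerate(labels):
        if W[i, j] != 0:
            kind = "excites" if W[i, j] > 0 else "inhibits"
            print(f"{pre} {kind} {post} (weight {W[i, j]:+.1f})")

# Step 3: analyze interactions via the matrix, here with a simple
# firing-rate simulation in which slow adaptation lets the mutually
# inhibitory pair alternate.
rates = np.array([0.6, 0.4, 1.0])   # initial firing rates
adapt = np.zeros(3)                  # slow adaptation variable
for _ in range(200):
    drive = W @ rates - adapt
    rates = np.clip(rates + 0.1 * (np.tanh(drive) - rates), 0, None)
    adapt += 0.02 * (rates - adapt)
```

Even for this three-neuron sketch, the dynamics depend on parameters (adaptation rate, weights) that the connectivity matrix alone does not specify, which is the paper's point about descriptive data being insufficient for model testing.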

326 citations


Journal ArticleDOI
TL;DR: A subtheory of human intelligence based on the component construct, in which metacomponents are higher-order control processes used for planning how a problem should be solved, for making decisions regarding alternative courses of action during problem solving, and for monitoring solution processes.
Abstract: This article sketches a subtheory of human intelligence based on the component construct. Components differ in their levels of generality and in their functions. Metacomponents are higher-order control processes used for planning how a problem should be solved, for making decisions regarding alternative courses of action during problem solving, and for monitoring solution processes. Performance components are processes used in the execution of a problem-solving strategy. Acquisition components are processes used in learning new information. Retention components are processes used in retrieving previously stored knowledge. Transfer components are used in generalization, that is, in carrying over knowledge from one task or task context to another. A mechanism for the interaction among components of different kinds and multiple components of the same kind can account for certain interesting aspects of laboratory and everyday problem solving. A brief historical overview of alternative basic units for understanding intelligence is followed by a detailed description of one of these units, the component, and by a differentiation among various kinds of components. Examples of each kind of component are given, and the use of each of these components in a problem-solving situation is illustrated. Then, a system of interrelations among the various kinds of components is described. Finally, the functions of components in human intelligence are assessed by considering how the proposed subtheory can account for various empirical phenomena in the literature on human intelligence.

277 citations


Journal ArticleDOI
TL;DR: Four heuristic models are presented here to account for suicide in an evolutionary and sociobiological framework, suggesting that suicide should be tolerated by evolution when it has no effect on the gene pool.
Abstract: Human suicide presents a fundamental problem for the scientific analysis of behavior. This problem has been neither appreciated nor confronted by research and theory. Almost all other behavior exhibited by humans and nonhumans can be viewed as supporting the behaving organism's biological fitness and advancing the welfare of its genes. Yet suicide acts against these ends, and does so more directly and unequivocally than any other form of maladaptive behavior. Four heuristic models are presented here to account for suicide in an evolutionary and sociobiological framework. The first model attributes suicide to the extraordinary development of learning and cultural evolution in the human species. Learning may make human behavior so independent of biological constraints that it can occasionally assume a form entirely contrary to the principles of biological evolution. The second model attributes suicide to a breakdown of adaptive mechanisms in extremely stressful novel environments. The third model involves kin and group selection, arguing that in limited circumstances suicide may occur because of beneficial effects it has on other, surviving individuals who share the suicidal individual's genes. The last model suggests that suicide should be tolerated by evolution when it has no effect on the gene pool. This model holds particular promise in accounting for aspects of suicide not attributable to culture. The evidence indicates that suicide is most common in individuals who are unlikely to reproduce and unable to engage in productive activity; such individuals are least capable of promoting their genes. A complete explanation of suicide may derive only from an analysis of its biological significance.

Journal ArticleDOI
TL;DR: Evidence conflicting with traditional homeostatic theory is reviewed, and certain modifications toward a more realistic model of fluid ingestion are tentatively suggested.
Abstract: Drinking and thirst-motivated behaviour have traditionally been explained in terms of the rather simple concept of homeostasis. A homeostatic mechanism readily accounts for responses to acute changes in body-fluid levels. However, there are other factors regulating intake, for example, cues associated with eating, which interact with the time elapsed since last drinking and the availability of water. Future dehydration is avoided by behavioural hysteresis; a sudden reduction in fluid needs is not matched by an equivalent reduction in fluid intake. Another factor not explicable by traditional homeostasis is that, in general, drinking cannot be suppressed by water infusion. Nor are there rigid target values for body-fluid levels independent of the cost of obtaining water; when water is hard to get, a relatively low body-fluid level is maintained, thus minimizing loss. On the basis of the results conflicting with traditional homeostatic theory, this paper tentatively suggests certain modifications toward a more realistic model of fluid ingestion.

Journal ArticleDOI
TL;DR: A review of empirical studies relevant to these two criteria reveals that the preponderance of evidence contradicts the popular belief that the standard tests most widely used at present are culturally biased against minorities.
Abstract: Most standard tests of intelligence and scholastic aptitude measure a general factor of cognitive ability that is common to all such tests – as well as to all complex tasks involving abstraction, reasoning, and problem-solving.The central question addressed by this inquiry is whether such tests are culturally biased in their discrimination between majority and minority groups in the United States with respect to the traditional uses of such tests in schools, college admissions, and personnel selection in industry and the armed forces.The fact that such tests discriminate statistically between various subpopulations does not itself indicate test bias. Acceptable criteria of bias are based on (1) the test's validity for predicting the performance (in school, on the job, and so on) of individuals from majority and minority groups, and (2) the internal consistency of the test with respect to relative item difficulty, factorial composition, and internal consistency/reliability.A review of empirical studies relevant to these two criteria reveals that the preponderance of evidence contradicts the popular belief that the standard tests most widely used at present are culturally biased against minorities. The tests have the same predictive validity for the practical uses of tests in all American-born, English-speaking racial and social groups in the United States.Factors in the test situation, such as the subject's “test-wiseness” and the race of the tester, are found to be negligible sources of racial group differences.
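The predictive-validity criterion (1) above is commonly operationalized as a regression comparison: a test is unbiased for two groups if the same regression of criterion on test score fits both. The sketch below simulates this check on made-up data in which both groups share one true regression line; all numbers and group labels are illustrative assumptions, not data from the review.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

def simulate_group(mean_score):
    """Simulate test scores and a criterion (e.g. school performance)
    with the SAME true relation (slope 0.5, intercept 0) in every group,
    i.e. an unbiased test by the regression criterion."""
    test = rng.normal(mean_score, 10, n)
    criterion = 0.5 * test + rng.normal(0, 5, n)
    return test, criterion

# Two hypothetical groups differing in mean score but not in the
# test-criterion relation.
fits = {}
for name, mean in [("group1", 100), ("group2", 90)]:
    x, y = simulate_group(mean)
    slope, intercept = np.polyfit(x, y, 1)
    fits[name] = (slope, intercept)

# Under no-bias, the fitted slopes (and intercepts) agree across groups
# despite the mean difference; a biased test would show systematic
# over- or under-prediction for one group.
```

The design choice here is the key point of criterion (1): a mean difference between groups is not, by itself, evidence of bias; bias shows up as group-dependent regression parameters.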