
Showing papers in "Psychological Review in 1997"


Journal ArticleDOI
TL;DR: A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena.
Abstract: How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena. By inducing global knowledge indirectly from local co-occurrence data in a large body of representative text, LSA acquired knowledge about the full vocabulary of English at a comparable rate to schoolchildren. LSA uses no prior linguistic or perceptual similarity knowledge; it is based solely on a general mathematical learning method that achieves powerful inductive effects by extracting the right number of dimensions (e.g., 300) to represent objects and contexts. Relations to other theories, phenomena, and problems are sketched.
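The core inductive step the abstract describes, reducing a word-by-context co-occurrence matrix to a few hundred latent dimensions, is essentially a truncated singular value decomposition. Below is a minimal sketch on toy data; the corpus, weighting, and two-dimensional truncation are illustrative, not the article's setup (the article used a large representative text corpus and roughly 300 dimensions).

    # Minimal sketch of LSA's dimension-reduction step: truncated SVD of a
    # word-by-document co-occurrence matrix (toy corpus, illustrative only).
    import numpy as np

    docs = ["the cat sat on the mat",
            "the dog sat on the log",
            "dogs and cats are pets"]
    vocab = sorted({w for d in docs for w in d.split()})
    X = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

    U, s, Vt = np.linalg.svd(X, full_matrices=False)   # factor the matrix
    k = 2                                              # keep k latent dimensions
    word_vecs = U[:, :k] * s[:k]                       # word vectors in latent space

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    # Words that never co-occur directly can still end up similar once the
    # matrix is compressed (the indirect induction the abstract refers to).
    print(cosine(word_vecs[vocab.index("cat")], word_vecs[vocab.index("dog")]))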

6,014 citations


Journal ArticleDOI
TL;DR: In this paper, a new theoretical framework, executive-process interactive control (EPIC), is introduced for characterizing human performance of concurrent perceptual-motor and cognitive tasks, and computational models may be formulated to simulate multiple-task performance under a variety of circumstances.
Abstract: A new theoretical framework, executive-process interactive control (EPIC), is introduced for characterizing human performance of concurrent perceptual-motor and cognitive tasks. On the basis of EPIC, computational models may be formulated to simulate multiple-task performance under a variety of circumstances. These models account well for reaction-time data from representative situations such as the psychological refractory-period procedure. EPIC's goodness of fit supports several key conclusions: (a) At a cognitive level, people can apply distinct sets of production rules simultaneously for executing the procedures of multiple tasks; (b) people's capacity to process information at "peripheral" perceptual-motor levels is limited; (c) to cope with such limits and to satisfy task priorities, flexible scheduling strategies are used; and (d) these strategies are mediated by executive cognitive processes that coordinate concurrent tasks adaptively.

1,296 citations


Journal ArticleDOI
TL;DR: An interactive 2-step theory of lexical retrieval was applied to the picture-naming error patterns of aphasic and nonaphasic speakers, arguing that simple quantitative alterations to a normal processing model can explain much of the variety among patient patterns in naming.
Abstract: An interactive 2-step theory of lexical retrieval was applied to the picture-naming error patterns of aphasic and nonaphasic speakers. The theory uses spreading activation in a lexical network to accomplish the mapping between the conceptual representation of an object and the phonological form of the word naming the object. A model developed from the theory was parameterized to fit normal error patterns. It was then "lesioned" by globally altering its connection weights, decay rates, or both to provide fits to the error patterns of 21 fluent aphasic patients. These fits were then used to derive predictions about the influence of syntactic categories on patient errors, the effect of phonology on semantic errors, error patterns after recovery, and patient performance on a single-word repetition task. The predictions were confirmed. It is argued that simple quantitative alterations to a normal processing model can explain much of the variety among patient patterns in naming.

1,208 citations


Journal ArticleDOI
TL;DR: The power PC theory, a causal power theory of the probabilistic contrast model (P. W. Cheng & L. R. Novick, 1990), is proposed to integrate the covariation and causal power approaches to causal induction, starting from the premise that causal relations are neither observable nor deducible and so must be induced from observable events.
Abstract: Because causal relations are neither observable nor deducible, they must be induced from observable events. The 2 dominant approaches to the psychology of causal induction—the covariation approach and the causal power approach—are each crippled by fundamental problems. This article proposes an integration of these approaches that overcomes these problems. The proposal is that reasoners innately treat the relation between covariation (a function defined in terms of observable events) and causal power (an unobservable entity) as that between scientists' law or model and their theory explaining the model. This solution is formalized in the power PC theory, a causal power theory of the probabilistic contrast model (P. W. Cheng & L. R. Novick, 1990). The article reviews diverse old and new empirical tests discriminating this theory from previous models, none of which is justified by a theory. The results uniquely support the power PC theory.
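For reference, the generative causal-power equation that gives power PC theory its name can be written as below. This is a restatement of Cheng's published formulation rather than a quotation from the abstract; the preventive case divides the negative of Delta P by P(e | not-c) instead.

    \Delta P = P(e \mid c) - P(e \mid \bar{c}), \qquad
    p_c = \frac{\Delta P}{1 - P(e \mid \bar{c})}

For example, if the effect occurs on 75% of trials with the candidate cause present and on 50% without it, Delta P = .25 but the estimated power is .25 / .50 = .50, because the cause can only reveal itself on trials where the effect would not otherwise have occurred.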

1,119 citations


Journal ArticleDOI
TL;DR: An integrated theory of analogical access and mapping is instantiated in a computational model called LISA (Learning and Inference with Schemas and Analogies); LISA's inherent limitations mirror limitations of human reasoning, suggesting that its architecture can provide computational explanations of properties of the human cognitive architecture.
Abstract: We describe an integrated theory of analogical access and mapping, instantiated in a computational model called LISA (Learning and Inference with Schemas and Analogies). LISA represents predicates and objects as distributed patterns of activation over units representing semantic primitives. These representations are dynamically bound into propositional structures, thereby achieving the structure-sensitivity of a symbolic system and the flexibility of a connectionist system. LISA also has a number of inherent limitations, including capacity limits and sensitivity to the manner in which a problem is represented. A key theoretical claim is that similar limitations also arise in human reasoning, suggesting that the architecture of LISA can provide computational explanations of properties of the human cognitive architecture. We report LISA's performance in simulating a wide range of empirical phenomena concerning human analogical access and mapping. The model treats both access and mapping as types of guided pattern classification, differing only in that mapping is augmented by a capacity to learn new correspondences. Extensions of the approach to account for analogical inference and schema induction are also discussed.

906 citations


Journal ArticleDOI
TL;DR: This neural diathesis-stress model is consistent with findings on prenatal factors and brain abnormalities in schizophrenia, and it provides a framework for explaining some key features of the developmental course and clinical presentation.
Abstract: There is a substantive literature on the behavioral effects of psychosocial stressors on schizophrenia. More recently, research has been conducted on neurohormonal indicators of stress responsivity, particularly cortisol release resulting from activation of the hypothalamic-pituitary-adrenal (HPA) axis. This article integrates the psychosocial and biological literatures on stress in schizophrenia, and it offers specific hypotheses about the neural mechanisms involved in the effects of stressors on the diathesis. Both the behavioral and biological data indicate that stress worsens symptoms and that the diathesis is associated with a heightened response to stressors. A neural mechanism for these phenomena is suggested by the augmenting effect of the HPA axis on dopamine (DA) synthesis and receptors. Assuming the diathesis for schizophrenia involves an abnormality in DA receptors, it is proposed that the HPA axis acts as a potentiating system by means of its effects on DA. At the same time, DA receptor abnormality and hippocampal damage render the patient hypersensitive to stress. This neural diathesis-stress model is consistent with findings on prenatal factors and brain abnormalities in schizophrenia, and it provides a framework for explaining some key features of the developmental course and clinical presentation.

842 citations


Journal ArticleDOI
TL;DR: The authors proposed an exemplar-based random walk model for predicting response times in tasks of speeded, multidimensional perceptual classification, which combines elements of R. M. Nosofsky's (1986) generalized context model of categorization and G. D. Logan's (1988) instance-based model of automaticity.
Abstract: The authors propose and test an exemplar-based random walk model for predicting response times in tasks of speeded, multidimensional perceptual classification. The model combines elements of R. M. Nosofsky's (1986) generalized context model of categorization and G. D. Logan's (1988) instance-based model of automaticity. In the model, exemplars race among one another to be retrieved from memory, with rates determined by their similarity to test items. The retrieved exemplars provide incremental information that enters into a random walk process for making classification decisions. The model predicts correctly effects of within- and between-categories similarity, individual-object familiarity, and extended practice on classification response times. It also builds bridges between the domains of categorization and automaticity.
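A toy simulation of the retrieval-race-plus-random-walk idea in the abstract; the similarity gradient, threshold, and stimuli are illustrative values, not the fitted model.

    # Toy sketch of the exemplar-based random walk: stored exemplars are
    # retrieved with probability proportional to their similarity to the test
    # item, and each retrieval steps a random walk toward that exemplar's
    # category until a response threshold is crossed.
    import math
    import random

    def similarity(x, y, c=1.0):
        return math.exp(-c * abs(x - y))   # exponential generalization gradient

    def ebrw_trial(test_item, exemplars, threshold=3):
        # exemplars: list of (value, category) pairs, categories "A" or "B"
        weights = [similarity(test_item, v) for v, _ in exemplars]
        position, steps = 0, 0
        while abs(position) < threshold:
            _, cat = random.choices(exemplars, weights=weights, k=1)[0]
            position += 1 if cat == "A" else -1
            steps += 1
        return ("A" if position > 0 else "B"), steps   # choice and an RT proxy

    exemplars = [(1.0, "A"), (1.2, "A"), (2.8, "B"), (3.0, "B")]
    print(ebrw_trial(1.1, exemplars))   # items close to category A respond "A" quickly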

624 citations


Journal ArticleDOI
TL;DR: Further simulations of multiple task performance have been conducted with computational models that are based on the Executive Process Interactive Control (EPIC) architecture for human information processing, which supports the claim of the present theoretical framework that multiple task performance relies on adaptive executive control.
Abstract: Further simulations of multiple task performance have been conducted with computational models that are based on the Executive Process Interactive Control (EPIC) architecture for human information processing. These models account well for patterns of reaction times and psychological refractory period phenomena (delays of overt responses after short stimulus onset asynchronies) in a variety of laboratory paradigms and realistic situations. This supports the claim of the present theoretical framework that multiple task performance relies on adaptive executive control, which enables substantial amounts of temporal overlap among stimulus identification, response selection, and movement production processes for concurrent tasks. Such overlap is achieved through optimized task scheduling by flexible executive processes that satisfy prevailing instructions about task priorities and allocate limited-capacity perceptual-motor resources efficiently.

579 citations


Journal ArticleDOI
TL;DR: This paper developed a model that explains and predicts both longitudinal and cross-sectional variation in the output of major and minor creative products, including contrasts across creative domains in the expected career trajectories.
Abstract: The author developed a model that explains and predicts both longitudinal and cross-sectional variation in the output of major and minor creative products. The model first yields a mathematical equation that accounts for the empirical age curves, including contrasts across creative domains in the expected career trajectories. The model is then extended to account for individual differences in career trajectories, such as the longitudinal stability of cross-sectional variation and the differential placement of career landmarks (the ages at first, best, and last contribution). The theory is parsimonious in that it requires only two individual-difference parameters (initial creative potential and age at career onset) and two information-processing parameters (ideation and elaboration rates), plus a single principle (the equal-odds rule), to derive several precise predictions that cannot be generated by any alternative theory. Albert Einstein had around 248 publications to his credit, Charles Darwin had 119, and Sigmund Freud had 330, while Thomas Edison held 1,093 patents—still the record granted to any one person by the U.S. Patent Office. Similarly, Pablo Picasso executed more than 20,000 paintings, drawings, and pieces of sculpture, while Johann Sebastian Bach composed over 1,000 works, enough to require a lifetime of 40-hr weeks for a copyist just to write out the parts by hand. One might conclude from facts like these that exceptional productivity is a hallmark of outstanding creative individuals. And yet this induction may be contradicted by some curious exceptions and complications. Gregor Mendel managed to secure an enduring reputation on the basis of only seven scientific papers—considerably less than the 883 items claimed by the far more obscure naturalist John Edward Gray. Also, not all of the products that emerge from illustrious creators contribute credit to their names. Ludwig van Beethoven produced many compositions that only embarrass his admirers, just as William Shakespeare could write "problem plays" that are rarely performed today. Even Edison invented useless contraptions: The developmental costs for one failed device alone equaled all the profits he had earned from the electric light bulb! Now turn to another facet of the phenomenon: how creative productivity is distributed across the life span. Wolfgang Goethe began writing poetry in his teens, wrote a best-selling novel in his mid-20s, composed a series of successful plays in his 30s and 40s, and completed Parts I and II of Faust at ages 59 and 83, respectively. Hence, perhaps creators have careers characterized by precocity and longevity. Not all creative individuals show this pattern, however. On the one hand, some creators may exhibit comparable precocious achievement only to burn out early. Pietro Mascagni became famous at age 26 with the pro
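The age-curve equation the abstract alludes to has, in Simonton's published treatments, a two-rate exponential form; the notation below is a paraphrase under that assumption, not text from the article.

    p(t) = c\,\bigl(e^{-a t} - e^{-b t}\bigr)

Here t is career age (years since career onset), a is the ideation rate, b is the elaboration rate, and c scales with initial creative potential; output rises quickly, peaks, and then declines slowly, with the domain-specific rates a and b shifting where the peak falls.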

576 citations


Journal ArticleDOI
TL;DR: The authors propose a new mechanism for prioritizing the selection of new events, visual marking, and discuss its relations to other accounts of visual selection and to potential neurophysiological mechanisms.
Abstract: The authors propose a new mechanism for prioritizing the selection of new events: visual marking. In a modified conjunction search task the authors presented one set of distractors before the remaining items, which contained the target if present. Search was as efficient as if only the second items were presented. This held when eye movements were prevented and required a gap of 400 ms between the old and new items. The effect was abolished by luminance changes at old distractor locations when the new items appeared, and it was reduced by the addition of an attention demanding load task. The authors propose that old items can be ignored by spatially parallel, top-down attentional inhibition applied to the locations of static stimuli. The authors discuss the relations between marking and other accounts of visual selection and potential neurophysiological mechanisms.

426 citations


Journal ArticleDOI
TL;DR: Support theory is extended so that the judged probability of an explicit disjunction is less than or equal to the sum of the judged probabilities of its disjoint components (explicit subadditivity).
Abstract: Support theory represents probability judgment in terms of the support, or strength of evidence, of the focal relative to the alternative hypothesis. It assumes that the judged probability of an event generally increases when its description is unpacked into disjoint components (implicit subadditivity). This article presents a significant extension of the theory in which the judged probability of an explicit disjunction is less than or equal to the sum of the judged probabilities of its disjoint components (explicit subadditivity). Several studies of probability and frequency judgment demonstrate both implicit and explicit subadditivity. The former is attributed to enhanced availability, whereas the latter is attributed to repacking and anchoring.
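For readers unfamiliar with support theory's bookkeeping, the basic judgment equation and the subadditivity ordering described in the abstract can be summarized as follows; this is standard notation from Tversky and Koehler's support theory, not a quotation of this article.

    P(A, B) = \frac{s(A)}{s(A) + s(B)}, \qquad
    s(A) \;\le\; s(A_1 \lor A_2) \;\le\; s(A_1) + s(A_2)

Here P(A, B) is the judged probability that focal hypothesis A rather than alternative B holds, s(·) is the support (strength of evidence), A is an implicit disjunction, and A_1, A_2 are its disjoint components: the left inequality is implicit subadditivity, and the right is the explicit subadditivity introduced in this extension.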

Journal ArticleDOI
TL;DR: An adaptive process account in which knowledge is graded and embedded in specific behavioral processes is offered, which shows how this approach can account for success and failure in object permanence tasks without assuming principles and ancillary deficits.
Abstract: Infants seem sensitive to hidden objects in habituation tasks at 3.5 months but fail to retrieve hidden objects until 8 months. The authors first consider principle-based accounts of these successes and failures, in which early successes imply knowledge of principles and failures are attributed to ancillary deficits. One account is that infants younger than 8 months have the object permanence principle but lack means-ends abilities. To test this, 7-month-olds were trained on means-ends behaviors and were tested on retrieval of visible and occluded toys. Means-ends demands were the same, yet infants made more toy-guided retrievals in the visible case. The authors offer an adaptive process account in which knowledge is graded and embedded in specific behavioral processes. Simulation models that learn gradually to represent occluded objects show how this approach can account for success and failure in object permanence tasks without assuming principles and ancillary deficits.

Journal ArticleDOI
TL;DR: This article offers a functional analysis of serial order in language and a formal model whose central prediction is that the fraction of serial-order errors that are anticipatory, as opposed to perseveratory, can be closely predicted from the overall error rate.
Abstract: In speech production, previously spoken and upcoming words can impinge on the word currently being said, resulting in perseverations (e.g., "beef needle soup") and anticipations (e.g., "cuff of coffee"). These errors reveal the extent to which the language-production system is focused on the past, the present, and the future and therefore are informative about how the system deals with serial order. This article offers a functional analysis of serial order in language and develops a general formal model. The centerpiece of the model is a prediction that the fraction of serial-order errors that are anticipatory, as opposed to perseveratory, can be closely predicted by overall error rate. The lower the error rate, the more anticipatory the errors are, regardless of the factors influencing error rate. The model is successfully applied to experimental and natural error data dealing with the effects of practice, speech rate, individual differences, age, and brain damage.

Journal ArticleDOI
TL;DR: In this article, a dynamic model of how animals learn to regulate their behavior under time-based reinforcement schedules is presented. But the model assumes a serial activation of behavioral states during the interreinforcement interval, an associative process linking the states with the operant response, and a rule mapping the activation of the states and their associative strength onto response rate or probability.
Abstract: This study presents a dynamic model of how animals learn to regulate their behavior under time-based reinforcement schedules. The model assumes a serial activation of behavioral states during the interreinforcement interval, an associative process linking the states with the operant response, and a rule mapping the activation of the states and their associative strength onto response rate or probability. The model fits data sets from fixed-interval schedules, the peak procedure, mixed fixed-interval schedules, and the bisection of temporal intervals. The major difficulties of the model came from experiments that suggest that under some conditions animals may time 2 intervals independently and simultaneously.

Journal ArticleDOI
TL;DR: The authors provide a new framework that integrates autobiographical memory with other early achievements (e.g., gesturing, language, concept formation), a theory in which autobiographical memory arises as a natural consequence of developments in related domains, including the "software" that drives general memory functioning.
Abstract: The authors provide a new framework that integrates autobiographical memory with other early achievements (e.g., gesturing, language, concept formation). In this theory, the emergence and early development of autobiographical memory does not require the invocation of specialized neurological or multiple memory mechanisms but rather arises as a natural consequence of developments in related domains including in the "software" that drives general memory functioning. In particular, autobiographical memory emerges contemporaneously with the cognitive self, a knowledge structure whose features serve to organize memories of experiences that happened to "me." Because this cognitive self emerges in the 2nd year of life, the lower limit for early autobiographical memories is set at about 2 years, with subsequent accumulation of memories linked to improvements in children's ability to maintain information in storage.

Journal ArticleDOI
TL;DR: Three experiments indicate that the perceived randomness of a sequence is better predicted by various measures of its encoding difficulty than by its objective randomness, which seems to imply that, in accordance with the complexity view, judging the extent of a sequence's randomness is based on an attempt to mentally encode it.
Abstract: People attempting to generate random sequences usually produce more alternations than expected by chance. They also judge overalternating sequences as maximally random. In this article, the authors review findings, implications, and explanatory mechanisms concerning subjective randomness. The authors next present the general approach of the mathematical theory of complexity, which identifies the length of the shortest program for reproducing a sequence with its degree of randomness. They describe three experiments, based on mean group responses, indicating that the perceived randomness of a sequence is better predicted by various measures of its encoding difficulty than by its objective randomness. These results seem to imply that in accordance with the complexity view, judging the extent of a sequence's randomness is based on an attempt to mentally encode it. The experience of randomness may result when this attempt fails. Judging a situation as more or less random is often the key to important cognitions and behaviors. Perceiving a situation as nonchance calls for explanations, and it marks the onset of inductive inference (Lopes, 1982). Lawful environments encourage a coping orientation. One may try to control a situation by predicting its outcome, replicating, changing, or even by avoiding it. In contrast, there seems to be no point in patterning our behavior in a random environment. Although people feel that they know what they mean when speaking of randomness (Kac, 1983) and they communicate in everyday and professional affairs using their shared intuitive understanding of the term, it is one of the most elusive concepts in mathematics. Randomness resists easy or precise definition, and there is no decisive test for determining its presence (Ayton, Hunt, & Wright, 1989, 1991; Chaitin, 1975; Falk, 1991; Lopes, 1982; Pollatsek & Konold, 1991; Wagenaar, 1972a, 1991; Zabell, 1992). Attempted definitions of randomness involve intricate philosophical and mathematical problems (Ayer, 1965;
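One crude way to make the "encoding difficulty" idea concrete is to use compressed length as a stand-in for description length. This is purely illustrative (the authors' measures were behavioral encoding measures, and zlib is not Kolmogorov complexity), but it shows why an over-alternating sequence is objectively easy to encode even though people judge it highly random.

    # Illustrative proxy only: compressed length as a rough stand-in for how
    # hard a binary sequence is to encode (shorter output = more regular).
    import random
    import zlib

    def description_length(bits):
        return len(zlib.compress("".join(map(str, bits)).encode()))

    overalternating = [i % 2 for i in range(200)]          # 0101... pattern
    random.seed(0)
    coin_flips = [random.randint(0, 1) for _ in range(200)]

    print(description_length(overalternating))  # compresses well: low complexity
    print(description_length(coin_flips))       # compresses poorly: high complexity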

Journal ArticleDOI
TL;DR: The authors showed that very young infants exhibit memory dissociations like those exhibited by adults with normal memory on analogous memory tests in response to manipulations of the same independent variables, demonstrating that implicit and explicit memory follow the same developmental timetable and challenge the utility of conscious recollection as the defining characteristic of explicit memory.
Abstract: Extending the Jacksonian principle of the hierarchical development and dissolution of function to the development and dissolution of memory, researchers have concluded that implicit (procedural) memory is a primitive system, functional shortly after birth, that processes information automatically, whereas explicit (declarative) memory matures late in the 1st year and mediates the conscious recollection of a prior event. Support for a developmental hierarchy has only been inferred from the memory performance of adults with amnesia on priming and recognition-recall tests in response to manipulations of different independent variables. This article reviews evidence that very young infants exhibit memory dissociations like those exhibited by adults with normal memory on analogous memory tests in response to manipulations of the same independent variables. These data demonstrate that implicit and explicit memory follow the same developmental timetable and challenge the utility of conscious recollection as the defining characteristic of explicit memory.

Journal ArticleDOI
TL;DR: In this paper, the Adaptive Character of Thought-Rational (ACT-R) production system (J. R. Anderson, 1993) is used to model how people recall serial lists.
Abstract: A theory is described that provides a detailed model of how people recall serial lists of items. This theory is based on the Adaptive Character of Thought-Rational (ACT-R) production system (J. R. Anderson, 1993). It assumes that serial lists are represented as hierarchical structures consisting of groups and items within groups. Declarative knowledge units encode the position of items and of groups within larger groups. Production rules use this positional information to organize the serial recall of a list of items. In ACT-R, memory access depends on a limited-capacity activation process, and errors can occur in the contents of recall because of a partial matching process. These limitations conspire in a number of ways to produce the limitations in immediate memory span: As the span increases, activation must be divided among more elements, activation decays more with longer recall times, and there are more opportunities for positional and acoustic confusions. The theory is shown to be capable of predicting both latency and error patterns in serial recall. It addresses effects of serial position, list length, delay, word length, positional confusion, acoustic confusion, and articulatory suppression.
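The capacity limits the abstract describes fall out of standard ACT-R retrieval equations. The sketch below restates those textbook equations with illustrative parameter values, not the paper's fitted ones.

    # Hedged sketch of standard ACT-R retrieval quantities (illustrative values).
    import math

    def activation(base_level, sources, strengths, W=1.0):
        # A_i = B_i + sum_j W_j * S_ji : a fixed amount of source activation W
        # is divided among the current goal's elements, so longer lists spread
        # activation more thinly, which is one source of the span limit.
        w_j = W / max(len(sources), 1)
        return base_level + sum(w_j * strengths.get(j, 0.0) for j in sources)

    def retrieval_probability(A, tau=0.0, s=0.4):
        return 1.0 / (1.0 + math.exp(-(A - tau) / s))   # logistic in activation

    def retrieval_latency(A, F=1.0):
        return F * math.exp(-A)   # higher activation means faster recall

    A = activation(base_level=0.3, sources=["group1", "pos2"],
                   strengths={"group1": 1.0, "pos2": 1.5})
    print(retrieval_probability(A), retrieval_latency(A))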

Journal ArticleDOI
TL;DR: In this paper, an ideal-observer model that combines visual, lexical, and oculomotor information optimally to read simple texts in the minimum number of saccades is presented.
Abstract: The integration of visual, lexical, and oculomotor information is a critical part of reading. Mr. Chips is an ideal-observer model that combines these sources of information optimally to read simple texts in the minimum number of saccades. In the model, the concept of the visual span (the number of letters that can be identified in a single fixation) plays a key, unifying role. The behavior of the model provides a computational framework for reexamining the literature on human reading saccades. Emergent properties of the model, such as regressive saccades and an optimal-viewing position, suggest new interpretations of human behavior. Because Mr. Chips's "retina" can have any (one-dimensional) arrangement of high-resolution regions and scotomas, the model can simulate common visual disorders. Surprising saccade strategies are linked to the pattern of scotomas. For example, Mr. Chips sometimes plans a saccade that places a decisive letter in a scotoma. This article provides the first quantitative model of the effects of scotomas on reading.
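An ideal-observer flavor of the problem can be illustrated with a toy lexicon: the letters visible inside the visual span narrow the candidate words, and the remaining entropy is the uncertainty a saccade planner would try to drive to zero in as few fixations as possible. The lexicon, probabilities, and span below are invented for illustration and are not the model's.

    # Toy illustration of lexical uncertainty given partial letter information.
    import math

    lexicon = {"chip": 0.4, "chin": 0.3, "chop": 0.2, "ship": 0.1}

    def word_entropy(visible):
        # visible: mapping from letter position to the letter seen in the span
        consistent = {w: p for w, p in lexicon.items()
                      if all(w[i] == ch for i, ch in visible.items())}
        total = sum(consistent.values())
        return -sum((p / total) * math.log2(p / total)
                    for p in consistent.values())

    print(word_entropy({0: "c", 1: "h"}))          # uncertainty after seeing "ch.."
    print(word_entropy({0: "c", 1: "h", 2: "i"}))  # less after also seeing "i"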

Journal ArticleDOI
TL;DR: Simulations show that the D allele could have spread quite quickly through a population, given even a minuscule advantage of CD heterozygotes over CC and DD homozygotes in terms of reproductive fitness.
Abstract: At some point in hominid evolution, a mutation may have produced a "dextral" (D) allele, strongly biasing handedness in favor of the right hand and control of speech toward the left cerebral hemisphere. An alternative (chance [C]) allele is presumed directionally neutral, although there are probably other genes that influence asymmetries and that may create a weak bias toward right-handedness (and other asymmetries). Simulations show that the D allele could have spread quite quickly through a population, given even a minuscule advantage of CD heterozygotes over CC and DD homozygotes in terms of reproductive fitness. This heterozygotic advantage would also explain the apparent stability in the relative proportions of left-handers and right-handers. This putative, uniquely human allele may have emerged with the evolution of Homo sapiens in Africa some 150,000 to 200,000 years ago.
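The kind of simulation the abstract mentions reduces to a textbook one-locus selection recurrence under random mating. The sketch below uses that standard recurrence with an invented 1% heterozygote advantage to show both the spread of a rare D allele and the stable D/C mix the abstract points to; it is not the article's simulation.

    # Standard one-locus, two-allele selection recurrence (illustrative fitnesses).
    def next_freq(p, w_DD=1.00, w_DC=1.01, w_CC=1.00):
        # p = frequency of the D allele; Hardy-Weinberg genotype frequencies
        q = 1.0 - p
        w_bar = p * p * w_DD + 2 * p * q * w_DC + q * q * w_CC
        return (p * p * w_DD + p * q * w_DC) / w_bar

    p = 0.001                    # D starts as a rare mutation
    for _ in range(20000):       # iterate generations
        p = next_freq(p)
    print(round(p, 3))           # heterozygote advantage yields a stable
                                 # polymorphism near 0.5 rather than fixation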


Journal ArticleDOI
TL;DR: The FACADE model as mentioned in this paper describes how geometrical and contrastive properties of a picture can either cooperate or compete when forming the boundaries and surface representation that subserve conscious percepts.
Abstract: This article develops the FACADE theory of 3-dimensional (3-D) vision and figure-ground separation to explain data concerning how 2-dimensional pictures give rise to 3-D percepts of occluding and occluded objects. The model describes how geometrical and contrastive properties of a picture can either cooperate or compete when forming the boundaries and surface representation that subserve conscious percepts. Spatially long-range cooperation and spatially short-range competition work together to separate the boundaries of occluding figures from their occluded neighbors. This boundary ownership process is sensitive to image T junctions at which occluded figures contact occluding figures. These boundaries control the filling-in of color within multiple depth-sensitive surface representations. Feedback between surface and boundary representations strengthens consistent boundaries while inhibiting inconsistent ones. Both the boundary and the surface representations of occluded objects may be amodally completed, while the surface representations of unoccluded objects become visible through modal completion. Functional roles for conscious modal and amodal representations in object recognition, spatial attention, and reaching behaviors are discussed. Model interactions are interpreted in terms of visual, temporal, and parietal cortices.

Journal ArticleDOI
TL;DR: A computational model, the sensory sampling model, is presented for 1 class of tasks dominated by Thurstonian uncertainty, sensory discrimination with pair comparisons; it predicts decisions, confidence assessments, and the complex pattern of response times in simple psychophysical discrimination tasks (J.V. Baranski and W.M. Petrusic, 1994).
Abstract: As a preliminary step towards the presentation of a model of confidence in sensory discrimination, the authors propose a distinction between 2 different origins of uncertainty named after 2 of the great probabilists in the history of psychology, L.L. Thurstone and Egon Brunswik. The authors review data that suggest that there are empirical as well as conceptual differences between the 2 modes of uncertainty and thus that separate models of confidence are needed in tasks dominated by Thurstonian and Brunswikian uncertainty. The article presents a computational model for 1 class of tasks dominated by Thurstonian uncertainty: sensory discrimination with pair comparisons. The sensory sampling model predicts decisions, confidence assessments, and the complex pattern of response times in simple psychophysical discrimination tasks (J.V. Baranski and W.M. Petrusic, 1994). The model also accounts for the disposition towards underconfidence often observed in sensory discrimination with pair comparisons.


Journal ArticleDOI
TL;DR: The author proposes that many forms of memory distortion, including the progressive changes in recollection of a learning experience often observed over successive tests, are due to the same processes that yield veridical recollection in some circumstances and memory loss and recovery in others.
Abstract: The author proposes that many forms of memory distortion, including the progressive changes in recollection of a learning experience often observed over successive tests, are due to the same processes that yield veridical recollection in some circumstances and memory loss and recovery in others. In a framework for interpreting all of these aspects of memory, the author assumes that the objects and events of a learning experience are encoded in parallel in traces of their perceptual attributes, which are basic to recognition, and in traces of reactions made to the events during or following learning, which are basic to recall. Random perturbation of remembered attribute values in both types of traces over retention intervals is a pervasive cause of both loss and distortions of memory.
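A toy rendering of the perturbation idea, with step probability, interval count, and tolerance invented for illustration: a stored attribute value drifts by small random steps over the retention interval, so a later test can show a shifted value (distortion) or, when drift exceeds what still matches the original, an apparent loss.

    # Toy illustration of attribute-value perturbation over a retention interval.
    import random

    def perturb(value, intervals, step_prob=0.2):
        for _ in range(intervals):
            if random.random() < step_prob:
                value += random.choice([-1, 1])
        return value

    random.seed(1)
    encoded = 10                            # attribute value stored at learning
    recalled = perturb(encoded, intervals=30)
    print(recalled)                         # usually near 10 but shifted: distortion
    print(abs(recalled - encoded) <= 1)     # False once drift exceeds tolerance: "loss"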

Journal ArticleDOI
TL;DR: A model for the identification of briefly presented words is presented and implications of the model for research in implicit memory are considered.
Abstract: A model for the identification of briefly presented words is presented. The model accounts for data from naming and forced-choice experiments in which factors such as similarity of alternatives and stimulus presentation time are varied. The model assumes that counts are accumulated in counters that correspond to words and that a word is chosen as a response when the number of counts in its counter exceeds the maximum of the numbers of counts in other counters by a criterial value. Prior exposure to a word causes its counter to attract more counts than it otherwise would, and this yields priming effects. Ten experiments are presented, and the model provides correct predictions for the data. Implications of the model for research in implicit memory are considered.
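The decision rule in the abstract (respond when one counter leads all others by a criterial amount, with prior exposure making a word's counter attract more counts) can be sketched as follows; the attraction weights, criterion, and word pair are illustrative, not the paper's parameters.

    # Minimal sketch of the counter-model decision rule (illustrative values).
    import random

    def identify(words, attraction, criterion=5, max_steps=10000):
        counts = {w: 0 for w in words}
        for step in range(1, max_steps + 1):
            # each count is assigned to a counter with probability proportional
            # to that word's attraction; priming raises a word's attraction
            w = random.choices(words, weights=[attraction[x] for x in words])[0]
            counts[w] += 1
            leader = max(counts, key=counts.get)
            runner_up = max(c for x, c in counts.items() if x != leader)
            if counts[leader] - runner_up >= criterion:
                return leader, step          # response and a "time" proxy
        return None, max_steps

    words = ["died", "lied"]
    attraction = {"died": 1.3, "lied": 1.0}  # "died" primed by prior exposure
    print(identify(words, attraction))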

Journal ArticleDOI
TL;DR: A model is presented that extrapolates the biological consequences of drug administration to account for acute and chronic tolerance, shows how neural responses following drug administration may be identified even when they are obscured, and provides a reinterpretation of drug conditioning paradigms.
Abstract: The authors presented a model that extrapolates the biological consequences of drug administration to account for acute and chronic tolerance. Drug-induced changes of regulated parameters provide detectable perturbations to which the brain responds. With experience, these centrally mediated responses are learned and can be activated in the absence of the drug-induced perturbation. Although neural responses following drug administration are often obscured, the model shows how these responses may be identified and provides a reinterpretation of drug conditioning paradigms. The authors made comparisons between the present empirical model of drug administration and existing theories of drug tolerance. The authors also presented a unified framework for understanding the consequences of repeated drug use and made specific predictions as to the relationships among acute and chronic tolerance, drug sensitization, and individual differences in vulnerability to drug addiction.

Journal ArticleDOI
TL;DR: A critical reexamination of the relevant nonhuman and human evidence suggests that although the development of a cerebral lateralization for speech and handedness is dependent on both genetic and environmental factors, the specific role of inborn and postnatal influences is very different.
Abstract: Functional predominance of the left cerebral hemisphere with regard to both handedness and speech has usually been assumed to be due to some underlying neural specialization that is predetermined and inborn. However, data from left-handed individuals and animal experiments, together with a consideration of the effects of natural selection on brain and behavior during hominid evolution, are incompatible with such an explanation. A critical reexamination of the relevant nonhuman and human evidence suggests that although the development of a cerebral lateralization for speech and handedness is dependent on both genetic and environmental factors, the specific role of inborn and postnatal influences is very different. This has significant implications for a fundamental revision of current theory and research orientation.

Journal ArticleDOI
TL;DR: The dynamic properties of the boundary contour system neural network model are shown to account for 9 key properties of metacontrast visual masking, including that masking strength depends on the amount and distribution of contour in the mask, that a second mask can disinhibit the masking of the target, and that such disinhibition depends on the spatial separation of the 2 masks.
Abstract: The dynamic properties of a neural network model of visual perception, called the boundary contour system, explain characteristics of metacontrast visual masking. Computer simulations of the model, with a single set of parameters, demonstrate that it accounts for 9 key properties of metacontrast masking: Metacontrast masking is strongest at positive stimulus onset asynchronies (SOAs); decreasing target luminance changes the shape of the masking curve; increasing target duration weakens masking; masking effects weaken with spatial separation; increasing mask duration leads to stronger masking at shorter SOAs; masking strength depends on the amount and distribution of contour in the mask; a second mask can disinhibit the masking of the target; such disinhibition depends on the SOA of the 2 masks; and such disinhibition depends on the spatial separation of the 2 masks. No other theory provides a unified explanation of these data sets. Additionally, the model suggests a new analysis of data related to the SOA law and makes several testable predictions. A metacontrast masking display consists of a briefly flashed visual target (often a filled circle or a bar) followed by a masking stimulus (a surrounding annulus or two flanking bars). In such a display the target is perceptually weaker (dimmer), and in some cases participants fail to perceive the target at all. Perhaps most remarkable, in many cases the strongest masking effect occurs not with simultaneous onset of the target and mask, but at a positive stimulus onset asynchrony (SOA). The effect of SOA is surprising because if simple lateral inhibition produced this type of masking, then the strongest interactions between target and mask would occur with simultaneous onset. With a positive SOA, it would seem that the information about the target would have moved (to higher visual areas) beyond any influence of the mask. For this reason metacontrast masking is also often called backward masking, thus indicating the apparent ability of the mask to influence percepts of the target by going backward in time. When the mask precedes the target (forward, or paracontrast, masking) there is little masking. Currently, the most accepted account of metacontrast masking is the one proposed in various forms by Weisstein (1972); Matin (1975); Weisstein, Ozog, and Szoc (1975); and Breitmeyer and Ganz (1976). They suggested that interactions of transient-sustained inhibition explained many properties of dynamic vision, including metacontrast masking. Breitmeyer (1984) discussed how the theory is consistent with an impressive amount of psychophysical and neurophysiological data on visual masking. The theory proposes that fast-acting transient cells in visual

Journal ArticleDOI
TL;DR: The authors use the reiteration effect to explain the asymmetry in hindsight bias for true and false assertions and to predict that recalled confidence will increase in hindsight bias studies even if no feedback is given.
Abstract: Repetition of an assertion increases the degree of belief in that assertion. This reiteration effect is used to explain two puzzling findings in research on hindsight bias. First, the reiteration effect explains the asymmetry in hindsight bias for true and false assertions. This striking asymmetry has often been observed in experimental studies, but no rationale has yet been found. Second, the reiteration effect predicts a novel finding: Recalled confidence will increase in hindsight bias studies even if no feedback is given. The authors have checked both predictions against results reported in the literature; with some exceptions, the evidence supports them.