
Showing papers in "Psychological Review in 2000"


Journal Article•DOI•
TL;DR: It is shown how this model can be used to draw together a wide range of diverse data from cognitive, social, developmental, personality, clinical, and neuropsychological autobiographical memory research.
Abstract: The authors describe a model of autobiographical memory in which memories are transitory mental constructions within a self-memory system (SMS). The SMS contains an autobiographical knowledge base and current goals of the working self. Within the SMS, control processes modulate access to the knowledge base by successively shaping cues used to activate autobiographical memory knowledge structures and, in this way, form specific memories. The relation of the knowledge base to active goals is reciprocal, and the knowledge base "grounds" the goals of the working self. It is shown how this model can be used to draw together a wide range of diverse data from cognitive, social, developmental, personality, clinical, and neuropsychological autobiographical memory research.

3,375 citations


Journal Article•DOI•
TL;DR: It is proposed that, behaviorally, females' responses to stress are more marked by a pattern of "tend-and-befriend," and neuroendocrine evidence from animal and human studies suggests that oxytocin, in conjunction with female reproductive hormones and endogenous opioid peptide mechanisms, may be at its core.
Abstract: The human stress response has been characterized, both physiologically and behaviorally, as "fight-or-flight." Although fight-or-flight may characterize the primary physiological responses to stress for both males and females, we propose that, behaviorally, females' responses are more marked by a pattern of "tend-and-befriend." Tending involves nurturant activities designed to protect the self and offspring that promote safety and reduce distress; befriending is the creation and maintenance of social networks that may aid in this process. The biobehavioral mechanism that underlies the tend-and-befriend pattern appears to draw on the attachment-caregiving system, and neuroendocrine evidence from animal and human studies suggests that oxytocin, in conjunction with female reproductive hormones and endogenous opioid peptide mechanisms, may be at its core. This previously unexplored stress regulatory system has manifold implications for the study of stress.

2,588 citations


Journal Article•DOI•
TL;DR: The authors argue that a new attitude can override, but not replace, the old one, resulting in dual attitudes, and the implications of the dual-attitude model for attitude theory and measurement are discussed.
Abstract: When an attitude changes from A1 to A2, what happens to A1? Most theories assume, at least implicitly, that the new attitude replaces the former one. The authors argue that a new attitude can override, but not replace, the old one, resulting in dual attitudes. Dual attitudes are defined as different evaluations of the same attitude object: an automatic, implicit attitude and an explicit attitude. The attitude that people endorse depends on whether they have the cognitive capacity to retrieve the explicit attitude and whether this overrides their implicit attitude. A number of literatures consistent with these hypotheses are reviewed, and the implications of the dual-attitude model for attitude theory and measurement are discussed. For example, by including only explicit measures, previous studies may have exaggerated the ease with which people change their attitudes. Even if an explicit attitude changes, an implicit attitude can remain the same.

2,012 citations


Journal Article•DOI•
TL;DR: The authors draw together and develop previous timing models for a broad range of conditioning phenomena to reveal their common conceptual foundations: first, conditioning depends on the learning of the temporal intervals between events and the reciprocals of these intervals, the rates of event occurrence.
Abstract: The authors draw together and develop previous timing models for a broad range of conditioning phenomena to reveal their common conceptual foundations: First, conditioning depends on the learning of the temporal intervals between events and the reciprocals of these intervals, the rates of event occurrence. Second, remembered intervals and rates translate into observed behavior through decision processes whose structure is adapted to noise in the decision variables. The noise and the uncertainties consequent on it have both subjective and objective origins. A third feature of these models is their timescale invariance, which the authors argue is a very important property evident in the available experimental data. This conceptual framework is similar to the psychophysical conceptual framework in which contemporary models of sensory processing are rooted. The authors contrast it with the associative conceptual framework.

1,012 citations


Journal Article•DOI•
TL;DR: The use of good fits as evidence is supported neither by philosophers of science nor by the history of psychology; there seem to be no examples of a theory supported mainly by good fits that has led to demonstrable progress.
Abstract: Quantitative theories with free parameters often gain credence when they closely fit data. This is a mistake. A good fit reveals nothing about the flexibility of the theory (how much it cannot fit), the variability of the data (how firmly the data rule out what the theory cannot fit), or the likelihood of other outcomes (perhaps the theory could have fit any plausible result), and a reader needs all 3 pieces of information to decide how much the fit should increase belief in the theory. The use of good fits as evidence is supported neither by philosophers of science nor by the history of psychology; there seem to be no examples of a theory supported mainly by good fits that has led to demonstrable progress. A better way to test a theory with free parameters is to determine how the theory constrains possible outcomes (i.e., what it predicts), assess how firmly actual outcomes agree with those constraints, and determine if plausible alternative outcomes would have been inconsistent with the theory, allowing for the variability of the data.

722 citations
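The abstract's central claim, that a close fit by itself is weak evidence, can be illustrated with a small, purely hypothetical simulation: a two-parameter power law, fit in log-log coordinates, yields a "successful" decreasing fit for essentially any decreasing data set, with or without power-law structure. The data, grid, and fitting procedure below are illustrative assumptions, not material from the article.

```python
import math
import random

def fit_power_law(xs, ys):
    """Least-squares fit of y = a * x**b in log-log coordinates; returns (a, b)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

random.seed(1)
xs = [1, 2, 4, 8, 16]
slopes = []
for _ in range(20):
    # Arbitrary decreasing "data" with no power-law structure built in:
    ys = sorted((random.uniform(0.1, 1.0) for _ in xs), reverse=True)
    a, b = fit_power_law(xs, ys)
    slopes.append(b)
# Every random decreasing data set yields a decreasing power-law "fit"
# (b < 0); the fit alone cannot tell structured data from noise.
```

The flexibility of the functional form, not the truth of the theory, guarantees the fit; that is the abstract's argument in miniature.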


Journal Article•DOI•
TL;DR: A computational model of human memory for serial order (OSCillator-based Associative Recall [OSCAR]) is described, in which successive list items become associated with successive states of a dynamic learning-context signal.
Abstract: A computational model of human memory for serial order is described (OSCillator-based Associative Recall [OSCAR]). In the model, successive list items become associated to successive states of a dynamic learning-context signal. Retrieval involves reinstatement of the learning context, successive states of which cue successive recalls. The model provides an integrated account of both item memory and order memory and allows the hierarchical representation of temporal order information. The model accounts for a wide range of serial order memory data, including differential item and order memory, transposition gradients, item similarity effects, the effects of item lag and separation in judgments of relative and absolute recency, probed serial recall data, distinctiveness effects, grouping effects at various temporal resolutions, longer term memory for serial order, list length effects, and the effects of vocabulary size on serial recall.

677 citations
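The core mechanism the abstract describes (items bound to successive states of an oscillator-driven context signal, with recall cued by reinstating that signal) can be sketched in a few lines. This is a toy illustration, not the published OSCAR model: the oscillator frequencies, one-hot item codes, and Hebbian outer-product learning rule are all simplifying assumptions.

```python
import math

FREQS = [1.0, 0.31, 0.07]   # assumed oscillator frequencies (illustrative)

def context(t, freqs=FREQS):
    """Oscillator-bank context signal: phase read-out at list position t.
    Using (cos, sin) pairs keeps every context state at equal norm."""
    vec = []
    for w in freqs:
        vec += [math.cos(w * t), math.sin(w * t)]
    return vec

def study(items, freqs=FREQS):
    """Hebbian association of each one-hot item with its context state."""
    dim = 2 * len(freqs)
    W = [[0.0] * dim for _ in range(len(items))]
    for t, item in enumerate(items):
        c = context(t, freqs)
        for k in range(dim):
            W[item][k] += c[k]
    return W

def recall(W, n_positions, freqs=FREQS):
    """Reinstate successive context states; each state cues one recall."""
    out = []
    for t in range(n_positions):
        c = context(t, freqs)
        scores = [sum(w * x for w, x in zip(row, c)) for row in W]
        out.append(max(range(len(W)), key=scores.__getitem__))
    return out
```

Because the context states are distinct and of equal norm, reinstating the study-time context retrieves the studied order, e.g. `recall(study([2, 0, 4, 1, 3]), 5)` returns `[2, 0, 4, 1, 3]`.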


Journal Article•DOI•
TL;DR: Five theories of spoken word production that differ along the discreteness-interactivity dimension are evaluated and the role that cascading activation, feedback, seriality, and interaction domains play in accounting for a set of fundamental observations derived from patterns of speech errors is examined.
Abstract: Five theories of spoken word production that differ along the discreteness-interactivity dimension are evaluated. Specifically examined is the role that cascading activation, feedback, seriality, and interaction domains play in accounting for a set of fundamental observations derived from patterns of speech errors produced by normal and brain-damaged individuals. After reviewing the evidence from normal speech errors, case studies of 3 brain-damaged individuals with acquired naming deficits are presented. The patterns these individuals exhibit provide important constraints on theories of spoken naming. With the help of computer simulations of the 5 theories, the authors evaluate the extent to which the error patterns predicted by each theory conform with the empirical facts. The results support a theory of spoken word production that, although interactive, places important restrictions on the extent and locus of interactivity.

497 citations


Journal Article•DOI•
TL;DR: A cognitive-ecological approach to judgment biases is presented and substantiated by recent empirical evidence; alternative accounts are offered for a number of judgment biases, such as base-rate neglect, confirmation bias, illusory correlation, pseudo-contingency, Simpson's paradox, outgroup devaluation, and pragmatic-confusion effects.
Abstract: A cognitive-ecological approach to judgment biases is presented and substantiated by recent empirical evidence. Latent properties of the environment are not amenable to direct assessment but have to be inferred from empirical samples that provide the interface between cognition and the environment. The sampling process may draw on the external world or on internal memories. For systematic reasons (proximity, salience, and focus of attention), the resulting samples tend to be biased (selective, skewed, or conditional on information search strategies). Because people lack the metacognitive ability to understand and control for sampling constraints (predictor sampling, criterion sampling, selective-outcome sampling, etc.), the sampling biases carry over to subsequent judgments. Within this framework, alternative accounts are offered for a number of judgment biases, such as base-rate neglect, confirmation bias, illusory correlation, pseudo-contingency, Simpson's paradox, outgroup devaluation, and pragmatic-confusion effects.

374 citations


Journal Article•DOI•
TL;DR: This work demonstrates the acquisition of implicit knowledge of tonal structure through neural self-organization resulting from mere exposure to simultaneous and sequential combinations of tones.
Abstract: Tonal music is a highly structured system that is ubiquitous in our cultural environment. We demonstrate the acquisition of implicit knowledge of tonal structure through neural self-organization resulting from mere exposure to simultaneous and sequential combinations of tones. In the process of learning, a network with fundamental neural constraints comes to internalize the essential correlational structure of tonal music. After learning, the network was run through a range of experiments from the literature. The model provides a parsimonious account of a variety of empirical findings dealing with the processing of tone, chord, and key relationships, including relatedness judgments, memory judgments, and expectancies. It also illustrates the plausibility of activation being a unifying mechanism underlying a range of cognitive tasks.

366 citations


Journal Article•DOI•
TL;DR: The authors provide empirical and computational support for a single-mechanism distributed-network account of semantic priming, explaining the results in terms of the properties of distributed network models and backing the account with an explicit computational simulation.
Abstract: Existing accounts of single-word semantic priming phenomena incorporate multiple mechanisms, such as spreading activation, expectancy-based processes, and postlexical semantic matching. The authors provide empirical and computational support for a single-mechanism distributed network account. Previous studies have found greater semantic priming for low- than for high-frequency target words as well as inhibition following unrelated primes only at long stimulus-onset asynchronies (SOAs). A series of experiments examined the modulation of these effects by individual differences in age or perceptual ability. Third-grade, 6th-grade, and college students performed a lexical-decision task on high- and low-frequency target words preceded by related, unrelated, and nonword primes. Greater priming for low-frequency targets was exhibited only by participants with high perceptual ability. Moreover, unlike the college students, the children showed no inhibition even at the long SOA. The authors provide an account of these results in terms of the properties of distributed network models and support this account with an explicit computational simulation.

346 citations


Journal Article•DOI•
TL;DR: This paper showed that the hard-easy effect has been interpreted with insufficient attention to the scale-end effects, the linear dependency, and the regression effects in data and that the continued adherence to the idea of a "cognitive overconfidence bias" is mediated by selective attention to particular data sets.
Abstract: Two robust phenomena in research on confidence in one's general knowledge are the overconfidence phenomenon and the hard-easy effect. In this article, the authors propose that the hard-easy effect has been interpreted with insufficient attention to the scale-end effects, the linear dependency, and the regression effects in data and that the continued adherence to the idea of a "cognitive overconfidence bias" is mediated by selective attention to particular data sets. A quantitative review of studies with 2-alternative general knowledge items demonstrates that, contrary to widespread belief, there is (a) very little support for a cognitive-processing bias in these data; (b) a difference between representative and selected item samples that is not reducible to the difference in difficulty; and (c) near elimination of the hard-easy effect when there is control for scale-end effects and linear dependency.

Journal Article•DOI•
TL;DR: In this article, a multinomial model is used to disentangle the respective contributions of reasoning processes and response bias in conclusion-acceptance data that exhibit belief bias, and a new theory of belief bias is proposed that assumes that most reasoners construct only one mental model representing the premises as well as the conclusion or, in the case of an unbelievable conclusion, its logical negation.
Abstract: A multinomial model is used to disentangle the respective contributions of reasoning processes and response bias in conclusion-acceptance data that exhibit belief bias. A model-based meta-analysis of 22 studies reveals that such data are structurally too sparse to allow discrimination of different accounts of belief bias. Four experiments are conducted to obtain richer data, allowing deeper tests through the use of the multinomial model. None of the current accounts of belief bias is consistent with the complex pattern of results. A new theory of belief bias is proposed that assumes that most reasoners construct only one mental model representing the premises as well as the conclusion or, in the case of an unbelievable conclusion, its logical negation. New predictions derived from the theory are confirmed in 4 additional studies.

Journal Article•DOI•
Koen Lamberts1•
TL;DR: In this article, a process model of perceptual categorization is presented, in which it is assumed that the earliest stages of categorization involve gradual accumulation of information about object features, and the model provides a joint account of categorisation choice proportions and response times by assuming that the probability that the information accumulation process stops at a given time after stimulus presentation is a function of the stimulus information that has been acquired.
Abstract: A process model of perceptual categorization is presented, in which it is assumed that the earliest stages of categorization involve gradual accumulation of information about object features. The model provides a joint account of categorization choice proportions and response times by assuming that the probability that the information-accumulation process stops at a given time after stimulus presentation is a function of the stimulus information that has been acquired. The model provides an accurate account of categorization response times for integral-dimension stimuli and for separable-dimension stimuli, and it also explains effects of response deadlines and exemplar frequency.

Journal Article•DOI•
TL;DR: A theoretical analysis of the sampling distribution of phi indicated that, for strong correlations, sample sizes of 7 +/- 2 are most likely to produce a sample correlation that is more extreme than that of the population.
Abstract: Capacity limitations of working memory force people to rely on samples consisting of 7 +/- 2 items. The implications of these limitations for the early detection of correlations between binary variables were explored in a theoretical analysis of the sampling distribution of phi, the contingency coefficient. The analysis indicated that, for strong correlations (phi > .50), sample sizes of 7 +/- 2 are most likely to produce a sample correlation that is more extreme than that of the population. Another analysis then revealed that there is a similar cutoff point at which useful correlations (i.e., for which each variable is a valid predictor of the other) first outnumber correlations for which this is not the case. Capacity limitations are thus shown to maximize the chances for the early detection of strong and useful relations.
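The phi coefficient and the small-sample scheme the abstract analyzes can be sketched directly; population parameters, the sampling scheme (both margins fixed at .5), and trial counts below are illustrative assumptions rather than the article's analysis.

```python
import math
import random

def phi_coef(n11, n10, n01, n00):
    """Phi (contingency) coefficient for a 2x2 table of counts."""
    r1, r0 = n11 + n10, n01 + n00
    c1, c0 = n11 + n01, n10 + n00
    denom = math.sqrt(r1 * r0 * c1 * c0)
    if denom == 0:
        return None                     # an empty margin; phi is undefined
    return (n11 * n00 - n10 * n01) / denom

def sample_phi(pop_phi, n, rng):
    """Phi of n draws from a binary population with both margins at .5
    and population correlation pop_phi."""
    a = 0.25 + pop_phi / 4              # P(1,1) = P(0,0)
    b = 0.25 - pop_phi / 4              # P(1,0) = P(0,1)
    counts = [0, 0, 0, 0]               # n11, n10, n01, n00
    for _ in range(n):
        u = rng.random()
        if u < a:
            counts[0] += 1
        elif u < a + b:
            counts[1] += 1
        elif u < a + 2 * b:
            counts[2] += 1
        else:
            counts[3] += 1
    return phi_coef(*counts)

rng = random.Random(0)
pop = 0.6                               # a "strong" correlation, phi > .5
phis = [sample_phi(pop, 7, rng) for _ in range(2000)]
phis = [s for s in phis if s is not None]
frac_more_extreme = sum(s > pop for s in phis) / len(phis)
```

Varying `n` in `sample_phi` is the natural way to probe the abstract's claim that samples of 7 +/- 2 are the ones most likely to produce a sample correlation more extreme than the population's.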

Journal Article•DOI•
TL;DR: A cyclical power model derived from Stevens' power law is proposed that predicts observed asymmetries in bias patterns when the set of reference points varies across trials and two experiments confirming the model's assumptions are described.
Abstract: When participants make part-whole proportion judgments, systematic bias is commonly observed. In some studies, small proportions are overestimated and large proportions underestimated; in other studies, the reverse pattern occurs. Sometimes the bias pattern repeats cyclically with a higher frequency (e.g., overestimation of proportions less than .25 and between .5 and .75; underestimation otherwise). To account for the various bias patterns, a cyclical power model was derived from Stevens' power law. The model proposes that the amplitude of the bias pattern is determined by the Stevens exponent, beta (which depends on the stimulus continuum being judged), and that the frequency of the pattern is determined by a choice of intermediate reference points in the stimulus. When beta < 1, the over-then-under pattern is predicted; when beta > 1, the under-then-over pattern is predicted. Two experiments confirming the model's assumptions are described. A mixed-cycle version of the model is also proposed that predicts observed asymmetries in bias patterns when the set of reference points varies across trials.
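A minimal sketch of a one-cycle power model of proportion judgment, plus a cyclical variant with equally spaced reference points, helps make the bias patterns concrete. This is an illustrative reconstruction under simple assumptions (the one-cycle form below is Spence's power model of proportion judgment), not the article's exact formulation; in this sketch, an exponent below 1 overestimates small proportions and underestimates large ones, and an exponent above 1 does the reverse.

```python
def judged_proportion(p, beta):
    """One-cycle power model: judged part-whole proportion for true p."""
    return p ** beta / (p ** beta + (1 - p) ** beta)

def cyclical_judged(p, beta, n_cycles):
    """Cyclical variant: the same bias pattern repeats within each of
    n_cycles equal sub-ranges set off by intermediate reference points."""
    width = 1.0 / n_cycles
    k = min(int(p / width), n_cycles - 1)   # which sub-range p falls in
    local = (p - k * width) / width         # position within that sub-range
    return (k + judged_proportion(local, beta)) * width
```

With `beta = 0.5` and `n_cycles = 2`, the sketch reproduces the higher-frequency pattern the abstract mentions: proportions just above .5 are overestimated (e.g., `cyclical_judged(0.6, 0.5, 2)` exceeds .6), while proportions just below .5 are underestimated.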

Journal Article•DOI•
TL;DR: The authors show that split architectures facilitate the uptake of all the letter-position information necessary for efficient word recognition and that this information may be less specific than is normally assumed.
Abstract: Some of the implications of a model of visual word recognition in which processing is conditioned by the anatomical splitting of the visual field between the two hemispheres of the brain are explored. The authors investigate the optimal processing of visually presented words within such an architecture, and, for a realistically sized lexicon of English, characterize a computationally optimal fixation point in reading. They demonstrate that this approach motivates a range of behavior observed in reading isolated words and text, including the optimal viewing position and its relationship with the preferred viewing location, the failure to fixate smaller words, asymmetries in hemisphere-specific processing, and the priority given to the exterior letters of words. The authors also show that split architectures facilitate the uptake of all the letter-position information necessary for efficient word recognition and that this information may be less specific than is normally assumed. A split model of word recognition captures a range of behavior in reading that is greater than that covered by existing models of visual word recognition.

Journal Article•DOI•
TL;DR: In this paper, the authors show how mutual influences among elements of self-relevant information give rise to dynamism, differentiation, and global evaluation in self-concept, and show that external information of a random nature may enhance the emergence of a stable self-structure in an initially disordered system.
Abstract: Using cellular automata, the authors show how mutual influences among elements of self-relevant information give rise to dynamism, differentiation, and global evaluation in self-concept. The model assumes a press for integration that promotes internally generated dynamics and enables the self-structure to operate as a self-organizing dynamical system. When this press is set at high values, the self can resist inconsistent information and reestablish equilibrium after being perturbed by such information. A weak press for integration, on the other hand, impairs self-organization tendencies, making the system vulnerable to external information. Paradoxically, external information of a random nature may enhance the emergence of a stable self-structure in an initially disordered system. The simulation results suggest that important global properties of the self reflect the operation of integration processes that are generic in complex systems.

Journal Article•DOI•
TL;DR: A consideration of prior work concerned with investigating the conditions under which participants are and are not inclined to adjust the decision criterion suggests that the criterion-shift account of false memory is unlikely to be correct.
Abstract: M. B. Miller and G. L. Wolford (1999) argued that the high false-alarm rate associated with critical lures in the Roediger-McDermott (H. L. Roediger & K. B. McDermott, 1995) paradigm results from a criterion shift and therefore does not reflect false memory. This conclusion, which is based on new data reported by Miller and Wolford, overlooks the fact that Roediger and McDermott's false-memory account is as compatible with the new findings as the criterion-shift account is. Furthermore, a consideration of prior work concerned with investigating the conditions under which participants are and are not inclined to adjust the decision criterion suggests that the criterion-shift account of false memory is unlikely to be correct.

Journal Article•DOI•
TL;DR: Visual perception of dynamic properties is examined, contrasting cue-heuristic accounts with direct-perceptual competence.
Abstract: Visual perception of dynamic properties: Cue heuristics versus direct-perceptual competence

Journal Article•DOI•
TL;DR: The authors empirically evaluate P. W. Cheng's (1997) power PC theory of causal induction and reanalyze some published data taken to support the theory and show instead that the data are at variance with it.
Abstract: The authors empirically evaluate P. W. Cheng's (1997) power PC theory of causal induction. They reanalyze some published data taken to support the theory and show instead that the data are at variance with it. Then, they report 6 experiments in which participants evaluated the causal relationship between a fictitious chemical and DNA mutations. The power PC theory assumes that participants' estimates are based on the causal power p of a potential cause, where p is the contingency between the cause and the effect normalized by the base rate of the effect. Three of the experiments used a procedure in which causal information was presented trial by trial. For these experiments, the power PC theory was contrasted with the predictions of the probabilistic contrast model and the Rescorla-Wagner theory. For the remaining 3 experiments, a summary presentation format was employed to which only the probabilistic contrast model and the power PC theory are applicable. The power PC theory was unequivocally contradicted by the results obtained in these experiments, whereas the other 2 theories proved to be satisfactory.
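The causal power construct the abstract describes has a compact form: the contingency deltaP = P(e|c) - P(e|not-c), normalized by the base rate of the effect, p = deltaP / (1 - P(e|not-c)). A minimal sketch (the example probabilities are illustrative assumptions):

```python
def delta_p(p_e_c, p_e_notc):
    """Probabilistic contrast: deltaP = P(e|c) - P(e|not-c)."""
    return p_e_c - p_e_notc

def causal_power(p_e_c, p_e_notc):
    """Generative causal power (Cheng, 1997): deltaP normalized by the
    base rate of the effect, p = deltaP / (1 - P(e|not-c))."""
    return delta_p(p_e_c, p_e_notc) / (1 - p_e_notc)

# Identical contingency, different base rates, different causal power:
assert abs(delta_p(0.3, 0.0) - delta_p(0.9, 0.6)) < 1e-9   # deltaP = .3 in both
low_base = causal_power(0.3, 0.0)    # .30 when the effect never occurs alone
high_base = causal_power(0.9, 0.6)   # .75 (up to float rounding)
```

The divergence between `delta_p` and `causal_power` for the same contingency is exactly what lets the experiments discriminate the power PC theory from the probabilistic contrast model.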

Journal Article•DOI•
TL;DR: In this paper, Vicente and Wang's critique of the generalizability of the LTWM framework is rejected, and the process-based framework is shown to be superior to their product theory because it can explain interactions of the expertise effect in "contrived" recall under several testing conditions differing in presentation rate, instructions, and memory procedures.
Abstract: K. A. Ericsson and W. Kintsch's (1995) theoretical framework of long-term working memory (LTWM) accounts for how experts acquire encoding and retrieval mechanisms to adapt to real-time demands of working memory during representative interactions with their natural environments. The transfer of the same LTWM mechanisms is shown to account for the expertise effect in unrepresentative "contrived" memory tests. Therefore, K. J. Vicente and J. H. Wang's (1998) critique of the generalizability of the LTWM framework is rejected. Their proposed refutation of LTWM accounts is found to be based on misrepresented facts. The process-based framework of LTWM is shown to be superior to their product theory because it can explain interactions of the expertise effect in "contrived" recall under several testing conditions differing in presentation rate, instructions, and memory procedures.

Journal Article•DOI•
TL;DR: The ARTWORD neural model quantitatively simulates context-sensitive speech data: sequentially stored phonemic items in working memory provide bottom-up input to unitized list chunks that group together sequences of items of variable length.
Abstract: How do listeners integrate temporally distributed phonemic information into coherent representations of syllables and words? For example, increasing the silence interval between the words "gray chip" may result in the percept "great chip," whereas increasing the duration of fricative noise in "chip" may alter the percept to "great ship" (B. H. Repp, A. M. Liberman, T. Eccardt, & D. Pesetsky, 1978). The ARTWORD neural model quantitatively simulates such context-sensitive speech data. In ARTWORD, sequentially stored phonemic items in working memory provide bottom-up input to unitized list chunks that group together sequences of items of variable length. The list chunks compete with each other. The winning groupings feed back to establish a resonance which temporarily boosts the activation levels of selected items and chunks, thereby creating an emergent conscious percept whose properties match such data.

Journal Article•DOI•
TL;DR: The performance of fallible counters is investigated in the context of pacemaker-counter models of interval timing, finding predictions consistent with performance in temporal discrimination and production and with channel capacities for identification of unidimensional stimuli.
Abstract: The performance of fallible counters is investigated in the context of pacemaker-counter models of interval timing. Failure to reliably transmit signals from one stage of a counter to the next generates periodicity in mean and variance of counts registered, with means power functions of input and standard deviations approximately proportional to the means (Weber's law). The transition diagrams and matrices of the counter are self-similar: Their eigenvalues have a fractal form and closely approximate Julia sets. The distributions of counts registered and of hitting times approximate Weibull densities, which provide the foundation for a signal-detection model of discrimination. Different schemes for weighting the values of each stage may be established by conditioning. As higher order stages of a cascade come on-line the veridicality of lower order stages degrades, leading to scale-invariance in error. The capacity of a counter is more likely to be limited by fallible transmission between stages than by a paucity of stages. Probabilities of successful transmission between stages of a binary counter around 0.98 yield predictions consistent with performance in temporal discrimination and production and with channel capacities for identification of unidimensional stimuli.
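The fallible-counter idea in the abstract is easy to simulate: a binary pacemaker counter in which each carry between adjacent stages is transmitted with some probability and otherwise dropped. The sketch below is a toy under assumed parameters (16 stages, independent per-carry failures), not the article's full model; the abstract's figure of roughly 0.98 per-stage transmission is used for illustration.

```python
import random

def fallible_count(n_pulses, q, rng, n_stages=16):
    """Binary pacemaker counter: each carry between adjacent stages is
    transmitted with probability q and dropped otherwise."""
    bits = [0] * n_stages
    for _ in range(n_pulses):
        i = 0
        while i < n_stages:
            if bits[i] == 0:
                bits[i] = 1            # pulse (or carry) registered here
                break
            bits[i] = 0                # stage rolls over; attempt a carry
            if rng.random() > q:
                break                  # carry lost in transmission
            i += 1
    return sum(bit << i for i, bit in enumerate(bits))

# A perfectly reliable counter registers the input exactly; a fallible
# one (q = .98) systematically loses counts at stage transitions.
exact = fallible_count(100, 1.0, random.Random(0))   # registers exactly 100
rng = random.Random(7)
mean_registered = sum(fallible_count(100, 0.98, rng)
                      for _ in range(500)) / 500
```

Running this for a range of `n_pulses` values and plotting mean and variance of the registered count is the natural way to see the periodic structure the abstract describes.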

Journal Article•DOI•
TL;DR: The tensor product model (TPM), a connectionist model of memory and learning, is used to describe the process of group impression formation and change, emphasizing the structured and contextualized nature of group impressions and the dynamic evolution of group impressions over time.
Abstract: Group impressions are dynamic configurations. The tensor product model (TPM), a connectionist model of memory and learning, is used to describe the process of group impression formation and change, emphasizing the structured and contextualized nature of group impressions and the dynamic evolution of group impressions over time. TPM is first shown to be consistent with algebraic models of social judgment (the weighted averaging model; N. Anderson, 1981) and exemplar-based social category learning (the context model; E. R. Smith & M. A. Zarate, 1992), providing a theoretical reduction of the algebraic models to the present connectionist framework. TPM is then shown to describe a common process that underlies both formation and change of group impressions despite the often-made assumption that they constitute different psychological processes. In particular, various time-dependent properties of both group impression formation (e.g., time variability, response dependency, and order effects in impression judgments) and change (e.g., stereotype change and group accentuation) are explained, demonstrating a hidden unity beneath the diverse array of empirical findings. Implications of the model for conceptualizing stereotype formation and change are discussed.
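The tensor-product mechanism behind TPM can be sketched minimally: a memory matrix accumulates outer products of a group vector and a trait vector, and cueing the matrix with the group vector retrieves a recency-weighted blend of the traits, which is how a connectionist trace can mimic a weighted averaging model. The vectors, dimensionality, and decay parameter below are illustrative assumptions, not the article's parameterization.

```python
def learn(M, g, t, decay=0.5):
    """Tensor-product update: decay the old trace, add outer(g, t)."""
    return [[decay * M[i][j] + g[i] * t[j] for j in range(len(t))]
            for i in range(len(g))]

def cue(M, g):
    """Retrieve the trait pattern bound to group vector g: t_j = sum_i g_i M_ij."""
    return [sum(g[i] * M[i][j] for i in range(len(g)))
            for j in range(len(M[0]))]

group = [1.0, 0.0]                       # unit vector standing for one group
M = [[0.0, 0.0], [0.0, 0.0]]
M = learn(M, group, [1.0, 0.0])          # first trait observation
M = learn(M, group, [0.0, 1.0])          # second observation; first decays
impression = cue(M, group)               # recency-weighted blend: [0.5, 1.0]
```

Because retrieval is linear in the stored outer products, each new observation shifts the cued impression continuously, which is the sense in which formation and change are one process rather than two.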

Journal Article•DOI•
TL;DR: A cognitive process model is developed that predicts the 3 major symbolic comparison response time effects (distance, end, and semantic congruity) found in the results of the linear syllogistic reasoning task.
Abstract: A cognitive process model is developed that predicts the 3 major symbolic comparison response time effects (distance, end, and semantic congruity) found in the results of the linear syllogistic reasoning task. The model includes a simple connectionist learning component and dual evidence accumulation decision-making components. It assumes that responses can be based either on information concerning the positional difference between the presented stimulus items or on information concerning the endpoint status of each of these items. The model provides an excellent quantitative account of the mean correct response times obtained from 16 participants who performed paired comparisons of 6 ordered symbolic stimuli (3-letter names).

Journal Article•DOI•
TL;DR: It is argued in the present article that decomposing over- and underconfidence into true and artifactual components is inappropriate because it gives primacy to ambiguously defined model constructions (true judgments) over observed data.
Abstract: I. Erev, T. S. Wallsten, and D. V. Budescu (1994) showed that the same probability judgment data can reveal both apparent overconfidence and underconfidence, depending on how the data are analyzed. To explain this seeming paradox, I. Erev et al. proposed a general model of judgment in which overt responses are related to underlying "true judgments" that are perturbed by error. A central conclusion of their work is that observed over- and underconfidence can be split into two components: (a) "true" over- and underconfidence and (b) "artifactual" over- and underconfidence due to error in judgment. It is argued in the present article that decomposing over- and underconfidence into true and artifactual components is inappropriate. The mistake stems from giving primacy to ambiguously defined model constructions (true judgments) over observed data.
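The seeming paradox that opens the abstract, the same data revealing both over- and underconfidence depending on the analysis, can be reproduced with a toy version of the error model: a latent "true judgment" perturbed by noise. All parameters and distributions below are illustrative assumptions, not Erev et al.'s fitted model.

```python
import random

rng = random.Random(42)
N = 20000
trials = []                              # (true judgment, overt response, correct?)
for _ in range(N):
    p = rng.uniform(0.5, 1.0)            # underlying "true judgment"
    r = min(1.0, max(0.5, p + rng.uniform(-0.1, 0.1)))  # response = truth + error
    correct = rng.random() < p           # outcome matches the true probability
    trials.append((p, r, correct))

# Conditioning on extreme RESPONSES: accuracy falls short of stated
# confidence (apparent overconfidence).
hi_r = [t for t in trials if t[1] >= 0.95]
over = (sum(t[1] for t in hi_r) - sum(t[2] for t in hi_r)) / len(hi_r)

# Conditioning on extreme TRUE judgments: responses fall short of the
# truth (apparent underconfidence).
hi_p = [t for t in trials if t[0] >= 0.95]
under = (sum(t[0] for t in hi_p) - sum(t[1] for t in hi_p)) / len(hi_p)
```

Both `over` and `under` come out positive on the same simulated data set, which is the regression effect the debate turns on; the article's objection is to treating the latent `p` as more real than the observed `r`.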

Journal Article•DOI•
Gregory Francis•
TL;DR: 3 quantitative methods of accounting for the U-shaped masking effect are described and 4 previously published mathematical models of masking are analyzed; new mathematical analyses indicate that the U-shaped curve is a robust characteristic of a large class of neurally plausible systems.
Abstract: In metacontrast masking, the effect of a visual mask stimulus on the perceptual strength of a target stimulus varies with the stimulus-onset asynchrony (SOA) between them. As SOA increases, the target percept first becomes weaker, bottoms out at an intermediate SOA, and then increases for still larger SOAs. As a result, a plot of target percept strength against SOA produces a U-shaped masking curve. Theories have proposed special mechanisms to account for this curve, but new mathematical analyses indicate that it is a robust characteristic of a large class of neurally plausible systems. The author describes 3 quantitative methods of accounting for the U-shaped masking effect and analyzes 4 previously published mathematical models of masking. The models produce the masking curve through mask blocking, whereby a strong internal representation of the target blocks the mask's effects.
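A minimal phenomenological sketch (not any of the 4 analyzed models; the exponential trace and all parameters are invented) shows how mask blocking yields a U-shaped curve: a mask arriving while the target's internal representation is still strong is blocked, and a mask arriving very late finds little of the target trace left to suppress, so masking peaks at intermediate SOAs.

```python
import math

def percept_strength(soa_ms, tau=60.0, mask_weight=0.05):
    """Toy metacontrast account.

    The target leaves a decaying trace a(t) = exp(-t/tau). A mask at
    `soa_ms` inhibits only the not-yet-accumulated part of the trace,
    and its effect is attenuated in proportion to the target's
    remaining strength at mask onset (mask blocking)."""
    trace_at_mask = math.exp(-soa_ms / tau)
    remaining = tau * trace_at_mask                    # trace left after SOA
    inhibition = mask_weight * (1.0 - trace_at_mask) * remaining
    total = tau                                        # integral of the full trace
    return total - inhibition

curve = {soa: percept_strength(soa) for soa in (0, 40, 300)}
```

Target percept strength dips at the intermediate SOA (40 ms here) and recovers at both short and long SOAs — the U-shaped masking function.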

Journal Article•DOI•
TL;DR: Experiments on hyperacuities for detecting relative motion and binocular disparity among separated image features showed that spatial positions are visually specified by the surrounding optical pattern rather than by retinal coordinates, minimally affected by random image perturbations produced by 3-D object motions.
Abstract: Vision is based on spatial correspondences between physically different structures--in environment, retina, brain, and perception. An examination of the correspondence between environmental surfaces and their retinal images showed that it consists of 2-dimensional 2nd-order differential structure (effectively 4th-order) associated with local surface shape, suggesting that this might be a primitive form of spatial information. Next, experiments on hyperacuities for detecting relative motion and binocular disparity among separated image features showed that spatial positions are visually specified by the surrounding optical pattern rather than by retinal coordinates, and are minimally affected by random image perturbations produced by 3-D object motions. Retinal image space, therefore, involves 4th-order differential structure. This primitive spatial structure constitutes information about local surface shape.

Journal Article•DOI•
TL;DR: It is demonstrated that a wide-ranging, detailed, and parsimonious account of the distribution of handedness is obtained when left-handedness is assumed to be associated recessively, and with low penetrance, with genetic variation located on the X chromosome.
Abstract: A unified, quantitative model for sex, twin, parent, and grandparent influences on handedness is presented. Recent research modeling the evolutionary development of genetic mechanisms for the transmission of handedness on the basis of genotype fitness has appeared to lead to the conclusion that a handedness gene cannot be located on the sex chromosomes. It is shown in this article, however, that this conclusion is not of general validity. The sex-chromosomes hypothesis is developed further, and it is demonstrated that a wide-ranging, detailed, and parsimonious account of the distribution of handedness is obtained when left-handedness is assumed to be associated recessively, and with low penetrance, with genetic variation located on the X chromosome.
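The qualitative prediction of a recessive, low-penetrance X-linked variant — more left-handed males than females, since males carry only one X — can be checked with a short Monte Carlo sketch. The allele frequency and penetrance values below are illustrative assumptions, not the paper's fitted parameters.

```python
import random

def simulate_handedness(n, allele_freq=0.2, penetrance=0.3, seed=11):
    """Toy X-linked recessive model of handedness.

    Each X chromosome carries the variant independently with probability
    `allele_freq`; a left-handedness genotype is expressed with
    probability `penetrance`. Males (one X) need a single copy;
    females (two X) need copies on both chromosomes."""
    rng = random.Random(seed)
    male_left = female_left = 0
    for _ in range(n):
        # male: variant on the single maternal X, then penetrance
        if rng.random() < allele_freq and rng.random() < penetrance:
            male_left += 1
        # female: variant on both X chromosomes, then penetrance
        if (rng.random() < allele_freq and rng.random() < allele_freq
                and rng.random() < penetrance):
            female_left += 1
    return male_left / n, female_left / n

male_rate, female_rate = simulate_handedness(100_000)
```

With these illustrative values the male rate is roughly the female rate divided by the allele frequency, reproducing the well-documented male excess in left-handedness that the X-chromosome hypothesis is built to explain.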

Journal Article•DOI•
TL;DR: Vicente and Wang as discussed by the authors proposed a "constraint attunement hypothesis" to explain the large effects of domain expertise on memory recall observed in a number of task domains.
Abstract: K. J. Vicente and J. H. Wang (1998) proposed a "constraint attunement hypothesis" to explain the large effects of domain expertise on memory recall observed in a number of task domains. They claimed to have found serious defects in alternative explanations of these effects, which their theory overcomes. Reexamination of the evidence shows that their theory is not novel but has been anticipated by those they criticized and that other current published theories of the phenomena do not have the defects that Vicente and Wang attributed to them. Vicente and Wang's views reflect underlying differences about (a) emphasis on performance versus process in psychology and (b) how theories and empirical knowledge interact and progress with the development of a science.