
Showing papers in "Psychological Review in 2005"


Journal ArticleDOI
TL;DR: The career of metaphor hypothesis offers a unified theoretical framework that can resolve the debate between comparison and categorization models of metaphor and suggests that whether metaphors are processed directly or indirectly, and whether they operate at the level of individual concepts or entire conceptual domains, will depend both on their degree of conventionality and on their linguistic form.
Abstract: A central question in metaphor research is how metaphors establish mappings between concepts from different domains. The authors propose an evolutionary path based on structure-mapping theory. This hypothesis--the career of metaphor--postulates a shift in mode of mapping from comparison to categorization as metaphors are conventionalized. Moreover, as demonstrated by 3 experiments, this processing shift is reflected in the very language that people use to make figurative assertions. The career of metaphor hypothesis offers a unified theoretical framework that can resolve the debate between comparison and categorization models of metaphor. This account further suggests that whether metaphors are processed directly or indirectly, and whether they operate at the level of individual concepts or entire conceptual domains, will depend both on their degree of conventionality and on their linguistic form.

944 citations


Journal ArticleDOI
TL;DR: An advanced version of SWIFT is presented that integrates properties of the oculomotor system and effects of word recognition to explain many of the experimental phenomena faced in reading research, along with an analysis of the transition from parallel to serial processing.
Abstract: Mathematical models have become an important tool for understanding the control of eye movements during reading. Main goals of the development of the SWIFT model (R. Engbert, A. Longtin, & R. Kliegl, 2002) were to investigate the possibility of spatially distributed processing and to implement a general mechanism for all types of eye movements observed in reading experiments. The authors present an advanced version of SWIFT that integrates properties of the oculomotor system and effects of word recognition to explain many of the experimental phenomena faced in reading research. They propose new procedures for the estimation of model parameters and for the test of the model's performance. They also present a mathematical analysis of the dynamics of the SWIFT model. Finally, within this framework, they present an analysis of the transition from parallel to serial processing.

920 citations


Journal ArticleDOI
TL;DR: The authors argue that human desire involves conscious cognition that has strong affective connotation and is potentially involved in the determination of appetitive behavior rather than being epiphenomenal to it; the resulting theory provides a coherent account of existing data and suggests new directions for research and treatment.
Abstract: The authors argue that human desire involves conscious cognition that has strong affective connotation and is potentially involved in the determination of appetitive behavior rather than being epiphenomenal to it. Intrusive thoughts about appetitive targets are triggered automatically by external or physiological cues and by cognitive associates. When intrusions elicit significant pleasure or relief, cognitive elaboration usually ensues. Elaboration competes with concurrent cognitive tasks through retrieval of target-related information and its retention in working memory. Sensory images are especially important products of intrusion and elaboration because they simulate the sensory and emotional qualities of target acquisition. Desire images are momentarily rewarding but amplify awareness of somatic and emotional deficits. Effects of desires on behavior are moderated by competing incentives, target availability, and skills. The theory provides a coherent account of existing data and suggests new directions for research and treatment.

753 citations


Journal ArticleDOI
TL;DR: Adopting a life stress perspective, the authors introduce 3 major themes that resolve the inconsistencies in the current literature and extrapolate these themes to develop a preliminary framework for evaluating competing explanatory models and to guide research on life stress and the recurrence of depression.
Abstract: Major depression is frequently characterized by recurrent episodes over the life course. First lifetime episodes of depression, however, are typically more strongly associated with major life stress than are successive recurrences. A key theoretical issue involves how the role of major life stress changes from an initial episode over subsequent recurrences. The primary conceptual framework for research on life stress and recurrence of depression is the "kindling" hypothesis (R. M. Post, 1992). Despite the strengths of the kindling hypothesis, a review of the research literature reveals inconsistencies and confusion about life stress and its implications for the recurrence of depression. Adopting a life stress perspective, the authors introduce 3 major themes that resolve the inconsistencies in the current literature. They integrate these themes and extrapolate the ideas with available data to develop a preliminary framework for evaluating competing explanatory models and to guide research on life stress and the recurrence of depression.

631 citations


Journal ArticleDOI
TL;DR: NTVA provides a mathematical framework to unify the 2 fields of research--formulas bridging cognition and neurophysiology.
Abstract: A neural theory of visual attention (NTVA) is presented. NTVA is a neural interpretation of C. Bundesen's (1990) theory of visual attention (TVA). In NTVA, visual processing capacity is distributed across stimuli by dynamic remapping of receptive fields of cortical cells such that more processing resources (cells) are devoted to behaviorally important objects than to less important ones. By use of the same basic equations used in TVA, NTVA accounts for a wide range of known attentional effects in human performance (reaction times and error rates) and a wide range of effects observed in firing rates of single cells in the primate visual system. NTVA provides a mathematical framework to unify the 2 fields of research--formulas bridging cognition and neurophysiology.
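
For readers unfamiliar with TVA, its central rate equation can be written as follows (this is the standard textbook form of Bundesen's 1990 equation, not necessarily the article's exact notation):

$$ v(x, i) = \eta(x, i) \, \beta_i \, \frac{w_x}{\sum_{z \in S} w_z} $$

Here v(x, i) is the rate at which the categorization "object x belongs to category i" is processed, \eta(x, i) is the sensory evidence, \beta_i is a perceptual bias, and the attentional weights w_x divide limited capacity among the objects in the visual field S. NTVA's move is to read the same equation at the level of numbers of cortical cells and their firing rates.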

543 citations


Journal ArticleDOI
TL;DR: The authors conclude that many--but not all--aspects of attention capture apply to inattentional blindness but that these 2 classes of phenomena remain importantly distinct.
Abstract: This article reports a theoretical and experimental attempt to relate and contrast 2 traditionally separate research programs: inattentional blindness and attention capture. Inattentional blindness refers to failures to notice unexpected objects and events when attention is otherwise engaged. Attention capture research has traditionally used implicit indices (e.g., response times) to investigate automatic shifts of attention. Because attention capture usually measures performance whereas inattentional blindness measures awareness, the 2 fields have existed side by side with no shared theoretical framework. Here, the authors propose a theoretical unification, adapting several important effects from the attention capture literature to the context of sustained inattentional blindness. Although some stimulus properties can influence noticing of unexpected objects, the most influential factor affecting noticing is a person’s own attentional goals. The authors conclude that many—but not all—aspects of attention capture apply to inattentional blindness but that these 2 classes of phenomena remain importantly distinct.

514 citations


Journal ArticleDOI
TL;DR: This model of reinforcement learning among cognitive strategies (RELACS) captures the 3 deviations, the learning curves, and the effect of information on uncertainty avoidance and outperforms other models in fitting the data and in predicting behavior in other experiments.
Abstract: Analysis of binary choice behavior in iterated tasks with immediate feedback reveals robust deviations from maximization that can be described as indications of 3 effects: (a) a payoff variability effect, in which high payoff variability seems to move choice behavior toward random choice; (b) underweighting of rare events, in which alternatives that yield the best payoffs most of the time are attractive even when they are associated with a lower expected return; and (c) loss aversion, in which alternatives that minimize the probability of losses can be more attractive than those that maximize expected payoffs. The results are closer to probability matching than to maximization. Best approximation is provided with a model of reinforcement learning among cognitive strategies (RELACS). This model captures the 3 deviations, the learning curves, and the effect of information on uncertainty avoidance. It outperforms other models in fitting the data and in predicting behavior in other experiments.
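
A minimal illustration of one of the three effects, underweighting of rare events, in the small-sample spirit of experience-based choice (this toy is not RELACS itself; the payoff scheme and sample size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(8)

def p_choose_risky(n_draws=5, n_agents=10_000):
    """Agents recall a small sample of past payoffs per option and pick the
    option with the higher sample mean (toy sketch, not the RELACS model)."""
    prefer_risky = 0
    for _ in range(n_agents):
        safe = np.full(n_draws, 3.0)                          # 3 points for sure
        risky = np.where(rng.random(n_draws) < 0.1, 32.0, 0)  # EV = 3.2 > 3
        prefer_risky += risky.mean() > safe.mean()
    return prefer_risky / n_agents

print("P(choose risky):", p_choose_risky())
# Roughly 0.41: most small samples contain no rare win, so the rare event is
# underweighted and the lower-EV safe option tends to be preferred.
```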

446 citations


Journal ArticleDOI
TL;DR: The authors describe a formal model that integrates 3 developmental processes (stochastic-contextual processes, person-environment transactions, and developmental constancies) and show that it makes novel predictions about the way in which test-retest correlations are structured across a wide range of ages and test-retest intervals.
Abstract: In contemporary psychology there is debate over whether individual differences in psychological constructs are stable over extended periods of time. The authors argue that it is impossible to resolve such debates unless researchers focus on patterns of stability and the developmental mechanisms that may give rise to them. To facilitate this shift in emphasis, they describe a formal model that integrates 3 developmental processes: stochastic-contextual processes, person-environment transactions, and developmental constancies. The theoretical and mathematical analyses indicate that this model makes novel predictions about the way in which test-retest correlations are structured across a wide range of ages and test-retest intervals. The authors illustrate the utility of the model by comparing its predictions against meta-analytic data on Neuroticism. The discussion emphasizes the value of focusing on patterns of continuity, not only as phenomena to be explained but as data capable of clarifying the developmental processes underlying stability and change for a variety of psychological constructs.
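
The qualitative prediction is easy to reproduce in a few lines. This sketch (the mixture weight, AR coefficient, and sample sizes are illustrative assumptions, not the article's fitted values) combines a developmental constancy with an autoregressive stochastic-contextual process and tracks test-retest correlations:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_traits(n_people=5000, n_waves=20, stable_var=0.4, rho=0.8):
    """Trait score = stable component + AR(1) contextual component."""
    stable = rng.normal(0, np.sqrt(stable_var), n_people)
    context = rng.normal(0, 1, n_people)
    waves = []
    for _ in range(n_waves):
        context = rho * context + rng.normal(0, np.sqrt(1 - rho**2), n_people)
        waves.append(stable + np.sqrt(1 - stable_var) * context)
    return np.array(waves)

scores = simulate_traits()
# Test-retest correlations decay with lag but level off above zero,
# the asymptote reflecting the stable component.
for lag in (1, 5, 15):
    print(f"lag {lag:2d}: r = {np.corrcoef(scores[0], scores[lag])[0, 1]:.2f}")
```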

399 citations


Journal ArticleDOI
TL;DR: The implemented model simulates the time course of graphic, phonological, and semantic priming effects, including immediate graphic facilitation followed by graphic inhibition with simultaneous phonological facilitation, a pattern unique to the Chinese writing system.
Abstract: The authors examine the implications of research on Chinese for theories of reading and propose the lexical constituency model as a general framework for word reading across writing systems. Word identities are defined by 3 interlinked constituents (orthographic, phonological, and semantic). The implemented model simulates the time course of graphic, phonological, and semantic priming effects, including immediate graphic facilitation followed by graphic inhibition with simultaneous phonological facilitation, a pattern unique to the Chinese writing system. Pseudocharacter primes produced only facilitation, supporting the model's assumption that lexical thresholds determine phonological and semantic, but not graphic, effects. More generally, both universal reading processes and writing system constraints exist. Although phonology is universal, its activation process depends on how the writing system structures graphic units.

398 citations


Journal ArticleDOI
TL;DR: An original evaluation of 9 group decision rules based on their adaptive success in a simulated test bed environment supports the popularity of majority and plurality rules in truth-seeking group decisions.
Abstract: How should groups make decisions? The authors provide an original evaluation of 9 group decision rules based on their adaptive success in a simulated test bed environment. When the adaptive success standard is applied, the majority and plurality rules fare quite well, performing at levels comparable to much more resource-demanding rules such as an individual judgment averaging rule. The plurality rule matches the computationally demanding Condorcet majority winner that is standard in evaluations of preferential choice. The authors also test the results from their theoretical analysis in a behavioral study of nominal human group decisions, and the essential findings are confirmed empirically. The conclusions of the present analysis support the popularity of majority and plurality rules in truth-seeking group decisions.
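
The flavor of the adaptive-success test can be conveyed with a Condorcet-style toy (binary truth, independent members; the group size and member accuracy are illustrative assumptions, far simpler than the article's test bed):

```python
import numpy as np

rng = np.random.default_rng(1)

def majority_accuracy(n_members=9, p_correct=0.6, n_trials=100_000):
    """P(the majority vote is right) when each member is independently
    right with probability p_correct (odd group size, so no ties)."""
    votes = rng.random((n_trials, n_members)) < p_correct
    return (votes.sum(axis=1) > n_members / 2).mean()

print("individual member:", 0.6)
print("majority of 9:   ", round(majority_accuracy(), 3))  # ~0.733
```

Even this minimal setting shows why a cheap aggregation rule can rival resource-intensive alternatives: the majority amplifies modest individual competence.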

379 citations


Journal ArticleDOI
TL;DR: The authors propose an alternative relative judgment model (RJM) in which the elemental perceptual units are representations of the differences between current and previous stimuli that are used, together with the previous feedback, to respond.
Abstract: In unidimensional absolute identification tasks, participants identify stimuli that vary along a single dimension. Performance is surprisingly poor compared with discrimination of the same stimuli. Existing models assume that identification is achieved using long-term representations of absolute magnitudes. The authors propose an alternative relative judgment model (RJM) in which the elemental perceptual units are representations of the differences between current and previous stimuli. These differences are used, together with the previous feedback, to respond. Without using long-term representations of absolute magnitudes, the RJM accounts for (a) information transmission limits, (b) bowed serial position effects, and (c) sequential effects, where responses are biased toward immediately preceding stimuli but away from more distant stimuli (assimilation and contrast).
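
A bare-bones rendering of the relative-judgment idea (not the full RJM; the noise level and rounding rule are illustrative assumptions): respond with the previous trial's feedback plus the noisy perceived difference between the current and previous stimuli.

```python
import numpy as np

rng = np.random.default_rng(2)

def relative_identification(stimuli, n_levels=10, diff_noise=0.8):
    """Identify stimuli 1..n_levels using only stimulus differences plus the
    previous trial's feedback; no long-term absolute magnitudes are stored."""
    responses, last_stim, last_feedback = [], None, None
    for s in stimuli:
        if last_stim is None:
            resp = int(rng.integers(1, n_levels + 1))   # no anchor on trial 1
        else:
            perceived_diff = (s - last_stim) + rng.normal(0, diff_noise)
            resp = int(np.clip(np.rint(last_feedback + perceived_diff),
                               1, n_levels))
        responses.append(resp)
        last_stim, last_feedback = s, s                 # feedback = true label
    return responses

stims = rng.integers(1, 11, 200)
resps = relative_identification(stims)
print("accuracy:", np.mean([r == s for r, s in zip(resps, stims)]))
```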

Journal ArticleDOI
TL;DR: The authors argue for an integrated model of skill learning that takes into account both implicit and explicit processes, and for a bottom-up approach (first learning implicit knowledge and then explicit knowledge) within that integrated model.
Abstract: This article explicates the interaction between implicit and explicit processes in skill learning, in contrast to the tendency of researchers to study each type in isolation. It highlights various effects of the interaction on learning (including synergy effects). The authors argue for an integrated model of skill learning that takes into account both implicit and explicit processes. Moreover, they argue for a bottom-up approach (first learning implicit knowledge and then explicit knowledge) in the integrated model. A variety of qualitative data can be accounted for by the approach. A computational model, CLARION, is then used to simulate a range of quantitative data. The results demonstrate the plausibility of the model, which provides a new perspective on skill learning.

Journal ArticleDOI
TL;DR: Simulations of the recognition heuristic demonstrate that forgetting can boost accuracy by increasing the chances that only 1 object is recognized, and that loss of information aids inference heuristics that exploit mnemonic information.
Abstract: Some theorists, ranging from W. James (1890) to contemporary psychologists, have argued that forgetting is the key to proper functioning of memory. The authors elaborate on the notion of beneficial forgetting by proposing that loss of information aids inference heuristics that exploit mnemonic information. To this end, the authors bring together 2 research programs that take an ecological approach to studying cognition. Specifically, they implement fast and frugal heuristics within the ACT-R cognitive architecture. Simulations of the recognition heuristic, which relies on systematic failures of recognition to infer which of 2 objects scores higher on a criterion value, demonstrate that forgetting can boost accuracy by increasing the chances that only 1 object is recognized. Simulations of the fluency heuristic, which arrives at the same inference on the basis of the speed with which objects are recognized, indicate that forgetting aids the discrimination between the objects' recognition speeds.
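
The recognition-heuristic result can be illustrated outside ACT-R. In this toy (the activation rule, noise level, and the 0.55 "knowledge validity" are illustrative assumptions, not the authors' implementation), recognition succeeds when criterion-correlated activation survives decay, so moderate forgetting increases the share of pairs in which exactly one object is recognized:

```python
import numpy as np

rng = np.random.default_rng(3)

def accuracy(decay, n_objects=100, noise=10.0, k_valid=0.55, n_pairs=20_000):
    """P(correct) on 'which of 2 objects scores higher?' inferences.
    One recognized  -> recognition heuristic (pick the recognized object).
    Both recognized -> weakly valid further knowledge (prob k_valid).
    None recognized -> guess."""
    criterion = np.arange(n_objects, dtype=float)
    correct = 0
    for _ in range(n_pairs):
        a, b = rng.choice(n_objects, 2, replace=False)
        hi, lo = (a, b) if criterion[a] > criterion[b] else (b, a)
        rec_hi = criterion[hi] + rng.normal(0, noise) > decay
        rec_lo = criterion[lo] + rng.normal(0, noise) > decay
        if rec_hi != rec_lo:
            correct += rec_hi                     # right iff the bigger one is known
        elif rec_hi:
            correct += rng.random() < k_valid     # both known: weak knowledge
        else:
            correct += rng.random() < 0.5         # neither known: guess
    return correct / n_pairs

for decay in (0, 50, 100):                        # none / moderate / heavy forgetting
    print(f"decay={decay:3d}: accuracy={accuracy(decay):.3f}")
```

Accuracy peaks at the intermediate decay value: with no forgetting everything is recognized and inference falls back on weak knowledge, with heavy forgetting nothing is recognized, and only moderate loss lets the more valid recognition cue do the work.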

Journal ArticleDOI
TL;DR: Computer simulations and mathematical analyses demonstrate the functional and empirical adequacy of selective reweighting as a perceptual learning mechanism.
Abstract: The mechanisms of perceptual learning are analyzed theoretically, probed in an orientation-discrimination experiment involving a novel nonstationary context manipulation, and instantiated in a detailed computational model. Two hypotheses are examined: modification of early cortical representations versus task-specific selective reweighting. Representation modification seems neither functionally necessary nor implied by the available psychophysical and physiological evidence. Computer simulations and mathematical analyses demonstrate the functional and empirical adequacy of selective reweighting as a perceptual learning mechanism. The stimulus images are processed by standard orientation- and frequency-tuned representational units, divisively normalized. Learning occurs only in the "read-out" connections to a decision unit; the stimulus representations never change. An incremental Hebbian rule tracks the task-dependent predictive value of each unit, thereby improving the signal-to-noise ratio of their weighted combination. Each abrupt change in the environmental statistics induces a switch cost in the learning curves as the system temporarily works with suboptimal weights.
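
The reweighting idea itself fits in a few lines. In this minimal sketch (the normalized Gabor-bank front end is reduced to Gaussian tuning curves; tuning width, noise, and learning rate are illustrative assumptions), the representational units are frozen and only the readout weights learn, via an incremental Hebbian rule:

```python
import numpy as np

rng = np.random.default_rng(4)

prefs = np.linspace(-90, 90, 19)            # fixed orientation-tuned units (deg)

def unit_responses(theta, width=20.0, noise=0.3):
    """Frozen front end: the tuning curves never change during learning."""
    r = np.exp(-0.5 * ((prefs - theta) / width) ** 2)
    return r + rng.normal(0, noise, r.shape)

w = np.zeros_like(prefs)                    # only these read-out weights learn
lr = 0.02
for trial in range(2000):
    theta = rng.choice([-10.0, 10.0])       # discriminate -10 vs +10 deg
    x = unit_responses(theta)
    teacher = 1.0 if theta > 0 else -1.0
    w += lr * teacher * x                   # incremental Hebbian update

test = [np.sign(unit_responses(t) @ w) == np.sign(t)
        for t in rng.choice([-10.0, 10.0], 1000)]
print("post-learning accuracy:", np.mean(test))
```

Discrimination ends up far above chance even though the stimulus representation never changes; all of the learning lives in the weighted combination read out by the decision unit.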

Journal ArticleDOI
TL;DR: The authors argue that cultural transmission and formation consist primarily not in shared rules or norms but in complex distributions of causally connected representations across minds interacting with the environment, and that cultural stability and diversity of these representations often derive from rich, biologically prepared mental mechanisms that limit variation to readily transmissible psychological forms.
Abstract: This article describes cross-cultural research on the relation between how people conceptualize nature and how they act in it. Mental models of nature differ dramatically among populations living in the same area and engaged in similar activities. This has novel implications for environmental decision making and management, including commons problems. The research offers a distinct perspective on cultural modeling and a unified approach to studies of culture and cognition. The authors argue that cultural transmission and formation consist primarily not in shared rules or norms but in complex distributions of causally connected representations across minds interacting with the environment. The cultural stability and diversity of these representations often derive from rich, biologically prepared mental mechanisms that limit variation to readily transmissible psychological forms. This framework addresses several methodological issues, such as limitations on conceiving culture to be a well-defined system, bounded entity, independent variable, or an internalized component of minds.

Journal ArticleDOI
TL;DR: Simulation studies show that the entorhinal cortex supports a gradually changing representation of temporal context and that the hippocampus proper enables retrieval of these contextual states; the results constitute a first step toward a unified computational theory of MTL function that integrates neurophysiological, neuropsychological, and cognitive findings.
Abstract: The medial temporal lobe (MTL) has been studied extensively at all levels of analysis, yet its function remains unclear. Theory regarding the cognitive function of the MTL has centered along 3 themes. Different authors have emphasized the role of the MTL in episodic recall, spatial navigation, or relational memory. Starting with the temporal context model (M. W. Howard & M. J. Kahana, 2002a), a distributed memory model that has been applied to benchmark data from episodic recall tasks, the authors propose that the entorhinal cortex supports a gradually changing representation of temporal context and the hippocampus proper enables retrieval of these contextual states. Simulation studies show this hypothesis explains the firing of place cells in the entorhinal cortex and the behavioral effects of hippocampal lesion in relational memory tasks. These results constitute a first step toward a unified computational theory of MTL function that integrates neurophysiological, neuropsychological, and cognitive findings.
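
The starting point, the temporal context model's drifting context vector, is compact (this is the standard presentation of Howard and Kahana's 2002 equation; the article's extension may differ in detail):

$$ \mathbf{t}_i = \rho_i \, \mathbf{t}_{i-1} + \beta \, \mathbf{t}^{IN}_i $$

where t^{IN}_i is the input pattern retrieved by item i, \beta sets the drift rate, and \rho_i is chosen to keep t_i at unit length. The proposal maps the slowly drifting t onto entorhinal cortex, with the hippocampus proper storing the associations that allow earlier context states to be recovered.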

Journal ArticleDOI
TL;DR: This mechanism suggests an alternative explanation of several regularities in impression formation, including a negativity bias in impressions of outgroup members, systematic differences in performance evaluations, and more positive evaluations of proximate others.
Abstract: Individuals are typically more likely to continue to interact with people if they have a positive impression of them. This article shows how this sequential sampling feature of impression formation can explain several biases in impression formation. The underlying mechanism is the sample bias generated when the probability of interaction depends on current impressions. Because negative experiences decrease the probability of interaction, negative initial impressions are more stable than positive impressions. Negative initial impressions, however, are more likely to change for individuals who are frequently exposed to others. As a result, systematic differences in interaction patterns, due to social similarity or proximity, will produce systematic differences in impressions. This mechanism suggests an alternative explanation of several regularities in impression formation, including a negativity bias in impressions of outgroup members, systematic differences in performance evaluations, and more positive evaluations of proximate others.
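
The asymmetry falls out of a very small simulation (the update rule and all parameters are illustrative assumptions, not the article's exact model): an agent keeps sampling a person only while its impression stays positive, so errors in the negative direction stop being corrected.

```python
import numpy as np

rng = np.random.default_rng(5)

def final_impression(true_quality=0.2, noise=1.0, n_steps=200, forced=0.0):
    """Impression = running mean of noisy experiences; interaction stops when
    the impression is negative, unless contact is externally 'forced'."""
    total, n = 0.0, 0
    for _ in range(n_steps):
        if n and total / n < 0 and rng.random() >= forced:
            continue                                   # avoid them: no new data
        total += true_quality + rng.normal(0, noise)   # interact, observe
        n += 1
    return total / max(n, 1)

for forced in (0.0, 0.5):
    finals = [final_impression(forced=forced) for _ in range(2000)]
    print(f"forced contact={forced:.1f}: mean impression={np.mean(finals):+.2f}, "
          f"share negative={np.mean([f < 0 for f in finals]):.2f}")
```

With no forced contact, a genuinely positive person (true quality +0.2) often stays saddled with a negative impression after an unlucky early experience; raising the probability of externally imposed contact lets negative first impressions self-correct, mirroring the proximity and exposure effects described above.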

Journal ArticleDOI
TL;DR: Nine simulations and behavioral experiments tested the hypothesis that generalized expectations about how solid and nonsolid things are named arise from the correlations characterizing early learned noun categories; connectionist networks trained on child-like noun vocabularies formed generalized expectations that match children's performances in the novel noun generalization task in the very different languages of English and Japanese.
Abstract: In the novel noun generalization task, 2 1/2-year-old children display generalized expectations about how solid and nonsolid things are named, extending names for never-before-encountered solids by shape and for never-before-encountered nonsolids by material. This distinction between solids and nonsolids has been interpreted in terms of an ontological distinction between objects and substances. Nine simulations and behavioral experiments tested the hypothesis that these expectations arise from the correlations characterizing early learned noun categories. In the simulation studies, connectionist networks were trained on noun vocabularies modeled after those of children. These networks formed generalized expectations about solids and nonsolids that match children's performances in the novel noun generalization task in the very different languages of English and Japanese. The simulations also generate new predictions supported by new experiments with children. Implications are discussed in terms of children's development of distinctions between kinds of categories and in terms of the nature of this knowledge.

Journal ArticleDOI
TL;DR: The authors expand an account of number agreement whose tenets are that pronouns acquire number lexically, whereas verbs acquire it syntactically, but with similar contributions from number meaning and from the number morphology of agreement controllers.
Abstract: Grammatical agreement flags the parts of sentences that belong together regardless of whether the parts appear together. In English, the major agreement controller is the sentence subject, the major agreement targets are verbs and pronouns, and the major agreement category is number. The authors expand an account of number agreement whose tenets are that pronouns acquire number lexically, whereas verbs acquire it syntactically but with similar contributions from number meaning and from the number morphology of agreement controllers. These tenets were instantiated in a model using existing verb agreement data. The model was then fit to a new, more extensive set of verb data and tested with a parallel set of pronoun data. The theory was supported by the model's outcomes. The results have implications for the integration of words and structures, for the workings of agreement categories, and for the nature of the transition from thought to language.

Journal ArticleDOI
TL;DR: The authors evaluate L. Kohlberg's cognitive-developmental approach to morality, find it wanting, and outline in 11 propositions a framework for a more pragmatic approach that is attentive to the purposes that people use morality to achieve.
Abstract: In this article, the authors evaluate L. Kohlberg's (1984) cognitive- developmental approach to morality, find it wanting, and introduce a more pragmatic approach. They review research designed to evaluate Kohlberg's model, describe how they revised the model to accommodate discrepant findings, and explain why they concluded that it is poorly equipped to account for the ways in which people make moral decisions in their everyday lives. The authors outline in 11 propositions a framework for a new approach that is more attentive to the purposes that people use morality to achieve. People make moral judgments and engage in moral behaviors to induce themselves and others to uphold systems of cooperative exchange that help them achieve their goals and advance their interests.

Journal ArticleDOI
TL;DR: The authors show that a ballistic (deterministic within-trial) model using a simplified version of M. Usher and J. L. McClelland's (2001) nonlinear accumulation process with between-trial variability in accumulation rate and starting point is capable of accounting for the benchmark behavioral phenomena.
Abstract: Almost all models of response time (RT) use a stochastic accumulation process. To account for the benchmark RT phenomena, researchers have found it necessary to include between-trial variability in the starting point and/or the rate of accumulation, both in linear (R. Ratcliff & J. N. Rouder, 1998) and nonlinear (M. Usher & J. L. McClelland, 2001) models. The authors show that a ballistic (deterministic within-trial) model using a simplified version of M. Usher and J. L. McClelland's (2001) nonlinear accumulation process with between-trial variability in accumulation rate and starting point is capable of accounting for the benchmark behavioral phenomena. The authors successfully fit their model to R. Ratcliff and J. N. Rouder's (1998) data, which exhibit many of the benchmark phenomena.
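
The central move, keeping within-trial accumulation deterministic and placing all variability between trials, can be conveyed with a deliberately linearized toy (the authors' accumulators are nonlinear, with leakage and competition; the linear independent race and all parameter values here are simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)

def ballistic_trial(drift_means=(1.0, 0.8), threshold=1.0,
                    start_range=0.35, drift_sd=0.25, t0=0.2):
    """One 2-choice trial: each accumulator rises deterministically; the only
    randomness is the between-trial draw of starting point and rate."""
    starts = rng.uniform(0, start_range, 2)
    drifts = np.maximum(rng.normal(drift_means, drift_sd), 1e-6)
    times = (threshold - starts) / drifts     # ballistic within-trial path
    choice = int(np.argmin(times))            # accumulator 0 = correct response
    return choice, t0 + times[choice]

trials = [ballistic_trial() for _ in range(20_000)]
choices = np.array([c for c, _ in trials])
rts = np.array([t for _, t in trials])
print(f"P(correct) = {np.mean(choices == 0):.3f}")
print(f"mean RT: correct {rts[choices == 0].mean():.3f}s, "
      f"error {rts[choices == 1].mean():.3f}s")
```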

Journal ArticleDOI
TL;DR: This paper argues that phenomenal states play an essential role in permitting interactions among supramodular response systems--agentic, independent, multimodal, information-processing structures defined by their concerns--and that, without phenomenal states, these systems would be encapsulated and incapable of collectively influencing skeletomotor action.
Abstract: Discovering the function of phenomenal states remains a formidable scientific challenge. Research on consciously penetrable conflicts (e.g., “pain-for-gain” scenarios) and impenetrable conflicts (as in the pupillary reflex, ventriloquism, and the McGurk effect [H. McGurk & J. MacDonald, 1976]) reveals that these states integrate diverse kinds of information to yield adaptive action. Supramodular interaction theory proposes that phenomenal states play an essential role in permitting interactions among supramodular response systems—agentic, independent, multimodal, information-processing structures defined by their concerns (e.g., instrumental action vs. certain bodily needs). Unlike unconscious processes (e.g., pupillary reflex), these processes may conflict with skeletal muscle plans, as described by the principle of parallel responses into skeletal muscle (PRISM). Without phenomenal states, these systems would be encapsulated and incapable of collectively influencing skeletomotor action.

Journal ArticleDOI
TL;DR: The authors show that for closed contours, segments of negative curvature (i.e., concave segments) literally carry greater information than do corresponding regions of positive curvature (i.e., convex segments), and extend Attneave's claim to incorporate the role of the sign of curvature, not just its magnitude.
Abstract: F. Attneave (1954) famously suggested that information along visual contours is concentrated in regions of high magnitude of curvature, rather than being distributed uniformly along the contour. Here the authors give a formal derivation of this claim, yielding an exact expression for information, in C. Shannon's (1948) sense, as a function of contour curvature. Moreover, they extend Attneave's claim to incorporate the role of sign of curvature, not just magnitude of curvature. In particular, the authors show that for closed contours, such as object boundaries, segments of negative curvature (i.e., concave segments) literally carry greater information than do corresponding regions of positive curvature (i.e., convex segments). The psychological validity of this informational analysis is supported by a host of empirical findings demonstrating the asymmetric way in which the visual system treats regions of positive and negative curvature.
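
One way to see the formal move (a sketch in the spirit of the derivation; the article's exact probabilistic model of contours may differ): treat the turning angle \alpha at each point along the contour as a random variable and measure information as Shannon surprisal,

$$ I(\alpha) = -\log p(\alpha) $$

Because a simple closed contour must turn through a full 360 degrees, convex turns are the expected, high-probability case; a concave turn of equal magnitude is less probable and therefore carries more bits, which is the concavity advantage the authors derive.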

Journal ArticleDOI
TL;DR: This article models the cognitive processes underlying learning and sequential choice in a risk-taking task for the purposes of understanding how they occur in this moderately complex environment and how behavior in it relates to self-reported real-world risk taking.
Abstract: This article models the cognitive processes underlying learning and sequential choice in a risk-taking task for the purposes of understanding how they occur in this moderately complex environment and how behavior in it relates to self-reported real-world risk taking. The best stochastic model assumes that participants incorrectly treat outcome probabilities as stationary, update probabilities in a Bayesian fashion, evaluate choice policies prior to rather than during responding, and maintain constant response sensitivity. The model parameter associated with subjective value of gains correlates well with external risk taking. Both the overall approach, which can be expanded as the basic paradigm is varied, and the specific results provide direction for theories of risky choice and for understanding risk taking as a public health problem.
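
The probability-updating component of such a model can be sketched with a conjugate Beta-Bernoulli update (a standard choice consistent with treating outcome probabilities as stationary; the flat prior and the feedback sequence are illustrative assumptions, not the article's fitted model):

```python
# Bayesian update of an outcome probability after each observed success/failure.
a, b = 1.0, 1.0                         # Beta(1, 1): flat prior over p(success)
for outcome in [1, 1, 0, 1, 0, 0, 1]:   # hypothetical feedback sequence
    a += outcome                        # count successes
    b += 1 - outcome                    # count failures
    print(f"posterior mean p = {a / (a + b):.3f}")
```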

Journal ArticleDOI
TL;DR: This article presents a novel model-based theory of relational reasoning based on 5 principles, describes computer implementations of the theory, and presents experimental results corroborating its main principle.
Abstract: Inferences about spatial, temporal, and other relations are ubiquitous. This article presents a novel model-based theory of such reasoning. The theory depends on 5 principles. (a) The structure of mental models is iconic as far as possible. (b) The logical consequences of relations emerge from models constructed from the meanings of the relations and from knowledge. (c) Individuals tend to construct only a single, typical model. (d) They spontaneously develop their own strategies for relational reasoning. (e) Regardless of strategy, the difficulty of an inference depends on the process of integration of the information from separate premises, the number of entities that have to be integrated to form a model, and the depth of the relation. The article describes computer implementations of the theory and presents experimental results corroborating its main principle.

Journal ArticleDOI
TL;DR: It is concluded that Bayesian diagnosticity is normatively flawed and empirically unjustified.
Abstract: Several norms for how people should assess a question's usefulness have been proposed, notably Bayesian diagnosticity, information gain (mutual information), Kullback-Leibler distance, probability gain (error minimization), and impact (absolute change). Several probabilistic models of previous experiments on categorization, covariation assessment, medical diagnosis, and the selection task are shown to not discriminate among these norms as descriptive models of human intuitions and behavior. Computational optimization found situations in which information gain, probability gain, and impact strongly contradict Bayesian diagnosticity. In these situations, diagnosticity's claims are normatively inferior. Results of a new experiment strongly contradict the predictions of Bayesian diagnosticity. Normative theoretical concerns also argue against use of diagnosticity. It is concluded that Bayesian diagnosticity is normatively flawed and empirically unjustified.
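
For a binary hypothesis and a yes/no question, all of the competing norms are simple functions of the joint distribution. A sketch with the definitions as commonly stated (see the article for its exact formulations):

```python
from math import log2

def question_usefulness(p_h, p_d_given_h, p_d_given_not_h):
    """Compare norms for the usefulness of asking a yes/no question D."""
    p_d = p_h * p_d_given_h + (1 - p_h) * p_d_given_not_h
    post_d = p_h * p_d_given_h / p_d                   # P(H | D)
    post_nd = p_h * (1 - p_d_given_h) / (1 - p_d)      # P(H | not D)

    def h(p):                                          # binary entropy, in bits
        return -sum(q * log2(q) for q in (p, 1 - p) if q > 0)

    diagnosticity = p_d_given_h / p_d_given_not_h      # likelihood ratio
    info_gain = h(p_h) - (p_d * h(post_d) + (1 - p_d) * h(post_nd))
    prob_gain = (p_d * max(post_d, 1 - post_d)
                 + (1 - p_d) * max(post_nd, 1 - post_nd)) - max(p_h, 1 - p_h)
    impact = p_d * abs(post_d - p_h) + (1 - p_d) * abs(post_nd - p_h)
    return diagnosticity, info_gain, prob_gain, impact

print(question_usefulness(0.5, 0.99, 0.50))    # modest LR, large expected gains
print(question_usefulness(0.5, 0.02, 0.0001))  # LR = 200, near-zero expected gains
```

The second question has a likelihood ratio of 200 yet almost never yields its diagnostic answer, so its expected information gain, probability gain, and impact are all near zero; divergences of this kind are what the computational optimization exploits.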

Journal ArticleDOI
TL;DR: A new computational model is presented that accounts for the empirical trends without changing decision weights, values, or combination rules; it retains stable evaluations across elicitation methods and makes novel predictions regarding response distributions and response times.
Abstract: Preference orderings among a set of options may depend on the elicitation method (e.g., choice or pricing); these preference reversals challenge traditional decision theories. Previous attempts to explain these reversals have relied on allowing utility of the options to change across elicitation methods by changing the decision weights, the attribute values, or the combination of this information--still, no theory has successfully accounted for all the phenomena. In this article, the authors present a new computational model that accounts for the empirical trends without changing decision weights, values, or combination rules. Rather, the current model specifies a dynamic evaluation and response process that correctly predicts preference orderings across 6 elicitation methods, retains stable evaluations across methods, and makes novel predictions regarding response distributions and response times.

Journal ArticleDOI
TL;DR: A conceptual and psychometric framework is described for distinguishing whether the latent structure behind manifest categories is category-like or dimension-like, and empirical applications to personality disorders, attitudes toward capital punishment, and stages of cognitive development illustrate the approach.
Abstract: An important, sometimes controversial feature of all psychological phenomena is whether they are categorical or dimensional. A conceptual and psychometric framework is described for distinguishing whether the latent structure behind manifest categories (e.g., psychiatric diagnoses, attitude groups, or stages of development) is category-like or dimension-like. Being dimension-like requires (a) within-category heterogeneity and (b) between-category quantitative differences. Being category-like requires (a) within-category homogeneity and (b) between-category qualitative differences. The relation between this classification and abrupt versus smooth differences is discussed. Hybrid structures are possible. Being category-like is itself a matter of degree; the authors offer a formalized framework to determine this degree. Empirical applications to personality disorders, attitudes toward capital punishment, and stages of cognitive development illustrate the approach.

Journal ArticleDOI
TL;DR: This comment, with the help of a simple example, explains the usefulness of Bayesian inference for psychology.
Abstract: D. Trafimow (2003) presented an analysis of null hypothesis significance testing (NHST) using Bayes's theorem. Among other points, he concluded that NHST is logically invalid, but that logically valid Bayesian analyses are often not possible. The latter conclusion reflects a fundamental misunderstanding of the nature of Bayesian inference. This view needs correction, because Bayesian methods have an important role to play in many psychological problems where standard techniques are inadequate. This comment, with the help of a simple example, explains the usefulness of Bayesian inference for psychology.
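
The core point, that Bayes's theorem converts P(data | H0) into the quantity researchers actually want, P(H0 | data), is a two-line computation. A toy illustration with assumed inputs (the comment's own worked example may differ):

```python
# P(H0 | significant result) via Bayes's theorem -- not the same thing as alpha.
prior_h0 = 0.5   # assumed prior probability that the null is true
alpha = 0.05     # P(significant | H0)
power = 0.5      # assumed P(significant | H1)

p_sig = alpha * prior_h0 + power * (1 - prior_h0)
print(f"P(H0 | significant) = {alpha * prior_h0 / p_sig:.3f}")  # ~0.091, not 0.05
```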

Journal ArticleDOI
TL;DR: This article presents SERIF, a new model of eye movement control in reading that integrates an established stochastic model of saccade latencies with a fundamental anatomical constraint on reading: the vertically split fovea and the initial projection of information in either visual field to the contralateral hemisphere.
Abstract: This article presents SERIF, a new model of eye movement control in reading that integrates an established stochastic model of saccade latencies (LATER; R. H. S. Carpenter, 1981) with a fundamental anatomical constraint on reading: the vertically split fovea and the initial projection of information in either visual field to the contralateral hemisphere. The novel features of the model are its simulation of saccade latencies as a race between two stochastic rise-to-threshold LATER units and its probabilistic selection of the target for the next saccade. The model generates simulated eye movement behavior that exhibits important characteristics of actual eye movements made during reading; specifically, simulations produce realistic saccade target distributions and replicate a number of critical reading phenomena, including the effects of word frequency on fixation durations, the inverted optimal viewing position effect, the trade-off between first and second fixation durations of refixated words, and the dependence of parafoveal preview benefit on eccentricity.
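
The LATER ingredient that SERIF builds on is compact: a decision signal rises linearly to a threshold at a rate drawn afresh each trial from a normal distribution, and (per the abstract) two such units race. A minimal sketch (the parameter values are illustrative assumptions, not SERIF's):

```python
import numpy as np

rng = np.random.default_rng(7)

def later_race(mu=(5.0, 4.0), sigma=1.0, threshold=1.0, n=50_000):
    """Race between two LATER units: latency = threshold / rate, with the
    rate drawn per trial from a normal distribution ('recinormal' latencies)."""
    rates = np.maximum(rng.normal(mu, sigma, (n, 2)), 1e-9)
    latencies = threshold / rates
    return latencies.argmin(axis=1), latencies.min(axis=1)

winner, lat = later_race()
print(f"unit 0 wins {np.mean(winner == 0):.2%}; "
      f"median winning latency = {np.median(lat):.3f} (arbitrary time units)")
```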