
Showing papers in "Psychological Review in 2008"


Journal ArticleDOI
TL;DR: It is argued that stereotype threat disrupts performance via 3 distinct, yet interrelated, mechanisms: a physiological stress response that directly impairs prefrontal processing, a tendency to actively monitor performance, and efforts to suppress negative thoughts and emotions in the service of self-regulation.
Abstract: Research showing that activation of negative stereotypes can impair the performance of stigmatized individuals on a wide variety of tasks has proliferated. However, a complete understanding of the processes underlying these stereotype threat effects on behavior is still lacking. The authors examine stereotype threat in the context of research on stress arousal, vigilance, working memory, and self-regulation to develop a process model of how negative stereotypes impair performance on cognitive and social tasks that require controlled processing, as well as sensorimotor tasks that require automatic processing. The authors argue that stereotype threat disrupts performance via 3 distinct, yet interrelated, mechanisms: (a) a physiological stress response that directly impairs prefrontal processing, (b) a tendency to actively monitor performance, and (c) efforts to suppress negative thoughts and emotions in the service of self-regulation. These mechanisms combine to consume executive resources needed to perform well on cognitive and social tasks. The active monitoring mechanism disrupts performance on sensorimotor tasks directly. Empirical evidence for these assertions is reviewed, and implications for interventions designed to alleviate stereotype threat are discussed.

1,308 citations


Journal ArticleDOI
TL;DR: This paper presents a reconciliation of three distinct ways in which the research literature has defined overconfidence: (a) overestimation of one's actual performance, (b) overplacement of one's performance relative to others, and (c) excessive precision in one's beliefs.
Abstract: The authors present a reconciliation of 3 distinct ways in which the research literature has defined overconfidence: (a) overestimation of one's actual performance, (b) overplacement of one's performance relative to others, and (c) excessive precision in one's beliefs. Experimental evidence shows that reversals of the first 2 (apparent underconfidence), when they occur, tend to be on different types of tasks. On difficult tasks, people overestimate their actual performances but also mistakenly believe that they are worse than others; on easy tasks, people underestimate their actual performances but mistakenly believe they are better than others. The authors offer a straightforward theory that can explain these inconsistencies. Overprecision appears to be more persistent than either of the other 2 types of overconfidence, but its presence reduces the magnitude of both overestimation and overplacement.
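
The authors' account can be illustrated with a toy signal-extraction simulation (all numbers invented): each person combines a noisy signal of their own score with a prior expectation, while others are judged mostly from the prior. On a difficult task this reproduces simultaneous overestimation and underplacement.

```python
import numpy as np

rng = np.random.default_rng(1)

def judgments(true_scores, prior_mean, signal_weight=0.6):
    """Toy signal-extraction account (illustrative): people combine a
    noisy signal of their own score with a prior expectation; others
    are judged from the prior, since little is known about them."""
    signals = true_scores + rng.normal(0, 5, true_scores.size)
    self_est = signal_weight * signals + (1 - signal_weight) * prior_mean
    others_est = np.full_like(true_scores, prior_mean)
    return self_est, others_est

# A difficult task: everyone scores below the prior expectation of 70.
true_scores = rng.normal(50, 5, 10_000)
self_est, others_est = judgments(true_scores, prior_mean=70)
print("mean overestimation :", (self_est - true_scores).mean())  # > 0
print("mean placement      :", (self_est - others_est).mean())   # < 0
```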

1,282 citations


Journal ArticleDOI
TL;DR: A model is proposed that integrates affective, biological, and cognitive factors as vulnerabilities to depression that heighten girls' rates of depression beginning in adolescence and account for the gender difference in depression.
Abstract: In adulthood, twice as many women as men are depressed, a pattern that holds in most nations. In childhood, girls are no more depressed than boys, but more girls than boys are depressed by ages 13 to 15. Although many influences on this emergent gender difference in depression have been proposed, a truly integrated, developmental model is lacking. The authors propose a model that integrates affective (emotional reactivity), biological (genetic vulnerability, pubertal hormones, pubertal timing and development), and cognitive (cognitive style, objectified body consciousness, rumination) factors as vulnerabilities to depression that, in interaction with negative life events, heighten girls' rates of depression beginning in adolescence and account for the gender difference in depression.

948 citations


Journal ArticleDOI
TL;DR: This article uses demand curves to map how reinforcer consumption changes with the "price" imposed by different ratio schedules, and an exponential equation to scale the strength, or essential value, of a reinforcer independent of its scalar dimensions.
Abstract: The strength of a rat's eating reflex correlates with hunger level when strength is measured by the response frequency that precedes eating (B. F. Skinner, 1932a, 1932b). On the basis of this finding, Skinner argued response frequency could index reflex strength. Subsequent work documented difficulties with this notion because responding was affected not only by the strengthening properties of the reinforcer but also by the rate-shaping effects of the schedule. This article obviates this problem by measuring strength via methods from behavioral economics. This approach uses demand curves to map how reinforcer consumption changes with changes in the "price" different ratio schedules impose. An exponential equation is used to model these demand curves. The value of this exponential's rate constant is used to scale the strength or essential value of a reinforcer, independent of the scalar dimensions of the reinforcer. Essential value determines the consumption level to be expected at particular prices and the response level that will occur to support that consumption. This approach permits comparing reinforcers that differ in kind, contributing toward the goal of scaling reinforcer value.
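
The exponential demand analysis lends itself to a short sketch. Assuming the standard exponential demand equation, log10 Q = log10 Q0 + k(e^(-alpha*Q0*C) - 1), with illustrative parameter values rather than fitted ones:

```python
import numpy as np

def exponential_demand(price, q0, k, alpha):
    """Exponential demand: log10(Q) = log10(Q0) + k * (exp(-alpha*q0*price) - 1).
    q0: consumption at zero price; k: range of the curve in log10 units;
    alpha: rate constant scaling the reinforcer's essential value."""
    return q0 * 10 ** (k * (np.exp(-alpha * q0 * price) - 1))

# Illustrative parameters: a larger alpha means consumption collapses
# faster as price rises, i.e., lower essential value.
prices = np.array([1, 2, 5, 10, 20, 50])
for alpha in (0.001, 0.005):
    q = exponential_demand(prices, q0=100, k=2.0, alpha=alpha)
    print(f"alpha={alpha}: consumption = {np.round(q, 1)}")
```

Because alpha is independent of the reinforcer's scalar dimensions, fitting it across qualitatively different reinforcers is what allows their essential values to be compared on a single scale.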

669 citations


Journal ArticleDOI
TL;DR: The authors propose the idea of threaded cognition, an integrated theory of concurrent multitasking--that is, performing 2 or more tasks at once--that provides explicit predictions of how multitasking behavior can result in interference, or lack thereof, for a given set of tasks.
Abstract: The authors propose the idea of threaded cognition, an integrated theory of concurrent multitasking--that is, performing 2 or more tasks at once. Threaded cognition posits that streams of thought can be represented as threads of processing coordinated by a serial procedural resource and executed across other available resources (e.g., perceptual and motor resources). The theory specifies a parsimonious mechanism that allows for concurrent execution, resource acquisition, and resolution of resource conflicts, without the need for specialized executive processes. By instantiating this mechanism as a computational model, threaded cognition provides explicit predictions of how multitasking behavior can result in interference, or lack thereof, for a given set of tasks. The authors illustrate the theory in model simulations of several representative domains ranging from simple laboratory tasks such as dual-choice tasks to complex real-world domains such as driving and driver distraction.
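
As a rough illustration of the serial-procedural-resource idea (not the authors' ACT-R implementation; resource names and durations are invented), a toy scheduler shows how two threads interleave:

```python
def run_threads(threads, procedural_time=0.05):
    """Toy sketch of threaded cognition (an illustration of the idea, not
    the authors' ACT-R implementation): each thread is a queue of
    (resource, duration) steps. Every step needs the single serial
    procedural resource briefly to fire, then occupies its own resource,
    so steps on distinct resources overlap in time."""
    procedural_free = 0.0
    resource_free, thread_free = {}, {}
    queues = [list(t) for t in threads]
    while any(queues):
        for tid, q in enumerate(queues):
            if not q:
                continue
            res, dur = q.pop(0)
            start = max(procedural_free,
                        thread_free.get(tid, 0.0),
                        resource_free.get(res, 0.0))
            procedural_free = start + procedural_time  # serial bottleneck
            end = start + procedural_time + dur
            resource_free[res] = thread_free[tid] = end
            print(f"t={start:.2f}s thread {tid} uses {res} until {end:.2f}s")

# Hypothetical dual task: a visual-manual thread and an aural-vocal thread.
run_threads([[("visual", 0.2), ("manual", 0.3)],
             [("aural", 0.3), ("vocal", 0.2)]])
```

Because the two threads claim different perceptual and motor resources, their steps overlap in time, and interference arises only from the brief serial procedural claims.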

585 citations


Journal ArticleDOI
TL;DR: Simulations are presented showing that the model can account for key findings: data on the segmentation of continuous speech, word frequency effects, the effects of mispronunciations on word recognition, and evidence on lexical involvement in phonemic decision making.
Abstract: A Bayesian model of continuous speech recognition is presented. It is based on Shortlist (D. Norris, 1994; D. Norris, J. M. McQueen, A. Cutler, & S. Butterfield, 1997) and shares many of its key assumptions: parallel competitive evaluation of multiple lexical hypotheses, phonologically abstract prelexical and lexical representations, a feedforward architecture with no online feedback, and a lexical segmentation algorithm based on the viability of chunks of the input as possible words. Shortlist B is radically different from its predecessor in two respects. First, whereas Shortlist was a connectionist model based on interactive-activation principles, Shortlist B is based on Bayesian principles. Second, the input to Shortlist B is no longer a sequence of discrete phonemes; it is a sequence of multiple phoneme probabilities over 3 time slices per segment, derived from the performance of listeners in a large-scale gating study. Simulations are presented showing that the model can account for key findings: data on the segmentation of continuous speech, word frequency effects, the effects of mispronunciations on word recognition, and evidence on lexical involvement in phonemic decision making. The success of Shortlist B suggests that listeners make optimal Bayesian decisions during spoken-word recognition.
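
The model's core step is ordinary Bayesian inference over lexical hypotheses, with word frequency as the prior and per-segment phoneme probabilities supplying the likelihood. A minimal sketch with a hypothetical two-word lexicon and invented numbers:

```python
from math import prod

# Hypothetical two-word mini-lexicon (names and numbers invented):
# each word is a phoneme sequence with a frequency used as its prior.
lexicon = {"cat": ("k", "ae", "t"), "cap": ("k", "ae", "p")}
frequency = {"cat": 60, "cap": 40}

# Per-segment phoneme probabilities (e.g., as derived from gating data);
# here the final segment is ambiguous between /t/ and /p/.
input_probs = [{"k": 0.9, "g": 0.1}, {"ae": 1.0}, {"t": 0.6, "p": 0.4}]

def posteriors(lexicon, frequency, input_probs):
    total = sum(frequency.values())
    scores = {}
    for word, phonemes in lexicon.items():
        likelihood = prod(seg.get(ph, 1e-6)
                          for ph, seg in zip(phonemes, input_probs))
        scores[word] = likelihood * frequency[word] / total
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

print(posteriors(lexicon, frequency, input_probs))
# "cat" wins (~0.69): the /t/ evidence and its higher frequency both help.
```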

568 citations


Journal ArticleDOI
TL;DR: In the mnemonic model of posttraumatic stress disorder (PTSD), the current memory of a negative event, not the event itself, determines symptoms; the model is an alternative to the current event-based etiology of PTSD represented in the Diagnostic and Statistical Manual of Mental Disorders (4th ed., text rev.; American Psychiatric Association, 2000).
Abstract: In the mnemonic model of posttraumatic stress disorder (PTSD), the current memory of a negative event, not the event itself, determines symptoms. The model is an alternative to the current event-based etiology of PTSD represented in the Diagnostic and Statistical Manual of Mental Disorders (4th ed., text rev.; American Psychiatric Association, 2000). The model accounts for important and reliable findings that are often inconsistent with the current diagnostic view and that have been neglected by theoretical accounts of the disorder, including the following observations. The diagnosis needs objective information about the trauma and peritraumatic emotions but uses retrospective memory reports that can have substantial biases. Negative events and emotions that do not satisfy the current diagnostic criteria for a trauma can be followed by symptoms that would otherwise qualify for PTSD. Predisposing factors that affect the current memory have large effects on symptoms. The inability-to-recall-an-important-aspect-of-the-trauma symptom does not correlate with other symptoms. Loss or enhancement of the trauma memory affects PTSD symptoms in predictable ways. Special mechanisms that apply only to traumatic memories are not needed, increasing parsimony and the knowledge that can be applied to understanding PTSD.

429 citations


Journal ArticleDOI
TL;DR: The authors present a general theory, as well as a working computational model, that explains many findings that are problematic for limited-capacity accounts, including a new experiment showing that the attentional blink can be postponed.
Abstract: What is the time course of visual attention? Attentional blink studies have found that the 2nd of 2 targets is often missed when presented within about 500 ms from the 1st target, resulting in theories about relatively long-lasting capacity limitations or bottlenecks. Earlier studies, however, reported quite the opposite finding: Attention is transiently enhanced, rather than reduced, for several hundreds of milliseconds after a relevant event. The authors present a general theory, as well as a working computational model, that integrate these findings. There is no central role for capacity limitations or bottlenecks. Central is a rapidly responding gating system (or attentional filter) that seeks to enhance relevant and suppress irrelevant information. When items sufficiently match the target description, they elicit transient excitatory feedback activity (a "boost" function), meant to provide access to working memory. However, in the attentional blink task, the distractor after the target is accidentally boosted, resulting in subsequent strong inhibitory feedback response (a "bounce"), which, in effect, closes the gate to working memory. The theory explains many findings that are problematic for limited-capacity accounts, including a new experiment showing that the attentional blink can be postponed.

383 citations


Journal ArticleDOI
TL;DR: The agreement between human and simulated eye movements suggests that a fixed-parameter model using spatiochromatic filters and a simulated retina, when driven by the correct visual routines, can be a good general-purpose predictor of human target acquisition behavior.
Abstract: The gaze movements accompanying target localization were examined in human observers and in a computational model (target acquisition model [TAM]). Search contexts ranged from fully realistic scenes to toys in a crib to Os and Qs, and manipulations included set size, target eccentricity, and target-distractor similarity. Observers and the model always previewed the same targets and searched identical displays. Behavioral and simulated eye movements were analyzed for acquisition accuracy, efficiency, and target guidance. TAM's behavior generally fell within the behavioral mean's 95% confidence interval for all measures in each experiment/condition. This agreement suggests that a fixed-parameter model using spatiochromatic filters and a simulated retina, when driven by the correct visual routines, can be a good general-purpose predictor of human target acquisition behavior.

371 citations


Journal ArticleDOI
TL;DR: Eleven new paradoxes show where prospect theories lead to self-contradiction or systematic false predictions; the new findings are consistent with, and in several cases were predicted in advance by, simple "configural weight" models in which probability-consequence branches are weighted by a function that depends on branch probability and ranks of consequences on discrete branches.
Abstract: During the last 25 years, prospect theory and its successor, cumulative prospect theory, replaced expected utility as the dominant descriptive theories of risky decision making. Although these models account for the original Allais paradoxes, 11 new paradoxes show where prospect theories lead to self-contradiction or systematic false predictions. The new findings are consistent with and, in several cases, were predicted in advance by simple "configural weight" models in which probability-consequence branches are weighted by a function that depends on branch probability and ranks of consequences on discrete branches. Although they have some similarities to later models called "rank-dependent utility," configural weight models do not satisfy coalescing, the assumption that branches leading to the same consequence can be combined by adding their probabilities. Nor do they satisfy cancellation, the "independence" assumption that branches common to both alternatives can be removed. The transfer of attention exchange model, with parameters estimated from previous data, correctly predicts results with all 11 new paradoxes. Apparently, people do not frame choices as prospects but, instead, as trees with branches.
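
The failure of coalescing is easy to demonstrate with a toy configural-weight scheme (an illustration of the general idea, not Birnbaum's transfer of attention exchange model): when a branch's weight depends on its own probability and rank, splitting a branch changes the gamble's value.

```python
def configural_value(branches, delta=0.3):
    """Toy configural-weight valuation (an illustration of the general idea,
    not Birnbaum's TAX model): each (probability, outcome) branch gets its
    probability as weight, shaded toward lower-ranked outcomes, and the
    gamble's value is the weighted average of outcomes."""
    ranked = sorted(branches, key=lambda b: b[1])   # worst outcome first
    n = len(ranked)
    weights = [p * (1 + delta * (n - 1 - i) / max(n - 1, 1))
               for i, (p, _) in enumerate(ranked)]
    z = sum(weights)
    return sum(w * x for w, (_, x) in zip(weights, ranked)) / z

# Coalescing fails: the same prospect with one branch split in two
# (0.15 vs. 0.10 + 0.05, both on the outcome 0) gets a different value.
print(configural_value([(0.85, 100), (0.15, 0)]))             # ~81.3
print(configural_value([(0.85, 100), (0.10, 0), (0.05, 0)]))  # ~81.9
```

Under expected utility, or any model satisfying coalescing, the two printed values would be identical.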

336 citations


Journal ArticleDOI
TL;DR: The authors extend R. Ratcliff's (1981) theory of order relations for encoding of letter positions and show that the model can successfully deal with effects of letter transposition, letter migration, repeated letters, and subset/superset stimuli.
Abstract: Recent research has shown that letter identity and letter position are not integral perceptual dimensions (e.g., jugde primes judge in word-recognition experiments). Most comprehensive computational models of visual word recognition (e.g., the interactive activation model, J. L. McClelland & D. E. Rumelhart, 1981, and its successors) assume that the position of each letter within a word is perfectly encoded. Thus, these models are unable to explain the presence of effects of letter transposition (trial-trail), letter migration (beard-bread), repeated letters (moose-mouse), or subset/superset effects (faulty-faculty). The authors extend R. Ratcliff's (1981) theory of order relations for encoding of letter positions and show that the model can successfully deal with these effects. The basic assumption is that letters in the visual stimulus have distributions over positions so that the representation of one letter will extend into adjacent letter positions. To test the model, the authors conducted a series of forced-choice perceptual identification experiments. The overlap model produced very good fits to the empirical data, and even a simplified 2-parameter model was capable of producing fits for 104 observed data points with a correlation coefficient of .91.
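
The distributed-position assumption can be sketched with a Gaussian falloff over letter positions; the match score below is a crude illustration, not the authors' fitted overlap model:

```python
import math

def position_weight(i, j, sd=1.0):
    # Gaussian falloff: a letter encoded at position i lends some support
    # to nearby positions j, per the distributed-position assumption.
    return math.exp(-((i - j) ** 2) / (2 * sd ** 2))

def overlap_match(stimulus, word, sd=1.0):
    """Crude match score: every stimulus letter supports every identical
    word letter, weighted by the distance between their positions."""
    score = sum(position_weight(i, j, sd)
                for i, s in enumerate(stimulus)
                for j, w in enumerate(word) if s == w)
    return score / len(word)

# A transposed-letter prime keeps most of its support for the base word.
print(overlap_match("jugde", "judge"))  # ~0.84: all letters, near position
print(overlap_match("jupte", "judge"))  # 0.60: fewer shared letters
```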

Journal ArticleDOI
TL;DR: A model of cognitive control in task switching is developed in which controlled performance depends on the system maintaining access to a code in episodic memory representing the most recently cued task, suggesting that episodic task codes play an important role in keeping the cognitive system focused under a variety of performance constraints.
Abstract: A model of cognitive control in task switching is developed in which controlled performance depends on the system maintaining access to a code in episodic memory representing the most recently cued task. The main constraint on access to the current task code is proactive interference from old task codes. This interference and the mechanisms that contend with it reproduce a wide range of behavioral phenomena when simulated, including well-known task-switching effects, such as latency and error switch costs, and effects on which other theories are silent, such as within-run slowing and within-run error increase. The model generalizes across multiple task-switching procedures, suggesting that episodic task codes play an important role in keeping the cognitive system focused under a variety of performance constraints.

Journal ArticleDOI
TL;DR: The authors present a new model of free recall based on M. W. Howard and M. J. Kahana's temporal context model and M. Usher and J. L. McClelland's leaky-accumulator decision model, demonstrating that dissociations between short- and long-term recency can naturally arise from a model in which an internal contextual state is used as the sole cue for retrieval across time scales.
Abstract: The authors present a new model of free recall on the basis of M. W. Howard and M. J. Kahana's temporal context model and M. Usher and J. L. McClelland's leaky-accumulator decision model. In this model, contextual drift gives rise to both short-term and long-term recency effects, and contextual retrieval gives rise to short-term and long-term contiguity effects. Recall decisions are controlled by a race between competitive leaky accumulators. The model captures the dynamics of immediate, delayed, and continual distractor free recall, demonstrating that dissociations between short- and long-term recency can naturally arise from a model in which an internal contextual state is used as the sole cue for retrieval across time scales.
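
The decision stage can be sketched as a race between leaky, competing accumulators in the style of Usher and McClelland's model, with inputs standing in for contextual support; all parameter values are illustrative:

```python
import random

def lca_race(inputs, kappa=0.2, beta=0.2, threshold=1.0,
             dt=0.01, noise_sd=0.05, max_steps=100000):
    """Race between leaky, competing accumulators (after Usher &
    McClelland's decision model; all parameter values illustrative).
    Returns the index of the first accumulator to reach threshold."""
    x = [0.0] * len(inputs)
    for step in range(max_steps):
        total = sum(x)
        for i, rho in enumerate(inputs):
            inhibition = beta * (total - x[i])        # competition
            dx = (rho - kappa * x[i] - inhibition) * dt
            x[i] = max(0.0, x[i] + dx + random.gauss(0.0, noise_sd) * dt ** 0.5)
        for i, xi in enumerate(x):
            if xi >= threshold:
                return i, step
    return None, max_steps

# Inputs stand in for contextual support; item 0 usually wins the race.
print(lca_race([1.0, 0.6, 0.4]))
```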

Journal ArticleDOI
TL;DR: The authors of the current article show that the problem of how the cognitive system knows where to intervene when conflict is detected can be solved by implementing cognitive control as a conflict-modulated Hebbian learning rule.
Abstract: The conflict monitoring model of M. M. Botvinick, T. S. Braver, D. M. Barch, C. S. Carter, and J. D. Cohen (2001) triggered several research programs investigating various aspects of cognitive control. One problematic aspect of the Botvinick et al. model is that there is no clear account of how the cognitive system knows where to intervene when conflict is detected. As a result, recent findings of task-specific and context-specific (e.g., item-specific) adaptation are difficult to interpret. The difficulty with item-specific adaptation was recently pointed out by C. Blais, S. Robidoux, E. F. Risko, and D. Besner (2007), who proposed an alternative model that could account for this. However, the same problem of where the cognitive system should intervene resurfaces in a different shape in this model, and it has difficulty in accounting for the Gratton effect, a hallmark item-nonspecific effect. The authors of the current article show how these problems can be solved when cognitive control is implemented as a conflict-modulated Hebbian learning rule.
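
The gist of such a rule can be shown in a toy two-response network: the learning rate on the task-relevant pathway scales with the conflict just experienced, so control strengthens exactly where conflict arises. All numbers below are invented:

```python
import numpy as np

def trial(w_relevant, congruent):
    """Toy Stroop-like trial: the relevant pathway (strength w_relevant)
    supports response 0; a fixed irrelevant pathway supports response 0
    on congruent trials and response 1 on incongruent ones."""
    net = np.array([w_relevant, 0.0])
    net[0 if congruent else 1] += 0.4
    acts = np.exp(net) / np.exp(net).sum()   # response competition
    return acts, acts[0] * acts[1]           # conflict = co-activation

# Conflict-modulated Hebbian update (a sketch of the general idea): the
# relevant pathway strengthens in proportion to experienced conflict.
w, lr = 0.6, 1.0
for congruent in (True, False):
    acts, c = trial(w, congruent)
    print(f"congruent={congruent}: conflict={c:.3f}, "
          f"updated weight={w + lr * c * acts[0]:.3f}")
```

The incongruent trial produces more conflict and hence a larger boost to the task-relevant weights, which is the kind of trial-to-trial adaptation the Gratton effect requires.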

Journal ArticleDOI
TL;DR: The model integrates disparate health and social psychology literatures to elucidate how the conscious and nonconscious awareness of death can influence the motivational orientation that is most operative in the context of health decisions.
Abstract: This article introduces a terror management health model (TMHM). The model integrates disparate health and social psychology literatures to elucidate how the conscious and nonconscious awareness of death can influence the motivational orientation that is most operative in the context of health decisions. Three formal propositions are presented. Proposition 1 suggests that conscious thoughts about death can instigate health-oriented responses aimed at removing death-related thoughts from current focal attention. Proposition 2 suggests that the unconscious resonance of death-related cognition promotes self-oriented defenses directed toward maintaining, not one's health, but a sense of meaning and self-esteem. The last proposition suggests that confrontations with the physical body may undermine symbolic defenses and thus present a previously unrecognized barrier to health promotion activities. In the context of each proposition, moderators are proposed, research is reviewed, and implications for health promotion are discussed.

Journal ArticleDOI
TL;DR: The Quadruple process model is a multinomial model that provides quantitative estimates of 4 distinct processes in a single task that offers insights into many central questions surrounding the operation and the interaction of automatic and controlled processes.
Abstract: The distinction between automatic processes and controlled processes is a central organizational theme across areas of psychology. However, this dichotomy conceals important differences among qualitatively different processes that independently contribute to ongoing behavior. The Quadruple process model is a multinomial model that provides quantitative estimates of 4 distinct processes in a single task: the likelihood that an automatic response tendency is activated; the likelihood that a contextually appropriate response can be determined; the likelihood that automatic response tendencies are overcome when necessary; and the likelihood that in the absence of other information, behavior is driven by a general response bias. The model integrates dual-process models from many domains of inquiry and offers a generalized, more nuanced framework of impulse regulation across these domains. The model offers insights into many central questions surrounding the operation and the interaction of automatic and controlled processes. Applications of the model to empirical and theoretical concerns in a variety of areas of psychology are discussed.
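
As a multinomial processing tree, the model yields closed-form response probabilities from its four parameters. The sketch below writes out one plausible reading of the tree (the paper should be consulted for the exact architecture; parameter values are invented):

```python
def quad_correct(ac, d, ob, g, compatible):
    """P(correct) from a Quad-style multinomial processing tree (one
    plausible way to write the tree; see the paper for the exact
    architecture). ac: automatic activation, d: detection of the correct
    response, ob: overcoming bias, g: guessing."""
    if compatible:
        # The automatic tendency and the detected response agree.
        return ac + (1 - ac) * (d + (1 - d) * g)
    return (ac * d * ob                 # bias detected and overcome
            + (1 - ac) * d              # no bias; correct response detected
            + (1 - ac) * (1 - d) * g)   # no bias, no detection: guessing

print(quad_correct(0.4, 0.8, 0.7, 0.5, compatible=True))   # 0.94
print(quad_correct(0.4, 0.8, 0.7, 0.5, compatible=False))  # 0.764
```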

Journal ArticleDOI
TL;DR: Three hypotheses of forgetting--time-based decay, decreasing temporal distinctiveness, and interference--were tested; the interference-based model fit the data best, and the authors conclude that purely temporal views of forgetting are inadequate.
Abstract: Three hypotheses of forgetting from immediate memory were tested: time-based decay, decreasing temporal distinctiveness, and interference. The hypotheses were represented by 3 models of serial recall: the primacy model, the SIMPLE (scale-independent memory, perception, and learning) model, and the SOB (serial order in a box) model, respectively. The models were fit to 2 experiments investigating the effect of filled delays between items at encoding or at recall. Short delays between items, filled with articulatory suppression, led to massive impairment of memory relative to a no-delay baseline. Extending the delays had little additional effect, suggesting that the passage of time alone does not cause forgetting. Adding a choice reaction task in the delay periods to block attention-based rehearsal did not change these results. The interference-based SOB fit the data best; the primacy model overpredicted the effect of lengthening delays, and SIMPLE was unable to explain the effect of delays at encoding. The authors conclude that purely temporal views of forgetting are inadequate.

Journal ArticleDOI
TL;DR: The authors conducted a comprehensive review of the 5 most prominent observer models through the development of a common formalism and found the perceptual template model provided the best account of all the empirical data in the visual domain.
Abstract: External noise methods and observer models have been widely used to characterize the intrinsic perceptual limitations of human observers and changes of the perceptual limitations associated with cognitive, developmental, and disease processes by highlighting the variance of internal representations. The authors conducted a comprehensive review of the 5 most prominent observer models through the development of a common formalism. They derived new predictions of the models for a common set of behavioral tests that were compared with the data in the literature and a new experiment. The comparison between the model predictions and the empirical data resulted in very strong constraints on the observer models. The perceptual template model provided the best account of all the empirical data in the visual domain. The choice of the observer model has significant implications for the interpretation of data from other external noise paradigms, as well as studies using external noise to assay changes of perceptual limitations associated with observer states. The empirical and theoretical development suggests possible parallel developments in other sensory modalities and studies of high-level cognitive processes.
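
The flavor of this model class can be conveyed with a simplified perceptual-template-style observer equation; the form below is illustrative rather than the paper's exact parameterization:

```python
import numpy as np

def d_prime(contrast, n_ext, beta=1.5, gamma=2.0, n_mul=0.3, n_add=0.01):
    """Simplified perceptual-template-style observer (illustrative form,
    not the paper's exact parameterization): template-amplified signal
    energy against external, multiplicative, and additive noise."""
    signal = (beta * contrast) ** gamma
    noise = np.sqrt(n_ext ** (2 * gamma) + n_mul ** 2 * signal ** 2
                    + n_add ** 2)
    return signal / noise

# At high external noise the observer needs proportionally more contrast
# to hold d' constant, producing the classic threshold-vs-noise elbow.
for n_ext in (0.0, 0.1, 0.3):
    print(f"external noise {n_ext}: d' = {d_prime(0.2, n_ext):.2f}")
```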

Journal ArticleDOI
TL;DR: Research in the areas of aggression, ethnocentrism, sexuality, reward seeking, and emotion regulation is reviewed, along with evidence indicating that evolutionary pressure for cooperation may be a critical adaptive function accounting for the evolution of explicit processing.
Abstract: This article analyzes the effortful control of automatic processing related to social and emotional behavior, including control over evolved modules designed to solve problems of survival and reproduction that were recurrent over evolutionary time. The inputs to effortful control mechanisms include a wide range of nonrecurrent information--information resulting not from evolutionary regularities but from explicit appraisals of costs and benefits. Effortful control mechanisms are associated with the ventromedial prefrontal cortex and the ventral anterior cingulate cortex. These mechanisms are largely separate from mechanisms of cognitive control (termed executive function) and working memory, and they enable effortful control of behavior in the service of long-range goals. Individual differences in effortful control are associated with measures of conscientiousness in the Five Factor Model of personality. Research in the areas of aggression, ethnocentrism, sexuality, reward seeking, and emotion regulation is reviewed indicating effortful control of automatic, implicit processing based on explicit appraisals of the context. Evidence is reviewed indicating that evolutionary pressure for cooperation may be a critical adaptive function accounting for the evolution of explicit processing.

Journal ArticleDOI
TL;DR: The findings suggest that sampling rates typically used by developmental researchers may be inadequate to accurately depict patterns of variability and the shape of developmental change, and therefore may seriously compromise theories of development.
Abstract: Developmental trajectories provide the empirical foundation for theories about change processes during development. However, the ability to distinguish among alternative trajectories depends on how frequently observations are sampled. This study used real behavioral data, with real patterns of variability, to examine the effects of sampling at different intervals on characterization of the underlying trajectory. Data were derived from a set of 32 infant motor skills indexed daily during the first 18 months. Larger sampling intervals (2-31 days) were simulated by systematically removing observations from the daily data and interpolating over the gaps. Infrequent sampling caused decreasing sensitivity to fluctuations in the daily data: Variable trajectories erroneously appeared as step functions, and estimates of onset ages were increasingly off target. Sensitivity to variation decreased as an inverse power function of sampling interval, resulting in severe degradation of the trajectory with intervals longer than 7 days. These findings suggest that sampling rates typically used by developmental researchers may be inadequate to accurately depict patterns of variability and the shape of developmental change. Inadequate sampling regimes therefore may seriously compromise theories of development.
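
The resampling logic is straightforward to reproduce: keep every nth daily observation, interpolate across the gaps, and measure how far the reconstruction strays from the daily record. The toy trajectory below is invented, not the infant motor data:

```python
import numpy as np

def resample(daily, interval):
    """Keep every `interval`-th daily observation and linearly interpolate
    over the gaps, mimicking a sparser sampling regime."""
    days = np.arange(len(daily))
    kept = days[::interval]
    return np.interp(days, kept, daily[kept])

# Invented toy trajectory: a skill that flickers on and off before
# stabilizing, like the variable daily onset patterns described above.
rng = np.random.default_rng(0)
daily = (rng.random(120) < np.linspace(0.05, 0.95, 120)).astype(float)

for interval in (1, 7, 14, 31):
    err = np.abs(resample(daily, interval) - daily).mean()
    print(f"sampling every {interval:2d} days: mean abs error = {err:.3f}")
```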

Journal ArticleDOI
TL;DR: The article presents a Bayesian model of causal learning that incorporates generic priors--systematic assumptions about abstract properties of a system of cause-effect relations--and explains why human judgments of causal structure are influenced more by causal power and the base rate of the effect and less by sample size.
Abstract: The article presents a Bayesian model of causal learning that incorporates generic priors—systematic assumptions about abstract properties of a system of cause–effect relations. The proposed generic priors for causal learning favor sparse and strong (SS) causes—causes that are few in number and high in their individual powers to produce or prevent effects. The SS power model couples these generic priors with a causal generating function based on the assumption that unobservable causal influences on an effect operate independently (P. W. Cheng, 1997). The authors tested this and other Bayesian models, as well as leading nonnormative models, by fitting multiple data sets in which several parameters were varied parametrically across multiple types of judgments. The SS power model accounted for data concerning judgments of both causal strength and causal structure (whether a causal link exists). The model explains why human judgments of causal structure (relative to a Bayesian model lacking these generic priors) are influenced more by causal power and the base rate of the effect and less by sample size. Broader implications of the Bayesian framework for human learning are discussed.
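
Two ingredients translate directly into code: Cheng's (1997) noisy-OR generating function and a grid posterior under a prior favoring a weak background and a strong candidate cause. The exponential prior below only illustrates the sparse-and-strong idea (the published form differs in detail), and the trial counts are invented:

```python
import numpy as np

def p_effect(w_b, w_c, cause_present):
    # Noisy-OR generating function (P. W. Cheng, 1997): independent
    # generative causes; the effect occurs unless every cause fails.
    return 1 - (1 - w_b) * ((1 - w_c) if cause_present else 1.0)

def log_like(w_b, w_c, e_c, n_c, e_nc, n_nc):
    p1, p0 = p_effect(w_b, w_c, True), p_effect(w_b, w_c, False)
    return (e_c * np.log(p1) + (n_c - e_c) * np.log(1 - p1)
            + e_nc * np.log(p0) + (n_nc - e_nc) * np.log(1 - p0))

# Grid posterior with an exponential prior favoring a weak background
# and a strong candidate cause (sparse-and-strong, illustrative form).
w = np.linspace(0.01, 0.99, 99)
wb, wc = np.meshgrid(w, w, indexing="ij")
post = np.exp(-5 * wb - 5 * (1 - wc)
              + log_like(wb, wc, e_c=7, n_c=10, e_nc=2, n_nc=10))
post /= post.sum()
i, j = np.unravel_index(post.argmax(), post.shape)
print(f"MAP: w_background = {w[i]:.2f}, w_cause = {w[j]:.2f}")
```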

Journal ArticleDOI
TL;DR: A general model of human judgment is introduced aimed at describing how people generate hypotheses from memory and how these hypotheses serve as the basis of probability judgment and hypothesis testing.
Abstract: Diagnostic hypothesis-generation processes are ubiquitous in human reasoning. For example, clinicians generate disease hypotheses to explain symptoms and help guide treatment, auditors generate hypotheses for identifying sources of accounting errors, and laypeople generate hypotheses to explain patterns of information (i.e., data) in the environment. The authors introduce a general model of human judgment aimed at describing how people generate hypotheses from memory and how these hypotheses serve as the basis of probability judgment and hypothesis testing. In 3 simulation studies, the authors illustrate the properties of the model, as well as its applicability to explaining several common findings in judgment and decision making, including how errors and biases in hypothesis generation can cascade into errors and biases in judgment.

Journal ArticleDOI
TL;DR: The author conducts a state-trace analysis to determine the dimensionality of the remember-know (RK) task and concludes there is insufficient evidence for the RK task to be used to identify qualitatively different memory components.
Abstract: This article addresses the issue of whether the remember-know (RK) task is best explained by a single-process or a dual-process model. All single-process models propose that remember and know responses reflect different levels of a single strength-of-evidence dimension. Thus, across conditions in which response criteria are held constant, these models predict that the RK task is unidimensional. Many dual-process models propose that remember and know responses reflect two qualitatively distinct processes underlying recognition memory, often characterized as recollection and familiarity. These models predict that the RK task is bidimensional. Using data from 37 studies, the author conducted a state-trace analysis to determine the dimensionality of the RK task. In those studies, non-memory-related differences between conditions were eliminated via decision criteria constrained to be constant across all levels of the independent variables. The results reveal little or no evidence of bidimensionality and lend additional support to the unequal-variance signal detection model. Other arguments supporting a bidimensional interpretation are examined, and the author concludes there is insufficient evidence for the RK task to be used to identify qualitatively different memory components.
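
The logic of state-trace analysis reduces to a monotonicity check: if one latent strength dimension drives both response measures, condition means plotted against each other must fall on a single monotone curve. A minimal check with hypothetical points:

```python
def on_one_monotone_curve(points, tol=0.0):
    """State-trace check: if a single strength dimension drives both
    measures, condition means (x, y) must fall on one monotone curve."""
    ys = [y for _, y in sorted(points)]   # order by the first measure
    return all(b >= a - tol for a, b in zip(ys, ys[1:]))

# Hypothetical condition means for two recognition measures:
print(on_one_monotone_curve([(0.2, 0.30), (0.4, 0.45), (0.6, 0.70)]))  # True
print(on_one_monotone_curve([(0.2, 0.50), (0.4, 0.30), (0.6, 0.70)]))  # False
```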

Journal ArticleDOI
TL;DR: The authors examine the priority heuristic, a recently proposed choice strategy that provides a novel account of how people make risky choices; they identify a number of properties that the priority heuristic should have as a process model and illustrate how they may be tested.
Abstract: Comments on the article by E. Brandstatter, G. Gigerenzer, and R. Hertwig. Resolution of debates in cognition usually comes from the introduction of constraints in the form of new data about either the process or representation. Decision research, in contrast, has relied predominantly on testing models by examining their fit to choices. The authors examine a recently proposed choice strategy, the priority heuristic, which provides a novel account of how people make risky choices. The authors identify a number of properties that the priority heuristic should have as a process model and illustrate how they may be tested. The results, along with prior research, suggest that although the priority heuristic captures some variability in the attention paid to outcomes, it fails to account for major characteristics of the data, particularly the frequent transitions between outcomes and their probabilities. The article concludes with a discussion of the properties that should be captured by process models of risky choice and the role of process data in theory development.

Journal ArticleDOI
TL;DR: The authors of the present article specify minimal criteria for psychological plausibility, describe some genuine challenges in the study of heuristics, and conclude that fast and frugal heuristics are psychologically plausible: They use limited search and are tractable and robust.
Abstract: M. R. Dougherty, A. M. Franco-Watkins, and R. Thomas (2008) conjectured that fast and frugal heuristics need an automatic frequency counter for ordering cues. In fact, only a few heuristics order cues, and these orderings can arise from evolutionary, social, or individual learning, none of which requires automatic frequency counting. The idea that cue validities cannot be computed because memory does not encode missing information is misinformed; it implies that measures of co-occurrence are incomputable and would invalidate most theories of cue learning. They also questioned the recognition heuristic's psychological plausibility on the basis of their belief that it has not been implemented in a memory model, although it actually has been implemented in ACT-R (L. J. Schooler & R. Hertwig, 2005). On the positive side, M. R. Dougherty et al. discovered a new mechanism for a less-is-more effect. The authors of the present article specify minimal criteria for psychological plausibility, describe some genuine challenges in the study of heuristics, and conclude that fast and frugal heuristics are psychologically plausible: They use limited search and are tractable and robust.

Journal ArticleDOI
TL;DR: The authors evaluate the psychological plausibility of the assumptions upon which probabilistic mental models (PMM) theory was built and, consequently, of several fast and frugal heuristics, concluding that the heuristics are, in fact, psychologically implausible.
Abstract: The theory of probabilistic mental models (PMM; G. Gigerenzer, U. Hoffrage, & H. Kleinbolting, 1991) has had a major influence on the field of judgment and decision making, with the most recent important modifications to PMM theory being the identification of several fast and frugal heuristics (G. Gigerenzer & D. G. Goldstein, 1996). These heuristics were purported to provide psychologically plausible cognitive process models that describe a variety of judgment behavior. In this article, the authors evaluate the psychological plausibility of the assumptions upon which PMM were built and, consequently, the psychological plausibility of several of the fast and frugal heuristics. The authors argue that many of PMM theory's assumptions are questionable, given available data, and that fast and frugal heuristics are, in fact, psychologically implausible.

Journal ArticleDOI
TL;DR: The authors present a formalization and extension of the comparator hypothesis, which results in sharpened differentiation between it and the new learning-focused models.
Abstract: Cue competition is one of the most studied phenomena in associative learning. However, a theoretical disagreement has long stood over whether it reflects a learning or performance deficit. The comparator hypothesis, a model of expression of Pavlovian associations, posits that learning is not subject to competition but that performance reflects a complex interaction of encoded associative strengths. That is, subjects respond to a cue to the degree that it signals a change in the likelihood or magnitude of reinforcement relative to that in the cue's absence. Initially, this performance-focused view was supported by studies showing that posttraining revaluation of a competing cue often influences responding to the target cue. However, recently developed learning-focused accounts of retrospective revaluation have revitalized the debate concerning cue competition. Further complicating the picture are phenomena of cue facilitation, which have been addressed less frequently than cue competition by formal models of conditioning of either class. The authors present a formalization and extension of the comparator hypothesis, which results in sharpened differentiation between it and the new learning-focused models.
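
The comparator computation itself is compact: responding to the target tracks its direct association with the US relative to the indirect pathway through the comparator cue. The values and the scaling constant k below are illustrative:

```python
def comparator_response(v_target_us, v_target_comp, v_comp_us, k=1.0):
    """Toy comparator computation (a sketch of the hypothesis's core
    idea): responding tracks the target-US association relative to the
    indirect target -> comparator -> US pathway. Values are invented."""
    return v_target_us - k * v_target_comp * v_comp_us

# Posttraining extinction of the comparator cue (lowering v_comp_us)
# raises responding to the target with no new target learning at all.
print(comparator_response(0.6, 0.8, 0.7))  # strong competition: 0.04
print(comparator_response(0.6, 0.8, 0.1))  # after extinction:   0.52
```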

Journal ArticleDOI
TL;DR: The LIST PARSE neural model quantitatively simulates human cognitive data about immediate serial recall and free recall, and monkey neurophysiological data from the prefrontal cortex obtained during sequential sensory-motor imitation and planned performance, and clarifies why spatial and non-spatial working memories share the same type of circuit design.
Abstract: How does the brain carry out working memory storage, categorization, and voluntary performance of event sequences? The LIST PARSE neural model proposes an answer that unifies the explanation of cognitive, neurophysiological, and anatomical data. It quantitatively simulates human cognitive data about immediate serial recall and free recall, and monkey neurophysiological data from the prefrontal cortex obtained during sequential sensory-motor imitation and planned performance. The model clarifies why spatial and non-spatial working memories share the same type of circuit design. It proposes how laminar circuits of lateral prefrontal cortex carry out working memory storage of event sequences within layers 6 and 4, how these event sequences are unitized through learning into list chunks within layer 2/3, and how these stored sequences can be recalled at variable rates that are under volitional control by the basal ganglia. These laminar prefrontal circuits are variations of visual cortical circuits that explained data about how the brain sees. These examples from visual and prefrontal cortex illustrate how laminar neocortex can represent both spatial and temporal information, and open the way towards understanding how other behaviors derive from shared laminar neocortical designs.

Journal ArticleDOI
TL;DR: The authors develop a probabilistic model of grouping by proximity that allows measurement of grouping strength on a ratio scale, and show that the strength of the conjoint effect of 2 grouping principles--grouping by proximity and grouping by similarity--is equal to the sum of their separate effects.
Abstract: The authors investigated whether the gestalt grouping principles can be quantified and whether the conjoint effects of two grouping principles operating at the same time on the same stimuli differ from the sum of their individual effects. After reviewing earlier attempts to discover how grouping principles interact, they developed a probabilistic model of grouping by proximity, which allows measurement of strength on a ratio scale. Then, in 3 experiments using dot lattices, they showed that the strength of the conjoint effect of 2 grouping principles--grouping by proximity and grouping by similarity--is equal to the sum of their separate effects. They propose a physiologically plausible model of this law.
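
A proximity model of the general kind described can be sketched by letting attraction decay exponentially with relative inter-dot distance; the functional form and the parameter s below are simplified stand-ins, not the paper's fitted model:

```python
import math

def grouping_probs(distances, s=4.0):
    """Probability of perceiving each candidate organization of a dot
    lattice, assuming attraction decays exponentially with relative
    inter-dot distance (a simplified stand-in for the fitted model)."""
    d0 = min(distances)
    weights = [math.exp(-s * (d / d0 - 1)) for d in distances]
    z = sum(weights)
    return [round(w / z, 3) for w in weights]

# Two candidate organizations: strips of dots 1.0 vs. 1.3 units apart.
print(grouping_probs([1.0, 1.3]))  # the closer spacing dominates
```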