
Showing papers in "Psychological Review in 2017"


Journal ArticleDOI
TL;DR: A general Bayesian framework is presented in which self-evaluation is cast as a “second-order” inference on a coupled but distinct decision system, computationally equivalent to inferring the performance of another actor.
Abstract: People are often aware of their mistakes, and report levels of confidence in their choices that correlate with objective performance. These metacognitive assessments of decision quality are important for the guidance of behavior, particularly when external feedback is absent or sporadic. However, a computational framework that accounts for both confidence and error detection is lacking. In addition, accounts of dissociations between performance and metacognition have often relied on ad hoc assumptions, precluding a unified account of intact and impaired self-evaluation. Here we present a general Bayesian framework in which self-evaluation is cast as a "second-order" inference on a coupled but distinct decision system, computationally equivalent to inferring the performance of another actor. Second-order computation may ensue whenever there is a separation between internal states supporting decisions and confidence estimates over space and/or time. We contrast second-order computation against simpler first-order models in which the same internal state supports both decisions and confidence estimates. Through simulations we show that second-order computation provides a unified account of different types of self-evaluation often considered in separate literatures, such as confidence and error detection, and generates novel predictions about the contribution of one's own actions to metacognitive judgments. In addition, the model provides insight into why subjects' metacognition may sometimes be better or worse than task performance. We suggest that second-order computation may underpin self-evaluative judgments across a range of domains. (PsycINFO Database Record
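The second-order idea lends itself to a small simulation: a confidence system receives its own noisy sample of the evidence and infers the probability that the decision system's choice was correct, just as it would when judging another actor. The sketch below is a toy parameterization of that idea, not the authors' code; the noise levels, the ±1 stimulus coding, and the conditional independence of the two samples are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sigma_dec, sigma_conf = 1.0, 1.0      # illustrative noise levels for the two systems

def trial(d):
    x_dec = d + rng.normal(0, sigma_dec)     # evidence sample driving the choice
    x_conf = d + rng.normal(0, sigma_conf)   # distinct sample driving confidence
    a = 1 if x_dec > 0 else -1               # first-order decision
    # Second-order inference: P(stimulus = a | x_conf, choice a), treating the
    # decision system like another actor whose choice is itself informative.
    like = {s: norm.pdf(x_conf, s, sigma_conf)   # p(x_conf | stimulus = s)
               * norm.cdf(a * s / sigma_dec)     # p(choice = a | stimulus = s)
            for s in (-1, 1)}
    return a, like[a] / (like[1] + like[-1])

stimuli = rng.choice([-1, 1], size=5000)
results = [trial(d) for d in stimuli]
correct = np.array([a == d for (a, _), d in zip(results, stimuli)])
confidence = np.array([c for _, c in results])
# Confidence is graded and drops on error trials, a simple form of error detection.
print(confidence[correct].mean(), confidence[~correct].mean())
```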

359 citations


Journal ArticleDOI
TL;DR: The theory begins by specifying basic needs and by suggesting how, as people pursue need-fulfilling goals, they build mental representations of their experiences that serve as the basis of both motivation and personality.
Abstract: Drawing on both classic and current approaches, I propose a theory that integrates motivation, personality, and development within one framework, using a common set of principles and mechanisms. The theory begins by specifying basic needs and by suggesting how, as people pursue need-fulfilling goals, they build mental representations of their experiences (beliefs, representations of emotions, and representations of action tendencies). I then show how these needs, goals, and representations can serve as the basis of both motivation and personality, and can help to integrate disparate views of personality. The article builds on this framework to provide a new perspective on development, particularly on the forces that propel development and the roles of nature and nurture. I argue throughout that the focus on representations provides an important entry point for change and growth. (PsycINFO Database Record

288 citations


Journal ArticleDOI
TL;DR: It is proposed that ingroup identification, ingroup norms and goals, and collective efficacy determine environmental appraisals as well as both private and public sphere environmental action.
Abstract: Large-scale environmental crises are genuinely collective phenomena: they usually result from collective, rather than personal, behavior and how they are cognitively represented and appraised is determined by collectively shared interpretations (e.g., differing across ideological groups) and based on concern for collectives (e.g., humankind, future generations) rather than for individuals. Nevertheless, pro-environmental action has been primarily investigated as a personal decision-making process. We complement this research with a social identity perspective on pro-environmental action. Social identity is the human capacity to define the self in terms of "We" instead of "I," enabling people to think and act as collectives, which should be crucial given personal insufficiency to appraise and effectively respond to environmental crises. We propose a Social Identity Model of Pro-Environmental Action (SIMPEA) of how social identity processes affect both appraisal of and behavioral responses to large-scale environmental crises. We review related and pertinent research providing initial evidence for the role of 4 social identity processes hypothesized in SIMPEA. Specifically, we propose that ingroup identification, ingroup norms and goals, and collective efficacy determine environmental appraisals as well as both private and public sphere environmental action. These processes are driven by personal and collective emotions and motivations that arise from environmental appraisal and operate on both a deliberate and automatic processing level. Finally, we discuss SIMPEA's implications for the research agenda in environmental and social psychology and for interventions fostering pro-environmental action. (PsycINFO Database Record

247 citations


Journal ArticleDOI
TL;DR: The model was fit to data from 4 continuous-reproduction experiments testing working memory for colors or orientations; it fit the data better than 2 competing models, the Slot-Averaging model and the Variable-Precision resource model, and fared well in comparison to several new models incorporating alternative theoretical assumptions.
Abstract: The article introduces an interference model of working memory for information in a continuous similarity space, such as the features of visual objects. The model incorporates the following assumptions: (a) Probability of retrieval is determined by the relative activation of each retrieval candidate at the time of retrieval; (b) activation comes from 3 sources in memory: cue-based retrieval using context cues, context-independent memory for relevant contents, and noise; (c) 1 memory object and its context can be held in the focus of attention, where it is represented with higher precision, and partly shielded against interference. The model was fit to data from 4 continuous-reproduction experiments testing working memory for colors or orientations. The experiments involved variations of set size, kind of context cues, precueing, and retro-cueing of the to-be-tested item. The interference model fit the data better than 2 competing models, the Slot-Averaging model and the Variable-Precision resource model. The interference model also fared well in comparison to several new models incorporating alternative theoretical assumptions. The experiments confirm 3 novel predictions of the interference model: (a) Nontargets intrude in recall to the extent that they are close to the target in context space; (b) similarity between target and nontarget features improves recall, and (c) precueing-but not retro-cueing-the target substantially reduces the set-size effect. The success of the interference model shows that working memory for continuous visual information works according to the same principles as working memory for more discrete (e.g., verbal) contents. Data and model codes are available at https://osf.io/wgqd5/. (PsycINFO Database Record
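Assumption (a), retrieval governed by the relative activation of each candidate, can be written down in a few lines. The sketch below is a discrete toy version of that rule only; the weights, the uniform background term, and the candidate set are illustrative assumptions, not the published continuous model.

```python
import numpy as np

def retrieval_probs(cue_match, w_cue=1.0, w_background=0.3, noise=0.1):
    # Activation = cue-based component + context-independent background + noise,
    # normalized over all retrieval candidates (a Luce choice rule).
    activation = w_cue * np.asarray(cue_match) + w_background + noise
    return activation / activation.sum()

# Candidate 0 is the target (best context match); candidates 1 and 2 are nontargets
# whose contexts are near or far from the probed context, respectively.
print(retrieval_probs([1.0, 0.6, 0.2]))  # nearby nontargets intrude more than distant ones
```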

218 citations


Journal ArticleDOI
TL;DR: The Oxford Utilitarianism Scale, as discussed by the authors, is a scale that measures individual differences in the "negative" (permissive attitude toward instrumental harm) and "positive" (impartial concern for the greater good) dimensions of utilitarian thinking as manifested in the general population.
Abstract: Recent research has relied on trolley-type sacrificial moral dilemmas to study utilitarian versus nonutilitarian modes of moral decision-making. This research has generated important insights into people’s attitudes toward instrumental harm—that is, the sacrifice of an individual to save a greater number. But this approach also has serious limitations. Most notably, it ignores the positive, altruistic core of utilitarianism, which is characterized by impartial concern for the well-being of everyone, whether near or far. Here, we develop, refine, and validate a new scale—the Oxford Utilitarianism Scale—to dissociate individual differences in the ‘negative’ (permissive attitude toward instrumental harm) and ‘positive’ (impartial concern for the greater good) dimensions of utilitarian thinking as manifested in the general population. We show that these are two independent dimensions of proto-utilitarian tendencies in the lay population, each exhibiting a distinct psychological profile. Empathic concern, identification with the whole of humanity, and concern for future generations were positively associated with impartial beneficence but negatively associated with instrumental harm; and although instrumental harm was associated with subclinical psychopathy, impartial beneficence was associated with higher religiosity. Importantly, although these two dimensions were independent in the lay population, they were closely associated in a sample of moral philosophers. Acknowledging this dissociation between the instrumental harm and impartial beneficence components of utilitarian thinking in ordinary people can clarify existing debates about the nature of moral psychology and its relation to moral philosophy as well as generate fruitful avenues for further research.

176 citations


Journal ArticleDOI
TL;DR: A new theoretical framework, the gesture-for-conceptualization hypothesis, is proposed to explain the self-oriented functions of representational gestures; gestures are generated from the same system that generates practical actions, such as object manipulation, but are distinct from practical actions in that they represent information.
Abstract: People spontaneously produce gestures during speaking and thinking. We focus here on gestures that depict or indicate information related to the contents of concurrent speech or thought (i.e., representational gestures). Previous research indicates that such gestures have not only communicative functions, but also self-oriented cognitive functions. In this paper, we propose a new theoretical framework, the Gesture-for-Conceptualization Hypothesis, which explains the self-oriented functions of representational gestures. According to this framework, representational gestures affect cognitive processes in four main ways: gestures activate, manipulate, package and explore spatio-motoric representations for speaking and thinking. These four functions are shaped by gesture’s ability to schematize information, that is, to focus on a small subset of available information that is potentially relevant to the task at hand. The framework is based on the assumption that gestures are generated from the same system that generates practical actions, such as object manipulation; however, gestures are distinct from practical actions in that they represent information. The framework provides a novel, parsimonious and comprehensive account of the self-oriented functions of gestures. We discuss how the framework accounts for gestures that depict abstract or metaphoric content, and we consider implications for the relations between self-oriented and communicative functions of gestures.

170 citations


Journal ArticleDOI
TL;DR: This article analyzes 14 choice anomalies that have been described by different models, including the Allais, St. Petersburg, and Ellsberg paradoxes, and uses a choice prediction competition methodology to clarify the interaction between the different anomalies.
Abstract: Experimental studies of choice behavior document distinct, and sometimes contradictory, deviations from maximization. For example, people tend to overweight rare events in 1-shot decisions under risk, and to exhibit the opposite bias when they rely on past experience. The common explanations of these results assume that the contradicting anomalies reflect situation-specific processes that involve the weighting of subjective values and the use of simple heuristics. The current article analyzes 14 choice anomalies that have been described by different models, including the Allais, St. Petersburg, and Ellsberg paradoxes, and the reflection effect. Next, it uses a choice prediction competition methodology to clarify the interaction between the different anomalies. It focuses on decisions under risk (known payoff distributions) and under ambiguity (unknown probabilities), with and without feedback concerning the outcomes of past choices. The results demonstrate that it is not necessary to assume situation-specific processes. The distinct anomalies can be captured by assuming high sensitivity to the expected return and 4 additional tendencies: pessimism, bias toward equal weighting, sensitivity to payoff sign, and an effort to minimize the probability of immediate regret. Importantly, feedback increases sensitivity to probability of regret. Simple abstractions of these assumptions, variants of the model Best Estimate and Sampling Tools (BEAST), allow surprisingly accurate ex ante predictions of behavior. Unlike the popular models, BEAST does not assume subjective weighting functions or cognitive shortcuts. Rather, it assumes the use of sampling tools and reliance on small samples, in addition to the estimation of the expected values. (PsycINFO Database Record
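The flavor of the account, sensitivity to expected value combined with reliance on small samples, can be illustrated with a toy decision rule. This is a sketch in the spirit of the verbal description, not the published BEAST model; the equal blend of expected value and sample mean, the sample size, and the example lotteries are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def choose(option_a, option_b, k=5):
    # Each option's value blends its expected value with the mean of k simulated draws.
    values = []
    for outcomes, probs in (option_a, option_b):
        ev = float(np.dot(outcomes, probs))
        sample_mean = rng.choice(outcomes, size=k, p=probs).mean()
        values.append(0.5 * ev + 0.5 * sample_mean)
    return int(np.argmax(values))  # 0 -> option_a, 1 -> option_b

# Rare large gain vs. a sure small gain with equal expected value: small samples often
# miss the rare outcome, mimicking underweighting of rare events in decisions from experience.
a = ([100.0, 0.0], [0.05, 0.95])
b = ([5.0], [1.0])
print(np.mean([choose(a, b) for _ in range(2000)]))  # proportion choosing the sure thing
```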

137 citations


Journal ArticleDOI
TL;DR: This model can explain not only people’s availability bias in judging the frequency of extreme events but also a wide range of cognitive biases in decisions from experience, decisions from description, and memory recall.
Abstract: People's decisions and judgments are disproportionately swayed by improbable but extreme eventualities, such as terrorism, that come to mind easily. This article explores whether such availability biases can be reconciled with rational information processing by taking into account the fact that decision makers value their time and have limited cognitive resources. Our analysis suggests that to make optimal use of their finite time, decision makers should overrepresent the most important potential consequences relative to less important but potentially more probable outcomes. To evaluate this account, we derive and test a model we call utility-weighted sampling. Utility-weighted sampling estimates the expected utility of potential actions by simulating their outcomes. Critically, outcomes with more extreme utilities have a higher probability of being simulated. We demonstrate that this model can explain not only people's availability bias in judging the frequency of extreme events but also a wide range of cognitive biases in decisions from experience, decisions from description, and memory recall.
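The key mechanism, simulating outcomes in proportion to the product of their probability and the magnitude of their utility and then reweighting them, can be sketched as self-normalized importance sampling. The outcomes, probabilities, and sample size below are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

rng = np.random.default_rng(2)

outcomes = np.array([-1000.0, 1.0])   # a rare catastrophe vs. a common small gain
p = np.array([0.001, 0.999])          # true outcome probabilities
u = outcomes                          # assume utility equals the outcome value

q = p * np.abs(u)                     # utility-weighted sampling distribution
q /= q.sum()

idx = rng.choice(len(outcomes), size=20, p=q)   # simulate a handful of outcomes
w = p[idx] / q[idx]                             # importance weights correct the oversampling
eu_hat = np.sum(w * u[idx]) / np.sum(w)         # self-normalized expected-utility estimate

print(q)       # the extreme event is simulated about half the time despite a 0.1% probability
print(eu_hat)  # estimate of expected utility from the reweighted simulations (noisy for small samples)
```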

124 citations


Journal ArticleDOI
TL;DR: The model builds on the claim that analogical reasoning lies at the heart of visual problem solving, and intelligence more broadly, and shows that model operations involving abstraction and rerepresentation are particularly difficult for people, suggesting that these operations may be critical for performing visual problem solving, and reasoning more generally, at the highest level.
Abstract: We present a computational model of visual problem solving, designed to solve problems from the Raven's Progressive Matrices intelligence test. The model builds on the claim that analogical reasoning lies at the heart of visual problem solving, and intelligence more broadly. Images are compared via structure mapping, aligning the common relational structure in 2 images to identify commonalities and differences. These commonalities or differences can themselves be reified and used as the input for future comparisons. When images fail to align, the model dynamically rerepresents them to facilitate the comparison. In our analysis, we find that the model matches adult human performance on the Standard Progressive Matrices test, and that problems which are difficult for the model are also difficult for people. Furthermore, we show that model operations involving abstraction and rerepresentation are particularly difficult for people, suggesting that these operations may be critical for performing visual problem solving, and reasoning more generally, at the highest level. (PsycINFO Database Record

95 citations


Journal ArticleDOI
TL;DR: It is shown that deliberate ignorance exists, is related to risk aversion, and can be explained as avoiding anticipatory regret.
Abstract: Ignorance is generally pictured as an unwanted state of mind, and the act of willful ignorance may raise eyebrows. Yet people do not always want to know, demonstrating a lack of curiosity at odds with theories postulating a general need for certainty, ambiguity aversion, or the Bayesian principle of total evidence. We propose a regret theory of deliberate ignorance that covers both negative feelings that may arise from foreknowledge of negative events, such as death and divorce, and positive feelings of surprise and suspense that may arise from foreknowledge of positive events, such as knowing the sex of an unborn child. We conduct the first representative nationwide studies to estimate the prevalence and predictability of deliberate ignorance for a sample of 10 events. Its prevalence is high: Between 85% and 90% of people would not want to know about upcoming negative events, and 40% to 70% prefer to remain ignorant of positive events. Only 1% of participants consistently wanted to know. We also deduce and test several predictions from the regret theory: Individuals who prefer to remain ignorant are more risk averse and more frequently buy life and legal insurance. The theory also implies the time-to-event hypothesis, which states that for the regret-prone, deliberate ignorance is more likely the nearer the event approaches. We cross-validate these findings using 2 representative national quota samples in 2 European countries. In sum, we show that deliberate ignorance exists, is related to risk aversion, and can be explained as avoiding anticipatory regret. (PsycINFO Database Record

94 citations


Journal ArticleDOI
TL;DR: It is found that semantic relatedness, as quantified by these models, is able to provide a good measure of the associations involved in judgment, and, in turn, predict responses in a large number of existing and novel judgment tasks.
Abstract: I study associative processing in high-level judgment using vector space semantic models. I find that semantic relatedness, as quantified by these models, is able to provide a good measure of the associations involved in judgment, and, in turn, predict responses in a large number of existing and novel judgment tasks. My results shed light on the representations underlying judgment, and highlight the close relationship between these representations and those at play in language and in the assessment of word meaning. In doing so, they show how one of the best-known and most studied theories in decision making research can be formalized to make quantitative a priori predictions, and how this theory can be rigorously tested on a wide range of natural language judgment problems. (PsycINFO Database Record
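A minimal sketch of the predictor at work: cosine similarity between word vectors stands in for the strength of association used in judgment. The three-dimensional vectors below are made up for illustration; in practice they would come from a trained vector space model.

```python
import numpy as np

# Made-up vectors standing in for embeddings from a trained semantic model.
vectors = {
    "doctor": np.array([0.9, 0.1, 0.3]),
    "nurse": np.array([0.8, 0.2, 0.4]),
    "banana": np.array([0.1, 0.9, 0.2]),
}

def relatedness(word1, word2):
    a, b = vectors[word1], vectors[word2]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Higher semantic relatedness is taken to predict stronger association-based judgments.
print(relatedness("doctor", "nurse"), relatedness("doctor", "banana"))
```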

Journal ArticleDOI
TL;DR: This model provides not only an account of when the eyes move, but also what will be fixated, and an analysis of saccade timing alone enables us to predict where people look in a scene.
Abstract: Many of our actions require visual information, and for this it is important to direct the eyes to the right place at the right time. Two or three times every second, we must decide both when and where to direct our gaze. Understanding these decisions can reveal the moment-to-moment information priorities of the visual system and the strategies for information sampling employed by the brain to serve ongoing behavior. Most theoretical frameworks and models of gaze control assume that the spatial and temporal aspects of fixation point selection depend on different mechanisms. We present a single model that can simultaneously account for both when and where we look. Underpinning this model is the theoretical assertion that each decision to move the eyes is an evaluation of the relative benefit expected from moving the eyes to a new location compared with that expected by continuing to fixate the current target. The eyes move when the evidence that favors moving to a new location outweighs that favoring staying at the present location. Our model provides not only an account of when the eyes move, but also what will be fixated. That is, an analysis of saccade timing alone enables us to predict where people look in a scene. Indeed our model accounts for fixation selection as well as (and often better than) current computational models of fixation selection in scene viewing. (PsycINFO Database Record

Journal ArticleDOI
TL;DR: It is argued that prefrontal cortex ontogenetic functional development is best understood through an ecological lens, and this ecological account of PFC functional development provides novel insights into the mechanisms of developmental change, including its catalysts and influences.
Abstract: In this paper, we argue that prefrontal cortex ontogenetic functional development is best understood through an ecological lens. We first begin by reviewing evidence supporting the existing consensus that PFC structural and functional development is protracted based on maturational constraints. We then examine recent findings from neuroimaging studies in infants, early life stress research, and connectomics that support the novel hypothesis that PFC functional development is driven by reciprocal processes of neural adaptation and niche construction. We discuss implications and predictions of this model for redefining the construct of executive functions and for informing typical and atypical child development. This ecological account of PFC functional development moves beyond descriptions of development that are characteristic of existing frameworks, and provides novel insights into the mechanisms of developmental change, including its catalysts and influences. (PsycINFO Database Record

Journal ArticleDOI
TL;DR: This work presents the first model to jointly account for errors and confidence ratings in VWM and could lay the groundwork for understanding the computational mechanisms of metacognition.
Abstract: Although visual working memory (VWM) has been studied extensively, it is unknown how people form confidence judgments about their memories. Peirce (1878) speculated that Fechner's law-which states that sensation is proportional to the logarithm of stimulus intensity-might apply to confidence reports. Based on this idea, we hypothesize that humans map the precision of their VWM contents to a confidence rating through Fechner's law. We incorporate this hypothesis into the best available model of VWM encoding and fit it to data from a delayed-estimation experiment. The model provides an excellent account of human confidence rating distributions as well as the relation between performance and confidence. Moreover, the best-fitting mapping in a model with a highly flexible mapping closely resembles the logarithmic mapping, suggesting that no alternative mapping exists that accounts better for the data than Fechner's law. We propose a neural implementation of the model and find that this model also fits the behavioral data well. Furthermore, we find that jointly fitting memory errors and confidence ratings boosts the power to distinguish previously proposed VWM encoding models by a factor of 5.99 compared to fitting only memory errors. Finally, we show that Fechner's law also accounts for metacognitive judgments in a word recognition memory task, which is a first indication that it may be a general law in metacognition. Our work presents the first model to jointly account for errors and confidence ratings in VWM and could lay the groundwork for understanding the computational mechanisms of metacognition. (PsycINFO Database Record
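The hypothesized mapping is simple enough to state directly: a rating that is linear in the logarithm of memory precision, clipped to the ends of the rating scale. The slope, intercept, and number of rating levels below are illustrative assumptions, not fitted values from the paper.

```python
import numpy as np

def confidence_rating(precision, slope=1.0, intercept=2.0, n_levels=5):
    # Fechner-style mapping: the rating grows with the log of memory precision.
    rating = slope * np.log(precision) + intercept
    return int(np.clip(np.round(rating), 1, n_levels))

# Higher-precision memory contents map onto higher confidence ratings.
for precision in (0.5, 2.0, 8.0, 32.0):
    print(precision, confidence_rating(precision))
```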

Journal ArticleDOI
TL;DR: A recently developed method is used to fit the four extant models of context effects to data from two experiments: one involving consumer goods stimuli, and another involving perceptual stimuli. The results highlight the notion that mathematical tractability, while certainly a convenient feature of any model, should be neither the primary impetus for model development nor the basis for promoting or demoting specific model mechanisms.
Abstract: In accounting for phenomena present in preferential choice experiments, modern models assume a wide array of different mechanisms such as lateral inhibition, leakage, loss aversion, and saliency. These mechanisms create interesting predictions for the dynamics of the deliberation process as well as the aggregate behavior of preferential choice in a variety of contexts. However, the models that embody these different mechanisms are rarely subjected to rigorous quantitative tests of suitability by way of model fitting and evaluation. Recently, complex, stochastic models have been cast aside in favor of simpler approximations, which may or may not capture the data as well. In this article, we use a recently developed method to fit the four extant models of context effects to data from two experiments: one involving consumer goods stimuli, and another involving perceptual stimuli. Our third study investigates the relative merits of the mechanisms currently assumed by the extant models of context effects by testing every possible configuration of mechanism within one overarching model. Across all tasks, our results emphasize the importance of several mechanisms such as lateral inhibition, loss aversion, and pairwise attribute differences, as these mechanisms contribute positively to model performance. Together, our results highlight the notion that mathematical tractability, while certainly a convenient feature of any model, should be neither the primary impetus for model development nor the basis for promoting or demoting specific model mechanisms. Instead, model fit, balanced with model complexity, should be the greatest burden to bear for any theoretical account of empirical phenomena.

Journal ArticleDOI
TL;DR: A “two route” neurocognitive model of tool use is presented, called the “Two Action Systems Plus (2AS+)” framework, that posits a complementary role for online and stored information and specifies the neurocognitive substrates of task-relevant action selection.
Abstract: The reasoning-based approach championed by Francois Osiurak and Arnaud Badets (Osiurak & Badets, 2016) denies the existence of sensory-motor memories of tool use except in limited circumstances, and suggests instead that most tool use is subserved solely by online technical reasoning about tool properties. In this commentary, I highlight the strengths and limitations of the reasoning-based approach and review a number of lines of evidence that manipulation knowledge is in fact used in tool action tasks. In addition, I present a "two route" neurocognitive model of tool use called the "Two Action Systems Plus (2AS+)" framework that posits a complementary role for online and stored information and specifies the neurocognitive substrates of task-relevant action selection. This framework, unlike the reasoning based approach, has the potential to integrate the existing psychological and functional neuroanatomic data in the tool use domain. (PsycINFO Database Record

Journal ArticleDOI
TL;DR: This paper argues that there are 2 qualitatively different ways in which a Bayesian model could be constructed, and demonstrates how the descriptive Bayesian approach can be used to answer different sorts of questions than the optimal approach, especially when combined with principled tools for model evaluation and model selection.
Abstract: Recent debates in the psychological literature have raised questions about the assumptions that underpin Bayesian models of cognition and what inferences they license about human cognition. In this paper we revisit this topic, arguing that there are 2 qualitatively different ways in which a Bayesian model could be constructed. The most common approach uses a Bayesian model as a normative standard upon which to license a claim about optimality. In the alternative approach, a descriptive Bayesian model need not correspond to any claim that the underlying cognition is optimal or rational, and is used solely as a tool for instantiating a substantive psychological theory. We present 3 case studies in which these 2 perspectives lead to different computational models and license different conclusions about human cognition. We demonstrate how the descriptive Bayesian approach can be used to answer different sorts of questions than the optimal approach, especially when combined with principled tools for model evaluation and model selection. More generally we argue for the importance of making a clear distinction between the 2 perspectives. Considerable confusion results when descriptive models and optimal models are conflated, and if Bayesians are to avoid contributing to this confusion it is important to avoid making normative claims when none are intended. (PsycINFO Database Record

Journal ArticleDOI
TL;DR: A computational model of the Stroop task is presented, which includes the resolution of task conflict and its modulation by proactive control, and accounts for the variability in Stroop-RF reported in the experimental literature and solves a challenge to previous Stroop models—their ability to account for reaction time distributional properties.
Abstract: The Stroop task is a central experimental paradigm used to probe cognitive control by measuring the ability of participants to selectively attend to task-relevant information and inhibit automatic task-irrelevant responses. Research has revealed variability in both experimental manipulations and individual differences. Here, we focus on a particular source of Stroop variability, the reverse-facilitation (RF; faster responses to nonword neutral stimuli than to congruent stimuli), which has recently been suggested as a signature of task conflict. We first review the literature that shows RF variability in the Stroop task, both with regard to experimental manipulations and to individual differences. We suggest that task conflict variability can be understood as resulting from the degree of proactive control that subjects recruit in advance of the Stroop stimulus. When the proactive control is high, task conflict does not arise (or is resolved very quickly), resulting in regular Stroop facilitation. When proactive control is low, task conflict emerges, leading to a slow-down in congruent and incongruent (but not in neutral) trials and thus to Stroop RF. To support this suggestion, we present a computational model of the Stroop task, which includes the resolution of task conflict and its modulation by proactive control. Results show that our model (a) accounts for the variability in Stroop-RF reported in the experimental literature, and (b) solves a challenge to previous Stroop models-their ability to account for reaction time distributional properties. Finally, we discuss theoretical implications to Stroop measures and control deficits observed in some psychopathologies. (PsycINFO Database Record

Journal ArticleDOI
TL;DR: This work considers pairwise choices among simple lotteries and the hypotheses of overweighting or underweighting of small probabilities, as well as the description–experience gap, and discusses ways to avoid reasoning fallacies in bridging the conceptual gap between hypothetical constructs and observable choice data.
Abstract: Behavioral decision research compares theoretical constructs like preferences to behavior such as observed choices. Three fairly common links from constructs to behavior are (1) to tally, across participants and decision problems, the number of choices consistent with one predicted pattern of pairwise preferences; (2) to compare what most people choose in each decision problem against a predicted preference pattern; or (3) to enumerate the decision problems in which two experimental conditions generate a 1-sided significant difference in choice frequency 'consistent' with the theory. Although simple, these theoretical links are heuristics. They are subject to well-known reasoning fallacies, most notably the fallacy of sweeping generalization and the fallacy of composition. No amount of replication can alleviate these fallacies. On the contrary, reiterating logically inconsistent theoretical reasoning over and again across studies obfuscates science. As a case in point, we consider pairwise choices among simple lotteries and the hypotheses of overweighting or underweighting of small probabilities, as well as the description-experience gap. We discuss ways to avoid reasoning fallacies in bridging the conceptual gap between hypothetical constructs, such as "overweighting," and observable pairwise choice data. Although replication is invaluable, successful replication of hard-to-interpret results is not. Behavioral decision research stands to gain much theoretical and empirical clarity by spelling out precise and formally explicit theories of how hypothetical constructs translate into observable behavior.

Journal ArticleDOI
TL;DR: A dynamic model of memory is presented that integrates the processes of perception, retrieval from knowledge, retrieval of events, and decision making as these evolve from 1 moment to the next, revealing how the same set of core dynamic principles can help unify otherwise disparate phenomena in the study of memory.
Abstract: Thesis (Ph.D.) - Indiana University, Psychological and Brain Sciences/Cognitive Science, 2015

Journal ArticleDOI
TL;DR: It is shown that computation of sex and race can emerge incidentally from a system designed to compute identity, and that a remarkably small number of identities need be learnt before such incidental dimensions emerge.
Abstract: Viewers are highly accurate at recognizing sex and race from faces, though it remains unclear how this is achieved. Recognition of familiar faces is also highly accurate across a very large range of viewing conditions, despite the difficulty of the problem. Here we show that computation of sex and race can emerge incidentally from a system designed to compute identity. We emphasize the role of multiple encounters with a small number of people, which we take to underlie human face learning. We use highly variable everyday 'ambient' images of a few people to train a Linear Discriminant Analysis (LDA) model on identity. The resulting model has human-like properties, including a facility to cohere previously unseen ambient images of familiar (trained) people, an ability which breaks down for the faces of unknown (untrained) people. The first dimension created by the identity-trained LDA classifies both familiar and unfamiliar faces by sex, and the second dimension classifies faces by race, even though neither of these categories was explicitly coded at learning. By varying the numbers and types of face identities on which a further series of LDA models were trained, we show that this incidental learning of sex and race reflects covariation between these social categories and face identity, and that a remarkably small number of identities need be learnt before such incidental dimensions emerge. The task of learning to recognize familiar faces is sufficient to create certain salient social categories.
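The logic of the demonstration can be sketched with off-the-shelf linear discriminant analysis: train on identity labels only, then ask whether the learned dimensions also sort faces by a category that was never coded. The synthetic descriptors below merely stand in for ambient image features, and they build in the identity-sex covariation the authors point to; this is an illustration of the idea, not a reimplementation of the published model.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n_ids, imgs_per_id, dim = 20, 30, 50
sex_of_id = rng.integers(0, 2, n_ids)        # latent category, never shown to the model
id_means = rng.normal(0, 1, (n_ids, dim))
id_means[:, 0] += 3 * sex_of_id              # identity covaries with sex in the features

X = np.vstack([rng.normal(id_means[i], 1.0, (imgs_per_id, dim)) for i in range(n_ids)])
identity = np.repeat(np.arange(n_ids), imgs_per_id)
sex = np.repeat(sex_of_id, imgs_per_id)

lda = LinearDiscriminantAnalysis(n_components=2).fit(X, identity)  # trained on identity only
proj = lda.transform(X)
# If identity covaries with sex, a discriminant learned for identity also separates the sexes.
print(np.corrcoef(proj[:, 0], sex)[0, 1])
```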

Journal ArticleDOI
TL;DR: Cycles emerge in which the prevalence of each type of processing in the population oscillates between 2 extremes, and it is speculated that this observation may have relevance for understanding similar cycles across human history, and may lend insight into some of the circumstances and challenges currently faced by our species.
Abstract: Psychologists, neuroscientists, and economists often conceptualize decisions as arising from processes that lie along a continuum from automatic (i.e., “hardwired” or over-learned, but relatively inflexible) to controlled (less efficient and effortful, but more flexible). Control is central to human cognition, and plays a key role in our ability to modify the world to suit our needs. Given its advantages, reliance on controlled processing may seem predestined to increase within the population over time. Here, we examine whether this is so by introducing an evolutionary game theoretic model of agents that vary in their use of automatic versus controlled processes, and in which cognitive processing modifies the environment in which the agents interact. We find that, under a wide range of parameters and model assumptions, cycles emerge in which the prevalence of each type of processing in the population oscillates between two extremes. Rather than inexorably increasing, the emergence of control often creates conditions that lead to its own demise by allowing automaticity to also flourish, thereby undermining the progress made by the initial emergence of controlled processing. We speculate that this observation may have relevance for understanding similar cycles across human history, and may lend insight into some of the circumstances and challenges currently faced by our species.
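A minimal replicator-dynamics sketch conveys the feedback loop: controlled processing pays off when the environment is unstructured, automaticity exploits the structure that control builds, and the environment is gradually shaped by the prevalence of control. The payoff functions and constants below are illustrative assumptions, not the authors' model.

```python
# Coupled dynamics of the share of "controlled" agents and environmental structure.
x, env = 0.2, 0.2                 # share of controlled agents; degree of structure
dt, cost, benefit = 0.05, 0.3, 1.0
history = []
for _ in range(4000):
    f_controlled = benefit * (1 - env) - cost   # control is costly but thrives without structure
    f_automatic = benefit * env                 # automaticity free-rides on existing structure
    x += dt * x * (1 - x) * (f_controlled - f_automatic)   # replicator equation
    env += dt * (x - env)                       # control gradually structures the environment
    history.append(x)

# The share of controlled agents rises, overshoots, and falls back as automaticity
# flourishes in the environment that control created.
print(min(history), max(history))
```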

Journal ArticleDOI
TL;DR: It is described how theoretical mechanisms for grouping and segmentation in cortical neural circuits can account for a wide variety of these long-range grouping effects and how the model does a good job explaining the key empirical findings.
Abstract: Investigations of visual crowding, where a target is difficult to identify because of flanking elements, has largely used a theoretical perspective based on local interactions where flanking elements pool with or substitute for properties of the target. This successful theoretical approach has motivated a wide variety of empirical investigations to identify mechanisms that cause crowding, and it has suggested practical applications to mitigate crowding effects. However, this theoretical approach has been unable to account for a parallel set of findings that crowding is influenced by long-range perceptual grouping effects. When the target and flankers are perceived as part of separate visual groups, crowding tends to be quite weak. Here, we describe how theoretical mechanisms for grouping and segmentation in cortical neural circuits can account for a wide variety of these long-range grouping effects. Building on previous work, we explain how crowding occurs in the model and explain how grouping in the model involves connected boundary signals that represent a key aspect of visual information. We then introduce new circuits that allow nonspecific top-down selection signals to flow along connected boundaries or within a surface contained by boundaries and thereby induce a segmentation that can separate the visual information corresponding to the flankers from the visual information corresponding to the target. When such segmentation occurs, crowding is shown to be weak. We compare the model's behavior to 5 sets of experimental findings on visual crowding and show that the model does a good job explaining the key empirical findings. (PsycINFO Database Record

Journal ArticleDOI
TL;DR: A neural interpretation of exemplar theory is proposed in which category learning is mediated by synaptic plasticity at cortical-striatal synapses; rather than creating new memory representations, categorization training alters connectivity between striatal neurons and neurons in sensory association cortex.
Abstract: Exemplar theory assumes that people categorize a novel object by comparing its similarity to the memory representations of all previous exemplars from each relevant category. Exemplar theory has been the most prominent cognitive theory of categorization for more than 30 years. Despite its considerable success in providing good quantitative fits to a wide variety of accuracy data, it has never had a detailed neurobiological interpretation. This article proposes a neural interpretation of exemplar theory in which category learning is mediated by synaptic plasticity at cortical-striatal synapses. In this model, categorization training does not create new memory representations; rather, it alters connectivity between striatal neurons and neurons in sensory association cortex. The new model makes identical quantitative predictions as exemplar theory, yet it can account for many empirical phenomena that are either incompatible with or outside the scope of the cognitive version of exemplar theory.
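For reference, the cognitive rule being given a neural interpretation is the exemplar (GCM-style) choice rule: a probe is assigned to a category in proportion to its summed similarity to that category's stored exemplars. The sensitivity parameter and the toy exemplars below are illustrative values.

```python
import numpy as np

def prob_category_a(probe, exemplars_a, exemplars_b, sensitivity=2.0):
    # Summed exponential-decay similarity of the probe to each category's exemplars.
    summed_sim = lambda exemplars: np.exp(
        -sensitivity * np.linalg.norm(np.asarray(exemplars) - probe, axis=1)
    ).sum()
    s_a, s_b = summed_sim(exemplars_a), summed_sim(exemplars_b)
    return s_a / (s_a + s_b)

probe = np.array([0.4, 0.5])
category_a = [[0.2, 0.4], [0.3, 0.6]]
category_b = [[0.8, 0.8], [0.9, 0.6]]
print(prob_category_a(probe, category_a, category_b))
```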

Journal ArticleDOI
TL;DR: 5 perceptual double-pass experiments are described that show greater than chance agreement, which is inconsistent with models that assume internal variability alone, and provide the first behavioral evidence independent of model fits for trial-to-trial variability in drift rate in tasks used in examining perceptual decision-making.
Abstract: It is important to identify sources of variability in processing to understand decision-making in perception and cognition. There is a distinction between internal and external variability in processing, and double-pass experiments have been used to estimate their relative contributions. In these and our experiments, exact perceptual stimuli are repeated later in testing, and agreement on the 2 trials is examined to see if it is greater than chance. In recent research in modeling decision processes, some models implement only (internal) variability in the decision process whereas others explicitly represent multiple sources of variability. We describe 5 perceptual double-pass experiments that show greater than chance agreement, which is inconsistent with models that assume internal variability alone. Estimates of total trial-to-trial variability in the evidence accumulation (drift) rate (the decision-relevant stimulus information) were estimated from fits of the standard diffusion decision-making model to the data. The double-pass procedure provided estimates of how much of this total variability was systematic and dependent on the stimulus. These results provide the first behavioral evidence independent of model fits for trial-to-trial variability in drift rate in tasks used in examining perceptual decision-making. (PsycINFO Database Record
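The double-pass logic reduces to a simple comparison: if only internal noise drives variability, the two responses to an identical stimulus are independent, so for a homogeneous set of trials with accuracy p the expected agreement is p^2 + (1 - p)^2. The correctness vectors below are hypothetical and only illustrate the computation.

```python
import numpy as np

def chance_agreement(accuracy):
    # Agreement expected if the two passes are independent given the same accuracy.
    return accuracy ** 2 + (1 - accuracy) ** 2

# Hypothetical correct (1) / error (0) outcomes for the same stimuli on two passes.
pass1 = np.array([1, 1, 0, 1, 0, 1, 1, 0])
pass2 = np.array([1, 1, 0, 1, 1, 1, 1, 0])

observed = np.mean(pass1 == pass2)
accuracy = np.mean(np.concatenate([pass1, pass2]))
# Agreement above the chance level points to stimulus-driven (external) variability.
print(observed, chance_agreement(accuracy))
```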

Journal ArticleDOI
TL;DR: Results show that the representation of numerosity information used in discrimination tasks depends on the task and no single representation can account for the data from all the paradigms.
Abstract: Models of the representation of numerosity information used in discrimination tasks are integrated with a diffusion decision model. The representation models assume distributions of numerosity either with means and SD that increase linearly with numerosity or with means that increase logarithmically with constant SD. The models produce coefficients that are applied to differences between two numerosities to produce drift rates and these drive the decision process. The linear and log models make differential predictions about how response time (RT) distributions and accuracy change with numerosity and which model is successful depends on the task. When the task is to decide which of two side-by-side arrays of dots has more dots, the log model fits decreasing accuracy and increasing RT as numerosity increases. When the task is to decide, for dots of two colors mixed in a single array, which color has more dots, the linear model fits decreasing accuracy and decreasing RT as numerosity increases. For both tasks, variables such as the areas covered by the dots affect performance, but if the task is changed to one in which the subject has to decide whether the number of dots in a single array is more or less than a standard, the variables have little effect on performance. Model parameters correlate across tasks suggesting commonalities in the abilities to perform them. Overall, results show that the representation used depends on the task and no single representation can account for the data from all the paradigms. (PsycINFO Database Record
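How the two representations feed the decision process can be shown with a worked example: a scaling coefficient applied to either the linear or the logarithmic difference between the two numerosities yields the drift rate. The coefficients below are illustrative, and the sketch leaves out the linearly growing standard deviation assumed for the linear representation.

```python
import numpy as np

def drift_linear(n1, n2, coef=0.05):
    return coef * (n1 - n2)                     # means increase linearly with numerosity

def drift_log(n1, n2, coef=1.0):
    return coef * (np.log(n1) - np.log(n2))     # means increase logarithmically

# Two pairs with the same ratio but different magnitudes.
for pair in [(10, 8), (40, 32)]:
    print(pair, drift_linear(*pair), round(drift_log(*pair), 3))
# The log representation gives equal drift for equal ratios; the linear one does not,
# so the two make different predictions about accuracy and RT as numerosity grows.
```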

Journal ArticleDOI
TL;DR: It is concluded that despite the popularity of dual-process accounts, current results from the argument evaluation task are best explained by a single-process account that incorporates separate decision thresholds for inductive and deductive inferences.
Abstract: Single-process accounts of reasoning propose that the same cognitive mechanisms underlie inductive and deductive inferences. In contrast, dual-process accounts propose that these inferences depend upon 2 qualitatively different mechanisms. To distinguish between these accounts, we derived a set of single-process and dual-process models based on an overarching signal detection framework. We then used signed difference analysis to test each model against data from an argument evaluation task, in which induction and deduction judgments are elicited for sets of valid and invalid arguments. Three data sets were analyzed: data from Singmann and Klauer (2011), a database of argument evaluation studies, and the results of an experiment designed to test model predictions. Of the large set of testable models, we found that almost all could be rejected, including all 2-dimensional models. The only testable model able to account for all 3 data sets was a model with 1 dimension of argument strength and independent decision criteria for induction and deduction judgments. We conclude that despite the popularity of dual-process accounts, current results from the argument evaluation task are best explained by a single-process account that incorporates separate decision thresholds for inductive and deductive inferences. (PsycINFO Database Record
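The favored account is easy to state in signal detection terms: a single argument-strength dimension with separate response criteria for deduction and induction judgments. The d' and criterion values below are assumptions chosen for illustration, not fitted estimates from the reported analyses.

```python
from scipy.stats import norm

def endorsement_rate(mean_strength, criterion, sd=1.0):
    # Probability that perceived argument strength exceeds the response criterion.
    return 1 - norm.cdf(criterion, loc=mean_strength, scale=sd)

d_valid, d_invalid = 1.5, 0.0        # valid arguments are stronger on average
c_induction, c_deduction = 0.5, 1.2  # deduction judgments use a stricter criterion

for label, criterion in [("induction", c_induction), ("deduction", c_deduction)]:
    print(label,
          round(endorsement_rate(d_valid, criterion), 2),    # "yes" to valid arguments
          round(endorsement_rate(d_invalid, criterion), 2))  # "yes" to invalid arguments
```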

Journal ArticleDOI
TL;DR: Viewing addiction as a homeostatic reinforcement learning disorder coherently explains many behavioral and neurobiological aspects of the transition to cocaine addiction, and suggests a new perspective toward understanding addiction.
Abstract: Drug addiction implicates both reward learning and homeostatic regulation mechanisms of the brain. This has stimulated 2 partially successful theoretical perspectives on addiction. Many important aspects of addiction, however, remain to be explained within a single, unified framework that integrates the 2 mechanisms. Building upon a recently developed homeostatic reinforcement learning theory, the authors focus on a key transition stage of addiction that is well modeled in animals, escalation of drug use, and propose a computational theory of cocaine addiction where cocaine reinforces behavior due to its rapid homeostatic corrective effect, whereas its chronic use induces slow and long-lasting changes in homeostatic setpoint. Simulations show that our new theory accounts for key behavioral and neurobiological features of addiction, most notably, escalation of cocaine use, drug-primed craving and relapse, individual differences underlying dose-response curves, and dopamine D2-receptor downregulation in addicts. The theory also generates unique predictions about cocaine self-administration behavior in rats that are confirmed by new experimental results. Viewing addiction as a homeostatic reinforcement learning disorder coherently explains many behavioral and neurobiological aspects of the transition to cocaine addiction, and suggests a new perspective toward understanding addiction. (PsycINFO Database Record
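The escalation mechanism can be conveyed with drive-reduction arithmetic: the reinforcing value of a dose is the reduction in distance between the internal state and the homeostatic setpoint, and chronic use slowly shifts the setpoint, so the same dose becomes more reinforcing over time. All constants below are illustrative assumptions, not the paper's fitted parameters.

```python
def drive(state, setpoint):
    # Squared deviation of the internal state from the homeostatic setpoint.
    return (setpoint - state) ** 2

def dose_value(state, setpoint, dose=3.0):
    # Reinforcing value of a dose = the reduction in drive it produces.
    return drive(state, setpoint) - drive(state + dose, setpoint)

setpoint = 10.0
for week in range(5):
    print(week, setpoint, dose_value(state=0.0, setpoint=setpoint))
    setpoint += 2.0   # chronic use induces a slow, long-lasting shift in the setpoint
# The same dose yields ever larger drive reduction, which supports escalation of use.
```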

Journal ArticleDOI
TL;DR: Noise in the parameters that characterize an individual’s preferences can combine with noise in the response process to distort observed choice proportions, and core preferences can appear to display nonlinear probability weighting when perturbed by such noise.
Abstract: We examine the effects of multiple sources of noise in risky decision making. Noise in the parameters that characterize an individual's preferences can combine with noise in the response process to distort observed choice proportions. Thus, underlying preferences that conform to expected value maximization can appear to show systematic risk aversion or risk seeking. Similarly, core preferences that are consistent with expected utility theory, when perturbed by such noise, can appear to display nonlinear probability weighting. For this reason, modal choices cannot be used simplistically to infer underlying preferences. Quantitative model fits that do not allow for both sorts of noise can lead to wrong conclusions.
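The argument can be made concrete with a small simulation: an agent whose utility exponent is centered on risk neutrality (that is, expected-value maximization on average) but fluctuates from trial to trial, combined with logit response noise, produces an aggregate choice proportion that departs from the indifference a noise-free maximizer would show. The gamble, the noise levels, and the logit slope are illustrative choices, not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(5)

def proportion_choosing_gamble(n=50000, alpha_sd=0.2, beta=0.1):
    alpha = rng.normal(1.0, alpha_sd, n)                     # parameter noise around risk neutrality
    eu_gamble = 0.5 * 90.0 ** alpha + 0.5 * 10.0 ** alpha    # 50/50 gamble over 90 or 10 (EV = 50)
    eu_sure = 50.0 ** alpha                                  # sure amount equal to the gamble's EV
    p_choose = 1.0 / (1.0 + np.exp(-beta * (eu_gamble - eu_sure)))  # logit response noise
    return (rng.random(n) < p_choose).mean()

# Deviates from .5 even though the average preference is expected-value maximization,
# mimicking a systematic risk attitude that is not in the underlying preferences.
print(proportion_choosing_gamble())
```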

Journal ArticleDOI
TL;DR: In a reanalysis of 29 data sets including more than 400,000 individual trials, noncompensatory choices of the recognized option were estimated to be slower than choices due to recognition-congruent knowledge, corroborating the parallel information-integration account of memory-based decisions, according to which decisions become faster when the coherence of the available information increases.
Abstract: When making inferences about pairs of objects, one of which is recognized and the other is not, the recognition heuristic states that participants choose the recognized object in a noncompensatory way without considering any further knowledge. In contrast, information-integration theories such as parallel constraint satisfaction (PCS) assume that recognition is merely one of many cues that is integrated with further knowledge in a compensatory way. To test both process models against each other without manipulating recognition or further knowledge, we include response times into the r-model, a popular multinomial processing tree model for memory-based decisions. Essentially, this response-time-extended r-model allows to test a crucial prediction of PCS, namely, that the integration of recognition-congruent knowledge leads to faster decisions compared to the consideration of recognition only-even though more information is processed. In contrast, decisions due to recognition-heuristic use are predicted to be faster than decisions affected by any further knowledge. Using the classical German-cities example, simulations show that the novel measurement model discriminates between both process models based on choices, decision times, and recognition judgments only. In a reanalysis of 29 data sets including more than 400,000 individual trials, noncompensatory choices of the recognized option were estimated to be slower than choices due to recognition-congruent knowledge. This corroborates the parallel information-integration account of memory-based decisions, according to which decisions become faster when the coherence of the available information increases. (PsycINFO Database Record