
Showing papers in "Psychological Review in 2020"


Journal ArticleDOI
TL;DR: Efforts to increase women's participation in majority-male departments and companies would benefit from identifying and counteracting masculine defaults on multiple levels of organizational culture (i.e., ideas, institutional policies, interactions, individuals).
Abstract: Understanding and remedying women's underrepresentation in majority-male fields and occupations require the recognition of a lesser-known form of cultural bias called masculine defaults. Masculine defaults exist when aspects of a culture value, reward, or regard as standard, normal, neutral, or necessary characteristics or behaviors associated with the male gender role. Although feminist theorists have previously described and analyzed masculine defaults (e.g., Bem, 1984; de Beauvoir, 1953; Gilligan, 1982; Warren, 1977), here we define masculine defaults in more detail, distinguish them from more well-researched forms of bias, and describe how they contribute to women's underrepresentation. We additionally discuss how to counteract masculine defaults and possible challenges to addressing them. Efforts to increase women's participation in majority-male departments and companies would benefit from identifying and counteracting masculine defaults on multiple levels of organizational culture (i.e., ideas, institutional policies, interactions, individuals). (PsycInfo Database Record (c) 2020 APA, all rights reserved).

111 citations


Journal ArticleDOI
TL;DR: The Structured Event Memory model of event cognition is introduced, which accounts for human abilities in event segmentation, memory, and generalization and can infer event boundaries, learn event schemata, and use event knowledge to reconstruct past experience.
Abstract: Humans spontaneously organize a continuous experience into discrete events and use the learned structure of these events to generalize and organize memory. We introduce the Structured Event Memory (SEM) model of event cognition, which accounts for human abilities in event segmentation, memory, and generalization. SEM is derived from a probabilistic generative model of event dynamics defined over structured symbolic scenes. By embedding symbolic scene representations in a vector space and parametrizing the scene dynamics in this continuous space, SEM combines the advantages of structured and neural network approaches to high-level cognition. Using probabilistic reasoning over this generative model, SEM can infer event boundaries, learn event schemata, and use event knowledge to reconstruct past experience. We show that SEM can scale up to high-dimensional input spaces, producing human-like event segmentation for naturalistic video data, and accounts for a wide array of memory phenomena. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

108 citations
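SEM infers event boundaries by probabilistic reasoning over learned event schemata; the toy below is only a crude stand-in for that idea, flagging a boundary wherever a trivial one-step "persistence" predictor of the scene vectors fails badly. The synthetic scenes, the predictor, and the threshold rule are illustrative assumptions, not part of the SEM model itself.

# Toy illustration (not SEM itself): flag event boundaries where a simple
# one-step predictor of scene dynamics produces unusually large errors.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "scenes": three segments, each drifting around its own mean vector.
segment_means = [np.zeros(5), np.full(5, 3.0), np.full(5, -2.0)]
scenes = np.concatenate([m + 0.3 * rng.standard_normal((40, 5)) for m in segment_means])

# Persistence predictor: predict the next scene to look like the current one.
prediction_error = np.linalg.norm(scenes[1:] - scenes[:-1], axis=1)

# Mark a boundary wherever the error exceeds mean + 3 SD of the error series.
threshold = prediction_error.mean() + 3 * prediction_error.std()
boundaries = np.where(prediction_error > threshold)[0] + 1

print("detected boundaries at scene indices:", boundaries)  # expect ~40 and ~80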


Journal ArticleDOI
TL;DR: A rational analysis of curiosity is presented by considering the computational problem underlying curiosity, which allows us to model these distinct accounts of curiosity in a common framework and suggests that previous theories need not be in contention but are special cases of a more general account of curiosity.
Abstract: Curiosity is considered to be the essence of science and an integral component of cognition. What prompts curiosity in a learner? Previous theoretical accounts of curiosity remain divided: novelty-based theories propose that new and highly uncertain stimuli pique curiosity, whereas complexity-based theories propose that stimuli with an intermediate degree of uncertainty stimulate curiosity. In this article, we present a rational analysis of curiosity by considering the computational problem underlying curiosity, which allows us to model these distinct accounts of curiosity in a common framework. Our approach posits that a rational agent should explore stimuli that maximally increase the usefulness of its knowledge and that curiosity is the mechanism by which humans approximate this rational behavior. Critically, our analysis shows that the causal structure of the environment can determine whether curiosity is driven by either highly uncertain or moderately uncertain stimuli. This suggests that previous theories need not be in contention but are special cases of a more general account of curiosity. Experimental results confirm our predictions and demonstrate that our theory explains a wide range of findings about human curiosity, including its subjectivity and malleability. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

64 citations
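The rational analysis asks which stimulus most increases the usefulness of the learner's knowledge. As a minimal sketch of that kind of computation, the snippet below evaluates plain expected information gain about a binary hypothesis for two candidate stimuli; the prior and likelihoods are made-up numbers, and "usefulness" in the paper is a richer quantity than raw information gain.

# Illustrative sketch: expected information gain (EIG) of querying a stimulus,
# one concrete instance of "how much would this observation improve my knowledge?"
import numpy as np

def entropy(p):
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1 - 1e-12)
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

def expected_information_gain(prior_h, p_x_given_h, p_x_given_not_h):
    """EIG about a binary hypothesis H from observing a binary outcome X."""
    p_x = prior_h * p_x_given_h + (1 - prior_h) * p_x_given_not_h
    post_if_x = prior_h * p_x_given_h / p_x
    post_if_not_x = prior_h * (1 - p_x_given_h) / (1 - p_x)
    expected_posterior_entropy = p_x * entropy(post_if_x) + (1 - p_x) * entropy(post_if_not_x)
    return entropy(prior_h) - expected_posterior_entropy

# Two candidate stimuli: one nearly uninformative, one highly diagnostic.
print(expected_information_gain(0.5, 0.55, 0.45))  # low EIG -> little pull on curiosity
print(expected_information_gain(0.5, 0.90, 0.10))  # high EIG -> strong pull on curiosity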


Journal ArticleDOI
TL;DR: A novel analysis of preceding item strength is presented, in which it is shown that memory for an item is higher if during study it was preceded by a stronger item (e.g., a high frequency word).
Abstract: We present a review of frequency effects in memory, accompanied by a theory of memory, according to which the storage of new information in long-term memory (LTM) depletes a limited pool of working memory (WM) resources as an inverse function of item strength. We support the theory by showing that items with stronger representations in LTM (e.g., high frequency items) are easier to store, bind to context, and bind to one another; that WM resources are involved in storage and retrieval from LTM; that WM performance is better for stronger, more familiar stimuli. We present a novel analysis of preceding item strength, in which we show from nine existing studies that memory for an item is higher if during study it was preceded by a stronger item (e.g., a high frequency word). This effect is cumulative (the more prior items are of high frequency, the better), continuous (memory proportional to word frequency of preceding item), interacts with current item strength (larger for weaker items), and interacts with lag (decreases as the lag between the current and prior study item increases). A computational model that implements the theory is presented, which accounts for these effects. We discuss related phenomena that the model/theory can explain. (PsycINFO Database Record (c) 2019 APA, all rights reserved).

51 citations
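The theory's central claim is that storing an item drains a shared working-memory resource in inverse proportion to the item's long-term-memory strength, so a strong (e.g., high-frequency) predecessor leaves more resource for the next item. The toy simulation below is one schematic reading of that claim; the demand function, recovery rate, and strength values are arbitrary choices intended only to reproduce the qualitative preceding-item-strength effect.

# Toy simulation: encoding quality depends on a WM resource pool that is
# depleted in inverse proportion to each studied item's LTM strength.
import numpy as np

rng = np.random.default_rng(1)

def study_list(strengths, recovery=0.3):
    resources, quality = 1.0, []
    for s in strengths:
        demand = 0.5 / s                      # weaker items demand more resource
        used = min(resources, demand)
        quality.append(used / demand)         # fraction of demand actually met
        resources = min(1.0, resources - used + recovery)
    return np.array(quality)

# Random lists of strong (high-frequency ~ 1.0) and weak (low-frequency ~ 0.4) items.
strengths = rng.choice([1.0, 0.4], size=10000)
quality = study_list(strengths)

preceded_by_strong = quality[1:][strengths[:-1] == 1.0].mean()
preceded_by_weak = quality[1:][strengths[:-1] == 0.4].mean()
print(f"mean encoding quality after a strong item: {preceded_by_strong:.3f}")
print(f"mean encoding quality after a weak item:   {preceded_by_weak:.3f}")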


Journal ArticleDOI
TL;DR: An integrated model of word processing and eye-movement control during Chinese reading (CRM) is constructed and provides insights on how Chinese readers address some important challenges, such as word segmentation and saccade-target selection.
Abstract: In the Chinese writing system, there are no interword spaces to mark word boundaries. To understand how Chinese readers conquer this challenge, we constructed an integrated model of word processing and eye-movement control during Chinese reading (CRM). The model contains a word-processing module and an eye-movement control module. The word-processing module perceives new information within the perceptual span around a fixation. The model uses the interactive activation framework (McClelland & Rumelhart, 1981) to simulate word processing, but some new assumptions were made to address the word segmentation problem in Chinese reading. All the words supported by characters in the perceptual span are activated and they compete for a winner. When one word wins the competition, it is identified and it is simultaneously segmented from text. The eye-movement control module makes the decision regarding when and where to move the eyes using the activation information of word units and character units provided by the word-processing module. The model estimates how many characters can be processed during a fixation, and then makes a saccade to somewhere beyond this point. The model successfully simulated important findings on the relation between word processing and eye-movement control, how Chinese readers choose saccade targets, how Chinese readers segment words with ambiguous boundaries, and how Chinese readers process information with parafoveal vision during Chinese sentence reading. The current model thus provides insights on how Chinese readers address some important challenges, such as word segmentation and saccade-target selection. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

45 citations


Journal ArticleDOI
TL;DR: It is concluded that simultaneous consonance is a composite phenomenon that derives in large part from three phenomena: interference, periodicity/harmonicity, and cultural familiarity, and is formalized with a computational model that predicts a musical chord's simultaneous consonance from these three features.
Abstract: Simultaneous consonance is a salient perceptual phenomenon corresponding to the perceived pleasantness of simultaneously sounding musical tones. Various competing theories of consonance have been proposed over the centuries, but recently a consensus has developed that simultaneous consonance is primarily driven by harmonicity perception. Here we question this view, substantiating our argument by critically reviewing historic consonance research from a broad variety of disciplines, reanalyzing consonance perception data from 4 previous behavioral studies representing more than 500 participants, and modeling three Western musical corpora representing more than 100,000 compositions. We conclude that simultaneous consonance is a composite phenomenon that derives in large part from three phenomena: interference, periodicity/harmonicity, and cultural familiarity. We formalize this conclusion with a computational model that predicts a musical chord's simultaneous consonance from these three features, and release this model in an open-source R package, incon, alongside 15 other computational models also evaluated in this paper. We hope that this package will facilitate further psychological and musicological research into simultaneous consonance. (PsycINFO Database Record (c) 2020 APA, all rights reserved).

44 citations
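The conclusion is that a chord's consonance reflects at least three separable contributions. The snippet below is only a schematic of that composite structure, with hypothetical feature values and weights; the fitted models and the actual interference, periodicity/harmonicity, and familiarity features are released in the authors' R package incon.

# Schematic composite predictor: consonance as a weighted combination of
# interference (penalizing), harmonicity (rewarding), and cultural familiarity.
def consonance_score(interference, harmonicity, familiarity,
                     w_int=-1.0, w_harm=1.0, w_fam=0.5):
    # Weights here are illustrative placeholders, not the paper's fitted values.
    return w_int * interference + w_harm * harmonicity + w_fam * familiarity

# Hypothetical feature values for a major triad vs. a cluster chord.
print(consonance_score(interference=0.2, harmonicity=0.9, familiarity=0.8))
print(consonance_score(interference=0.8, harmonicity=0.3, familiarity=0.2))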


Journal ArticleDOI
TL;DR: It is proposed that infants can take others' perspectives because they have an altercentric bias: a combination of the value that human cognition places on others' attention and an absence of a competing self-perspective, which would create a conflict requiring resolution by Executive Functions.
Abstract: From early in life, human infants appear capable of taking others' perspectives, and can do so even when the other's perspective conflicts with the infant's perspective. Infants' success in perspective-taking contexts implies that they are managing conflicting perspectives despite a wealth of data suggesting that doing so relies on sufficiently mature Executive Functions, and is a challenge even for adults. In a new theory, I propose that infants can take others' perspectives because they have an altercentric bias. This bias results from a combination of the value that human cognition places on others' attention, and an absence of a competing self-perspective, which would, in older children, create a conflict requiring resolution by Executive Functions. A self-perspective emerges with the development of cognitive self-awareness, sometime in the second year of life, at which point it leads to competition between perspectives. This theory provides a way of explaining infants' ability to take others' perspectives, but raises the possibility that they could do so without representing or understanding the implications of perspective for others' mental states. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

42 citations


Journal ArticleDOI
TL;DR: Scientists need to coin new technical names for scientifically derived constructs: names precisely defined in terms of the constellation of features or components that characterize the constructs they denote, and the development of the kama muta construct illustrates one way to go about this.
Abstract: Vernacular lexemes appear self-evident, so we unwittingly reify them. But the words and phrases of natural languages comprise a treacherous basis for identifying valid psychological constructs, as I illustrate in emotion research. Like other vernacular lexemes, the emotion labels in natural languages do not have definite, stable, mutually transparent meanings, and any one vernacular word may be used to denote multiple scientifically distinct entities. In addition, the consequential choice of one lexeme to name a scientific construct rather than any of its partial synonyms is often arbitrary. Furthermore, a given vernacular lexeme from any one of the world's 7000 languages rarely maps one-to-one into an exactly corresponding vernacular lexeme in other languages. Words related to anger in different languages illustrate this. Since each language constitutes a distinct taxonomy of things in the world, most or all languages must fail to cut nature at its joints. In short, it is pernicious to use one language's dictionary as the source of psychological constructs. So scientists need to coin new technical names for scientifically derived constructs: names precisely defined in terms of the constellation of features or components that characterize the constructs they denote. The development of the kama muta construct illustrates one way to go about this. Kama muta is the emotion evoked by sudden intensification of communal sharing, universally experienced but not isomorphic with any vernacular lexeme such as heart warming, moving, touching, collective pride, tender, nostalgic, sentimental, Awww-so cute!. (PsycINFO Database Record (c) 2019 APA, all rights reserved).

41 citations


Journal ArticleDOI
TL;DR: A computational model of obsessive-compulsive disorder is developed within the well-established framework of the Bayesian brain, proposing that patients have difficulty relying on past events to predict the consequences of their own actions and the unfolding of possible events.
Abstract: In this article, we develop a computational model of obsessive-compulsive disorder (OCD). We propose that OCD is characterized by a difficulty in relying on past events to predict the consequences of patients' own actions and the unfolding of possible events. Clinically, this corresponds both to patients' difficulty in trusting their own actions (and therefore repeating them), and to their common preoccupation with unlikely chains of events. Critically, we develop this idea on the basis of the well-developed framework of the Bayesian brain, where this impairment is formalized as excessive uncertainty regarding state transitions. We illustrate the validity of this idea using quantitative simulations and use these to form specific empirical predictions. These predictions are evaluated in relation to existing evidence, and are used to delineate directions for future research. We show how seemingly unrelated findings and phenomena in OCD can be explained by the model, including a persistent experience that actions were not adequately performed and a tendency to repeat actions; excessive information gathering (i.e., checking); indecisiveness and pathological doubt; overreliance on habits at the expense of goal-directed behavior; and overresponsiveness to sensory stimuli, thoughts, and feedback. We discuss the relationship and interaction between our model and other prominent models of OCD, including models focusing on harm-avoidance, not-just-right experiences, or impairments in goal-directed behavior. Finally, we outline potential clinical implications and suggest lines for future research. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

38 citations
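A minimal way to see the proposed mechanism is to compare belief updating when state transitions are trusted versus noisy. In the toy below, an agent acts (say, locks a door), receives a positive sensory observation, and computes the posterior probability that the intended state was reached; the transition and observation probabilities are invented for illustration rather than taken from the paper's simulations.

# Toy illustration: excessive transition uncertainty keeps the posterior
# probability of "the action achieved its goal" low, even after confirming evidence.
def posterior_goal_reached(p_transition, p_obs_given_done=0.9, p_obs_given_not=0.2):
    """P(goal state | positive observation) after performing the action once."""
    prior_done = p_transition                  # belief that the action worked
    numerator = prior_done * p_obs_given_done
    denominator = numerator + (1 - prior_done) * p_obs_given_not
    return numerator / denominator

print(posterior_goal_reached(p_transition=0.98))  # trusted transition model -> ~0.99
print(posterior_goal_reached(p_transition=0.60))  # excessive transition uncertainty -> ~0.87

On this toy reading, the flatter transition model leaves the agent less sure the goal state was reached even after confirming evidence, which is the point at which a further check becomes attractive.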


Journal ArticleDOI
TL;DR: This work argues that effect anticipation is the process responsible for dual task costs, and substantiates this suggestion with results from several lines of research.
Abstract: In many, if not all, situations, humans are engaged in more than one activity at the same time, that is, they multitask. In laboratory situations, even the combination of two simple motor tasks generally yields performance decrements in one or both tasks, compared with corresponding single task conditions. In contemporary models of dual tasking, these dual task costs are attributed to a capacity-limited stage of mentally specifying required responses. Ideomotor theory suggests that the generation of responses is a process of specifying goals, that is, desired future perceptual states (= effect anticipation). Based on this, we argue that effect anticipation is the process responsible for dual task costs. We substantiate this suggestion with results from several lines of research, showing that (a) effect anticipation coincides with a capacity-limited process in dual task experiments, (b) no dual task costs arise if no effects are to be anticipated in one of the tasks, (c) dual task costs vary as a function of how well effects from two tasks fit together, and (d) monitoring the occurrence of effects adds further costs. These results are discussed in a common framework and in relation to other observations and fields. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

37 citations


Journal ArticleDOI
TL;DR: A model of free-operant behavior is presented in which goal-directed control is determined by the correlation between the rates of the action and the outcome, whereas habitual response strength is controlled by the total prediction error generated by contiguous reinforcement by the outcome.
Abstract: Contemporary theories of instrumental performance assume that responding can be controlled by 2 behavioral systems, 1 goal-directed that encodes the outcome of an action, and 1 habitual that reinforces the response strength of the same action. Here we present a model of free-operant behavior in which goal-directed control is determined by the correlation between the rates of the action and the outcome whereas the total prediction error generated by contiguous reinforcement by the outcome controls habitual response strength. The outputs of these two systems summate to generate a total response strength. This cooperative model addresses the difference in the behavioral impact of ratio and interval schedules, the transition from goal-directed to habitual control with extended training, the persistence of goal-directed control under choice procedures and following extinction, among other phenomena. In these respects, this dual-system model is unique in its account of free-operant behavior. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: It is shown that this theory can explain why and when people underreact to the data or the prior, and a new experiment demonstrates that these two forms of underreaction can be systematically controlled by manipulating the query distribution.
Abstract: Bayesian theories of cognition assume that people can integrate probabilities rationally. However, several empirical findings contradict this proposition: human probabilistic inferences are prone to systematic deviations from optimality. Puzzlingly, these deviations sometimes go in opposite directions. Whereas some studies suggest that people underreact to prior probabilities (base rate neglect), other studies find that people underreact to the likelihood of the data (conservatism). We argue that these deviations arise because the human brain does not rely solely on a general-purpose mechanism for approximating Bayesian inference that is invariant across queries. Instead, the brain is equipped with a recognition model that maps queries to probability distributions. The parameters of this recognition model are optimized to get the output as close as possible, on average, to the true posterior. Because of our limited computational resources, the recognition model will allocate its resources so as to be more accurate for high probability queries than for low probability queries. By adapting to the query distribution, the recognition model learns to infer. We show that this theory can explain why and when people underreact to the data or the prior, and a new experiment demonstrates that these two forms of underreaction can be systematically controlled by manipulating the query distribution. The theory also explains a range of related phenomena: memory effects, belief bias, and the structure of response variability in probabilistic reasoning. We also discuss how the theory can be integrated with prior sampling-based accounts of approximate inference. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
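One way to make the resource-allocation idea concrete is a capacity-limited approximator trained only on queries drawn from the query distribution: it ends up accurate where queries are common and sloppy where they are rare. The binning scheme, query distribution, and error measure below are illustrative assumptions, not the paper's recognition model.

# Sketch: a capacity-limited "recognition model" that maps a query (log odds of
# the evidence) to a posterior probability, trained on the query distribution.
import numpy as np

rng = np.random.default_rng(2)

def true_posterior(log_odds):
    return 1.0 / (1.0 + np.exp(-log_odds))

# Query distribution: most queries carry moderately positive evidence.
train_queries = rng.normal(loc=2.0, scale=1.0, size=50_000)

# Capacity limit: only K bins, placed at quantiles of the query distribution,
# so representational resolution is spent where queries are frequent.
K = 8
edges = np.quantile(train_queries, np.linspace(0, 1, K + 1))
bin_values = np.array([
    true_posterior(train_queries[(train_queries >= lo) & (train_queries <= hi)]).mean()
    for lo, hi in zip(edges[:-1], edges[1:])
])

def recognition_model(q):
    idx = np.clip(np.searchsorted(edges, q) - 1, 0, K - 1)
    return bin_values[idx]

for q in (2.0, -3.0):   # a frequent query vs. a rare one
    err = abs(recognition_model(q) - true_posterior(q))
    print(f"query log-odds {q:+.1f}: absolute error {err:.3f}")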

Journal ArticleDOI
TL;DR: A theory of collective learning is offered, wherein the cognitive capacity of collective attention indicates and represents common knowledge across group members, yielding mutually known representations, emotions, evaluations, and beliefs.
Abstract: The study of observational learning, or learning from others, is a cornerstone of the behavioral sciences, because it grounds the continuity, diversity, and innovation inherent to humanity's cultural repertoire within the social learning capacities of individual humans. In contrast, collective learning, or learning with others, has been underappreciated in terms of its importance to human cognition, cohesion, and culture. We offer a theory of collective learning, wherein the cognitive capacity of collective attention indicates and represents common knowledge across group members, yielding mutually known representations, emotions, evaluations, and beliefs. By enhancing the comprehension of and cohesion with fellow group members, collective attention facilitates communication, remembering, and problem-solving in human groups. We also discuss the implications of collective learning theory for the development of collective identities, social norms, and strategic cooperation. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: The authors argue that current NLP systems are fairly successful models of human word similarity, but they fall short in many other respects, such as being too strongly linked to text-based patterns in large corpora, and too weakly linked to the desires, goals, and beliefs that people express through words.
Abstract: Machines have achieved a broad and growing set of linguistic competencies, thanks to recent progress in Natural Language Processing (NLP). Psychologists have shown increasing interest in such models, comparing their output to psychological judgments such as similarity, association, priming, and comprehension, raising the question of whether the models could serve as psychological theories. In this article, we compare how humans and machines represent the meaning of words. We argue that contemporary NLP systems are fairly successful models of human word similarity, but they fall short in many other respects. Current models are too strongly linked to the text-based patterns in large corpora, and too weakly linked to the desires, goals, and beliefs that people express through words. Word meanings must also be grounded in perception and action and be capable of flexible combinations in ways that current systems are not. We discuss promising approaches to grounding NLP systems and argue that they will be more successful, with a more human-like, conceptual basis for word meaning. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
TL;DR: The Bayesian sampler model as discussed by the authors trades off the coherence of probabilistic judgments for improved accuracy, and provides a single framework for explaining phenomena associated with diverse biases and heuristics such as conservatism and the conjunction fallacy.
Abstract: Human probability judgments are systematically biased, in apparent tension with Bayesian models of cognition. But perhaps the brain does not represent probabilities explicitly, but approximates probabilistic calculations through a process of sampling, as used in computational probabilistic models in statistics. Naive probability estimates can be obtained by calculating the relative frequency of an event within a sample, but these estimates tend to be extreme when the sample size is small. We propose instead that people use a generic prior to improve the accuracy of their probability estimates based on samples, and we call this model the Bayesian sampler. The Bayesian sampler trades off the coherence of probabilistic judgments for improved accuracy, and provides a single framework for explaining phenomena associated with diverse biases and heuristics such as conservatism and the conjunction fallacy. The approach turns out to provide a rational reinterpretation of "noise" in an important recent model of probability judgment, the probability theory plus noise model (Costello & Watts, 2014, 2016a, 2017; Costello & Watts, 2019; Costello, Watts, & Fisher, 2018), making equivalent average predictions for simple events, conjunctions, and disjunctions. The Bayesian sampler does, however, make distinct predictions for conditional probabilities and distributions of probability estimates. We show in 2 new experiments that this model better captures these mean judgments both qualitatively and quantitatively; which model best fits individual distributions of responses depends on the assumed size of the cognitive sample. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
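On a common reading of the model, a judged probability is a small mental sample's relative frequency regularized by a symmetric generic prior; for a Beta(beta, beta) prior, the resulting estimate is (k + beta) / (N + 2*beta). The sketch below implements that form with placeholder values for the sample size and beta, leaving out the paper's further assumptions about how samples are generated.

# Sketch of the Bayesian sampler's estimate: a small sample's relative
# frequency shrunk toward 1/2 by a symmetric generic (Beta) prior.
import numpy as np

rng = np.random.default_rng(3)

def bayesian_sampler_estimate(true_prob, n_samples, beta=1.0):
    k = rng.binomial(n_samples, true_prob)          # successes in a mental sample
    return (k + beta) / (n_samples + 2 * beta)      # posterior mean under Beta(beta, beta)

# With small samples, rare events are overestimated and common events
# underestimated on average (regression toward 0.5, i.e., conservatism).
for p in (0.05, 0.50, 0.95):
    estimates = [bayesian_sampler_estimate(p, n_samples=10) for _ in range(20_000)]
    print(f"true p = {p:.2f}  mean judged p = {np.mean(estimates):.3f}")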

Journal ArticleDOI
TL;DR: This work proposes an interactive activation framework for value-based decision making that does not assume that objective function maximization is the only consideration affecting choice or that processing is modular or serial, and invites consideration of the possibility that choice is emergent and that its computation is distributed.
Abstract: Prominent theories of value-based decision making have assumed that choices are made via the maximization of some objective function (e.g., expected value) and that the process of decision making is serial and unfolds across modular subprocesses (e.g., perception, valuation, and action selection). However, the influence of a large number of contextual variables that are not related to expected value in any direct way and the ubiquitous reciprocity among variables thought to belong to different subprocesses suggest that these assumptions may not always hold. Here, we propose an interactive activation framework for value-based decision making that does not assume that objective function maximization is the only consideration affecting choice or that processing is modular or serial. Our framework holds that processing takes place via the interactive propagation of activation in a set of simple, interconnected processing elements. We use our framework to simulate a broad range of well-known empirical phenomena, primarily focusing on decision contexts that feature nonoptimal decision making and/or interactive (i.e., not serial or modular) processing. Our approach is constrained at Marr's (1982) algorithmic and implementational levels rather than focusing strictly on considerations of optimality at the computational theory level. It invites consideration of the possibility that choice is emergent and that its computation is distributed. (PsycINFO Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: The dual accumulator model accounts for a variety of behavioral patterns, including limited iterated reasoning, payoff sensitivity, consideration of risk-reward tradeoffs, and salient label effects, and it provides a good quantitative fit to existing behavioral data.
Abstract: What are the mental operations involved in game theoretic decision making? How do players deliberate (intelligently, but perhaps imperfectly) about strategic interdependencies and ultimately decide on a strategy? We address these questions using an evidence accumulation model, with bidirectional connections between preferences for the strategies available to the decision maker and beliefs regarding the opponent's choices. Our dual accumulator model accounts for a variety of behavioral patterns, including limited iterated reasoning, payoff sensitivity, consideration of risk-reward tradeoffs, and salient label effects, and it provides a good quantitative fit to existing behavioral data. In a comparison with other popular behavioral game theoretic models fit at the individual subject level to choices across a set of games, the dual accumulator model makes the most accurate out-of-sample predictions. Additionally, as a cognitive-process model, it can also be used to make predictions about response time patterns, time pressure effects, and attention during deliberation. Stochastic sampling and dynamic accumulation, cognitive mechanisms foundational to decision making, play a critical role in explaining well-known behavioral patterns as well as in generating novel predictions. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: This conceptualization fully accounts for the known effects of signal features and response modalities traditionally used across the countermanding literature, and casts different light on the concept of top-down inhibition, its timing and neural underpinning, as well as the interpretation of stop-signal reaction time (RT).
Abstract: Countermanding behavior has long been seen as a cornerstone of executive control—the human ability to selectively inhibit undesirable responses and change plans. However, scattered evidence implies that stopping behavior is entangled with simpler automatic stimulus-response mechanisms. Here we operationalize this idea by merging the latest conceptualization of saccadic countermanding with a neural network model of visuo-oculomotor behavior that integrates bottom-up and top-down drives. This model accounts for all fundamental qualitative and quantitative features of saccadic countermanding, including neuronal activity. Importantly, it does so by using the same architecture and parameters as basic visually guided behavior and automatic stimulus-driven interference. Using simulations and new data, we compare the temporal dynamics of saccade countermanding with that of saccadic inhibition (SI), a hallmark effect thought to reflect automatic competition within saccade planning areas. We demonstrate how SI accounts for a large proportion of the saccade countermanding process when using visual signals. We conclude that top-down inhibition acts later, piggy-backing on the quicker automatic inhibition. This conceptualization fully accounts for the known effects of signal features and response modalities traditionally used across the countermanding literature. Moreover, it casts different light on the concept of top-down inhibition, its timing and neural underpinning, as well as the interpretation of stop-signal reaction time (RT), the main behavioral measure in the countermanding literature.

Journal ArticleDOI
TL;DR: Under a variety of internal statistical structures, this surprisingly simple decision heuristic predicts dissociations of objective decision and subjective metacognition that have been empirically observed, and it provides a tentative account of some behavioral features of blindsight.
Abstract: Psychophysical studies on confidence construction are often grounded in bidimensional signal detection theory (SDT) and its relatives. However, these studies often stand on oversimplified assumptions of (a) bidimensional variance-equality and (b) bidimensional statistical independence. The present study simulated 2-alternative forced-choice and confidence rating performances, incorporating more empirically plausible variance-covariance structures. One prominent observation is that superior metacognitive accuracy can be achieved when one applies a heuristic in which the response-incongruent dimension of information is ignored. This is because such a heuristic takes advantage of the specific unequal-variance structure, which paradoxically cannot be easily exploited if both dimensions are evaluated together. Furthermore, under a variety of internal statistical structures, this simple heuristic predicts dissociations of objective decision and subjective metacognition, which have been empirically observed. Also, it provides a tentative account of some behavioral features of blindsight. Therefore, this surprisingly simple decision heuristic may inspire novel perspectives on metacognition and consciousness. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
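The study's simulations compare confidence computed from both evidence dimensions against a heuristic that ignores the dimension incongruent with the choice. The sketch below reproduces that simulation logic under one made-up variance structure (larger variance on the stimulated dimension, zero correlation); which read-out yields better metacognitive accuracy depends on the assumed variance-covariance structure, so the printed numbers are illustrative only.

# Sketch: 2AFC with bidimensional evidence; compare metacognitive accuracy of
# confidence based on both dimensions vs. only the response-congruent one.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

# On each trial one of two sources is stimulated; the stimulated dimension has
# mean d and (here) a larger variance than the unstimulated dimension.
d, sd_signal, sd_noise = 1.0, 1.5, 1.0
stim = rng.integers(0, 2, size=n)                      # which source was present
x = rng.normal(0.0, sd_noise, size=(n, 2))
x[np.arange(n), stim] = rng.normal(d, sd_signal, size=n)

choice = np.argmax(x, axis=1)
correct = choice == stim

conf_both = np.abs(x[:, 0] - x[:, 1])                  # balance of evidence
conf_congruent = x[np.arange(n), choice]               # ignore the losing dimension

def metacognitive_auc(conf, correct):
    """Probability that a random correct trial carries higher confidence than
    a random error trial (area under the type-2 ROC)."""
    c, e = np.sort(conf[correct]), np.sort(conf[~correct])
    ranks = np.searchsorted(e, c, side="left") + 0.5 * (
        np.searchsorted(e, c, side="right") - np.searchsorted(e, c, side="left"))
    return ranks.sum() / (len(c) * len(e))

print("type-2 AUC, both dimensions:     ", round(metacognitive_auc(conf_both, correct), 3))
print("type-2 AUC, congruent dimension: ", round(metacognitive_auc(conf_congruent, correct), 3))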

Journal ArticleDOI
TL;DR: The ways in which achievement motivation, need for power, and need for relational security are related to decision under risk are examined, and the broader implications of this motivational framework of risk preference are discussed.
Abstract: When and why do people choose a more or a less risky option? To answer this question, we propose that it is essential to examine the dynamic interrelations among three factors: the decision maker's goal (e.g., promotion vs. prevention goal), the current value state (e.g., the domain of gains vs. losses), and the choice set (i.e., perceived available options). We review previous theories that highlight the significance of each of these three factors. We then propose a motivational framework of risk preference that describes how these three factors work together motivationally to impact risk preference, illustrated by evidence from regulatory focus research. We then draw on this new motivational framework to examine the ways in which achievement motivation, need for power, and need for relational security are related to decision under risk, and discuss the broader implications of this motivational framework of risk preference. (PsycINFO Database Record (c) 2019 APA, all rights reserved).

Journal ArticleDOI
TL;DR: A simple model involving reciprocal associations between the CS and US (V_CS-US and V_US-CS) that simulates these qualitative individual differences in conditioned behavior is developed and enables a broad range of phenomena to be accommodated.
Abstract: Associative treatments of how Pavlovian conditioning affects conditioned behavior are rudimentary: A simple ordinal mapping is held to exist between the strength of an association (V) between a conditioned stimulus (CS) and an unconditioned stimulus (US; i.e., V_CS-US) and conditioned behavior in a given experimental preparation. The inadequacy of this simplification is highlighted by recent studies that have taken multiple measures of conditioned behavior: Different measures of conditioned behavior provide the basis for drawing opposite conclusions about V_CS-US across individual animals. Here, we develop a simple model involving reciprocal associations between the CS and US (V_CS-US and V_US-CS) that simulates these qualitative individual differences in conditioned behavior. The new model, HeiDI (How excitation and inhibition Determine Ideo-motion), enables a broad range of phenomena to be accommodated, which are either beyond the scope of extant models or require them to appeal to additional (learning) processes. It also provides an impetus for new lines of inquiry and generates novel predictions.

Journal ArticleDOI
TL;DR: It is shown that the advantage LBA provides a tractable new avenue for understanding the dynamics of decisions among multiple choices, and provides a detailed quantitative account of a variety of benchmark binary and multiple choice phenomena that traditional independent accumulator models struggle with.
Abstract: Independent racing evidence-accumulator models have proven fruitful in advancing understanding of rapid decisions, mainly in the case of binary choice, where they can be relatively easily estimated and are known to account for a range of benchmark phenomena. Typically, such models assume a one-to-one mapping between accumulators and responses. We explore an alternative independent-race framework where more than one accumulator can be associated with each response, and where a response is triggered when a sufficient number of accumulators associated with that response reach their thresholds. Each accumulator is primarily driven by the difference in evidence supporting one versus another response (i.e., that response's "advantage"), with secondary inputs corresponding to the total evidence for both responses and a constant term. We use Brown and Heathcote's (2008) linear ballistic accumulator (LBA) to instantiate the framework in a mathematically tractable measurement model (i.e., a model whose parameters can be successfully recovered from data). We show this "advantage LBA" model provides a detailed quantitative account of a variety of benchmark binary and multiple choice phenomena that traditional independent accumulator models struggle with; in binary choice the effects of additive versus multiplicative changes to input values, and in multiple choice the effects of manipulations of the strength of lure (i.e., nontarget) stimuli and Hick's law. We conclude that the advantage LBA provides a tractable new avenue for understanding the dynamics of decisions among multiple choices. (PsycINFO Database Record (c) 2020 APA, all rights reserved).
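For binary choice, the framework can be sketched as two linear ballistic accumulators whose mean drift rates combine a constant, the evidence difference (the "advantage"), and the evidence sum, as described in the abstract. The parameter values and the crude handling of negative drift rates below are placeholders; the published model includes further machinery for multiple alternatives and related phenomena such as Hick's law.

# Sketch of an "advantage" linear ballistic accumulator for binary choice:
# each accumulator's drift is v0 + w_d * (own evidence - other) + w_s * (sum).
import numpy as np

rng = np.random.default_rng(5)

def advantage_lba_trial(s1, s2, v0=1.0, w_d=1.0, w_s=0.2,
                        A=0.5, b=1.5, drift_sd=0.4, t0=0.2):
    means = np.array([v0 + w_d * (s1 - s2) + w_s * (s1 + s2),
                      v0 + w_d * (s2 - s1) + w_s * (s1 + s2)])
    drifts = rng.normal(means, drift_sd)
    drifts = np.maximum(drifts, 1e-3)          # crude guard against negative drifts
    starts = rng.uniform(0.0, A, size=2)
    finish_times = (b - starts) / drifts       # ballistic: time to reach threshold b
    winner = int(np.argmin(finish_times))
    return winner, t0 + finish_times[winner]

trials = [advantage_lba_trial(s1=1.0, s2=0.6) for _ in range(20_000)]
choices, rts = np.array([t[0] for t in trials]), np.array([t[1] for t in trials])
print(f"P(choose option 1) = {(choices == 0).mean():.3f}")
print(f"mean RT choosing option 1 = {rts[choices == 0].mean():.3f}s, "
      f"mean RT choosing option 2 = {rts[choices == 1].mean():.3f}s")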

Journal ArticleDOI
TL;DR: The positive emotion amplification (PE-AMP) model of AN describes a dynamic interplay between biologically based enhanced reward responding and cognitive-behavioral factors that amplify positive emotion, resulting in positive feedback cycles that motivate and reinforce weight loss behavior during the AN onset phase.
Abstract: The role of positive emotion in anorexia nervosa (AN) has been underappreciated in both theory and treatment. Yet, people with AN demonstrate high motivation for and sustained effort toward weight loss, achieving success to an extreme beyond the capability of most people. Positive emotion dysregulation may facilitate and reinforce such efforts. The positive emotion amplification (PE-AMP) model of AN describes a dynamic interplay between biologically based enhanced reward responding and cognitive-behavioral factors that amplify positive emotion, resulting in positive feedback cycles that motivate and reinforce weight loss behavior during the AN onset phase. These experiences subvert the pursuit of happiness by providing artificial senses of autonomy, competency, and relatedness to others (self-determination theory; Ryan & Deci, 2000) that provide a stark contrast to an otherwise negative emotional environment, resulting in the emergence and persistence of AN psychopathology as a self-sustaining sense of purpose. Ultimately, negative emotion, PE dysregulation, and artificial self-determination threats continue to drive AN behavior during the AN maintenance phase, pushing patients toward a genuine self-determination breakdown that can lead to hospitalization, health crises, relational strife and diminished quality of life, or even manifest in suicidal behavior. Future research directions and novel methodological approaches inspired by the PE-AMP model are discussed, as are important treatment implications for addressing this highly treatment-resistant disorder. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: A new computational model is formulated that assumes an initial bias or anchor that depends on the type of price task (buying, selling, or certainty equivalents) and a stochastic evaluation accumulation process that depends on gamble attributes, and provides a superior account of the distributional and dynamic properties of price.
Abstract: Theories that describe how people assign prices and make choices are typically based on the idea that both of these responses are derived from a common static, deterministic function used to assign utilities to options. However, preference reversals, where prices assigned to gambles conflict with preference orders elicited through binary choices, indicate that the response processes underlying these different methods of evaluation are more intricate. We address this issue by formulating a new computational model that assumes an initial bias or anchor that depends on the type of price task (buying, selling, or certainty equivalents) and a stochastic evaluation accumulation process that depends on gamble attributes. To test this new model, we investigated choices and prices for a wide range of gambles and price tasks, including pricing under time pressure. In line with model predictions, we found that price distributions possessed stark skew that depended on the type of price and the attributes of gambles being considered. Prices were also sensitive to time pressure, indicating a dynamic evaluation process underlying price generation. The model out-performed prospect theory in predicting prices and additionally predicted the response times associated with these prices, which no prior model has accomplished. Finally, we show that the model successfully predicts out-of-sample choices and that its parameters allow us to fit choice response times as well. This price accumulation model therefore provides a superior account of the distributional and dynamic properties of price, leveraging process-level mechanisms to provide a more complete account of the valuation processes common across multiple methods of eliciting preference. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: It is demonstrated that although estimation accuracy is generally impaired when conditioned on only a single high-level interpretation, the reduction is not uniform across the entire feature range, and a top-down inference strategy that solely relies on the most likely high-level interpretation can be favorable with regard to late noise and more holistic performance metrics.
Abstract: Humans have the tendency to commit to a single interpretation of what has caused some observed evidence rather than considering all possible alternatives. This tendency can explain various forms of biases in cognition and perception. However, committing to a single high-level interpretation seems short-sighted and irrational, and thus it is unclear why humans are motivated to use such a strategy. In a first step toward answering this question, we systematically quantified how this strategy affects estimation accuracy at the feature level in the context of 2 common hierarchical inference tasks, category-based perception and causal cue combination. Using model simulations, we demonstrate that although estimation accuracy is generally impaired when conditioned on only a single high-level interpretation, the reduction is not uniform across the entire feature range. Compared with a full inference strategy that considers all high-level interpretations, accuracy is only worse for feature values relatively close to the decision boundaries but is better everywhere else. That is, for feature values for which an observer has a reasonably high chance of being correct about the high-level interpretation of the feature, a full commitment to that particular interpretation is advantageous. We also show that conditioning on a preceding high-level interpretation provides an effective mechanism for partially protecting the evidence from corruption with late noise in the inference process (e.g., during retention in and recall from working memory). Our results suggest that a top-down inference strategy that solely relies on the most likely high-level interpretation can be favorable with regard to late noise and more holistic performance metrics. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
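In the category-based perception setting, the comparison is between a full-inference estimate that marginalizes over both category interpretations and an estimate that commits to the most probable category. The Gaussian category and noise parameters below are arbitrary; the sketch only shows how the two estimators are computed and how their accuracy can be compared for feature values near versus far from the category boundary.

# Sketch: estimating a feature either by marginalizing over category
# interpretations (full inference) or by conditioning on the MAP category.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

mu = np.array([-2.0, 2.0])      # category means (boundary at 0)
sd_cat, sd_noise = 1.5, 1.0     # category spread and sensory noise
n = 100_000

cat = rng.integers(0, 2, size=n)
theta = rng.normal(mu[cat], sd_cat)             # true feature value
x = rng.normal(theta, sd_noise)                 # noisy measurement

# Within-category posterior mean and category posterior (standard Gaussian algebra).
w = (1 / sd_noise**2) / (1 / sd_noise**2 + 1 / sd_cat**2)
post_mean_c = w * x[:, None] + (1 - w) * mu[None, :]
p_cat = norm.pdf(x[:, None], mu[None, :], np.sqrt(sd_cat**2 + sd_noise**2))
p_cat /= p_cat.sum(axis=1, keepdims=True)

full_estimate = (p_cat * post_mean_c).sum(axis=1)               # marginalize over categories
map_estimate = post_mean_c[np.arange(n), p_cat.argmax(axis=1)]  # commit to MAP category

near = np.abs(theta) < 1.0                                      # near the category boundary
for name, est in [("full inference", full_estimate), ("MAP-conditioned", map_estimate)]:
    mse_near = np.mean((est[near] - theta[near]) ** 2)
    mse_far = np.mean((est[~near] - theta[~near]) ** 2)
    print(f"{name:16s}  MSE near boundary: {mse_near:.3f}   far: {mse_far:.3f}")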

Journal ArticleDOI
TL;DR: This model is the first unified account of how similarity affects associative encoding and recognition, including why studied pairs consisting of similar items are easier to recognize, why it is easy to reject novel pairs that recombine items that were studied alongside similar items, and why there is an early bias to falsely recognize novel pairs containing similar items that is later suppressed.
Abstract: We present a model of the encoding of episodic associations between items, extending the dynamic approach to retrieval and decision making of Cox and Shiffrin (2017) to the dynamics of encoding. This model is the first unified account of how similarity affects associative encoding and recognition, including why studied pairs consisting of similar items are easier to recognize, why it is easy to reject novel pairs that recombine items that were studied alongside similar items, and why there is an early bias to falsely recognize novel pairs consisting of similar items that is later suppressed (Dosher, 1984; Dosher & Rosedale, 1991). Items are encoded by sampling features into limited-capacity parallel channels in working memory. Associations are encoded by conjoining features across these channels. Because similar items have common features, their channels are correlated which increases the capacity available to encode associative information. The model additionally accounts for data from a new experiment illustrating the importance of similarity for associative encoding across a variety of stimulus types (objects, words, and abstract forms) and types of similarity (perceptual or conceptual), illustrating the generality of the model. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: A neurobiologically informed computational model of phasic dopamine signaling is described that accounts for a wide range of findings, including many considered inconsistent with the simple reward prediction error (RPE) formalism, providing a well-validated framework for understanding phasic dopamine signaling.
Abstract: We describe a neurobiologically informed computational model of phasic dopamine signaling to account for a wide range of findings, including many considered inconsistent with the simple reward prediction error (RPE) formalism. The central feature of this PVLV framework is a distinction between a primary value (PV) system for anticipating primary rewards (Unconditioned Stimuli [USs]), and a learned value (LV) system for learning about stimuli associated with such rewards (CSs). The LV system represents the amygdala, which drives phasic bursting in midbrain dopamine areas, while the PV system represents the ventral striatum, which drives shunting inhibition of dopamine for expected USs (via direct inhibitory projections) and phasic pausing for expected USs (via the lateral habenula). Our model accounts for data supporting the separability of these systems, including individual differences in CS-based (sign-tracking) versus US-based learning (goal-tracking). Both systems use competing opponent-processing pathways representing evidence for and against specific USs, which can explain data dissociating the processes involved in acquisition versus extinction conditioning. Further, opponent processing proved critical in accounting for the full range of conditioned inhibition phenomena, and the closely related paradigm of second-order conditioning. Finally, we show how additional separable pathways representing aversive USs, largely mirroring those for appetitive USs, also have important differences from the positive valence case, allowing the model to account for several important phenomena in aversive conditioning. Overall, accounting for all of these phenomena strongly constrains the model, thus providing a well-validated framework for understanding phasic dopamine signaling. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
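A drastically reduced sketch of the PV/LV division: a learned-value (LV) association to the CS drives the dopamine response at CS onset, while a primary-value (PV) expectation subtracts from the dopamine response at US onset. The delta-rule updates and learning rate below are generic stand-ins, not the PVLV model's amygdala and ventral-striatum dynamics.

# Minimal sketch of the PV/LV logic in simple Pavlovian acquisition:
# CS-onset dopamine ~ learned value of the CS (LV system),
# US-onset dopamine ~ US value minus the learned expectation (PV system).
lv_cs, pv_us = 0.0, 0.0      # learned CS value and learned US expectation
alpha, us_value = 0.2, 1.0

for trial in range(1, 31):
    da_cs = lv_cs                      # phasic response at CS onset
    da_us = us_value - pv_us           # large early in training, near zero when expected
    lv_cs += alpha * (us_value - lv_cs)
    pv_us += alpha * (us_value - pv_us)
    if trial in (1, 5, 30):
        print(f"trial {trial:2d}: DA at CS = {da_cs:.2f}, DA at US = {da_us:.2f}")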

Journal ArticleDOI
TL;DR: Konovalova and Le Mens acknowledge financial support from Southern Denmark University, Grants PSI2013-41909-P and #AEI/FEDER UE-PSI2016-75353 and a Ramon y Cajal Fellowship (RYC-2014-15035) from the Spanish MINECO, Grant IN[15]_EFG_ECO_2281 from the BBVA Foundation, and ERC Consolidator Grant #772268 from the European Commission.
Abstract: Le Mens benefited from financial support from Southern Denmark University and Grants PSI2013-41909-P, #AEI/FEDER UE-PSI2016-75353, Ramon y Cajal Fellowship (RYC-2014-15035) from the Spanish MINECO, Grant IN[15]_EFG_ECO_2281 from the BBVA Foundation and ERC Consolidator #772268 from the European Commission. E. Konovalova was funded by Spanish MINECO Grant PSI2013-41909-P to G. Le Mens.

Journal ArticleDOI
TL;DR: A more physiologically grounded model based on the tuning of a large set of neurons recorded in macaque V1 is examined, and it is shown that key predictions of the idealized model are preserved and that estimates of variability obtained by the normal-plus-uniform mixture method are bounded from above.
Abstract: Observers reproducing elementary visual features from memory after a short delay produce errors consistent with the encoding-decoding properties of neural populations. While inspired by electrophysiological observations of sensory neurons in cortex, the population coding account of these errors is based on a mathematical idealization of neural response functions that abstracts away most of the heterogeneity and complexity of real neuronal populations. Here we examine a more physiologically grounded model based on the tuning of a large set of neurons recorded in macaque V1 and show that key predictions of the idealized model are preserved. Both models predict long-tailed distributions of error when memory resources are taxed, as observed empirically in behavioral experiments and commonly approximated with a mixture of normal and uniform error components. Specifically, for an idealized homogeneous neural population, the width of the fitted normal distribution cannot exceed the average tuning width of the component neurons, and this also holds to a good approximation for more biologically realistic populations. Examining eight published studies of orientation recall, we find a consistent pattern of results suggestive of a median tuning width of approximately 20°, which compares well with neurophysiological observations. The finding that estimates of variability obtained by the normal-plus-uniform mixture method are bounded from above leads us to reevaluate previous studies that interpreted a saturation in width of the normal component as evidence for fundamental limits on the precision of perception, working memory, and long-term memory. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
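The population-coding account can be illustrated by encoding an orientation in the noisy spike counts of a bank of tuned neurons and decoding it back, with long-tailed error distributions emerging when spike counts (memory resources) are low. The tuning curves, gains, and decoder below follow a textbook idealization with invented parameters, not the heterogeneous macaque V1 population analyzed in the paper.

# Sketch: encode an orientation in Poisson spike counts of von Mises-tuned
# neurons, decode by maximum likelihood, and inspect the error distribution.
import numpy as np

rng = np.random.default_rng(7)

n_neurons, kappa = 24, 4.0
preferred = np.linspace(0, np.pi, n_neurons, endpoint=False)     # orientation has period pi

def rates(theta, gain):
    return gain * np.exp(kappa * (np.cos(2 * (theta - preferred)) - 1))

grid = np.linspace(0, np.pi, 360, endpoint=False)
grid_rates = np.stack([rates(g, 1.0) for g in grid])             # shape (360, n_neurons)

def decode(spikes, gain):
    # Poisson log likelihood over the orientation grid (dropping constants).
    loglik = spikes @ np.log(gain * grid_rates.T + 1e-12) - gain * grid_rates.sum(axis=1)
    return grid[np.argmax(loglik)]

def circular_error(est, true):
    return (est - true + np.pi / 2) % np.pi - np.pi / 2

for gain in (10.0, 1.0):                                         # high vs. low resource
    true = rng.uniform(0, np.pi, size=5000)
    errs = [circular_error(decode(rng.poisson(rates(t, gain)), gain), t) for t in true]
    errs_deg = np.degrees(np.abs(errs))
    print(f"gain {gain:4.1f}: median |error| = {np.median(errs_deg):.1f} deg, "
          f"95th percentile = {np.percentile(errs_deg, 95):.1f} deg")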

Journal ArticleDOI
TL;DR: A comparison of the precise quantitative predictions of these models through Bayes factors is performed, using probability density approximation to generate a pseudolikelihood estimate of the unknown probability density function, and thermodynamic integration via differential evolution to approximate the analytically intractable Bayes factors.
Abstract: Conflict tasks are one of the most widely studied paradigms within cognitive psychology, where participants are required to respond based on relevant sources of information while ignoring conflicting irrelevant sources of information. The flanker task, in particular, has been the focus of considerable modeling efforts, with only 3 models being able to provide a complete account of empirical choice response time distributions: the dual-stage 2-phase model (DSTP), the shrinking spotlight model (SSP), and the diffusion model for conflict tasks (DMC). Although these models are grounded in different theoretical frameworks, can provide diverging measures of cognitive control, and are quantitatively distinguishable, no previous study has compared all 3 of these models in their ability to account for empirical data. Here, we perform a comparison of the precise quantitative predictions of these models through Bayes factors, using probability density approximation to generate a pseudolikelihood estimate of the unknown probability density function, and thermodynamic integration via differential evolution to approximate the analytically intractable Bayes factors. We find that for every participant across 3 data sets from 3 separate research groups, DMC provides an inferior account of the data to DSTP and SSP, which has important theoretical implications regarding cognitive processes engaged in the flanker task, and practical implications for applying the models to flanker data. More generally, we argue that our combination of probability density approximation with marginal likelihood approximation, which we term pseudolikelihood Bayes factors, provides a crucial step forward for the future of model comparison, where Bayes factors can be calculated between any models that can be simulated. We also discuss the limitations of simulation-based methods, such as the potential for approximation error, and suggest that researchers should use analytically or numerically computed likelihood functions when they are available and computationally tractable.
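Probability density approximation (PDA) replaces a model's intractable likelihood with a kernel density estimate built from simulated data; the resulting pseudolikelihood can then feed Bayes-factor machinery such as thermodynamic integration. The toy below shows only the PDA step, using a shifted Wald response-time simulator chosen for convenience rather than any of the three flanker models compared in the paper.

# Sketch of probability density approximation: estimate a pseudo-log-likelihood
# for observed response times by kernel-smoothing simulations from the model.
import numpy as np
from scipy.stats import gaussian_kde, invgauss

rng = np.random.default_rng(8)

def simulate_rts(mu, lam, shift, size):
    # Shifted Wald RTs (a stand-in simulator; no closed-form likelihood is needed).
    return shift + invgauss.rvs(mu / lam, scale=lam, size=size, random_state=rng)

def pseudo_loglik(observed, mu, lam, shift, n_sim=50_000):
    kde = gaussian_kde(simulate_rts(mu, lam, shift, n_sim))
    return np.sum(np.log(kde(observed) + 1e-300))

observed = simulate_rts(mu=0.4, lam=2.0, shift=0.2, size=300)   # fake "data"
for mu in (0.3, 0.4, 0.6):
    print(f"mu = {mu:.1f}: pseudo log-likelihood = {pseudo_loglik(observed, mu, 2.0, 0.2):.1f}")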