
Showing papers in "Psychological Review in 2021"


Journal ArticleDOI
TL;DR: Five models of social evaluation engage in adversarial collaboration to identify common conceptual ground, ongoing controversies, and continuing agendas; one emerging insight is that the nature and number of targets each model typically examines alters perceivers' evaluative goal and how bottom-up information or top-down inferences interact.
Abstract: Social evaluation occurs at personal, interpersonal, group, and intergroup levels, with competing theories and evidence. Five models engage in adversarial collaboration, to identify common conceptual ground, ongoing controversies, and continuing agendas: Dual Perspective Model (Abele & Wojciszke, 2007); Behavioral Regulation Model (Leach, Ellemers, & Barreto, 2007); Dimensional Compensation Model (Yzerbyt et al., 2005); Stereotype Content Model (Fiske, Cuddy, Glick, & Xu, 2002); and Agency-Beliefs-Communion Model (Koch, Imhoff, Dotsch, Unkelbach, & Alves, 2016). Each has distinctive focus, theoretical roots, premises, and evidence. Controversies dispute dimensions: number, organization, definition, and labeling; their relative priority; and their relationship. Our first integration suggests 2 fundamental dimensions: Vertical (agency, competence, "getting ahead") and Horizontal (communion, warmth, "getting along"), with respective facets of ability and assertiveness (Vertical) and friendliness and morality (Horizontal). Depending on context, a third dimension is conservative versus progressive Beliefs. Second, different criteria for priority favor different dimensions: processing speed and subjective weight (Horizontal); pragmatic diagnosticity (Vertical); moderators include number and type of target, target-perceiver relationship, context. Finally, the relation between dimensions has similar operational moderators. As an integrative framework, the dimensions' dynamics also depend on perceiver goals (comprehension, efficiency, harmony, compatibility), each balancing top-down and bottom-up processes, for epistemic or hedonic functions. One emerging insight is that the nature and number of targets each of these models typically examines alters perceivers' evaluative goal and how bottom-up information or top-down inferences interact. This framework benefits theoretical parsimony and new research. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

100 citations


Journal ArticleDOI
TL;DR: It is strongly recommended that psychological scientists and neuroscientists reject the language of impulsivity in favor of a specific focus on the several well-defined and empirically supported factors that impulsivity is purported to cover.
Abstract: We demonstrate through theoretical, empirical, and sociocultural evidence that the concept of impulsivity fails the basic requirements of a psychological construct and should be rejected as such. Impulsivity (or impulsiveness) currently holds a central place in psychological theory, research, and clinical practice and is considered a multifaceted concept. However, impulsivity falls short of the theoretical specifications for hypothetical constructs by having meaning that is not compatible with psychometric, neuroscience, and clinical data. Psychometric findings indicate that impulsive traits and behaviors (e.g., response inhibition, delay discounting) are largely uncorrelated and fail to load onto a single, superordinate latent variable. Modern neuroscience has also failed to identify a specific and central neurobehavioral mechanism underlying impulsive behaviors and instead has found separate neurochemical systems and loci that contribute to a variety of impulsivity types. Clinically, these different impulsivity types show diverging and distinct pathways and processes relating to behavioral and psychosocial health. The predictive validity and sensitivity of impulsivity measures to pharmacological, behavioral, and cognitive interventions also vary based on the impulsivity type evaluated and clinical condition examined. Conflation of distinct personality and behavioral mechanisms under a single umbrella of impulsivity ultimately increases the likelihood of misunderstanding at a sociocultural level and facilitates misled hypothesizing and artificial inconsistencies for clinical translation. We strongly recommend that, based on this comprehensive evidence, psychological scientists and neuroscientists reject the language of impulsivity in favor of a specific focus on the several well-defined and empirically supported factors that impulsivity is purported to cover. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

83 citations


Journal ArticleDOI
TL;DR: It is shown that, contrary to what is typically assumed, metacognitive inefficiency depends on the level of confidence. These findings establish an empirically validated model of confidence generation, have significant implications for measures of metacognitive ability, and begin to reveal the underlying nature of metacognitive inefficiency.
Abstract: Humans have the metacognitive ability to judge the accuracy of their own decisions via confidence ratings. A substantial body of research has demonstrated that human metacognition is fallible but it remains unclear how metacognitive inefficiency should be incorporated into a mechanistic model of confidence generation. Here we show that, contrary to what is typically assumed, metacognitive inefficiency depends on the level of confidence. We found that, across 5 different data sets and 4 different measures of metacognition, metacognitive ability decreased with higher confidence ratings. To understand the nature of this effect, we collected a large dataset of 20 subjects completing 2,800 trials each and providing confidence ratings on a continuous scale. The results demonstrated a robustly nonlinear zROC curve with downward curvature, despite a decades-old assumption of linearity. This pattern of results was reproduced by a new mechanistic model of confidence generation, which assumes the existence of lognormally distributed metacognitive noise. The model outperformed competing models either lacking metacognitive noise altogether or featuring Gaussian metacognitive noise. Further, the model could generate a measure of metacognitive ability which was independent of confidence levels. These findings establish an empirically validated model of confidence generation, have significant implications about measures of metacognitive ability, and begin to reveal the underlying nature of metacognitive inefficiency. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
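
As a concrete illustration of the kind of mechanism the abstract describes, the toy simulation below bases the confidence judgment on a copy of the decision evidence corrupted by lognormally distributed multiplicative noise. All parameter names, values, and the binning into ratings are ours, so this is a sketch of the idea rather than the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trials(n=100_000, dprime=1.5, sigma_meta=0.6):
    """Toy simulation: decisions use the raw sensory sample, while the
    confidence judgment is based on a copy corrupted by lognormal
    multiplicative noise (an illustrative stand-in for 'metacognitive
    noise'; parameter names are ours, not the authors')."""
    stim = rng.integers(0, 2, n)                    # 0 = noise, 1 = signal
    evidence = rng.normal(loc=stim * dprime, scale=1.0)
    choice = (evidence > dprime / 2).astype(int)    # unbiased criterion
    # Confidence evidence = decision evidence scaled by lognormal noise.
    conf_evidence = np.abs(evidence - dprime / 2) * rng.lognormal(0.0, sigma_meta, n)
    # Map continuous confidence evidence onto a 1-4 rating scale.
    rating = np.digitize(conf_evidence, bins=[0.3, 0.8, 1.5]) + 1
    correct = (choice == stim)
    return rating, correct

rating, correct = simulate_trials()
for r in range(1, 5):
    sel = rating == r
    print(f"confidence {r}: accuracy = {correct[sel].mean():.2f}  (n={sel.sum()})")
```

Varying sigma_meta in this sketch shows how noise on the confidence channel dilutes the relation between ratings and accuracy.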

37 citations


Journal ArticleDOI
TL;DR: A psychological model of extremism is presented, based on the concept of motivational imbalance, whereby a given need gains dominance and overrides other basic concerns; the model's implications for further research are considered and the trade-offs between extremism and moderation are explored.
Abstract: We present a psychological model of extremism based on the concept of motivational imbalance whereby a given need gains dominance and overrides other basic concerns. In contrast, moderation results from a motivational balance wherein individuals' different needs are equitably attended to. Importantly, under moderation the different needs constrain individuals' behaviors in prohibiting actions that serve some needs yet undermine others. Those constraints are relaxed under motivational imbalance where the dominant need crowds out alternative needs. As a consequence, the constraints that the latter needs exercise upon behavior are relaxed, permitting previously avoided activities to take place. Because enactment of these behaviors sacrifices common concerns, most people avoid them, hence their designation as extreme. The state of need imbalance has motivational, cognitive, behavioral, affective and social consequences. These pertain to a variety of different extremisms that share the same psychological core: extreme diets, extreme sports, extreme infatuations, diverse addictions, as well as violent extremism. Evidence for the present model cuts across different domains of psychological phenomena, levels of behavioral analysis and phylogeny. We consider the model's implications for further research and explore the tradeoffs between extremism and moderation. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

36 citations


Journal ArticleDOI
TL;DR: It is argued that blindsight is severely and qualitatively degraded but nonetheless conscious vision, unacknowledged due to conservative response biases, and a powerful positive case for the qualitatively degraded conscious vision hypothesis is presented.
Abstract: Blindsight is a neuropsychological condition defined by residual visual function following destruction of primary visual cortex. This residual visual function is almost universally held to include capacities for voluntary discrimination in the total absence of awareness. So conceived, blindsight has had an enormous impact on the scientific study of consciousness. It is held to reveal a dramatic disconnect between performance and awareness and used to motivate diverse claims concerning the neural and cognitive basis of consciousness. Here I argue that this orthodox understanding of blindsight is fundamentally mistaken. Drawing on models from signal detection theory in conjunction with a wide range of behavioral and first-person evidence, I contend that blindsight is severely and qualitatively degraded but nonetheless conscious vision, unacknowledged due to conservative response biases. Psychophysical and functional arguments to the contrary are answered. A powerful positive case for the qualitatively degraded conscious vision hypothesis is then presented, detailing a set of distinctive predictions borne out by the data. Such data are further used to address the question of what it is like to have blindsight, as well as to explain the conservative and selectively unstable response criteria exhibited by blindsight subjects. On the view defended, blindsight does not reveal any dissociation between performance and awareness, nor does it speak to the neural or cognitive requirements for consciousness. A foundation stone of consciousness science requires radical reconsideration. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

36 citations


Journal ArticleDOI
TL;DR: This paper introduces the counterfactual simulation model (CSM), which predicts causal judgments in physical settings by comparing what actually happened with what would have happened in relevant counterfactual situations.
Abstract: How do people make causal judgments about physical events? We introduce the counterfactual simulation model (CSM) which predicts causal judgments in physical settings by comparing what actually happened with what would have happened in relevant counterfactual situations. The CSM postulates different aspects of causation that capture the extent to which a cause made a difference to whether and how the outcome occurred, and whether the cause was sufficient and robust. We test the CSM in several experiments in which participants make causal judgments about dynamic collision events. A preliminary study establishes a very close quantitative mapping between causal and counterfactual judgments. Experiment 1 demonstrates that counterfactuals are necessary for explaining causal judgments. Participants' judgments differed dramatically between pairs of situations in which what actually happened was identical, but where what would have happened differed. Experiment 2 features multiple candidate causes and shows that participants' judgments are sensitive to different aspects of causation. The CSM provides a better fit to participants' judgments than a heuristic model which uses features based on what actually happened. We discuss how the CSM can be used to model the semantics of different causal verbs, how it captures related concepts such as physical support, and how its predictions extend beyond the physical domain. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
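
The comparison at the heart of the model can be written compactly. The expression below is our schematic rendering of the "whether" aspect of causation under noisy counterfactual simulation, not necessarily the paper's exact notation.

```latex
% Schematic "whether-cause": the probability that the outcome e would have
% been different had the candidate cause c been removed, estimated by
% averaging over N noisy counterfactual simulations of the scene.
\[
\mathrm{whether}(c \rightarrow e)
  \;=\; P\!\left(e' \neq e \mid \mathrm{remove}(c),\ \text{noise}\right)
  \;\approx\; \frac{1}{N}\sum_{i=1}^{N} \mathbf{1}\!\left[e'_i \neq e\right]
\]
```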

30 citations


Journal ArticleDOI
TL;DR: In this article, the authors present empirical evidence that happiness, meaning, and psychological richness are related but distinct and desirable aspects of a good life, with unique causes and correlates, and report evidence that people leading psychologically rich lives tend to be more curious, think more holistically, and lean more politically liberal.
Abstract: Psychological science has typically conceptualized a good life in terms of either hedonic or eudaimonic well-being. We propose that psychological richness is another, neglected aspect of what people consider a good life. Unlike happy or meaningful lives, psychologically rich lives are best characterized by a variety of interesting and perspective-changing experiences. We present empirical evidence that happiness, meaning, and psychological richness are related but distinct and desirable aspects of a good life, with unique causes and correlates. In doing so, we show that a nontrivial number of people around the world report they would choose a psychologically rich life at the expense of a happy or meaningful life, and that approximately a third say that undoing their life's biggest regret would have made their lives psychologically richer. Furthermore, we propose that the predictors of a psychologically rich life are different from those of a happy life or a meaningful life, and report evidence suggesting that people leading psychologically rich lives tend to be more curious, think more holistically, and lean more politically liberal. Together, this work moves us beyond the dichotomy of hedonic versus eudaimonic well-being, and lays the foundation for the study of psychological richness as another dimension of a good life. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

29 citations


Journal ArticleDOI
TL;DR: Five main properties that are characteristic of existing SDT models of recognition memory are examined: i) random-scale representation, ii) latent-variable independence, iii) likelihood-ratio monotonicity, iv) ROC function asymmetry, and v) non-threshold representation.
Abstract: Signal detection theory (SDT) plays a central role in the characterization of human judgments in a wide range of domains, most prominently in recognition memory. But despite its success, many of its fundamental properties are often misunderstood, especially when it comes to its testability. The present work examines five main properties that are characteristic of existing SDT models of recognition memory: (a) random-scale representation, (b) latent-variable independence, (c) likelihood-ratio monotonicity, (d) ROC function asymmetry, and (e) nonthreshold representation. In each case, we establish testable consequences and test them against data collected in the appropriately designed recognition-memory experiment. We also discuss the connection between yes-no, forced-choice, and ranking judgments. This connection introduces additional behavioral constraints and yields an alternative method of reconstructing yes-no ROC functions. Overall, the reported results provide a strong empirical foundation for SDT modeling in recognition memory. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
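
For reference, property (d) is easiest to see in the standard unequal-variance Gaussian formulation of SDT; the textbook form is stated here only to make the asymmetry property concrete, and is not a new result of the paper.

```latex
% Unequal-variance Gaussian SDT for yes-no recognition, with old-item mean
% \mu_o and SD \sigma_o, new-item mean \mu_n and SD \sigma_n, and criterion c:
\[
P(\text{``old''}\mid \text{old}) = \Phi\!\left(\frac{\mu_o - c}{\sigma_o}\right),
\qquad
P(\text{``old''}\mid \text{new}) = \Phi\!\left(\frac{\mu_n - c}{\sigma_n}\right)
\]
% Applying \Phi^{-1} to both rates gives a linear zROC with slope
% \sigma_n/\sigma_o; whenever the two variances differ (typically
% \sigma_o > \sigma_n, giving a slope below 1), the ROC is asymmetric.
```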

22 citations


Journal ArticleDOI
TL;DR: The theoretical and empirical results suggest a positive answer to the question: Serial order in perception, memory, and action may be governed by the same underlying mechanism.
Abstract: This article asks whether serial order phenomena in perception, memory, and action are manifestations of a single underlying serial order process. The question is addressed empirically in two experiments that compare performance in whole report tasks that tap perception, serial recall tasks that tap memory, and copy typing tasks that tap action, using the same materials and participants. The data show similar effects across tasks that differ in magnitude, which is consistent with a single process operating under different constraints. The question is addressed theoretically by developing a Context Retrieval and Updating (CRU) theory of serial order, fitting it to the data from the two experiments, and generating predictions for 7 different summary measures of performance: list accuracy, serial position effects, transposition gradients, contiguity effects, error magnitudes, error types, and error ratios. Versions of the model that allowed sensitivity in perception and memory to decrease with serial position fit the data best and produced reasonably accurate predictions for everything but error ratios. Together, the theoretical and empirical results suggest a positive answer to the question: Serial order in perception, memory, and action may be governed by the same underlying mechanism. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

22 citations


Journal ArticleDOI
TL;DR: This is the first comprehensive study to utilize a changing information paradigm to jointly and quantitatively estimate the temporal dynamics of human decision-making; it is found that information processing is relative, with early information influencing the perception of late information.
Abstract: Over the last decade, there has been a robust debate in decision neuroscience and psychology about what mechanism governs the time course of decision-making. Historically, the most prominent hypothesis is that neural architectures accumulate information over time until some threshold is met, the so-called Evidence Accumulation hypothesis. However, most applications of this theory rely on simplifying assumptions, belying a number of potential complexities. Is changing stimulus information perceived and processed in an independent manner or is there a relative component? Does urgency play a role? What about evidence leakage? Although the latter questions have been the subject of recent investigations, most studies to date have been piecemeal in nature, addressing one aspect of the decision process or another. Here we develop a modeling framework, an extension of the Urgency Gating Model, in conjunction with a changing information experimental paradigm to simultaneously probe these aspects of the decision process. Using state-of-the-art Bayesian methods to perform parameter-based inference, we find that (a) information processing is relative with early information influencing the perception of late information, (b) time varying urgency and evidence accumulation are of roughly equal strength in the decision process, and (c) leakage is present with a time scale of ∼200-250 ms. We also show that these effects can only be identified in a changing information paradigm. To our knowledge, this is the first comprehensive study to utilize a changing information paradigm to jointly and quantitatively estimate the temporal dynamics of human decision-making. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
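
A simplified rendering of an urgency-gating process with leakage, in the spirit of the framework described above (our notation; the authors' extended model adds further components):

```latex
% Leaky (low-pass filtered) evidence x(t) with time constant \tau, gated by
% a linearly growing urgency signal u(t); a response is triggered when the
% gated variable reaches the bound \theta.
\[
\tau\,\frac{dx}{dt} = -x(t) + E(t), \qquad
u(t) = u_0 + \beta t, \qquad
y(t) = u(t)\,x(t), \qquad
|y(t)| \ge \theta \;\Rightarrow\; \text{respond}
\]
% The leakage time scale of roughly 200-250 ms reported above corresponds
% to the filter constant \tau in this sketch.
```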

21 citations


Journal ArticleDOI
TL;DR: In this paper, a new computational theory of the valence of mood, the Integrated Advantage model, was proposed to account for the bidirectional interaction between mood and reinforcement learning.
Abstract: Mood is an integrative and diffuse affective state that is thought to exert a pervasive effect on cognition and behavior. At the same time, mood itself is thought to fluctuate slowly as a product of feedback from interactions with the environment. Here we present a new computational theory of the valence of mood, the Integrated Advantage model, which seeks to account for this bidirectional interaction. Adopting theoretical formalisms from reinforcement learning, we propose to conceptualize the valence of mood as a leaky integral of an agent's appraisals of the Advantage of its actions. This model generalizes and extends previous models of mood wherein affective valence was conceptualized as a moving average of reward prediction errors. We give a full theoretical derivation of the Integrated Advantage model and provide a functional explanation of how an integrated-Advantage variable could be deployed adaptively by a biological agent to accelerate learning in complex and/or stochastic environments. Specifically, drawing on stochastic optimization theory, we propose that an agent can utilize our hypothesized form of mood to approximate a momentum-based update to its behavioral policy, thereby facilitating rapid learning of optimal actions. We then show how this model of mood provides a principled and parsimonious explanation for a number of contextual effects on mood from the affective science literature, including expectation- and surprise-related effects, counterfactual effects from information about foregone alternatives, action-typicality effects, and action/inaction asymmetry. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
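
In reinforcement-learning notation, the two core quantities can be sketched as follows; the Advantage definition is standard, while the leaky-integral form and the decay parameter λ are our paraphrase of the model described above.

```latex
% Advantage of the action taken at time t (standard RL definition):
\[
A_t \;=\; Q(s_t, a_t) - V(s_t)
\]
% Mood valence as a leaky integral of appraised Advantage, with decay
% parameter \lambda (our notation for the integration rate):
\[
m_{t+1} \;=\; \lambda\, m_t + (1-\lambda)\, A_t
\]
```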

Journal ArticleDOI
TL;DR: It is shown that the triangle model is effective as a universal model of reading, able to replicate key behavioral and neuroscientific results, and that it generates new predictions deriving from an explicit description of the effects of orthographic transparency on how reading is realized.
Abstract: Orthographic systems vary dramatically in the extent to which they encode a language's phonological and lexico-semantic structure. Studies of the effects of orthographic transparency suggest that such variation is likely to have major implications for how the reading system operates. However, such studies have been unable to examine in isolation the contributory effect of transparency on reading because of covarying linguistic or sociocultural factors. We first investigated the phonological properties of languages using the range of the world's orthographic systems (alphabetic, alphasyllabic, consonantal, syllabic, and logographic), and found that, once geographical proximity is taken into account, phonological properties do not relate to orthographic system. We then explored the processing implications of orthographic variation by training a connectionist implementation of the triangle model of reading on the range of orthographic systems while controlling for phonological and semantic structure. We show that the triangle model is effective as a universal model of reading, able to replicate key behavioral and neuroscientific results. The model also generates new predictions deriving from an explicit description of the effects of orthographic transparency on how reading is realized and defines the consequences of orthographic systems on reading processes. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: In this article, the authors propose a timed racing diffusion model (TRDM) for decision processes, which is composed of two diffusive accumulation mechanisms, evidence-based and time-based, that compete in an independent race architecture.
Abstract: Classical dynamic theories of decision making assume that responses are triggered by accumulating a threshold amount of information. Recently, there has been a growing appreciation that the passage of time also plays a role in triggering responses. We propose that decision processes are composed of 2 diffusive accumulation mechanisms, 1 evidence-based and 1 time-based, that compete in an independent race architecture. We show that this timed racing diffusion model (TRDM) provides a unified, comprehensive, and quantitatively accurate explanation of key decision phenomena, including the effects of implicit and explicit deadlines and the relative speed of correct and error responses under speed-accuracy trade-offs, without requiring additional mechanisms that have been criticized as being ad hoc in theoretical motivation and difficult to estimate, such as trial-to-trial variability parameters, collapsing thresholds, or urgency signals. In contrast, our addition is grounded in a widely validated account of time-estimation performance, enabling the same mechanism to simultaneously account for interval estimation and decision making with an explicit deadline. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
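
A minimal simulation of the race architecture described above, assuming an evidence accumulator and a timer accumulator that are both diffusion-to-bound processes; parameter names and values are illustrative, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def trdm_trial(drift=1.2, timer_drift=1.0, bound=1.0, timer_bound=1.5,
               noise=1.0, dt=0.001, max_t=5.0):
    """Illustrative race between an evidence accumulator and a timer
    accumulator, each modeled as a diffusion-to-bound process. This is a
    sketch of the architecture the abstract describes, not the fitted model."""
    x_evid, x_time, t = 0.0, 0.0, 0.0
    while t < max_t:
        x_evid += drift * dt + noise * np.sqrt(dt) * rng.normal()
        x_time += timer_drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
        if abs(x_evid) >= bound:            # evidence process finishes first
            return t, "evidence", x_evid > 0
        if x_time >= timer_bound:           # timer expires: respond by guessing
            return t, "timer", rng.random() < 0.5
    return max_t, "timeout", False

trials = [trdm_trial() for _ in range(200)]
print("mean RT:", np.mean([t for t, _, _ in trials]))
print("share decided by timer:", np.mean([src == "timer" for _, src, _ in trials]))
```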

Journal ArticleDOI
TL;DR: In this article, the authors present a rational analysis of the temporal structure of controlled behavior, which provides a formal account of these phenomena, and demonstrate that these accounts provide a mechanistically explicit and parsimonious account for a wide array of findings related to cognitive control.
Abstract: Cognitive fatigue and boredom are two phenomenological states that reflect overt task disengagement. In this article, we present a rational analysis of the temporal structure of controlled behavior, which provides a formal account of these phenomena. We suggest that in controlling behavior, the brain faces competing behavioral and computational imperatives, and must balance them by tracking their opportunity costs over time. We use this analysis to flesh out previous suggestions that feelings associated with subjective effort, like cognitive fatigue and boredom, are the phenomenological counterparts of these opportunity cost measures, instead of reflecting the depletion of resources as has often been assumed. Specifically, we propose that both fatigue and boredom reflect the competing value of particular options that require foregoing immediate reward but can improve future performance: Fatigue reflects the value of offline computation (internal to the organism) to improve future decisions, while boredom signals the value of exploration (external in the world). We demonstrate that these accounts provide a mechanistically explicit and parsimonious account for a wide array of findings related to cognitive control, integrating and reimagining them under a single, formally rigorous framework. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
TL;DR: The idea that judgment by representativeness reflects the workings of memory is explored, and it is found that decreasing the frequency of a given color in one group significantly increases the recalled frequency of that color in the other group.
Abstract: We explore the idea that judgment by representativeness reflects the workings of memory. In our model, the probability of a hypothesis conditional on data increases in the ease with which instances of that hypothesis are retrieved when cued with the data. Retrieval is driven by a measure of similarity which exhibits contextual interference: a data/cue is less likely to retrieve instances of a hypothesis that occurs frequently in other data. As a result, probability assessments are context dependent. In a new laboratory experiment, participants are shown two groups of images with different distributions of colors and other features. In line with the model's predictions, we find that (a) decreasing the frequency of a given color in one group significantly increases the recalled frequency of that color in the other group; and (b) cueing different features for the same set of images entails different probabilistic assessments, even if the features are normatively irrelevant. A calibration of the model yields a good quantitative fit with the data, highlighting the central role of contextual interference. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
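
Schematically, the retrieval-based judgment can be written as a normalized similarity; the notation below is ours, and the paper's formalization may differ in detail.

```latex
% Ease of retrieving instances of hypothesis h when cued with data d,
% written as a normalized retrieval strength S(d,h); judged probability
% tracks this ease, so changes to the competing terms in the denominator
% (contextual interference) shift the judgment even when S(d,h) is fixed.
\[
\hat{P}(h \mid d) \;=\; \frac{S(d,h)}{\sum_{h'} S(d,h')}
\]
```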

Journal ArticleDOI
TL;DR: In this paper, the authors redefine the notion of linguistic context using socially based contextual measures derived from the online communication patterns of hundreds of thousands of individuals on the discussion forum Reddit, a corpus of over 55 billion words.
Abstract: Contextual diversity (CD; Adelman, Brown, & Quesada, 2006) modifies word frequency by ignoring word repetition in context. It has been repeatedly found that a CD count provides a better fit to lexical organization data than does word frequency (e.g., Adelman & Brown, 2008; Brysbaert & New, 2009). The importance of CD has been interpreted with the principle of likely need, adapted from the rational analysis of memory (Anderson & Schooler, 1991), which states that words that have been used in many past contexts are more likely to be needed in a future context. Central to the cognitive mechanisms of computing likely need is a definition of linguistic context itself. Typically, linguistic context is defined by relatively small units of language, such as a document within a corpus. However, recent research has demonstrated that larger definitions of context, some spanning tens or hundreds of thousands of words, provide a better accounting of lexical organization data (Johns, Dye, & Jones, 2020). This article attempts to redefine the notion of linguistic context by using socially based contextual measures, derived from the online communication patterns of hundreds of thousands of individuals from the discussion forum Reddit, consisting of over 55 billion words. Multiple count-based and semantic diversity models of contextual diversity were derived from this data. The results demonstrate that the communication patterns of individuals across discourses provide the best accounting of lexical organization data, indicating that classic notions of using local linguistic context to update a word's strength in the lexicon need to be reevaluated. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
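
The count-based distinction the article builds on is easy to state in code: word frequency counts every token, whereas a contextual diversity count credits a word at most once per context. A toy illustration follows; the corpus and the definition of a "context" here are ours.

```python
from collections import Counter

# Toy corpus: each inner list is one "context" (e.g., a document or a
# single user's discourse). The CD count credits a word once per context,
# whereas word frequency counts every token.
contexts = [
    "the cat sat on the mat".split(),
    "the dog chased the cat".split(),
    "stocks fell as the market closed".split(),
]

word_frequency = Counter(tok for ctx in contexts for tok in ctx)
contextual_diversity = Counter(tok for ctx in contexts for tok in set(ctx))

for w in ["the", "cat", "market"]:
    print(f"{w!r}: WF = {word_frequency[w]}, CD = {contextual_diversity[w]}")
# 'the': WF = 5, CD = 3 -- repetitions within a context inflate WF but not CD
```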

Journal ArticleDOI
TL;DR: This paper applied a model of preferences about the presence and absence of information to the domain of decision making under risk and ambiguity, and found that people will wager more about events that they enjoy (rather than dislike) thinking about.
Abstract: We apply a model of preferences about the presence and absence of information to the domain of decision making under risk and ambiguity. An uncertain prospect exposes an individual to 1 or more information gaps, specific unanswered questions that capture attention. Gambling makes these questions more important, attracting more attention to them. To the extent that the uncertainty (or other circumstances) makes these information gaps unpleasant to think about, an individual tends to be averse to risk and ambiguity. Yet in circumstances in which thinking about an information gap is pleasant, an individual may exhibit risk- and ambiguity-seeking. The model provides explanations for source preference regarding uncertainty, the comparative ignorance effect under conditions of ambiguity, aversion to compound risk, and a variety of other phenomena. We present 2 empirical tests of one of the model's novel predictions, which is that people will wager more about events that they enjoy (rather than dislike) thinking about. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: In this paper, a computational model is presented that formally characterizes the ways we perceive or misperceive bodily symptoms in the context of panic attacks; the model is grounded in Active Inference, which treats top-down prediction and attention dynamics as key to perceptual inference and action selection.
Abstract: We advance a novel computational model that characterizes formally the ways we perceive or misperceive bodily symptoms, in the context of panic attacks. The computational model is grounded within the formal framework of Active Inference, which considers top-down prediction and attention dynamics as key to perceptual inference and action selection. In a series of simulations, we use the computational model to reproduce key facets of adaptive and maladaptive symptom perception: the ways we infer our bodily state by integrating prior information and somatic afferents; the ways we decide whether or not to attend to somatic channels; the ways we use the symptom inference to make decisions about taking or not taking a medicine; and the ways all the above processes can go awry, determining symptom misperception and ensuing maladaptive behaviors, such as hypervigilance or excessive medicine use. While recent existing theoretical treatments of psychopathological conditions focus on prediction-based perception (predictive coding), our computational model goes beyond them, in at least two ways. First, it includes action and attention selection dynamics that are disregarded in previous conceptualizations but are crucial to fully understand the phenomenology of bodily symptom perception and misperception. Second, it is a fully implemented model that generates specific (and personalized) quantitative predictions, thus going beyond previous qualitative frameworks. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
TL;DR: Grounding people's inferences in CET demonstrates how the behaviors of a boundedly rational mind can be better predicted once accounts of the mind and the environment are fused.
Abstract: In many choice environments, risks and rewards (or probabilities and payoffs) seem tightly coupled such that high payoffs only occur with low probabilities. An adaptive mind can exploit this association by, for instance, using a potential reward's size to infer the probability of obtaining it. However, a mind can only adapt to and exploit an environmental structure if it is ecologically reliable, that is, if it is frequent and recurrent. We develop the competitive risk-reward ecology theory (CET) that establishes how the ecology of competition can make the association of high rewards with low probabilities ubiquitous. This association occurs because of what is known as the ideal free distribution (IFD) principle. The IFD states that competitors in a landscape of resource patches distribute themselves proportionally to the gross total amount of resources in the patches. CET shows how this principle implies a risk-reward structure: an inverse relationship between probabilities and payoffs. It also identifies boundary conditions for the risk-reward structure, including heterogeneity of resources, computational limits of competitors, and scarcity of resources. Finally, a set of empirical studies (N = 1,255) demonstrate that people's beliefs map onto properties predicted by CET and change as a function of the environment. In sum, grounding people's inferences in CET demonstrates how the behaviors of a boundedly rational mind can be better predicted once accounts of the mind and the environment are fused. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
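
The inverse risk-reward structure follows directly from the IFD principle described above; a compact rendering in our notation, with a one-line worked example:

```latex
% Ideal free distribution: competitors n_i spread in proportion to each
% patch's gross reward R_i, equalizing expected intake R_i/n_i. If a
% single prize goes to one of the n_i competitors in a patch, the chance
% of obtaining it is roughly 1/n_i, hence inversely proportional to R_i:
\[
n_i \propto R_i
\quad\Longrightarrow\quad
p_i \approx \frac{1}{n_i} \propto \frac{1}{R_i}
\]
% Example: a patch worth 100 draws ten times as many competitors as a
% patch worth 10, so the probability of securing the larger reward is
% about one tenth as large.
```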

Journal ArticleDOI
TL;DR: The results suggest that principles of order in natural language can be explained via highly generic cognitively motivated principles and lend support to efficiency-based models of the structure of human language.
Abstract: Memory limitations are known to constrain language comprehension and production, and have been argued to account for crosslinguistic word order regularities. However, a systematic assessment of the role of memory limitations in language structure has proven elusive, in part because it is hard to extract precise large-scale quantitative generalizations about language from existing mechanistic models of memory use in sentence processing. We provide an architecture-independent information-theoretic formalization of memory limitations which enables a simple calculation of the memory efficiency of languages. Our notion of memory efficiency is based on the idea of a memory-surprisal trade-off: A certain level of average surprisal per word can only be achieved at the cost of storing some amount of information about the past context. Based on this notion of memory usage, we advance the Efficient Trade-off Hypothesis: The order of elements in natural language is under pressure to enable favorable memory-surprisal trade-offs. We derive that languages enable more efficient trade-offs when they exhibit information locality: When predictive information about an element is concentrated in its recent past. We provide empirical evidence from three test domains in support of the Efficient Trade-off Hypothesis: A reanalysis of a miniature artificial language learning experiment, a large-scale study of word order in corpora of 54 languages, and an analysis of morpheme order in two agglutinative languages. These results suggest that principles of order in natural language can be explained via highly generic cognitively motivated principles and lend support to efficiency-based models of the structure of human language. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
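
One way to make the trade-off concrete is the standard chain-rule decomposition of surprisal. The identities below are information-theoretic facts; reading them as a paraphrase of the paper's formalization is our gloss.

```latex
% Conditional mutual information between a word and the word t positions
% back, given the intervening words:
\[
I_t \;=\; I\!\left(w_0;\, w_{-t} \mid w_{-t+1}, \ldots, w_{-1}\right)
\]
% By the chain rule, conditioning on the last T words lowers average
% surprisal by exactly the first T terms of this series:
\[
H\!\left(w_0 \mid w_{-1}, \ldots, w_{-T}\right) \;=\; H(w_0) - \sum_{t=1}^{T} I_t
\]
% Information locality means the I_t mass is concentrated at small t, so a
% listener who stores only a short recent context already captures most of
% the available predictive information, which is the favorable trade-off
% referred to above.
```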

Journal ArticleDOI
TL;DR: This article responds to Phillips's argument that blindsight is due to response criterion artifacts under degraded conscious vision; while that view provides alternative explanations for some studies, it may not work well when one considers several key findings in conjunction, for example that not all criterion effects are decidedly nonperceptual.
Abstract: Phillips argues that blindsight is due to response criterion artifacts under degraded conscious vision. His view provides alternative explanations for some studies, but may not work well when one considers several key findings in conjunction. Empirically, not all criterion effects are decidedly nonperceptual. Awareness is not completely abolished for some stimuli, in some patients. But in other cases, it is clearly impaired relative to the corresponding visual sensitivity. This relative dissociation is what makes blindsight so important and interesting. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
TL;DR: In this paper, a generative model is presented that describes how confidence judgments result from confidence evidence; the model can be used to estimate whether confidence is generated using the same primary information used for the perceptual decision or some secondary information.
Abstract: Perceptual confidence is an evaluation of the validity of our perceptual decisions. We present here a complete generative model that describes how confidence judgments result from some confidence evidence. The model that generates confidence evidence has two main parameters, confidence noise and confidence boost. Confidence noise reduces the sensitivity to the confidence evidence, and confidence boost accounts for information used for confidence judgment which was not used for the perceptual decision. The opposite effect of these two parameters creates a problem of confidence parameters indeterminacy, where the confidence in a perceptual decision is the same in spite of differences in confidence noise and confidence boost. When confidence is estimated for multiple stimulus strengths, both of these parameters can be recovered, thus allowing us to estimate whether confidence is generated using the same primary information that was used for the perceptual decision or some secondary information. We also describe a novel measure of confidence efficiency relative to the ideal confidence observer, as well as the estimate of one type of confidence bias. Finally, we apply the model to the confidence forced-choice paradigm, a paradigm that provides objective estimates of confidence, and we discuss how each parameter of the model can be recovered using this paradigm. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
TL;DR: In this article, the options framework in hierarchical reinforcement learning provides a theoretical framework for representing such transferable strategies, where options are abstract multi-step policies, assembled from simpler one-step actions or other options that can represent meaningful reusable strategies as temporal abstractions.
Abstract: Humans use prior knowledge to efficiently solve novel tasks, but how they structure past knowledge during learning to enable such fast generalization is not well understood. We recently proposed that hierarchical state abstraction enabled generalization of simple one-step rules, by inferring context clusters for each rule. However, humans' daily tasks are often temporally extended, and necessitate more complex multi-step, hierarchically structured strategies. The options framework in hierarchical reinforcement learning provides a theoretical framework for representing such transferable strategies. Options are abstract multi-step policies, assembled from simpler one-step actions or other options, that can represent meaningful reusable strategies as temporal abstractions. We developed a novel sequential decision-making protocol to test if humans learn and transfer multi-step options. In a series of four experiments, we found transfer effects at multiple hierarchical levels of abstraction that could not be explained by flat reinforcement learning models or hierarchical models lacking temporal abstractions. We extended the options framework to develop a quantitative model that blends temporal and state abstractions. Our model captures the transfer effects observed in human participants. Our results provide evidence that humans create and compose hierarchical options, and use them to explore in novel contexts, consequently transferring past knowledge and speeding up learning. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
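
The options formalism referred to above is standard (Sutton, Precup, & Singh, 1999): an option is defined by an initiation set, an internal policy, and a termination condition. The sketch below uses that textbook definition to make the notion of a temporal abstraction concrete; it is not the authors' task-specific implementation.

```python
from dataclasses import dataclass
from typing import Any, Callable, Set

State, Action = Any, Any

@dataclass
class Option:
    """An option in the Sutton-Precup-Singh sense: a temporally extended
    action defined by where it can start, what it does, and when it stops."""
    initiation_set: Set[State]            # states where the option may be invoked
    policy: Callable[[State], Action]     # maps states to actions (or sub-options)
    termination: Callable[[State], float] # probability of terminating in each state

def run_option(option: Option, state: State,
               step: Callable[[State, Action], State],
               rng, max_steps: int = 100) -> State:
    """Execute an option until its termination condition fires."""
    assert state in option.initiation_set
    for _ in range(max_steps):
        state = step(state, option.policy(state))
        if rng.random() < option.termination(state):
            break
    return state
```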

Journal ArticleDOI
TL;DR: This article develops an episodic flanker task analogous to the well-known perceptual flanker task, together with models of the spotlight of attention focused on a memory list; the data and models support the conjecture that memory retrieval is attention turned inward.
Abstract: This article tests the conjecture that memory retrieval is attention turned inward by developing an episodic flanker task that is analogous to the well-known perceptual flanker task and by developing models of the spotlight of attention focused on a memory list. Participants were presented with a list to remember (ABCDEF) followed by a probe in which one letter was cued (# # C # # #). The task was to indicate whether the cued letter matched the letter in the cued position in the memory list. The data showed classic results from the perceptual flanker task. Response time and accuracy were affected by the distance between the cued letter in the probe and the memory list (# # D # # # was worse than # # E # # #) and by the compatibility of the uncued letters in the probe and the memory list (ABCDEF was better than STCRVX). There were six experiments. The first four established distance and compatibility effects. The fifth generalized the results to sequential presentation of memory lists, and the sixth tested the boundary conditions of distance and flanker effects with an item recognition task. The data were fitted with three families of models that apply space-based, object-based, and template-based theories of attention to the problem of focusing attention on the cued item in memory. The models accounted for the distance and compatibility effects, providing measures of the sharpness of the focus of attention on memory and the ability to ignore distraction from uncued items. Together, the data and theory support the conjecture that memory retrieval is attention turned inward and motivate further research on the topic. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
TL;DR: In this paper, the authors introduce a new model of secrecy, which defines secrecy as an intention to keep information unknown by one or more others, rather than an action (active concealment).
Abstract: Secrecy is a common and consequential human experience, and yet the literature lacks an integrative theoretical model that captures this broad experience. Whereas initial research focused on concealment (an action a person may take to keep a secret), recent literature documents the broader experience of having a secret. For instance, even if a secret is not being concealed in the moment, one's mind can still wander to thoughts of the secret with consequences for well-being. Integrating several disparate literatures, the present work introduces a new model of secrecy. Rather than define secrecy as an action (active concealment), the model defines secrecy as an intention to keep information unknown by one or more others. Like any other intention, secrecy increases sensitivity to internal or external cues related to the intention. Critically, secret-relevant thoughts are cued in one of two broad contexts: (a) during a social interaction that calls for concealment, and (b) the situations outside of those social interactions, where concealment is not required. Having a secret come to mind in these two very different situations evokes a set of distinct processes and outcomes. Concealment (enacting one's secrecy intention) predicts monitoring, expressive inhibition, and alteration, which consumes regulatory resources and may result in lower interaction quality. Mind-wandering to the secret (when concealment is not required) involves passively thinking about the content of the secret. Engagement with these thoughts may lead to repetitive thinking and rumination, reflection on how one feels about the secret, efforts to cope, or specific plans for how to handle the secret. The model brings together a number of literatures with implications for secrecy, identity concealment, relationships, mind-wandering, coping, health and well-being. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
Daniel Algom
TL;DR: The roots and derivations of the relevant laws are discussed, with formalism pared down to bare essentials for the sake of wider accessibility, and confusions in the literature arising from misapplications of Weber's law and from use of the misnomer are illustrated.
Abstract: The term "Weber-Fechner law" is arguably the most widely used misnomer in psychological science. The unification reflects a failure to appreciate the logical independence and disparate implications of Weber's law and Fechner's law as well as some closely aligned ones. The present statement, long overdue, is meant to rectify this situation. I discuss the roots and derivations of the relevant laws, eschewing formalism to bare essentials for sake of wider accessibility. Three of the most important conclusions are (a) Weber's law is not indispensable for deriving Fechner's law; (b) arguably, Fechner himself did not use Weber's law in his original derivations; and (c) many investigators mistake the principle that subjective distance is determined by physical ratio for Weber's law. In truth, the principle, here called the Weber principle, and Weber's law, are different and independent. I stress the importance of drawing the distinction and illustrate confusions in the literature coming from misapplications of Weber's law and the use of misnomer. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
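
For readers who want the laws side by side, the classical statements are as follows; these are standard forms, not new results of the article.

```latex
% Weber's law: the just-noticeable difference grows in proportion to the
% baseline intensity I,
\[
\frac{\Delta I}{I} = k \quad (\text{a constant})
\]
% Fechner's law: sensation magnitude grows with the logarithm of intensity
% relative to the absolute threshold I_0,
\[
S = c\,\ln\!\frac{I}{I_0}
\]
% The "Weber principle" the article distinguishes from Weber's law is the
% separate claim that equal physical ratios I_2/I_1 correspond to equal
% subjective distances; the two statements are logically independent.
```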

Journal ArticleDOI
TL;DR: In this paper, structural parameter interdependencies between value function and choice rule parameters are traced across several prominent computational models, including models of risky choice (cumulative prospect theory), categorization (the generalized context model), and memory (the SIMPLE model of free recall).
Abstract: Computational modeling of cognition allows latent psychological variables to be measured by means of adjustable model parameters. The estimation and interpretation of the parameters are impaired, however, if parameters are strongly intercorrelated within the model. We point out that strong parameter interdependencies are especially likely to emerge in models that combine a subjective value function with a probabilistic choice rule-a common structure in the literature. We trace structural parameter interdependencies between value function and choice rule parameters across several prominent computational models, including models on risky choice (cumulative prospect theory), categorization (the generalized context model), and memory (the SIMPLE model of free recall). Using simulation studies with a generic choice model, we show that the accuracy in parameter estimation is hampered in the presence of high parameter intercorrelations, particularly the ability to detect group differences on the parameters and associations of the parameters with external variables. We demonstrate that these problems can be alleviated by using a different specification of stochasticity in the model, for example, by assuming parameter stochasticity or a constant error term. In addition, application to two empirical data sets of risky choice shows that alleviating parameter interdependencies in this way can lead to different conclusions about the estimated parameters. Our analyses highlight a common but often neglected problem of computational models of cognition and identify ways in which the design and application of such models can be improved. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
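
The structural interdependency is easy to demonstrate with a generic value-function-plus-choice-rule model of the kind the article analyzes. In the sketch below, the toy option values and parameter settings are ours; two quite different combinations of curvature and sensitivity yield nearly the same predicted choice probabilities.

```python
import numpy as np

def choice_prob(x_a, x_b, alpha, theta):
    """Generic model: a power value function v(x) = x**alpha combined with
    a logistic choice rule with sensitivity theta (toy option values)."""
    v_a, v_b = x_a ** alpha, x_b ** alpha
    return 1.0 / (1.0 + np.exp(-theta * (v_a - v_b)))

pairs = [(10, 8), (12, 9), (9, 6), (11, 10)]

p_linear = [choice_prob(a, b, alpha=1.0, theta=0.50) for a, b in pairs]
p_curved = [choice_prob(a, b, alpha=0.7, theta=1.38) for a, b in pairs]

# The two parameter settings are quite different, yet their predicted
# choice probabilities agree to within roughly 0.01: compressing values
# (lower alpha) is almost perfectly offset by a higher sensitivity theta.
# This near-trade-off is the structural interdependency the article
# traces, and it is what degrades separate estimation of the parameters.
print(np.round(p_linear, 3))
print(np.round(p_curved, 3))
print("max abs. difference:",
      round(max(abs(p - q) for p, q in zip(p_linear, p_curved)), 3))
```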

Journal ArticleDOI
TL;DR: In this paper, a spectrum of models was designed ranging from independent accumulation to fully dependent accumulation, while also examining the effects of within-trial and between-trial variability.
Abstract: The dynamics of decision-making have been widely studied over the past several decades through the lens of an overarching theory called sequential sampling theory (SST). Within SST, choices are represented as accumulators, each of which races toward a decision boundary by drawing stochastic samples of evidence through time. Although progress has been made in understanding how decisions are made within the SST framework, considerable debate centers on whether the accumulators exhibit dependency during the evidence accumulation process; namely, whether accumulators are independent, fully dependent, or partially dependent. To evaluate which type of dependency is the most plausible representation of human decision-making, we applied a novel twist on two classic perceptual tasks; namely, in addition to the classic paradigm (i.e., the unequal-evidence conditions), we used stimuli that provided different magnitudes of equal-evidence (i.e., the equal-evidence conditions). In equal-evidence conditions, response times systematically decreased with increase in the magnitude of evidence, whereas in unequal-evidence conditions, response times systematically increased as the difference in evidence between the two alternatives decreased. We designed a spectrum of models that ranged from independent accumulation to fully dependent accumulation, while also examining the effects of within-trial and between-trial variability (BTV). We then fit the set of models to our two experiments and found that models instantiating the principles of partial dependency provided the best fit to the data. Our results further suggest that mechanisms inducing partial dependency, such as lateral inhibition, are beneficial for understanding complex decision-making dynamics, even when the task is relatively simple. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
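
A minimal sketch of how a single coupling parameter can interpolate between an independent race and dependent accumulation via lateral inhibition; the parameterization is illustrative and simpler than the model spectrum examined in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def race_trial(drift_a=1.0, drift_b=0.8, inhibition=0.0, bound=1.0,
               noise=1.0, dt=0.001, max_t=3.0):
    """Two accumulators that each integrate their own evidence but also
    subtract a fraction of the other's activation (lateral inhibition).
    inhibition = 0 gives an independent race; larger values make the
    accumulators increasingly dependent. Illustrative sketch only."""
    x_a = x_b = 0.0
    t = 0.0
    while t < max_t:
        da = (drift_a - inhibition * x_b) * dt + noise * np.sqrt(dt) * rng.normal()
        db = (drift_b - inhibition * x_a) * dt + noise * np.sqrt(dt) * rng.normal()
        x_a = max(0.0, x_a + da)   # activations kept non-negative
        x_b = max(0.0, x_b + db)
        t += dt
        if x_a >= bound or x_b >= bound:
            return t, ("A" if x_a >= bound else "B")
    return max_t, None

for beta in (0.0, 2.0):
    rts = [race_trial(inhibition=beta)[0] for _ in range(500)]
    print(f"inhibition={beta}: mean RT = {np.mean(rts):.3f} s")
```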

Journal ArticleDOI
TL;DR: The context-unified encoding (CUE) model is presented: a large-scale spiking neural network model of human memory that integrates activity-based short-term memory (STM) with weight-based long-term memory; the spiking implementation ensures biological plausibility and allows for predictions at the neural level.
Abstract: We present the context-unified encoding (CUE) model, a large-scale spiking neural network model of human memory. It combines and integrates activity-based short-term memory (STM) with weight-based long-term memory. The implementation with spiking neurons ensures biological plausibility and allows for predictions on the neural level. At the same time, the model produces behavioral outputs that have been matched to human data from serial and free recall experiments. In particular, well-known results such as primacy, recency, transposition error gradients, and forward recall bias have been reproduced with good quantitative matches. Additionally, the model accounts for the Hebb repetition effect. The CUE model combines and extends the ordinal serial encoding model, a spiking neuron model of STM, and the temporal context model, a mathematical memory model matching free recall data. To implement the modification of the required association matrices, a novel learning rule, the association matrix learning rule, is derived that allows for one-shot learning without catastrophic forgetting. Its biological plausibility is discussed and it is shown that it accounts for changes in neural firing observed in human recordings from an association learning experiment. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: In this paper, a Bayesian approach was proposed for one representative computational model of sentence reading (SWIFT), which was fitted to subjects and experimental conditions individually to investigate between-subject variability.
Abstract: In eye-movement control during reading, advanced process-oriented models have been developed to reproduce behavioral data. So far, model complexity and large numbers of model parameters prevented rigorous statistical inference and modeling of interindividual differences. Here we propose a Bayesian approach to both problems for one representative computational model of sentence reading (SWIFT; Engbert et al., Psychological Review, 112, 2005, pp. 777-813). We used experimental data from 36 subjects who read the text in a normal and one of four manipulated text layouts (e.g., mirrored and scrambled letters). The SWIFT model was fitted to subjects and experimental conditions individually to investigate between-subject variability. Based on posterior distributions of model parameters, fixation probabilities and durations are reliably recovered from simulated data and reproduced for withheld empirical data, at both the experimental condition and subject levels. A subsequent statistical analysis of model parameters across reading conditions generates model-driven explanations for observable effects between conditions. (PsycInfo Database Record (c) 2021 APA, all rights reserved).