
Showing papers in "Psychological Review in 2018"


Journal ArticleDOI
TL;DR: This work provides a comprehensive theoretical review and organization of a psychologically informed approach to social problems, one that encompasses a wide range of interventions and applies to diverse problem areas.
Abstract: Long-standing social problems such as poor achievement, personal and intergroup conflict, bad health, and unhappiness can seem like permanent features of the social landscape. We describe an approach to such problems rooted in basic theory and research in social psychology. This approach emphasizes subjective meaning-making (the working hypotheses people draw about themselves, other people, and social situations); how deleterious meanings can arise from social and cultural contexts; how interventions to change meanings can help people flourish; and how initial change can become embedded to alter the course of people's lives. We further describe how this approach relates to and complements other prominent approaches to social reform, which emphasize not subjective meaning-making but objective change in situations or in the habits and skills of individuals. In so doing, we provide a comprehensive theoretical review and organization of a psychologically informed approach to social problems, one that encompasses a wide range of interventions and applies to diverse problem areas. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

261 citations


Journal ArticleDOI
TL;DR: A unifying, comprehensive theoretical framework for understanding dark personality in terms of a general dispositional tendency of which dark traits arise as specific manifestations, which is called the Dark Factor of Personality (D).
Abstract: Many negatively connoted personality traits (often termed "dark traits") have been introduced to account for ethically, morally, and socially questionable behavior. Herein, we provide a unifying, comprehensive theoretical framework for understanding dark personality in terms of a general dispositional tendency of which dark traits arise as specific manifestations. That is, we theoretically specify the common core of dark traits, which we call the Dark Factor of Personality (D). The fluid concept of D captures individual differences in the tendency to maximize one's individual utility (disregarding, accepting, or malevolently provoking disutility for others), accompanied by beliefs that serve as justifications. To critically test D, we unify and extend prior work methodologically and empirically by considering a large number of dark traits simultaneously, using statistical approaches tailored to capture both the common core and the unique content of dark traits, and testing the predictive validity of both D and the unique content of dark traits with respect to diverse criteria including fully consequential and incentive-compatible behavior. In a series of four studies (N > 2,500), we provide evidence in support of the theoretical conceptualization of D, show that dark traits can be understood as specific manifestations of D, demonstrate that D predicts a multitude of criteria in the realm of ethically, morally, and socially questionable behavior, and illustrate that D does not depend on any particular indicator variable included. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

233 citations


Journal ArticleDOI
TL;DR: A new model that describes boredom as an affective indicator of unsuccessful attentional engagement in valued goal-congruent activity is presented and empirical support for four novel predictions made by the model is presented.
Abstract: What is boredom? We review environmental, attentional, and functional theories and present a new model that describes boredom as an affective indicator of unsuccessful attentional engagement in valued goal-congruent activity. According to the Meaning and Attentional Components (MAC) model, boredom is the result of (a) an attentional component, namely mismatches between cognitive demands and available mental resources, and (b) a meaning component, namely mismatches between activities and valued goals (or the absence of valued goals altogether). We present empirical support for four novel predictions made by the model: (a) Deficits in attention and meaning each produce boredom independently of the other; (b) there are different profiles of boredom that result from specific deficits in attention and meaning; (c) boredom results from two types of attentional deficits, understimulation and overstimulation; and (d) the model explains not only when and why people become bored with external activities, but also when and why people become bored with their own thoughts. We discuss further implications of the model, such as when boredom motivates people to seek interesting versus enjoyable activities. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

154 citations


Journal ArticleDOI
TL;DR: It is argued that focusing on the contrast between dynamic and static modeling approaches has led to an unrealistic view of testing and finding support for the network approach, as well as an oversimplified picture of the relationship between medical diseases and mental disorders.
Abstract: The network approach to psychopathology is becoming increasingly popular. The motivation for this approach is to provide a replacement for the problematic common cause perspective and the associated latent variable model, where symptoms are taken to be mere effects of a common cause (the disorder itself). The idea is that the latent variable model is plausible for medical diseases, but unrealistic for mental disorders, which should rather be conceptualized as networks of directly interacting symptoms. We argue that this rationale for the network approach is misguided. Latent variable (or common cause) models are not inherently problematic, and there is not even a clear boundary where network models end and latent variable (or common cause) models begin. We also argue that focusing on this contrast has led to an unrealistic view of testing and finding support for the network approach, as well as an oversimplified picture of the relationship between medical diseases and mental disorders. As an alternative, we point out more essential contrasts, such as the contrast between dynamic and static modeling approaches that can provide a better framework for conceptualizing mental disorders. Finally, we discuss several topics and open problems that need to be addressed in order to make the network approach more concrete and to move the field of psychological network research forward. (PsycINFO Database Record

141 citations


Journal ArticleDOI
TL;DR: A computational model of reading, OB1-reader, which integrates insights from both literatures and provides a fruitful and parsimonious theoretical framework for understanding reading behavior is presented.
Abstract: Decades of reading research have led to sophisticated accounts of single-word recognition and, in parallel, accounts of eye-movement control in text reading. Although these two endeavors have strongly advanced the field, their relative independence has precluded an integrated account of the reading process. To bridge the gap, we here present a computational model of reading, OB1-reader, which integrates insights from both literatures. Key features of OB1 are as follows: (1) parallel processing of multiple words, modulated by an attentional window of adaptable size; (2) coding of input through a layer of open bigram nodes that represent pairs of letters and their relative position; (3) activation of word representations based on constituent bigram activity, competition with other word representations and contextual predictability; (4) mapping of activated words onto a spatiotopic sentence-level representation to keep track of word order; and (5) saccade planning, with the saccade goal being dependent on the length and activation of surrounding word units, and the saccade onset being influenced by word recognition. A comparison of simulation results with experimental data shows that the model provides a fruitful and parsimonious theoretical framework for understanding reading behavior. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
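The open bigram coding in point (2) is a standard scheme in the word-recognition literature and is easy to illustrate. The sketch below is a generic implementation, not code from OB1-reader; the `max_gap` limit and the overlap score are illustrative assumptions.

```python
def open_bigrams(word, max_gap=2):
    """Ordered letter pairs (open bigrams) coding letter identity and
    relative, rather than absolute, position. Pairs separated by more than
    max_gap intervening letters are excluded (an illustrative choice)."""
    bigrams = set()
    for i in range(len(word)):
        for j in range(i + 1, min(i + 2 + max_gap, len(word))):
            bigrams.add(word[i] + word[j])
    return bigrams

def bigram_overlap(input_letters, lexical_word):
    """Crude proxy for word-node activation: the proportion of the lexical
    entry's bigrams that are active in the input."""
    active, lexical = open_bigrams(input_letters), open_bigrams(lexical_word)
    return len(active & lexical) / len(lexical)

print(open_bigrams("word"))            # {'wo', 'wr', 'wd', 'or', 'od', 'rd'}
print(bigram_overlap("wrod", "word"))  # transposed letters still match well (~0.83)
```

Because relative order is preserved even when absolute letter positions shift, a code of this kind tolerates the positional noise that arises when several words are processed in parallel within one attentional window.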

126 citations


Journal ArticleDOI
TL;DR: The model accounts for executive influences on semantics by including a controlled retrieval mechanism that provides top-down input to amplify weak semantic relationships and successfully codes knowledge for abstract and concrete words, associative and taxonomic relationships, and the multiple meanings of homonyms, within a single representational space.
Abstract: Semantic cognition requires conceptual representations shaped by verbal and nonverbal experience and executive control processes that regulate activation of knowledge to meet current situational demands. A complete model must also account for the representation of concrete and abstract words, of taxonomic and associative relationships, and for the role of context in shaping meaning. We present the first major attempt to assimilate all of these elements within a unified, implemented computational framework. Our model combines a hub-and-spoke architecture with a buffer that allows its state to be influenced by prior context. This hybrid structure integrates the view, from cognitive neuroscience, that concepts are grounded in sensory-motor representation with the view, from computational linguistics, that knowledge is shaped by patterns of lexical co-occurrence. The model successfully codes knowledge for abstract and concrete words, associative and taxonomic relationships, and the multiple meanings of homonyms, within a single representational space. Knowledge of abstract words is acquired through (a) their patterns of co-occurrence with other words and (b) acquired embodiment, whereby they become indirectly associated with the perceptual features of co-occurring concrete words. The model accounts for executive influences on semantics by including a controlled retrieval mechanism that provides top-down input to amplify weak semantic relationships. The representational and control elements of the model can be damaged independently, and the consequences of such damage closely replicate effects seen in neuropsychological patients with loss of semantic representation versus control processes. Thus, the model provides a wide-ranging and neurally plausible account of normal and impaired semantic cognition. (PsycINFO Database Record

122 citations


Journal Article
TL;DR: A new scale is developed, refine, and validate to dissociate individual differences in the ‘negative’ and ‘positive’ dimensions of utilitarian thinking as manifested in the general population, and it is shown that these are two independent dimensions of proto-utilitarian tendencies in the lay population, each exhibiting a distinct psychological profile.
Abstract: Recent research has relied on trolley-type sacrificial moral dilemmas to study utilitarian versus nonutilitarian modes of moral decision-making. This research has generated important insights into people’s attitudes toward instrumental harm—that is, the sacrifice of an individual to save a greater number. But this approach also has serious limitations. Most notably, it ignores the positive, altruistic core of utilitarianism, which is characterized by impartial concern for the well-being of everyone, whether near or far. Here, we develop, refine, and validate a new scale—the Oxford Utilitarianism Scale—to dissociate individual differences in the ‘negative’ (permissive attitude toward instrumental harm) and ‘positive’ (impartial concern for the greater good) dimensions of utilitarian thinking as manifested in the general population. We show that these are two independent dimensions of proto-utilitarian tendencies in the lay population, each exhibiting a distinct psychological profile. Empathic concern, identification with the whole of humanity, and concern for future generations were positively associated with impartial beneficence but negatively associated with instrumental harm; and although instrumental harm was associated with subclinical psychopathy, impartial beneficence was associated with higher religiosity. Importantly, although these two dimensions were independent in the lay population, they were closely associated in a sample of moral philosophers. Acknowledging this dissociation between the instrumental harm and impartial beneficence components of utilitarian thinking in ordinary people can clarify existing debates about the nature of moral psychology and its relation to moral philosophy as well as generate fruitful avenues for further research.

76 citations


Journal ArticleDOI
TL;DR: It is proposed here that a behavioral ecological perspective, particularly the idea of adaptive phenotypic plasticity, can provide an overarching framework for thinking about psychological variation across cultures and societies.
Abstract: Recent work has documented a wide range of important psychological differences across societies. Multiple explanations have been offered for why such differences exist, including historical philosophies, subsistence methods, social mobility, social class, climatic stresses, and religion. With the growing body of theory and data, there is an emerging need for an organizing framework. We propose here that a behavioral ecological perspective, particularly the idea of adaptive phenotypic plasticity, can provide an overarching framework for thinking about psychological variation across cultures and societies. We focus on how societies vary as a function of six important ecological dimensions: density, relatedness, sex ratio, mortality likelihood, resources, and disease. This framework can: (a) highlight new areas of research, (b) integrate and ground existing cultural psychological explanations, (c) integrate research on variation across human societies with research on parallel variations in other animal species, (d) provide a way for thinking about multiple levels of culture and cultural change, and (e) facilitate the creation of an ecological taxonomy of societies, from which one can derive specific predictions about cultural differences and similarities. Finally, we discuss the relationships between the current framework and existing perspectives. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

71 citations


Journal ArticleDOI
TL;DR: The results reveal a normative rationale for center-surround connectivity in working memory circuitry, call for reevaluation of memory performance differences that have previously been attributed to differences in capacity, and support a more nuanced view of visual working memory capacity limitations.
Abstract: The nature of capacity limits for visual working memory has been the subject of an intense debate that has relied on models that assume items are encoded independently. Here we propose that instead, similar features are jointly encoded through a "chunking" process to optimize performance on visual working memory tasks. We show that such chunking can: (a) facilitate performance improvements for abstract capacity-limited systems, (b) be optimized through reinforcement, (c) be implemented by center-surround dynamics, and (d) increase effective storage capacity at the expense of recall precision. Human performance on a variant of a canonical working memory task demonstrated performance advantages, precision detriments, interitem dependencies, and trial-to-trial behavioral adjustments diagnostic of performance optimization through center-surround chunking. Models incorporating center-surround chunking provided a better quantitative description of human performance in our study as well as in a meta-analytic dataset, and apparent differences in working memory capacity across individuals were attributable to individual differences in the implementation of chunking. Our results reveal a normative rationale for center-surround connectivity in working memory circuitry, call for reevaluation of memory performance differences that have previously been attributed to differences in capacity, and support a more nuanced view of visual working memory capacity limitations: a strategic tradeoff between storage capacity and memory precision through chunking contributes to flexible capacity limitations that include both discrete and continuous aspects. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
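As a toy illustration of the capacity-precision trade-off described above, the sketch below merges items with similar feature values into a single stored chunk. The merge criterion and running-mean rule are assumptions made for illustration; they are not the paper's center-surround implementation.

```python
def encode_with_chunking(features, merge_criterion=15.0):
    """Items whose feature value lies within merge_criterion of an existing
    chunk are folded into that chunk (stored as a running mean). Fewer
    representations are held (higher effective capacity), but merged items
    can only be recalled as the shared chunk mean (lower precision)."""
    chunks = []  # each chunk is [mean, count]
    for value in features:
        for chunk in chunks:
            if abs(value - chunk[0]) <= merge_criterion:
                chunk[1] += 1
                chunk[0] += (value - chunk[0]) / chunk[1]
                break
        else:
            chunks.append([value, 1])
    return chunks

items = [10.0, 14.0, 90.0, 200.0, 205.0]   # e.g., five feature values to remember
print(encode_with_chunking(items))         # 5 items stored as 3 chunks
```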

71 citations


Journal ArticleDOI
TL;DR: Findings from process-tracing studies are used to constrain the evidence accumulation process and this article proposes the multialternative decision by sampling (MDbS) model, which provides a quantitative account of the attraction, compromise, and similarity effects equal to that of other models, and captures a wider range of empirical phenomena than other models.
Abstract: Sequential sampling of evidence, or evidence accumulation, has been implemented in a variety of models to explain a range of multialternative choice phenomena. But the existing models do not agree on what, exactly, the evidence is that is accumulated. They also do not agree on how this evidence is accumulated. In this article, we use findings from process-tracing studies to constrain the evidence accumulation process. With these constraints, we extend the decision by sampling model and propose the multialternative decision by sampling (MDbS) model. In MDbS, the evidence accumulated is outcomes of pairwise ordinal comparisons between attribute values. MDbS provides a quantitative account of the attraction, compromise, and similarity effects equal to that of other models, and captures a wider range of empirical phenomena than other models. (PsycINFO Database Record
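A minimal sketch of the accumulation idea follows, under simplifying assumptions (uniform sampling of alternatives and attributes, a fixed tally threshold) that are not the published MDbS parameterization.

```python
import random

def mdbs_choice(alternatives, threshold=10, max_steps=10_000, rng=random):
    """Sketch in the spirit of MDbS: on each step an alternative, one of its
    attributes, and a comparison alternative are sampled; the target accrues
    one unit of evidence if it ranks higher on that attribute. The first
    alternative whose tally reaches the threshold is chosen.
    `alternatives` maps a name to a dict of attribute values (higher = better)."""
    tallies = {name: 0 for name in alternatives}
    names = list(alternatives)
    attributes = list(next(iter(alternatives.values())))
    for _ in range(max_steps):
        target, other = rng.sample(names, 2)
        attr = rng.choice(attributes)
        if alternatives[target][attr] > alternatives[other][attr]:
            tallies[target] += 1
            if tallies[target] >= threshold:
                return target, tallies
    return None, tallies

options = {
    "A": {"price": 3, "quality": 7},
    "B": {"price": 7, "quality": 3},
    "D": {"price": 2, "quality": 6},   # decoy dominated by A
}
print(mdbs_choice(options, rng=random.Random(1)))
```

In this scheme the option that dominates the decoy wins its pairwise comparisons with it on both attributes, which is the intuition behind how an attraction effect can emerge from ordinal comparison tallies.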

66 citations


Journal ArticleDOI
TL;DR: A detailed process theory is proposed within Braver’s (2012) proactive and reactive framework of the way control is maintained over the competing demands of prospective memory decisions and decisions associated with ongoing task activities, instantiated in a quantitative “Prospective Memory Decision Control” architecture.
Abstract: Event-based prospective memory (PM) requires remembering to perform intended deferred actions when particular stimuli or events are encountered in the future. We propose a detailed process theory within Braver's (2012) proactive and reactive framework of the way control is maintained over the competing demands of prospective memory decisions and decisions associated with ongoing task activities. The theory is instantiated in a quantitative "Prospective Memory Decision Control" (PMDC) architecture, which uses linear ballistic evidence accumulation (Brown & Heathcote, 2008) to model both PM and ongoing decision processes. Prospective control is exerted via decision thresholds, as in Heathcote, Loft, and Remington's (2015) "Delay Theory" of the impact of PM demands on ongoing-task decisions. However, PMDC goes beyond Delay Theory by simultaneously accounting for both PM task decisions and ongoing task decisions. We use Bayesian estimation to apply PMDC to experiments manipulating PM target focality (i.e., the extent to which the ongoing task directs attention to the features of PM targets processed at encoding) and the relative importance of the PM task. As well as confirming Delay Theory's proactive control of ongoing task thresholds, the comprehensive account provided by PMDC allowed us to detect both proactive control of the PM accumulator threshold and reactive control of the relative rates of the PM and ongoing-task evidence accumulation processes. We discuss potential extensions of PMDC to account for other factors that may be prevalent in real-world PM, such as failures of memory retrieval. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
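PMDC is built on the linear ballistic accumulator of Brown and Heathcote (2008). The sketch below simulates a single race between two accumulators; the parameter values and the resampling of nonpositive drift rates are illustrative simplifications, not the fitted model.

```python
import random

def lba_trial(drift_means, b=1.0, A=0.5, s=0.25, t0=0.2, rng=random):
    """One trial of a linear ballistic accumulator race (Brown & Heathcote,
    2008), the evidence-accumulation engine PMDC builds on. Each accumulator
    starts at a uniform point in [0, A] and rises linearly at a rate drawn
    from a normal distribution; the first to hit threshold b wins, and t0 is
    nondecision time. Values here are arbitrary illustrations."""
    finish_times = []
    for v in drift_means:
        start = rng.uniform(0.0, A)
        rate = rng.gauss(v, s)
        while rate <= 0:                       # resample nonpositive rates
            rate = rng.gauss(v, s)
        finish_times.append((b - start) / rate)
    winner = min(range(len(drift_means)), key=lambda i: finish_times[i])
    return winner, t0 + finish_times[winner]

# Two accumulators: ongoing-task response vs. prospective-memory response.
rng = random.Random(7)
print([lba_trial([1.2, 0.8], rng=rng) for _ in range(3)])
```

Raising the threshold b of one accumulator (for example, the ongoing-task accumulator under PM load) is the kind of threshold change through which the theory describes proactive control.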

Journal ArticleDOI
TL;DR: This paper provides a normative justification for DbS that is based on the principle of efficient coding, which helps descriptively account for a wider set of behavioral observations, such as how context sensitivity varies with the number of available response categories.
Abstract: The theory of decision by sampling (DbS) proposes that an attribute's subjective value is its rank within a sample of attribute values retrieved from memory. This can account for instances of context dependence beyond the reach of classic theories that assume stable preferences. In this paper, we provide a normative justification for DbS that is based on the principle of efficient coding. The efficient representation of information in a noiseless communication channel is characterized by a uniform response distribution, which the rank transformation implements. However, cognitive limitations imply that decision samples are finite, introducing noise. Efficient coding in a noisy channel requires smoothing of the signal, a principle that leads to a new generalization of DbS. This generalization is closely connected to range-frequency theory, and helps descriptively account for a wider set of behavioral observations, such as how context sensitivity varies with the number of available response categories. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
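The rank transformation at the heart of DbS, and one way to smooth it for a noisy channel, can be sketched directly; the logistic smoothing below is only an illustrative stand-in for the paper's generalization, and the sample and width are arbitrary.

```python
import math

def dbs_value(target, memory_sample):
    """Decision by sampling: subjective value is the target's relative rank
    in a sample of comparison values retrieved from memory, i.e., the
    proportion of sampled values it beats (the rank transformation that
    yields a uniform response distribution)."""
    return sum(target > x for x in memory_sample) / len(memory_sample)

def smoothed_dbs_value(target, memory_sample, width=5.0):
    """Noisy-channel-style generalization: replace the hard comparison with
    a graded one so that near-ties count partially. The logistic form and
    width are illustrative assumptions, not the paper's equations."""
    return sum(1.0 / (1.0 + math.exp(-(target - x) / width))
               for x in memory_sample) / len(memory_sample)

sample = [5, 20, 40, 80, 300]          # e.g., prices recalled from memory
print(dbs_value(60, sample))           # 0.6: beats 3 of the 5 sampled values
print(smoothed_dbs_value(60, sample))
```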

Journal ArticleDOI
TL;DR: This work conceptualizes intrinsic motivation (IM) as (perceived) means-ends fusion and defines an intrinsicality continuum reflecting the degree to which such fusion is experienced, as well as identifying two major consequences of the activity-goal fusion.
Abstract: The term intrinsic motivation refers to an activity being seen as its own end. Accordingly, we conceptualize intrinsic motivation (IM) as (perceived) means-ends fusion and define an intrinsicality continuum reflecting the degree to which such fusion is experienced. Our means-ends fusion (MEF) theory assumes four major antecedents of activity-goal fusion: (a) repeated pairing of the activity and the goal, (b) uniqueness of the activity-goal connection, (c) perceived similarity between the activity and its goal, and (d) temporal immediacy of goal attainment following the activity. MEF theory further identifies two major consequences of the activity-goal fusion (i.e., manifestations of intrinsic motivation): (a) perceived instrumentality of the activity to goal attainment and consequent activity engagement, and (b) goal-related affective experience of the activity. Empirical evidence for MEF theory comes from diverse fields of psychological inquiry, including animal learning, brain research, and social cognition. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

Journal ArticleDOI
TL;DR: This article presents a dynamic dual process model framework of risky decision making that provides an account of the timing and interaction of the 2 systems and can explain both choice and response-time data.
Abstract: Many phenomena in judgment and decision making are often attributed to the interaction of 2 systems of reasoning. Although these so-called dual process theories can explain many types of behavior, they are rarely formalized as mathematical or computational models. Rather, dual process models are typically verbal theories, which are difficult to conclusively evaluate or test. In the cases in which formal (i.e., mathematical) dual process models have been proposed, they have not been quantitatively fit to experimental data and are often silent when it comes to the timing of the 2 systems. In the current article, we present a dynamic dual process model framework of risky decision making that provides an account of the timing and interaction of the 2 systems and can explain both choice and response-time data. We outline several predictions of the model, including how changes in the timing of the 2 systems as well as time pressure can influence behavior. The framework also allows us to explore different assumptions about how preferences are constructed by the 2 systems as well as the dynamic interaction of the 2 systems. In particular, we examine 3 different possible functional forms of the 2 systems and 2 possible ways the systems can interact (simultaneously or serially). We compare these dual process models with 2 single process models using risky decision making data from Guo, Trueblood, and Diederich (2017). Using this data, we find that 1 of the dual process models significantly outperforms the other models in accounting for both choices and response times. (PsycINFO Database Record

Journal Article
TL;DR: The authors used a blind, collaborative approach to assess the validity of model-based inferences about psychological factors, including ease of processing, response caution, response bias, and nondecision time.
Abstract: Most data analyses rely on models. To complement statistical models, psychologists have developed cognitive models, which translate observed variables into psychologically interesting constructs. Response time models, in particular, assume that response time and accuracy are the observed expression of latent variables including (1) ease of processing, (2) response caution, (3) response bias, and (4) nondecision time. Inferences about these psychological factors hinge upon the validity of the models' parameters. Here, we use a blinded, collaborative approach to assess the validity of such model-based inferences. Seventeen teams of researchers analyzed the same 14 data sets. In each of these two-condition data sets, we manipulated properties of participants' behavior in a two-alternative forced choice task. The contributing teams were blind to the manipulations and had to infer what aspect of behavior was changed using their method of choice. The contributors chose to employ a variety of models, estimation methods, and inference procedures. Our results show that, although conclusions were similar across different methods, these "modeler's degrees of freedom" did affect their inferences. Interestingly, many of the simpler approaches yielded inferences as robust and accurate as those of the more complex methods. We recommend that, in general, cognitive models become a typical analysis tool for response time data. In particular, we argue that the simpler models and procedures are sufficient for standard experimental designs. We finish by outlining situations in which more complicated models and methods may be necessary, and discuss potential pitfalls when interpreting the output from response time models.
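For readers unfamiliar with the class of models being compared, the toy simulation below shows how the four latent variables named above map onto a two-alternative diffusion process. It is a generic, textbook-style random walk, not any contributing team's implementation, and the parameter values are arbitrary.

```python
import random

def ddm_trial(drift, boundary, bias=0.5, ndt=0.3, dt=0.001, rng=random):
    """Toy random-walk version of a two-alternative diffusion process:
    drift = ease of processing, boundary = response caution, bias = relative
    starting point (response bias), ndt = nondecision time.
    Returns (choice, response_time); choice 1 = upper bound, 0 = lower."""
    evidence = bias * boundary            # start between 0 and boundary
    t = 0.0
    while 0.0 < evidence < boundary:
        evidence += drift * dt + (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if evidence >= boundary else 0), ndt + t

rng = random.Random(42)
print([ddm_trial(drift=1.5, boundary=1.2, rng=rng) for _ in range(5)])
```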

Journal ArticleDOI
TL;DR: A new process model of social judgment is proposed, the social sampling model (SSM), which provides a parsimonious quantitative account of different types of social judgments, and explains some previously unaccounted-for patterns of self-enhancement and self-depreciation.
Abstract: Studies of social judgments have demonstrated a number of diverse phenomena that were so far difficult to explain within a single theoretical framework. Prominent examples are false consensus and false uniqueness, as well as self-enhancement and self-depreciation. Here we show that these seemingly complex phenomena can be a product of an interplay between basic cognitive processes and the structure of social and task environments. We propose and test a new process model of social judgment, the social sampling model (SSM), which provides a parsimonious quantitative account of different types of social judgments. In the SSM, judgments about characteristics of broader social environments are based on sampling of social instances from memory, where instances receive activation if they belong to a target reference class and have a particular characteristic. These sampling processes interact with the properties of social and task environments, including homophily, shapes of frequency distributions, and question formats. For example, in line with the model's predictions we found that whether false consensus or false uniqueness will occur depends on the level of homophily in people's social circles and on the way questions are asked. The model also explains some previously unaccounted-for patterns of self-enhancement and self-depreciation. People seem to be well informed about many characteristics of their immediate social circles, which in turn influence how they evaluate broader social environments and their position within them. (PsycINFO Database Record
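A deliberately crude sketch of the sampling idea (not the published SSM, and with an arbitrary homophily value) shows how an unrepresentative social circle can produce false consensus.

```python
import random

def prevalence_estimate(own_trait, population, homophily=0.7, sample_size=10,
                        rng=random):
    """Judge 'how common is this trait?' by sampling instances from a
    homophilous social circle rather than the population at large.
    `homophily` is the probability that a sampled circle member shares the
    judge's own trait; the value and sampling rule are illustrative."""
    sampled = [
        own_trait if rng.random() < homophily else rng.choice(population)
        for _ in range(sample_size)
    ]
    return sum(member == own_trait for member in sampled) / sample_size

population = [True] * 30 + [False] * 70     # true base rate of the trait: 30%
rng = random.Random(3)
print(prevalence_estimate(True, population, rng=rng))    # trait holders overestimate
print(prevalence_estimate(False, population, rng=rng))   # non-holders overestimate too
```

Both groups judge their own trait to be more common than it is, which is the false consensus pattern the model ties to homophily in people's social circles.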

Journal ArticleDOI
TL;DR: It is argued that the overall efficiency of mitochondrial functioning is a core component of g, the most fundamental biological mechanism common to all brain and cognitive processes, and one that contributes to the relations among intelligence, health, and aging.
Abstract: General intelligence or g is one of the most thoroughly studied concepts in the behavioral sciences. Measures of intelligence are predictive of a wide range of educational, occupational, and life outcomes, including creative productivity, and are systematically related to physical health and successful aging. The nexus of relations suggests that 1 or several fundamental biological mechanisms underlie g, health, and aging, among other outcomes. Cell-damaging oxidative stress has been proposed as 1 of many potential mechanisms, but the proposal is underdeveloped and does not capture other important mitochondrial functions. I flesh out this proposal and argue that the overall efficiency of mitochondrial functioning is a core component of g: the most fundamental biological mechanism common to all brain and cognitive processes and one that contributes to the relations among intelligence, health, and aging. The proposal integrates research on intelligence with models of the centrality of mitochondria to brain development and functioning, neurological diseases, and health more generally. Moreover, the combination of the maternal inheritance of mitochondrial DNA (mtDNA), the evolution of compensatory nuclear DNA, and the inability of evolutionary processes to purge deleterious mtDNA in males may contribute to the sex difference in variability in intelligence and in other cognitive domains. The proposal unifies many now disparate literatures and generates testable predictions for future studies. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

Journal ArticleDOI
TL;DR: A theory that explains how automatic control is possible in skilled typing, where thinking of a word automatically produces a rapid series of keystrokes is presented, and argues that control can be automatic and shows how it is possible.
Abstract: Experts act without thinking because their skill is hierarchical. A single conscious thought automatically produces a series of lower-level actions without top-down monitoring. This article presents a theory that explains how automatic control is possible in skilled typing, where thinking of a word automatically produces a rapid series of keystrokes. The theory assumes that keystrokes are selected by a context retrieval process that matches the current context to stored contexts and retrieves the key associated with the best match. The current context is generated by the typist's own actions. It represents the goal ("type DOG") and the motor commands for the keys struck so far. Top-down control is necessary to start typing. It sets the goal in the current context, which initiates the retrieval and updating processes, which continue without top-down control until the word is finished. The theory explains phenomena of hierarchical control in skilled typing, including differential loads on higher and lower levels of processing, the importance of words, and poor explicit knowledge of key locations and finger-to-key mappings. The theory is evaluated by fitting it to error corpora from 24 skilled typists and predicting error probabilities, magnitudes, and patterns. Some of the fits are quite good. The theory has implications beyond typing. It argues that control can be automatic and shows how it is possible. The theory extends to other sequential skills, like texting or playing music. It provides new insights into mechanisms of serial order in typing, speaking, and serial recall. (PsycINFO Database Record
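The retrieval-and-update cycle can be illustrated with a small, hypothetical sketch: the goal word plus the keys struck so far form the current context, the best-matching stored context supplies the next keystroke, and the loop continues without further top-down monitoring. The training scheme and similarity function below are invented for illustration and are not the fitted model.

```python
def train(words):
    """Store (goal word, keys typed so far) -> next key associations."""
    memory = {}
    for word in words:
        for i in range(len(word)):
            memory[(word, word[:i])] = word[i]
    return memory

def similarity(current, stored):
    """Hypothetical context match: shared goal plus overlap of typed keys."""
    (goal_a, typed_a), (goal_b, typed_b) = current, stored
    goal_match = 1.0 if goal_a == goal_b else 0.0
    key_overlap = sum(a == b for a, b in zip(typed_a, typed_b))
    return 2.0 * goal_match + key_overlap - abs(len(typed_a) - len(typed_b))

def type_word(goal, memory):
    """Top-down control only sets the goal; each retrieved keystroke then
    updates the context, which cues the next retrieval."""
    typed = ""
    while len(typed) < len(goal):
        current = (goal, typed)
        best = max(memory, key=lambda stored: similarity(current, stored))
        typed += memory[best]
    return typed

memory = train(["dog", "dot", "fog"])
print(type_word("dog", memory))   # 'dog', produced keystroke by keystroke
```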

Journal ArticleDOI
TL;DR: After the scheme is explained and applied, it is contrasted with other, superficially similar schemes proposed in the literature-for example, those of Gergely and Csibra, Wellman and Gopnik, Perner and Roessler, Flavell, and Apperly and Butterfill.
Abstract: Among psychologists, it is widely thought that infants well under age 3, monkeys, apes, birds, and dogs have been shown to have rudimentary capacities for representing and attributing mental states or relations. I believe this view to be mistaken. It rests on overinterpreting experiments. It also often rests on assuming that one must choose between taking these individuals to be mentalists and taking them to be behaviorists. This assumption underestimates a powerful nonmentalistic, nonbehavioristic explanatory scheme that centers on attributing action with targets and on causation of action by interlocking, internal conative, and sensory states. Neither action with targets, nor conative states, nor sensing entails mentality. The scheme can attribute conative states and relations (to targets), efficiency, sensory states and relations (to sensed entities), sensory retention, sensory anticipation, affect, and appreciation of individual differences. The scheme can ground explanations of false belief tests that do not require infants or nonhuman animals to use language. After the scheme is explained and applied, it is contrasted with other, superficially similar schemes proposed in the literature, for example those of Gergely and Csibra, Wellman and Gopnik, Perner and Roessler, Flavell, and Apperly and Butterfill. Better methods for testing are briefly discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

Journal ArticleDOI
TL;DR: This work proposes a new law that can both accommodate an initial delay resulting in a slower–faster–slower rate of learning, with either power or exponential forms as limiting cases, and which can account for not only mean RT but also the effect of practice on the entire distribution of RT.
Abstract: The "law of practice"-a simple nonlinear function describing the relationship between mean response time (RT) and practice-has provided a practically and theoretically useful way of quantifying the speed-up that characterizes skill acquisition. Early work favored a power law, but this was shown to be an artifact of biases caused by averaging over participants who are individually better described by an exponential law. However, both power and exponential functions make the strong assumption that the speedup always proceeds at a steadily decreasing rate, even though there are sometimes clear exceptions. We propose a new law that can both accommodate an initial delay resulting in a slower-faster-slower rate of learning, with either power or exponential forms as limiting cases, and which can account for not only mean RT but also the effect of practice on the entire distribution of RT. We evaluate this proposal with data from a broad array of tasks using hierarchical Bayesian modeling, which pools data across participants while minimizing averaging artifacts, and using inference procedures that take into account differences in flexibility among laws. In a clear majority of paradigms our results supported a delayed exponential law. (PsycINFO Database Record

Journal ArticleDOI
TL;DR: A new diffusion model of decision making in continuous space is presented and tested that uses spatially continuously distributed Gaussian noise in the decision process (Gaussian process or Gaussian random field noise) to represent truly spatially continuous processes.
Abstract: A new diffusion model of decision making in continuous space is presented and tested. The model is a sequential sampling model in which both spatially continuously distributed evidence and noise are accumulated up to a decision criterion (a 1-dimensional (1D) line or a 2-dimensional (2D) plane). There are two major advances represented in this research. The first is to use spatially continuously distributed Gaussian noise in the decision process (Gaussian process or Gaussian random field noise), which allows the model to represent truly spatially continuous processes. The second is a series of experiments that collect data from a variety of tasks and response modes to provide the basis for testing the model. The model accounts for the distributions of responses over position and response time distributions for the choices. The model applies to tasks in which the stimulus and the response coincide (moving eyes or fingers to brightened areas in a field of pixels) and ones in which they do not (color, motion, and direction identification). The model also applies to tasks in which the response is made with eye movements, finger movements, or mouse movements. This modeling offers a wide potential scope of applications including application to any device or scale in which responses are made on a 1D continuous scale or in a 2D spatial field. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
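The sketch below is a discretized toy analogue of the idea, not the published model: evidence over a 1D response scale accumulates a target-centered drift plus spatially smoothed (correlated) noise, and the first position to reach the criterion is the response. The grid size, wrap-around smoothing, and all parameter values are assumptions made for illustration.

```python
import math
import random

def continuous_diffusion_trial(n_pos=64, target=40, drift=0.8, width=4.0,
                               criterion=3.0, dt=0.01, rng=random):
    """Toy 1D spatially continuous decision: accumulate a Gaussian-shaped
    drift centered on `target` plus spatially smoothed noise at every
    position; respond at the first position to reach `criterion`."""
    kernel = [math.exp(-0.5 * (k / width) ** 2) for k in range(-8, 9)]
    state = [0.0] * n_pos
    t = 0.0
    while True:
        white = [rng.gauss(0.0, 1.0) for _ in range(n_pos)]
        for i in range(n_pos):
            # Spatially correlated noise: smooth white noise with the kernel.
            noise = sum(kernel[k + 8] * white[(i + k) % n_pos]
                        for k in range(-8, 9))
            signal = drift * math.exp(-0.5 * ((i - target) / width) ** 2)
            state[i] += signal * dt + 0.2 * math.sqrt(dt) * noise
        t += dt
        peak = max(range(n_pos), key=lambda j: state[j])
        if state[peak] >= criterion:
            return peak, t   # response position and decision time

print(continuous_diffusion_trial(rng=random.Random(0)))
```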

Journal ArticleDOI
TL;DR: The model accounts for a variety of contextual illusions, reveals commonalities between seemingly disparate phenomena, and helps organize them into a novel taxonomy, and provides computational evidence for a novel canonical circuit that is shared across visual modalities.
Abstract: Context is known to affect how a stimulus is perceived. A variety of illusions have been attributed to contextual processing, from orientation tilt effects to chromatic induction phenomena, but their neural underpinnings remain poorly understood. Here, we present a recurrent network model of classical and extraclassical receptive fields that is constrained by the anatomy and physiology of the visual cortex. A key feature of the model is the postulated existence of near- versus far-extraclassical regions with complementary facilitatory and suppressive contributions to the classical receptive field. The model accounts for a variety of contextual illusions, reveals commonalities between seemingly disparate phenomena, and helps organize them into a novel taxonomy. It explains how center-surround interactions may shift from attraction to repulsion in tilt effects, and from contrast to assimilation in induction phenomena. The model further explains enhanced perceptual shifts generated by a class of patterned background stimuli that activate the two opponent extraclassical regions cooperatively. Overall, the ability of the model to account for the variety and complexity of contextual illusions provides computational evidence for a novel canonical circuit that is shared across visual modalities. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

Journal ArticleDOI
TL;DR: It is concluded that the theory of Hilbert space multidimensional (HSM) theory is broadly applicable to measurement context effects found in the social and behavioral sciences.
Abstract: A general theory of measurement context effects, called Hilbert space multidimensional (HSM) theory, is presented. A measurement context refers to a subset of psychological variables that an individual evaluates on a particular occasion. Different contexts are formed by evaluating different but possibly overlapping subsets of variables. Context effects occur when the judgments across contexts cannot be derived from a single joint probability distribution over the complete set of values of the observed variables. HSM theory provides a way to model these context effects by using quantum probability theory, which represents all the variables within a low dimensional vector space. HSM models produce parameter estimates that provide a simple and informative interpretation of the complex collection of judgments across contexts. Comparisons of HSM model fits with Bayesian network model fits are reported for a new large experiment, demonstrating the viability of this new model. We conclude that the theory is broadly applicable to measurement context effects found in the social and behavioral sciences. (PsycINFO Database Record
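The quantum-probability machinery that HSM theory builds on can be illustrated with a two-dimensional toy example; the angle is arbitrary and this is the bare formalism, not the fitted HSM models reported in the paper.

```python
import math

def project(state, angle):
    """Project a 2-D belief state onto the 'yes' axis of a question whose
    basis is rotated by `angle`; return the outcome probability and the
    collapsed, renormalized state."""
    axis = (math.cos(angle), math.sin(angle))
    amplitude = state[0] * axis[0] + state[1] * axis[1]
    prob = amplitude ** 2
    return prob, axis if prob > 0 else state

# Beliefs as a unit vector; questions A and B use bases rotated relative to
# each other, so their projectors do not commute.
state = (1.0, 0.0)
theta = math.pi / 5                        # arbitrary angle between the bases

pA, after_A = project(state, 0.0)          # answer "yes" to A first
pB_given_A, _ = project(after_A, theta)    # then "yes" to B
pB, after_B = project(state, theta)        # or "yes" to B first
pA_given_B, _ = project(after_B, 0.0)      # then "yes" to A

# The two "yes, yes" judgments differ across contexts, so they cannot be
# derived from a single joint distribution with an order-invariant conjunction.
print(pA * pB_given_A, pB * pA_given_B)    # ~0.65 vs ~0.43
```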

Journal ArticleDOI
TL;DR: It is argued that the prelimbic region exerts voluntary control over behavior via top-down modulation of stimulus–response pathways according to task demands, contextual cues, and how well a stimulus predicts an outcome.
Abstract: Theories of functioning in the medial prefrontal cortex are distinct across appetitively and aversively motivated procedures. In the appetitive domain, it is argued that the medial prefrontal cortex is important for producing adaptive behavior when circumstances change. This view advocates a role for this region in using higher-order information to bias performance appropriate to that circumstance. Conversely, literature born out of aversive studies has led to the theory that the prelimbic region of the medial prefrontal cortex is necessary for the expression of conditioned fear, whereas the infralimbic region is necessary for a decrease in responding following extinction. Here, the argument is that these regions are primed to increase or decrease fear responses and that this tendency is gated by subcortical inputs. However, we believe the data from aversive studies can be explained by a supraordinate role for the medial prefrontal cortex in behavioral flexibility, in line with the appetitive literature. Using a dichotomy between the voluntary control of behavior and the execution of well-trained responses, we attempt to reconcile these theories. We argue that the prelimbic region exerts voluntary control over behavior via top-down modulation of stimulus-response pathways according to task demands, contextual cues, and how well a stimulus predicts an outcome. Conversely, the infralimbic region promotes responding based on the strength of stimulus-response pathways determined by experience with reinforced contingencies. This system resolves the tension between executing voluntary actions sensitive to recent changes in contingencies, and responses that reflect the animal's experience across the long run. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

Journal ArticleDOI
TL;DR: A major generalization of extant Bayesian approaches to argumentation is presented that utilizes a new class of Bayesian learning methods that are better suited to modeling dynamic and conditional inferences than standard Bayesian conditionalization.
Abstract: According to the Bayesian paradigm in the psychology of reasoning, the norms by which everyday human cognition is best evaluated are probabilistic rather than logical in character. Recently, the Bayesian paradigm has been applied to the domain of argumentation, in which the fundamental norms are traditionally assumed to be logical. Here, we present a major generalization of extant Bayesian approaches to argumentation that (a) utilizes a new class of Bayesian learning methods that are better suited to modeling dynamic and conditional inferences than standard Bayesian conditionalization, (b) is able to characterize the special value of logically valid argument schemes in uncertain reasoning contexts, (c) greatly extends the range of inferences and argumentative phenomena that can be adequately described in a Bayesian framework, and (d) undermines some influential theoretical motivations for dual function models of human cognition. We conclude that the probabilistic norms given by the Bayesian approach to rationality are not necessarily at odds with the norms given by classical logic. Rather, the Bayesian theory of argumentation can be seen as justifying and enriching the argumentative norms of classical logic. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
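As a baseline for the generalization described above, standard Bayesian conditionalization already treats argument strength as graded rather than all-or-none; the numbers below are arbitrary illustrations.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Standard Bayesian conditionalization: posterior belief in a claim
    after hearing an argument (evidence), given how likely that argument is
    if the claim is true versus false. The paper's point is that Bayesian
    argumentation goes beyond this baseline rule, but the baseline already
    shows how argument quality is a matter of degree."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1.0 - prior) * likelihood_if_false)

# A diagnostic argument moves a weak prior substantially...
print(bayes_update(prior=0.2, likelihood_if_true=0.9, likelihood_if_false=0.1))
# ...whereas a weakly diagnostic argument barely moves it.
print(bayes_update(prior=0.2, likelihood_if_true=0.5, likelihood_if_false=0.45))
```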

Journal ArticleDOI
TL;DR: An extension to the parallel constraint satisfaction network model (iCodes: integrated coherence-based decision and search), which assumes—in contrast to prespecified search rules—that the valence of available information influences search of concealed information, is proposed.
Abstract: A common assumption of many established models for decision making is that information is searched according to some prespecified search rule. While the content of the information influences the termination of search, usually specified as a stopping rule, the direction of search is viewed as being independent of the valence of the retrieved information. We propose an extension to the parallel constraint satisfaction network model (iCodes: integrated coherence-based decision and search), which assumes, in contrast to prespecified search rules, that the valence of available information influences search of concealed information. Specifically, the model predicts an attraction search effect in that information search is directed toward the more attractive alternative given the available information. In 3 studies with participants choosing between two options based on partially revealed probabilistic information, the attraction search effect was consistently observed for environments with varying costs for information search although the magnitude of the effect decreased with decreasing monetary search costs. We also find the effect in reanalyses of 5 published studies. With iCodes, we propose a fully specified formal model and discuss implications for theory development within competing modeling frameworks. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

Journal ArticleDOI
TL;DR: The memory encoding cost (MEC) theory, which integrates attention and memory encoding to explain costs and benefits evoked by a spatial cue, simplifies the theoretical understanding of attentional effects by linking the attentional blink and some varieties of spatial cueing costs to a common mechanism.
Abstract: Spatial cueing is thought to indicate the resource limits of visual attention because invalidly cued items are reported more slowly and less accurately than validly cued items. However, limited resource accounts cannot explain certain findings, such as dividing attention without costs, or attentional benefits without invalidity costs. The current study presents a new account of exogenous cueing, namely the memory encoding cost (MEC) theory, which integrates attention and memory encoding to explain costs and benefits evoked by a spatial cue. Unlike conventional theories that focus on the role of attention in yielding spatial cueing effects, the MEC theory argues that some cueing effects are caused by a combination of attentional facilitation evoked by the cue, but also the cost of encoding the cue into memory. The crucial implication of this theory is that limitations in attentional deployment may not necessarily be the cause of invalidity costs. MEC generates a number of predictions that we test here, providing five convergent lines of evidence that cue encoding plays a key role in producing cueing effects. Furthermore, the MEC suggests a common mechanism underlying cueing costs and the attentional blink, and we simulate the core empirical findings of the current study with an existing attentional blink model. The model was able to simulate these effects primarily through manipulation of a single parameter that corresponds to memory encoding. The MEC theory thus simplifies our theoretical understanding of attentional effects by linking the attentional blink and some varieties of spatial cueing costs to a common mechanism. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

Journal ArticleDOI
TL;DR: A hierarchical neural network model, in which subpopulations of neurons develop fixed and regularly repeating temporal chains of spikes (polychronization), which respond specifically to randomized Poisson spike trains representing the input training images, leads to a new hypothesis concerning how information about visual features at every spatial scale may be projected upward through successive neuronal layers.
Abstract: We present a hierarchical neural network model, in which subpopulations of neurons develop fixed and regularly repeating temporal chains of spikes (polychronization), which respond specifically to randomized Poisson spike trains representing the input training images. The performance is improved by including top-down and lateral synaptic connections, as well as introducing multiple synaptic contacts between each pair of pre- and postsynaptic neurons, with different synaptic contacts having different axonal delays. Spike-timing-dependent plasticity thus allows the model to select the most effective axonal transmission delay between neurons. Furthermore, neurons representing the binding relationship between low-level and high-level visual features emerge through visually guided learning. This begins to provide a way forward to solving the classic feature binding problem in visual neuroscience and leads to a new hypothesis concerning how information about visual features at every spatial scale may be projected upward through successive neuronal layers. We name this hypothetical upward projection of information the "holographic principle." (PsycINFO Database Record

Journal ArticleDOI
TL;DR: An affordance management approach to understanding sexual prejudice is proposed, which weds the fundamental motives theory with the sociofunctional threat-based approach to prejudice to provide a broader explanation for the causes and outcomes of sexual prejudice and to explain inter- and intragroup prejudices more broadly.
Abstract: Stereotypes, prejudices, and discriminatory behaviors directed toward people based on their sexual orientation vary broadly. Existing perspectives on sexual prejudice argue for different underlying causes, sometimes provide disparate or conflicting evidence for its roots, and typically fail to account for variances observed across studies. We propose an affordance management approach to understanding sexual prejudice, which weds the fundamental motives theory with the sociofunctional threat-based approach to prejudice to provide a broader explanation for the causes and outcomes of sexual prejudice and to explain inter- and intragroup prejudices more broadly. Prejudices arise as specific emotions designed to engage functional behavioral responses to perceived threats and opportunities (i.e., affordances) posed by different sexual orientation groups, and interact with the perceiver's chronic or temporarily activated fundamental motives (e.g., parenting, mating), which determine the relevance of certain target affordances. Our perspective predicts what stereotype content is likely to direct specific affective and behavioral reactions (i.e., the stereotypes that relay threat- and opportunity-relevant information) and when the affordance-emotion-behavior link is likely to engage (i.e., when those threats and opportunities are directly relevant to the perceiver's current fundamental goal). This article synthesizes the extant sexual prejudice literature from an affordance management approach to demonstrate how fundamental goals interact with preexisting perceptions to drive perceptual, affective, and behavioral responses toward sexual orientation groups, and provides a degree of explanatory power heretofore missing from the prejudice literature. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

Journal ArticleDOI
TL;DR: The authors show that the magnitude of the exponent is an index of the attentional demands of memory formation and indexes the degree to which attention allocates resources to items in memory unequally rather than equally.
Abstract: The quality or precision of stimulus representations in visual working memory can be characterized by a power law, which states that precision decreases as a power of the number of items in memory, with an exponent whose magnitude typically varies in the range 0.5 to 0.75. The authors show that the magnitude of the exponent is an index of the attentional demands of memory formation. They report 5 visual working memory experiments with tasks using noisy, backward-masked stimuli that varied in their attentional demands and show that the magnitude of the exponent increases systematically with the attentional demands of the task. Recall accuracy in the experiments was well described by an attention-weighted sample-size model that views visual working memory as a resource comprised of noisy evidence samples that are recruited during stimulus exposure and which can be allocated flexibly under attentional control. The magnitude of the exponent indexes the degree to which attention allocates resources to items in memory unequally rather than equally. (PsycINFO Database Record
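The power law and the attention-weighted allocation of evidence samples can be written out directly; the proportional allocation rule below is a simplified illustration of the sample-size idea, with arbitrary numbers rather than fitted parameters.

```python
def precision(set_size, exponent=0.6, unit_precision=1.0):
    """Power law from the abstract: representational precision falls off as a
    power of the number of items held in memory; the exponent typically lies
    between about 0.5 and 0.75."""
    return unit_precision * set_size ** (-exponent)

def sample_allocation(total_samples, attention_weights):
    """Sketch of the attention-weighted sample-size idea: a fixed pool of
    noisy evidence samples is divided among items in proportion to attention,
    so unequal weights yield unequal precision across items. The simple
    proportional rule is an illustrative assumption."""
    total_weight = sum(attention_weights)
    return [total_samples * w / total_weight for w in attention_weights]

for n in (1, 2, 4, 8):
    print(n, round(precision(n), 3))
print(sample_allocation(100, [0.5, 0.25, 0.25]))   # the attended item gets more samples
```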