
Showing papers in "Psychological Review" in 2013


Journal ArticleDOI
TL;DR: It is suggested that in addition to motivating pathogen avoidance, disgust evolved to regulate decisions in the domains of mate choice and morality, and prior theorizing on disgust is recast into a framework that can generate new lines of empirical and theoretical inquiry.
Abstract: Interest in and research on disgust has surged over the past few decades. The field, however, still lacks a coherent theoretical framework for understanding the evolved function or functions of disgust. Here we present such a framework, emphasizing 2 levels of analysis: that of evolved function and that of information processing. Although there is widespread agreement that disgust evolved to motivate the avoidance of contact with disease-causing organisms, there is no consensus about the functions disgust serves when evoked by acts unrelated to pathogen avoidance. Here we suggest that in addition to motivating pathogen avoidance, disgust evolved to regulate decisions in the domains of mate choice and morality. For each proposed evolved function, we posit distinct information processing systems that integrate function-relevant information and account for the trade-offs required of each disgust system. By refocusing the discussion of disgust on computational mechanisms, we recast prior theorizing on disgust into a framework that can generate new lines of empirical and theoretical inquiry.

565 citations


Journal ArticleDOI
TL;DR: A neural circuit model informed by behavioral and electrophysiological data collected on various response inhibition paradigms is constructed that extends a well-established model of action selection in the basal ganglia by including a frontal executive control network that integrates information about sensory input and task rules to facilitate well-informed decision making via the oculomotor system.
Abstract: Planning and executing volitional actions in the face of conflicting habitual responses is a critical aspect of human behavior. At the core of the interplay between these 2 control systems lies an override mechanism that can suppress the habitual action selection process and allow executive control to take over. Here, we construct a neural circuit model informed by behavioral and electrophysiological data collected on various response inhibition paradigms. This model extends a well-established model of action selection in the basal ganglia by including a frontal executive control network that integrates information about sensory input and task rules to facilitate well-informed decision making via the oculomotor system. In our simulations of the anti-saccade, Simon, and saccade-override tasks, conflict between a prepotent and a controlled response causes the network to pause action selection via projections to the subthalamic nucleus. Our model reproduces key behavioral and electrophysiological patterns and their sensitivity to lesions and pharmacological manipulations. Finally, we show how this network can be extended to include the inferior frontal cortex to simulate key qualitative patterns of global response inhibition demands as required in the stop-signal task.

343 citations


Journal ArticleDOI
TL;DR: A new context-task-set (C-TS) model is developed, inspired by nonparametric Bayesian methods, that suggests that participants spontaneously build task-set structure into a learning problem when not cued to do so, and shows that C-TS provides a good quantitative fit to human sequences of choices.
Abstract: Learning and executive functions such as task-switching share common neural substrates, notably prefrontal cortex and basal ganglia. Understanding how they interact requires studying how cognitive control facilitates learning but also how learning provides the (potentially hidden) structure, such as abstract rules or task-sets, needed for cognitive control. We investigate this question from 3 complementary angles. First, we develop a new context-task-set (C-TS) model, inspired by nonparametric Bayesian methods, specifying how the learner might infer hidden structure (hierarchical rules) and decide to reuse or create new structure in novel situations. Second, we develop a neurobiologically explicit network model to assess mechanisms of such structured learning in hierarchical frontal cortex and basal ganglia circuits. We systematically explore the link between these modeling levels across task demands. We find that the network provides an approximate implementation of high-level C-TS computations, with specific neural mechanisms modulating distinct C-TS parameters. Third, this synergism yields predictions about the nature of human optimal and suboptimal choices and response times during learning and task-switching. In particular, the models suggest that participants spontaneously build task-set structure into a learning problem when not cued to do so, which predicts positive and negative transfer in subsequent generalization tests. We provide experimental evidence for these predictions and show that C-TS provides a good quantitative fit to human sequences of choices. These findings implicate a strong tendency to interactively engage cognitive control and learning, resulting in structured abstract representations that afford generalization opportunities and, thus, potentially long-term rather than short-term optimality.
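As a pointer to the nonparametric machinery involved, here is a minimal sketch (not the authors' code; the concentration parameter alpha is an assumed value) of the Chinese-restaurant-process prior that lets a C-TS-style learner either reuse a popular task-set or create a new one for a novel context:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.0            # propensity to create new task-sets (assumed value)
task_set_counts = []   # how often each task-set has been used so far

def sample_task_set():
    """Reuse task-set k with prob ~ count[k]; create new with prob ~ alpha."""
    weights = np.array(task_set_counts + [alpha], dtype=float)
    k = rng.choice(len(weights), p=weights / weights.sum())
    if k == len(task_set_counts):   # a brand-new task-set
        task_set_counts.append(1)
    else:
        task_set_counts[k] += 1
    return k

assignments = [sample_task_set() for _ in range(20)]
print(assignments)   # popular task-sets get reused ("rich get richer")
```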

330 citations


Journal ArticleDOI
TL;DR: The goal conflict model of eating is presented, a new perspective that attributes the difficulty of chronic dieters in regulating their food intake to a conflict between 2 incompatible goals, namely eating enjoyment and weight control.
Abstract: Theories of eating regulation often attribute overweight to a malfunction of homeostatic regulation of body weight. With the goal conflict model of eating, we present a new perspective that attributes the difficulty of chronic dieters (i.e., restrained eaters) in regulating their food intake to a conflict between 2 incompatible goals—namely, eating enjoyment and weight control. This model explains the findings of previous research and provides novel insights into the psychological mechanism responsible for both dietary failure and success. According to this model, although chronic dieters are motivated to pursue their weight control goal, they often fail in food-rich environments because they are surrounded by palatable food cues that strongly prime the goal of eating enjoyment. Due to the incompatibility of the eating enjoyment goal and the weight control goal, such an increase in the activation of the eating enjoyment goal results in (a) an inhibition of the cognitive representation of the weight control goal and (b) preferential processing of palatable food stimuli. Both of these processes interfere with the effective pursuit of the weight control goal and facilitate unhealthy eating. However, there is a minority of restrained eaters for whom, most likely due to past success in exerting self-control, tasty high-calorie food has become associated with weight control thoughts. For them, exposure to palatable food increases the accessibility of the weight control goal, enabling them to control their body weight in food-rich environments. Evidence for these proposed psychological mechanisms is provided, and implications for interventions are discussed.

262 citations


Journal ArticleDOI
TL;DR: The motivational basis for dimensional comparisons, their integration with recent social cognitive approaches, and the interdependence of dimensional, temporal, and social comparisons are discussed.
Abstract: Although social comparison (Festinger, 1954) and temporal comparison (Albert, 1977) theories are well established, dimensional comparison is a largely neglected yet influential process in self-evaluation. Dimensional comparison entails a single individual comparing his or her ability in a (target) domain with his or her ability in a standard domain (e.g., "How good am I in math compared with English?"). This article reviews empirical findings from introspective, path-analytic, and experimental studies on dimensional comparisons, categorized into 3 groups according to whether they address the "why," "with what," or "with what effect" question. As the corresponding research shows, dimensional comparisons are made in everyday life situations. They affect domain-specific self-evaluations of abilities in both domains: Dimensional comparisons reduce self-concept in the worse-off domain and increase self-concept in the better-off domain. The motivational basis for dimensional comparisons, their integration with recent social cognitive approaches, and the interdependence of dimensional, temporal, and social comparisons are discussed.

248 citations


Journal ArticleDOI
TL;DR: The primitive elements theory of cognitive skills is presented, which makes it possible to construct detailed process models of 2 classic transfer studies in the literature and produces better fits of the amount of transfer than Singley and Anderson's (1985) identical productions model.
Abstract: This article presents the primitive elements theory of cognitive skills. The central idea is that skills are broken down into primitive information processing elements that move and compare single pieces of information regardless of the specific content of this information. Several of these primitive elements are necessary for even a single step in a task. A learning process therefore combines the elements in increasingly larger, but still context-independent, units. If there is overlap between tasks, the larger units learned for 1 task can be reused for the other task, producing transfer. The theory makes it possible to construct detailed process models of 2 classic transfer studies in the literature: a study of transfer in text editors and 1 in arithmetic. I show that the approach produces better fits of the amount of transfer than Singley and Anderson's (1985) identical productions model. The theory also offers explanations for far transfer, in which the 2 tasks have no surface characteristics in common, which I demonstrate with 2 models in the domain of cognitive control, where training on either task-switching or working memory control improved performance on other control tasks. The theory can therefore help evaluate the effectiveness of cognitive training that aims to improve general cognitive abilities.

223 citations


Journal ArticleDOI
TL;DR: A probabilistic model of change detection is presented that attempts to bridge the gap by formalizing the role of perceptual organization and allowing for richer, more structured memory representations.
Abstract: When remembering a real-world scene, people encode both detailed information about specific objects and higher order information like the overall gist of the scene. However, formal models of change detection, like those used to estimate visual working memory capacity, assume observers encode only a simple memory representation that includes no higher order structure and treats items independently from one another. We present a probabilistic model of change detection that attempts to bridge this gap by formalizing the role of perceptual organization and allowing for richer, more structured memory representations. Using either standard visual working memory displays or displays in which the items are purposefully arranged in patterns, we find that models that take into account perceptual grouping between items and the encoding of higher order summary information are necessary to account for human change detection performance. Considering the higher order structure of items in visual working memory will be critical for models to make useful predictions about observers' memory capacity and change detection abilities in simple displays as well as in more natural scenes.

174 citations


Journal ArticleDOI
TL;DR: This framework emphasizes that children are active in selecting evidence (both social and experiential), rather than being passive recipients of knowledge, and motivates further studies that more systematically examine the process of learning from social information.
Abstract: Children's causal learning has been characterized as a rational process, in which children appropriately evaluate evidence from their observations and actions in light of their existing conceptual knowledge. We propose a similar framework for children's selective social learning, concentrating on information learned from others' testimony. We examine how children use their existing conceptual knowledge of the physical and social world to determine the reliability of testimony. We describe existing studies that offer both direct and indirect support for selective trust as rational inference and discuss how this framework may resolve some of the conflicting evidence surrounding cases of indiscriminate trust. Importantly, this framework emphasizes that children are active in selecting evidence (both social and experiential), rather than being passive recipients of knowledge, and motivates further studies that more systematically examine the process of learning from social information.
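A minimal sketch of selective trust as rational inference, assuming a Beta-Bernoulli model of informant reliability with invented counts (not taken from the article):

```python
# Evidence about this informant: 4 accurate claims, 1 inaccurate claim.
accurate, inaccurate = 4, 1
a, b = 1 + accurate, 1 + inaccurate   # Beta(1, 1) prior -> posterior Beta(a, b)
p_reliable = a / (a + b)              # posterior mean reliability

# Posterior that a novel claim is true, given a prior belief of 0.3:
prior_true = 0.3
p_true = (p_reliable * prior_true) / (
    p_reliable * prior_true + (1 - p_reliable) * (1 - prior_true))
print(f"reliability={p_reliable:.2f}, P(claim true)={p_true:.2f}")
```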

168 citations


Journal ArticleDOI
TL;DR: A Bayesian model is used to illustrate how feedback from segmented words might constrain phonetic category learning by providing information about which sounds occur together in words, and demonstrates that word-level information can successfully disambiguate overlapping English vowel categories.
Abstract: Infants segment words from fluent speech during the same period when they are learning phonetic categories, yet accounts of phonetic category acquisition typically ignore information about the words in which sounds appear. We use a Bayesian model to illustrate how feedback from segmented words might constrain phonetic category learning by providing information about which sounds occur together in words. Simulations demonstrate that word-level information can successfully disambiguate overlapping English vowel categories. Learning patterns in the model are shown to parallel human behavior from artificial language learning tasks. These findings point to a central role for the developing lexicon in phonetic category acquisition and provide a framework for incorporating top-down constraints into models of category learning.
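To make the disambiguation idea concrete, here is a toy sketch with invented means, standard deviations, and word-conditional priors: two overlapping Gaussian vowel categories are ambiguous from acoustics alone, but a word-level prior sharpens the posterior:

```python
import math

def npdf(x, mu, sd):
    """Gaussian density, used as the acoustic likelihood of a vowel token."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

mu = {"i": 0.0, "I": 1.0}; sd = 0.8   # heavily overlapping vowel categories
token = 0.6                           # an ambiguous acoustic value

like = {v: npdf(token, m, sd) for v, m in mu.items()}
total = sum(like.values())
print("acoustics only:", {v: round(l / total, 2) for v, l in like.items()})

# Word feedback: the token occurs in a word frame that, in the developing
# lexicon, almost always contains /i/ (prior values invented):
prior = {"i": 0.95, "I": 0.05}
joint = {v: like[v] * prior[v] for v in like}
tot = sum(joint.values())
print("with word feedback:", {v: round(j / tot, 2) for v, j in joint.items()})
```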

159 citations


Journal ArticleDOI
TL;DR: A theory of multi-alternative, multi-attribute preferential choice is presented that provides a unitary explanation for a large range of choice-set-dependent behaviors, including context effects, alignability effects, and less-is-more effects.
Abstract: This paper presents a theory of multi-alternative, multi-attribute preferential choice. It is assumed that the associations between an attribute and an available alternative impact the attribute's accessibility. The values of highly accessible attributes are more likely to be aggregated into preferences. Altering the choice task by adding new alternatives or by increasing the salience of preexisting alternatives can change the accessibility of the underlying attributes and subsequently bias choice. This mechanism is formalized by use of a preference accumulation decision process, embedded in a feed-forward neural network. The resulting model provides a unitary explanation for a large range of choice-set-dependent behaviors, including context effects, alignability effects, and less-is-more effects. The model also generates a gain–loss asymmetry relative to the reference point, without explicit loss aversion. This asymmetry accounts for all of the reference-dependent anomalies explained by loss aversion, as well as reference-dependent phenomena not captured by loss aversion.
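A hedged sketch of the core mechanism under assumed values and weights: attribute accessibility determines which attribute feeds the preference accumulators on each step, and shifting accessibility (here set by hand to mimic a decoy's effect, rather than derived from associations) changes choice shares:

```python
import numpy as np

rng = np.random.default_rng(3)

def choose(values, accessibility, thresh=15.0, noise=0.5):
    """values: alternatives x attributes; accessibility: P(sample attribute)."""
    pref = np.zeros(values.shape[0])
    while pref.max() < thresh:
        attr = rng.choice(values.shape[1], p=accessibility)   # sample attribute
        pref += values[:, attr] + noise * rng.standard_normal(values.shape[0])
    return pref.argmax()

# Two alternatives trading off on two attributes, plus a decoy similar to B:
V2 = np.array([[0.9, 0.3], [0.3, 0.9]])
V3 = np.array([[0.9, 0.3], [0.3, 0.9], [0.25, 0.8]])
choices2 = [choose(V2, [0.5, 0.5]) for _ in range(500)]
# Assume the decoy raises attribute 2's accessibility from 0.5 to 0.65:
choices3 = [choose(V3, [0.35, 0.65]) for _ in range(500)]
print("P(choose B) without decoy:", np.mean(np.array(choices2) == 1))
print("P(choose B) with decoy   :", np.mean(np.array(choices3) == 1))
```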

152 citations


Journal ArticleDOI
TL;DR: The combined computational and empirical findings provide support for the notion that decisional processes are intrinsically competitive and that this competition is likely to kick in at a late, rather than early, processing stage.
Abstract: A multitude of models have been proposed to account for the neural mechanism of value integration and decision making in speeded decision tasks. While most of these models account for existing data, they largely disagree on a fundamental characteristic of the choice mechanism: independent versus different types of competitive processing. Five models, an independent race model, 2 types of input competition models (normalized race and feed-forward inhibition [FFI]) and 2 types of response competition models (max-minus-next [MMN] diffusion and leaky competing accumulators [LCA]) were compared in 3 combined computational and experimental studies. In each study, difficulty was manipulated in a way that produced qualitatively distinct predictions from the different classes of models. When parameters were constrained by the experimental conditions to avoid mimicking, simulations demonstrated that independent models predict speedups in response time with increased difficulty, while response competition models predict the opposite. Predictions of input-competition models vary between specific models and experimental conditions. Taken together, the combined computational and empirical findings provide support for the notion that decisional processes are intrinsically competitive and that this competition is likely to kick in at a late (response), rather than early (input), processing stage.
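The qualitative contrast can be illustrated with a toy simulation (all parameters assumed): an independent race model (no inhibition) gets faster when a competitor's input is strengthened, whereas a leaky competing accumulator with lateral inhibition slows down:

```python
import numpy as np

rng = np.random.default_rng(1)

def decide(inputs, inhibition, leak=0.1, noise=0.3, thresh=1.0, dt=0.01):
    x = np.zeros(len(inputs))
    for step in range(1, 10_000):
        others = x.sum() - x        # summed activity of the competitors
        dx = (inputs - leak * x - inhibition * others) * dt \
             + noise * np.sqrt(dt) * rng.standard_normal(len(x))
        x = np.maximum(x + dx, 0.0)  # activations stay non-negative
        if x.max() >= thresh:
            return step * dt         # decision time at first crossing
    return np.nan

for label, inputs in [("easy", np.array([1.0, 0.2])),
                      ("hard", np.array([1.0, 0.9]))]:
    race = np.nanmean([decide(inputs, inhibition=0.0) for _ in range(200)])
    lca = np.nanmean([decide(inputs, inhibition=0.8) for _ in range(200)])
    print(f"{label}: race RT={race:.2f}s, LCA RT={lca:.2f}s")
```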

Journal ArticleDOI
TL;DR: The confidence model presents a coherent account of confidence judgments and response time that cannot be explained with currently popular signal detection theory analyses or dual-process models of recognition.
Abstract: Confidence in judgments is a fundamental aspect of decision making, and tasks that collect confidence judgments are an instantiation of multiple-choice decision making. We present a model for confidence judgments in recognition memory tasks that uses a multiple-choice diffusion decision process with separate accumulators of evidence for the different confidence choices. The accumulator that first reaches its decision boundary determines which choice is made. Five algorithms for accumulating evidence were compared, and one of them produced proportions of responses for each of the choices and full response time distributions for each choice that closely matched empirical data. With this algorithm, an increase in the evidence in one accumulator is accompanied by a decrease in the others so that the total amount of evidence in the system is constant. Application of the model to the data from an earlier experiment (Ratcliff, McKoon, & Tindall, 1994) uncovered a relationship between the shapes of z-transformed receiver operating characteristics and the behavior of response time distributions. Both are explained in the model by the behavior of the decision boundaries. For generality, we also applied the decision model to a 3-choice motion discrimination task and found it accounted for data better than a competing class of models. The confidence model presents a coherent account of confidence judgments and response time that cannot be explained with currently popular signal detection theory analyses or dual-process models of recognition.
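A rough sketch of the constant-total-evidence idea, with invented drifts and an assumed increment rule: each increment to one accumulator is offset across the others, so the summed evidence in the system never changes:

```python
import numpy as np

rng = np.random.default_rng(4)

def confidence_choice(drifts, thresh=2.0, dt=0.01, noise=1.0):
    x = np.zeros(len(drifts))
    t = 0.0
    while x.max() < thresh:
        dx = drifts * dt + noise * np.sqrt(dt) * rng.standard_normal(len(x))
        dx -= dx.mean()   # gains in one accumulator offset by losses in others
        x += dx
        t += dt
    return int(x.argmax()), t

# Six confidence categories; drift favors "sure old" for a strong test item:
choice, rt = confidence_choice(np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 1.5]))
print("choice:", choice, "RT:", round(rt, 2))
```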

Journal ArticleDOI
TL;DR: A new theoretical account of retrieval-induced forgetting (RIF) is presented together with new experimental evidence that fits this account and challenges the dominant inhibition account, and the role of context in remembering is emphasized.
Abstract: We present a new theoretical account of retrieval-induced forgetting (RIF) together with new experimental evidence that fits this account and challenges the dominant inhibition account. RIF occurs when the retrieval of some material from memory produces later forgetting of related material. The inhibition account asserts that RIF is the result of an inhibition mechanism that acts during retrieval to suppress the representations of interfering competitors. This inhibition is enduring, such that the suppressed material is difficult to access on a later test and is, therefore, recalled more poorly than baseline material. Although the inhibition account is widely accepted, a growing body of research challenges its fundamental assumptions. Our alternative account of RIF instead emphasizes the role of context in remembering. According to this context account, 2 tenets must both be met for RIF to occur: (a) A context change must occur between study and subsequent retrieval practice, and (b) the retrieval practice context must be the active context during the final test when testing practiced categories. The results of 3 experiments, which directly test the divergent predictions of the 2 accounts, support the context account but cannot be explained by the inhibition account. In an extensive discussion, we survey the literature on RIF and apply our context account to the key findings, demonstrating the explanatory power of context.

Journal ArticleDOI
TL;DR: 2 computational models of behavioral priming are presented that implement 3 mechanisms (psychological, cultural, and biological) as a unified explanation of such effects, and their integration of previous theoretical accounts of priming phenomena is discussed.
Abstract: The priming of concepts has been shown to influence people's subsequent actions, often unconsciously. We propose 3 mechanisms (psychological, cultural, and biological) as a unified explanation of such effects. (a) Primed concepts influence holistic representations of situations by parallel constraint satisfaction. (b) The constraints among representations stem from culturally shared affective meanings of concepts acquired in socialization. (c) Patterns of activity in neural populations act as semantic pointers linking symbolic concepts to underlying emotional and sensorimotor representations and thereby causing action. We present 2 computational models of behavioral priming that implement the proposed mechanisms. One is a localist neural network that connects primes with behaviors through central nodes simulating affective meanings. In a series of simulations, where the input is based on empirical data, we show that this model can explain a wide variety of experimental findings related to automatic social behavior. The second, neurocomputational model simulates spiking patterns in populations of biologically realistic neurons. We use this model to demonstrate how the proposed mechanisms can be implemented in the brain. Finally, we discuss how our models integrate previous theoretical accounts of priming phenomena. We also examine the interactions of psychological, cultural, and biological mechanisms in the control of automatic social behavior.
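A rough localist sketch with invented nodes and weights, illustrating parallel constraint satisfaction: clamping a prime node on lets activation flow through a central affective-meaning node to a behavior node as the network settles:

```python
import numpy as np

nodes = ["prime:elderly", "meaning:slow/weak", "behavior:walk slowly"]
W = np.array([[0.0, 0.8, 0.0],
              [0.8, 0.0, 0.7],
              [0.0, 0.7, 0.0]])       # symmetric associative links (invented)
a = np.array([1.0, 0.0, 0.0])         # clamp the prime node on

for _ in range(20):                   # settle by repeated parallel updates
    net = W @ a
    a = np.clip(0.5 * a + 0.5 * np.tanh(net), 0, 1)
    a[0] = 1.0                        # keep the prime clamped
print(dict(zip(nodes, np.round(a, 2))))   # behavior node ends up activated
```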

Journal ArticleDOI
TL;DR: It is concluded that delay-of-gratification failure, generally viewed as a manifestation of limited self-control capacity, can instead arise as an adaptive response to the perceived statistics of one's environment.
Abstract: An important category of seemingly maladaptive decisions involves failure to postpone gratification. A person pursuing a desirable long-run outcome may abandon it in favor of a short-run alternative that has been available all along. Here we present a theoretical framework in which this seemingly irrational behavior emerges from stable preferences and veridical judgments. Our account recognizes that decision makers generally face uncertainty regarding the time at which future outcomes will materialize. When timing is uncertain, the value of persistence depends crucially on the nature of a decision maker's prior temporal beliefs. Certain forms of temporal beliefs imply that a delay's predicted remaining length increases as a function of time already waited. In this type of situation, the rational, utility-maximizing strategy is to persist for a limited amount of time and then give up. We show empirically that people's explicit predictions of remaining delay lengths indeed increase as a function of elapsed time in several relevant domains, implying that temporal judgments offer a rational basis for limiting persistence. We then develop our framework into a simple working model and show how it accounts for individual differences in a laboratory task (the well-known "marshmallow test"). We conclude that delay-of-gratification failure, generally viewed as a manifestation of limited self-control capacity, can instead arise as an adaptive response to the perceived statistics of one's environment.
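The key statistical point admits a short worked example (distribution parameters are illustrative): under a heavy-tailed Pareto prior, the expected remaining wait grows with time already waited, while an exponential prior is memoryless:

```python
import numpy as np

rng = np.random.default_rng(2)
pareto = rng.pareto(1.5, 1_000_000) + 1.0   # Pareto(alpha=1.5, x_m=1)
expo = rng.exponential(3.0, 1_000_000)      # memoryless comparison

for waited in [1, 5, 20]:
    rem_p = (pareto[pareto > waited] - waited).mean()
    rem_e = (expo[expo > waited] - waited).mean()
    print(f"waited {waited:>2}: Pareto remaining={rem_p:6.1f}, "
          f"exponential remaining={rem_e:4.1f}")
# Pareto remaining wait grows with elapsed time -> a rational giving-up point.
```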

Journal ArticleDOI
TL;DR: A novel variant of Atkinson and Shiffrin's buffer model within the framework of the retrieving effectively from memory theory (REM) that accounts for findings previously thought to be difficult for such models to explain is described.
Abstract: Atkinson and Shiffrin’s (1968) dual-store model of memory includes structural aspects of memory along with control processes. The rehearsal buffer is a process by which items are kept in mind and long-term episodic traces are formed. The model has been both influential and controversial. Here, we describe a novel variant of Atkinson and Shiffrin’s buffer model within the framework of the retrieving effectively from memory theory (REM; Shiffrin & Steyvers, 1997) that accounts for findings previously thought to be difficult for such models to explain. This model assumes a limited-capacity buffer where information is stored about items, along with information about associations between items and between items and the context in which they are studied. The strength of association between items and context is limited by the number of items simultaneously occupying the buffer (Lehman & Malmberg, 2009). The contents of the buffer are managed by complementary processes of rehearsal and compartmentalization (Lehman & Malmberg, 2011). New findings that directly test a priori predictions of the model are reported, including serial position effects and conditional and first recall probabilities in immediate and delayed free recall, in a continuous distractor paradigm, and in experiments using list-length manipulations of single-item and paired-item study lists.
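A toy sketch of the rehearsal-buffer idea, with assumed capacity and list length (not the REM implementation): items displace random occupants of a small buffer, and time spent in the buffer accrues strength, yielding a primacy gradient:

```python
import numpy as np

rng = np.random.default_rng(14)
CAPACITY, LIST_LEN = 4, 15
strength = np.zeros(LIST_LEN)

for _ in range(2000):
    buffer = []
    for item in range(LIST_LEN):
        if len(buffer) == CAPACITY:
            buffer.remove(rng.choice(buffer))   # displace a random occupant
        buffer.append(item)
        for occupant in buffer:                 # rehearsal strengthens traces
            strength[occupant] += 1
print(np.round(strength / 2000, 1))   # early serial positions end up strongest
```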

Journal ArticleDOI
TL;DR: A quantum approach to similarity is proposed that addresses Tversky's (1977) findings that similarity judgments violate symmetry and the triangle inequality and are subject to context effects, so that the same pair of items is rated differently depending on the presence of other items.
Abstract: No other study has had as great an impact on the development of the similarity literature as that of Tversky (1977), which provided compelling demonstrations against all the fundamental assumptions of the popular, and extensively employed, geometric similarity models. Notably, similarity judgments were shown to violate symmetry and the triangle inequality and also to be subject to context effects, so that the same pair of items would be rated differently, depending on the presence of other items. Quantum theory provides a generalized geometric approach to similarity and can address several of Tversky's main findings. Similarity is modeled as quantum probability, so that asymmetries emerge as order effects, and the triangle inequality violations and the diagnosticity effect can be related to the context-dependent properties of quantum probability. We thus demonstrate the promise of the quantum approach for similarity and discuss the implications for representation theory in general.
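A minimal numeric illustration with toy vectors (not the paper's fitted subspaces): modeling similarity as the squared length of a state vector projected through two subspaces in sequence makes the judgment order-dependent, hence asymmetric:

```python
import numpy as np

psi = np.ones(3) / np.sqrt(3)                 # neutral initial state
P_A = np.outer([1.0, 0, 0], [1.0, 0, 0])      # 1-D subspace for concept A
u1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
u2 = np.array([0.0, 0.0, 1.0])
P_B = np.outer(u1, u1) + np.outer(u2, u2)     # 2-D subspace for concept B

sim_AB = np.linalg.norm(P_B @ P_A @ psi) ** 2   # project onto A, then B
sim_BA = np.linalg.norm(P_A @ P_B @ psi) ** 2   # reversed projection order
print(f"Sim(A,B)={sim_AB:.3f}  Sim(B,A)={sim_BA:.3f}")   # 0.167 vs 0.333
```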

Journal ArticleDOI
TL;DR: The authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolboxes to be rigorously tested against competing theories.
Abstract: Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior.
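A stripped-down sketch of the inferential step, with an invented two-strategy toolbox, agreement data, and error rate: Bayes' rule turns trial-by-trial agreement with each strategy's predictions into a posterior over strategies:

```python
import numpy as np

eps = 0.1                               # assumed response-error rate
# 1 = the observed choice agreed with the strategy's prediction on that trial:
agree_TTB = np.array([1, 1, 1, 0, 1, 1, 1, 1, 0, 1])    # take-the-best
agree_WADD = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])   # weighted additive

def loglik(agree):
    return np.sum(np.where(agree == 1, np.log(1 - eps), np.log(eps)))

logs = np.array([loglik(agree_TTB), loglik(agree_WADD)])
post = np.exp(logs - logs.max()); post /= post.sum()    # equal priors assumed
print(f"P(TTB)={post[0]:.3f}, P(WADD)={post[1]:.3f}")
```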

Journal ArticleDOI
TL;DR: A family of mixed-state, discrete-slots models for explaining choice and RTs in tasks of visual WM change detection is formalized; these models provide much better qualitative and quantitative accounts of the RT and choice data than do the shared-resources models.
Abstract: Much recent research has aimed to establish whether visual working memory (WM) is better characterized by a limited number of discrete all-or-none slots or by a continuous sharing of memory resources. To date, however, researchers have not considered the response-time (RT) predictions of discrete-slots versus shared-resources models. To complement the past research in this field, we formalize a family of mixed-state, discrete-slots models for explaining choice and RTs in tasks of visual WM change detection. In the tasks under investigation, a small set of visual items is presented, followed by a test item in 1 of the studied positions for which a change judgment must be made. According to the models, if the studied item in that position is retained in 1 of the discrete slots, then a memory-based evidence-accumulation process determines the choice and the RT; if the studied item in that position is missing, then a guessing-based accumulation process operates. Observed RT distributions are therefore theorized to arise as probabilistic mixtures of the memory-based and guessing distributions. We formalize an analogous set of continuous shared-resources models. The model classes are tested on individual subjects with both qualitative contrasts and quantitative fits to RT-distribution data. The discrete-slots models provide much better qualitative and quantitative accounts of the RT and choice data than do the shared-resources models, although there is some evidence for "slots plus resources" when memory set size is very small.
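A minimal generative sketch of the mixed-state idea (slot number, drift, and thresholds assumed): with probability d the probed item occupies a slot and a memory-driven random walk produces the response; otherwise a zero-drift guessing walk does, so RT distributions are mixtures of the two processes:

```python
import numpy as np

rng = np.random.default_rng(5)

def walk(drift, thresh=30, noise=1.0):
    x, t = 0.0, 0
    while abs(x) < thresh:
        x += drift + noise * rng.standard_normal()
        t += 1
    return ("change" if x > 0 else "same"), t

def trial(set_size, k_slots=3, drift=0.4):
    d = min(1.0, k_slots / set_size)   # P(probed item is held in a slot)
    if rng.random() < d:
        return walk(drift)             # memory-based evidence accumulation
    return walk(0.0)                   # guessing-based accumulation

for n in (2, 6):
    rts = [trial(set_size=n)[1] for _ in range(2000)]
    print(f"set size {n}: mean RT = {np.mean(rts):.0f} steps")
```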

Journal ArticleDOI
TL;DR: It is concluded that sequential effects offer a powerful means for uncovering representations and learning mechanisms, with the cue-competition mechanism known from animal learning appearing to operate on sequence statistics, and research using sequential effects to determine mental representations is discussed.
Abstract: Binary choice tasks, such as 2-alternative forced choice, show a complex yet consistent pattern of sequential effects, whereby responses and response times depend on the detailed pattern of prior stimuli going back at least 5 trials. We show this pattern is well explained by simultaneous incremental learning of 2 simple statistics of the trial sequence: the base rate and the repetition rate. Both statistics are learned by the same basic associative mechanism, but they contribute different patterns of sequential effects because they entail different representations of the trial sequence. Subtler aspects of the data that are not explained by these 2 learning processes alone are explained by their interaction, via learning from joint error correction. Specifically, the cue-competition mechanism that has explained classic findings in animal learning (e.g., blocking) appears to operate on learning of sequence statistics. We also find that learning of the base rate and repetition rate are dissociated into response and stimulus processing, respectively, as indicated by event-related potentials, manipulations of stimulus discriminability, and reanalysis of past experiments that eliminated stimuli or prior responses. Thus, sequential effects in these tasks appear to be driven by learning the response base rate and the stimulus repetition rate. Connections are discussed between these findings and previous research attempting to separate stimulus- and response-based sequential effects, and research using sequential effects to determine mental representations. We conclude that sequential effects offer a powerful means for uncovering representations and learning mechanisms.
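The two-statistic account can be sketched with a pair of delta-rule trackers (learning rate, sequence bias, and the RT mapping are all assumed): one learns the response base rate, the other the stimulus repetition rate, and their combined expectancy modulates RT:

```python
import numpy as np

rng = np.random.default_rng(6)
lr = 0.2                          # assumed associative learning rate
base, rep = 0.5, 0.5              # estimates of P(stim=1) and P(repeat)

seq = [int(rng.random() < 0.5)]   # repetition-heavy binary sequence (70%)
for _ in range(499):
    seq.append(seq[-1] if rng.random() < 0.7 else 1 - seq[-1])

rts = {"repeat": [], "alternate": []}
prev = None
for s in seq:
    p1 = base if prev is None else 0.5 * base + 0.5 * (rep if prev == 1 else 1 - rep)
    expectancy = p1 if s == 1 else 1 - p1
    rt = 400 - 100 * expectancy                  # expected stimuli are faster
    base += lr * (s - base)                      # delta rule on the base rate
    if prev is not None:
        rep += lr * (float(s == prev) - rep)     # delta rule on repetition rate
        rts["repeat" if s == prev else "alternate"].append(rt)
    prev = s

print(f"base={base:.2f} rep={rep:.2f}; "
      f"RT repeat={np.mean(rts['repeat']):.0f} ms, "
      f"alternate={np.mean(rts['alternate']):.0f} ms")
```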

Journal ArticleDOI
TL;DR: The model with the standard assumptions was fit to predictions generated with the alternative assumptions, and the results showed that the recovered parameter values matched the values used to generate the predictions with only a few exceptions.
Abstract: If the diffusion model (Ratcliff & McKoon, 2008) is to account for the relative speeds of correct responses and errors, it is necessary that the components of processing identified by the model vary across the trials of a task. In standard applications, the rate at which information is accumulated by the diffusion process is assumed to be normally distributed across trials, the starting point for the process is assumed to be uniformly distributed across trials, and the time taken by processes outside the diffusion process is assumed to be uniformly distributed. With the studies in this article, I explore the consequences of alternative assumptions about the distributions, using a wide range of parameter values. The model with the standard assumptions was fit to predictions generated with the alternative assumptions, and the results showed that the recovered parameter values matched the values used to generate the predictions with only a few exceptions. These occurred when parameter combinations were extreme and when a skewed distribution (exponential) of nondecision times was used. The conclusion is that the standard model is robust to moderate changes in the across-trial distributions of parameter values.
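For reference, here is a sketch of the standard across-trial assumptions named above, with illustrative parameter values: drift normally distributed, starting point and nondecision time uniformly distributed, around a simple Euler simulation of the diffusion itself:

```python
import numpy as np

rng = np.random.default_rng(7)
v, eta = 0.25, 0.10          # mean drift, across-trial drift SD
a, z, sz = 1.0, 0.5, 0.10    # boundary separation, mean start point, its range
Ter, st = 0.30, 0.10         # mean nondecision time, its range
s, dt = 0.1, 0.001           # within-trial noise scale, Euler step size

def diffusion_trial():
    drift = rng.normal(v, eta)                  # drift ~ Normal(v, eta)
    x = rng.uniform(z - sz / 2, z + sz / 2)     # start ~ Uniform around z
    t = 0.0
    while 0.0 < x < a:
        x += drift * dt + s * np.sqrt(dt) * rng.standard_normal()
        t += dt
    rt = t + rng.uniform(Ter - st / 2, Ter + st / 2)   # add nondecision time
    return (x >= a), rt

trials = [diffusion_trial() for _ in range(2000)]
print(f"P(upper boundary)={np.mean([c for c, _ in trials]):.2f}")
```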

Journal ArticleDOI
TL;DR: The SARKAE theory provides a framework within which models for various tasks can be developed; the way this could operate is illustrated, and the verbal descriptions of the theory are made more precise with a simplified simulation model applied to the results.
Abstract: We present a theoretical framework and a simplified simulation model for the co-evolution of knowledge and event memory, both termed SARKAE (Storing and Retrieving Knowledge and Events). Knowledge is formed through the accrual of individual events, a process that operates in tandem with the storage of individual event memories. In 2 studies, new knowledge about Chinese characters is trained over several weeks, different characters receiving differential training, followed by tests of episodic recognition memory, pseudo-lexical decision, and forced-choice perceptual identification. The large effects of training frequency in both studies demonstrated an important role of pure frequency in addition to differential context and differential similarity. The SARKAE theory provides a framework within which models for various tasks can be developed; we illustrate the way this could operate, and we make the verbal descriptions of the theory more precise with a simplified simulation model applied to the results.

Journal ArticleDOI
TL;DR: A spiking neural network model is presented that self-tunes to critical branching and, in doing so, simulates observed scaling laws as pervasive to neural and behavioral activity.
Abstract: It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical branching and, in doing so, simulates observed scaling laws as pervasive to neural and behavioral activity. These scaling laws are related to neural and cognitive functions, in that critical branching is shown to yield spiking activity with maximal memory and encoding capacities when analyzed using reservoir computing techniques. The model is also shown to account for findings of pervasive 1/f scaling in speech and cued response behaviors that are difficult to explain by isolable causes. Issues and questions raised by the model and its results are discussed from the perspectives of physics, neuroscience, computer and information sciences, and psychological and cognitive sciences.
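The self-tuning idea can be caricatured in a few lines (fanout, rates, and the homeostatic rule are all assumed, not the paper's spiking network): estimate the branching ratio sigma from activity and nudge the transmission probability until sigma approaches the critical value of 1:

```python
import numpy as np

rng = np.random.default_rng(8)
fanout, p, lr = 10, 0.02, 0.002   # contacts per unit, P(transmit), tuning rate

for step in range(5000):
    ancestors = 100
    descendants = rng.binomial(ancestors * fanout, p)
    sigma = descendants / ancestors        # empirical branching ratio
    p += lr * (1.0 - sigma) * p            # homeostatic push toward sigma = 1
print(f"tuned p={p:.3f}, sigma~{p * fanout:.2f}")   # expect sigma near 1
```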

Journal ArticleDOI
TL;DR: It is argued that the success of the MAX model of visual search and spatial cuing, the distractor homogeneity effect, the double-target detection deficit, and the redundancy costs in the post-stimulus probe task are all manifestations of an underlying competitive VSTM selection process and arise as a natural consequence of the theory.
Abstract: We generalize the integrated system model of Smith and Ratcliff (2009) to obtain a new theory of attentional selection in brief, multielement visual displays. The theory proposes that attentional selection occurs via competitive interactions among detectors that signal the presence of task-relevant features at particular display locations. The outcome of the competition, together with attention, determines which stimuli are selected into visual short-term memory (VSTM). Decisions about the contents of VSTM are made by a diffusion-process decision stage. The selection process is modeled by coupled systems of shunting equations, which perform gated where-on-what pathway VSTM selection. The theory provides a computational account of key findings from attention tasks with near-threshold stimuli. These are (a) the success of the MAX model of visual search and spatial cuing, (b) the distractor homogeneity effect, (c) the double-target detection deficit, (d) redundancy costs in the post-stimulus probe task, (e) the joint item and information capacity limits of VSTM, and (f) the object-based nature of attentional selection. We argue that these phenomena are all manifestations of an underlying competitive VSTM selection process, which arise as a natural consequence of our theory.
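As a pointer to the kind of dynamics meant by "shunting equations" (a generic Grossberg-type form under our assumptions, not the paper's exact system): activation x_i is driven toward a ceiling beta by excitation E_i, gated toward zero by competitors' inhibition, and decays passively at rate alpha:

```latex
\frac{dx_i}{dt} = -\alpha x_i + (\beta - x_i)\,E_i - x_i \sum_{j \neq i} w_{ij}\,x_j
```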

Journal ArticleDOI
TL;DR: A rational analysis of the learning problem facing individuals in uncertain decision environments demonstrates that an unbiased learner would adopt melioration as the optimal response strategy for maximizing long-term gain.
Abstract: Melioration, defined as choosing a lesser, local gain over a greater long-term gain, is a behavioral tendency that people and pigeons share. As such, the empirical occurrence of meliorating behavior has frequently been interpreted as evidence that the mechanisms of human choice violate the norms of economic rationality. In some environments, the relationship between actions and outcomes is known. In this case, the rationality of choice behavior can be evaluated in terms of how successfully it maximizes utility given knowledge of the environmental contingencies. In most complex environments, however, the relationship between actions and future outcomes is uncertain and must be learned from experience. When the difficulty of this learning challenge is taken into account, it is not evident that melioration represents suboptimal choice behavior. In the present article, we examine human performance in a sequential decision-making experiment that is known to induce meliorating behavior. In keeping with previous results using this paradigm, we find that the majority of participants in the experiment fail to adopt the optimal decision strategy and instead demonstrate a significant bias toward melioration. To explore the origins of this behavior, we develop a rational analysis (Anderson, 1990) of the learning problem facing individuals in uncertain decision environments. Our analysis demonstrates that an unbiased learner would adopt melioration as the optimal response strategy for maximizing long-term gain. We suggest that many documented cases of melioration can be reinterpreted not as irrational choice but rather as globally optimal choice under uncertainty.
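A toy version of the paradigm and the effect (payoff functions invented): choice A always pays more locally, but a high recent rate of choosing A depresses both payoffs, so a simple reward-following learner meliorates instead of maximizing:

```python
import numpy as np

rng = np.random.default_rng(9)
window = 10
history = [0] * window               # recent choices (1 = chose A)
q = {0: 0.0, 1: 0.0}                 # running payoff estimates for B, A
lr, eps = 0.1, 0.1                   # learning and exploration rates

for t in range(5000):
    frac_A = sum(history) / window
    choice = int(rng.integers(0, 2)) if rng.random() < eps else max(q, key=q.get)
    # Local payoffs: A always pays 2 more than B, but a high recent A-rate
    # depresses both payoffs. Exclusive B would therefore earn 8 per trial.
    payoff = (8 - 6 * frac_A) + (2 if choice == 1 else 0)
    q[choice] += lr * (payoff - q[choice])
    history = history[1:] + [choice]

print("final A-rate:", sum(history) / window, "| payoff estimates:", q)
```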

Journal ArticleDOI
TL;DR: This work proposes an alternative theory of detection in which perceptual decisions develop from maximum-likelihood decoding of a neurophysiologically inspired model of population activity in primary visual cortex, and demonstrates that this theory explains a broad range of classic detection results.
Abstract: Pattern detection is the bedrock of modern vision science. Nearly half a century ago, psychophysicists advocated a quantitative theoretical framework that connected visual pattern detection with its neurophysiological underpinnings. In this theory, neurons in primary visual cortex constitute linear and independent visual channels whose output is linked to choice behavior in detection tasks via simple read-out mechanisms. This model has proven remarkably successful in accounting for threshold vision. It is fundamentally at odds, however, with current knowledge about the neurophysiological underpinnings of pattern vision. In addition, the principles put forward in the model fail to generalize to suprathreshold vision or perceptual tasks other than detection. We propose an alternative theory of detection in which perceptual decisions develop from maximum-likelihood decoding of a neurophysiologically inspired model of population activity in primary visual cortex. We demonstrate that this theory explains a broad range of classic detection results. With a single set of parameters, our model can account for several summation, adaptation, and uncertainty effects, thereby offering a new theoretical interpretation for the vast psychophysical literature on pattern detection.
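A hedged sketch of maximum-likelihood population decoding with invented tuning curves and counts: Poisson spike counts from orientation-tuned units are compared under signal-present and signal-absent rate predictions to make a detection decision:

```python
import numpy as np

rng = np.random.default_rng(10)
prefs = np.linspace(0, np.pi, 24, endpoint=False)   # preferred orientations

def rates(contrast, ori=np.pi / 2, base=2.0, gain=20.0):
    """Mean firing rates of the population for a grating of given contrast."""
    return base + gain * contrast * np.exp(np.cos(2 * (ori - prefs)) - 1)

def loglik(counts, contrast):
    lam = rates(contrast)
    return np.sum(counts * np.log(lam) - lam)   # Poisson log-likelihood

hits = 0
for _ in range(500):
    counts = rng.poisson(rates(contrast=0.1))           # signal-present trial
    hits += loglik(counts, 0.1) > loglik(counts, 0.0)   # ML detection decision
print("hit rate at 10% contrast:", hits / 500)
```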

Journal ArticleDOI
TL;DR: A conceptual model is proposed that explains how gene-environment correlations and the multiplier effect function in the context of social development in individuals with autism, providing a more in-depth understanding of how the effects of certain genetic variants can be multiplied by the environment to produce large phenotypic individual differences.
Abstract: A conceptual model is proposed that explains how gene-environment correlations and the multiplier effect function in the context of social development in individuals with autism. The review discusses the current state of autism genetic research, including its challenges, such as the genetic and phenotypic heterogeneity of the disorder, and its limitations, such as the lack of interdisciplinary work between geneticists and social scientists. We discuss literature on gene-environment correlations in the context of social development and draw implications for individuals with autism. The review expands upon genes, behaviors, types of environmental exposure, and exogenous variables relevant to social development in individuals on the autism spectrum, and explains these factors in the context of the conceptual model to provide a more in-depth understanding of how the effects of certain genetic variants can be multiplied by the environment to produce large phenotypic individual differences. Using the knowledge gathered from gene-environment correlations and the multiplier effect, we outline novel intervention directions and implications.

Journal ArticleDOI
TL;DR: A computational framework based on nonparametric Bayesian statistics is presented that can be used to define models that flexibly construct feature representations for a set of observed objects, and two possible methods for capturing the manner in which categorization affects feature representations are compared.
Abstract: Representations are a key explanatory device used by cognitive psychologists to account for human behavior. Understanding the effects of context and experience on the representations people use is essential, because if two people encode the same stimulus using different representations, their response to that stimulus may be different. We present a computational framework that can be used to define models that flexibly construct feature representations (where by a feature we mean a part of the image of an object) for a set of observed objects, based on nonparametric Bayesian statistics. Austerweil and Griffiths (2011) presented an initial model constructed in this framework that captures how the distribution of parts affects the features people use to represent a set of objects. We build on this work in three ways. First, although people use features that can be transformed on each observation (e.g., translate on the retinal image), many existing feature learning models can only recognize features that are not transformed (occur identically each time). Consequently, we extend the initial model to infer features that are invariant over a set of transformations, and learn different structures of dependence between feature transformations. Second, we compare two possible methods for capturing the manner in which categorization affects feature representations. Finally, we present a model that learns features incrementally, capturing an effect of the order of object presentation on the features people learn. We conclude by considering the implications and limitations of our empirical and theoretical results.
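As a concrete anchor for the nonparametric prior behind this model family, here is a minimal Indian-buffet-process sample (alpha and object count assumed): objects reuse popular features and occasionally introduce new ones:

```python
import numpy as np

rng = np.random.default_rng(11)
alpha, n_objects = 2.0, 8
feature_counts = []        # how many objects own each feature so far
Z = []                     # feature-ownership rows, one per object

for i in range(1, n_objects + 1):
    row = [rng.random() < m / i for m in feature_counts]   # reuse old features
    n_new = rng.poisson(alpha / i)                         # invent new ones
    row += [True] * n_new
    feature_counts = [m + r for m, r in zip(feature_counts, row)] + [1] * n_new
    Z.append(row)

width = len(feature_counts)
print(np.array([r + [False] * (width - len(r)) for r in Z], dtype=int))
```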

Journal ArticleDOI
J. J. McDowell
TL;DR: Findings support the assertion that the world of behavior we observe and measure is generated by evolutionary dynamics; the theory also generates instantaneous dynamics and patterns of preference change in constantly changing environments that are consistent with the dynamics of live-organism behavior.
Abstract: The idea that behavior is selected by its consequences in a process analogous to organic evolution has been discussed for over 100 years. A recently proposed theory instantiates this idea by means of a genetic algorithm that operates on a population of potential behaviors. Behaviors in the population are represented by numbers in decimal integer (phenotypic) and binary bit string (genotypic) forms. One behavior from the population is emitted at random each time tick, after which a new population of potential behaviors is constructed by recombining parent behavior bit strings. If the emitted behavior produced a benefit to the organism, then parents are chosen on the basis of their phenotypic similarity to the emitted behavior; otherwise, they are chosen at random. After parent behavior recombination, the population is subjected to a small amount of mutation by flipping random bits in the population's bit strings. The behavior generated by this process of selection, reproduction, and mutation reaches equilibrium states that conform to every empirically valid equation of matching theory, exactly and without systematic error. These equations are known to describe the behavior of many vertebrate species, including humans, in a variety of experimental, naturalistic, natural, and social environments. The evolutionary theory also generates instantaneous dynamics and patterns of preference change in constantly changing environments that are consistent with the dynamics of live-organism behavior. These findings support the assertion that the world of behavior we observe and measure is generated by evolutionary dynamics.
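A simplified sketch of the selectionist loop described above (population size, target class, and all rates assumed): emit a random behavior each tick, choose parents by phenotypic closeness only after reinforced emissions, then recombine bit strings and mutate:

```python
import numpy as np

rng = np.random.default_rng(12)
BITS, POP = 10, 100
pop = rng.integers(0, 2 ** BITS, POP)        # behaviors as 10-bit integers

def recombine(a, b):
    mask = rng.integers(0, 2 ** BITS)        # uniform bitwise crossover
    return (a & mask) | (b & ~mask & (2 ** BITS - 1))

for tick in range(2000):
    emitted = rng.choice(pop)                # one behavior emitted per tick
    if 400 <= emitted < 512:                 # reinforced target class
        w = 1.0 / (1 + np.abs(pop - emitted))       # phenotypic closeness
        parents = rng.choice(pop, 2 * POP, p=w / w.sum())
    else:
        parents = rng.choice(pop, 2 * POP)   # no benefit: random parents
    pop = np.array([recombine(parents[2 * i], parents[2 * i + 1])
                    for i in range(POP)])
    flip = rng.random(POP) < 0.05            # mutation: flip one random bit
    pop[flip] ^= 2 ** rng.integers(0, BITS, flip.sum())

print("P(emit target class):", np.mean((pop >= 400) & (pop < 512)))
```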

Journal ArticleDOI
TL;DR: This article examines how a new procedure called approximate Bayesian computation (ABC), a method for Bayesian analysis that circumvents the evaluation of the likelihood, can be used to fit computational models to memory data.
Abstract: Many influential memory models are computational in the sense that their predictions are derived through simulation. This means that it is difficult or impossible to write down a probability distribution or likelihood that characterizes the random behavior of the data as a function of the model's parameters. In turn, the lack of a likelihood means that these models cannot be directly fitted to data using traditional techniques. In particular, standard Bayesian analyses of such models are impossible. In this article, we examine how a new procedure called approximate Bayesian computation (ABC), a method for Bayesian analysis that circumvents the evaluation of the likelihood, can be used to fit computational models to memory data. In particular, we investigate the bind cue decide model of episodic memory (Dennis & Humphreys, 2001) and the retrieving effectively from memory model (Shiffrin & Steyvers, 1997). We fit hierarchical versions of each model to the data of Dennis, Lee, and Kinnell (2008) and Kinnell and Dennis (2012). The ABC analysis permits us to explore the relationships between the parameters in each model as well as evaluate their relative fits to data, analyses that were not previously possible.
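A generic ABC rejection sketch (a toy signal-detection "memory model", not BCDMEM or REM, with invented data): draw parameters from the prior, simulate, and keep draws whose simulated summary statistics land within a tolerance of the observed ones:

```python
import numpy as np

rng = np.random.default_rng(13)
obs_hits, obs_fas, n = 42, 11, 50        # observed hit/false-alarm counts

def simulate(d_prime, crit):
    """Toy signal-detection model, simulated rather than evaluated."""
    hits = np.sum(rng.normal(d_prime, 1, n) > crit)
    fas = np.sum(rng.normal(0, 1, n) > crit)
    return hits, fas

kept = []
for _ in range(20_000):
    d, c = rng.uniform(0, 3), rng.uniform(-1, 2)     # draws from the prior
    h, f = simulate(d, c)
    if abs(h - obs_hits) + abs(f - obs_fas) <= 4:    # tolerance on summaries
        kept.append((d, c))

kept = np.array(kept)
print("posterior mean d'=%.2f, criterion=%.2f (n=%d kept)"
      % (kept[:, 0].mean(), kept[:, 1].mean(), len(kept)))
```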