
Showing papers in "Psychological Review in 2012"


Journal ArticleDOI
TL;DR: This framework describes how class-based contextualist and solipsistic tendencies shape the self, perceptions of the social environment, and relationships to other individuals, and it details 9 hypotheses and relevant empirical evidence concerning how class influences behavior.
Abstract: Social class is shaped by an individual’s material resources as well as perceptions of rank vis-à-vis others in society, and in this article, we examine how class influences behavior. Diminished resources and lower rank create contexts that constrain social outcomes for lower-class individuals and enhance contextualist tendencies—that is, a focus on external, uncontrollable social forces and other individuals who influence one’s life outcomes. In contrast, abundant resources and elevated rank create contexts that enhance the personal freedoms of upper-class individuals and give rise to solipsistic social cognitive tendencies—that is, an individualistic focus on one’s own internal states, goals, motivations, and emotions. Guided by this framework, we detail 9 hypotheses and relevant empirical evidence concerning how class-based contextualist and solipsistic tendencies shape the self, perceptions of the social environment, and relationships to other individuals. Novel predictions and implications for research in other socio-political contexts are considered.

811 citations


Journal ArticleDOI
TL;DR: Possible causes of identity fusion, ranging from relatively distal evolutionary and cultural influences to more proximal contextual influences, are discussed; the effects of fusion on pro-group actions are mediated by perceptions of arousal and invulnerability.
Abstract: Identity fusion is a relatively unexplored form of alignment with groups that entails a visceral feeling of oneness with the group. This feeling is associated with unusually porous, highly permeable borders between the personal and social self. These porous borders encourage people to channel their personal agency into group behavior, raising the possibility that the personal and social self will combine synergistically to motivate pro-group behavior. Furthermore, the strong personal as well as social identities possessed by highly fused persons cause them to recognize other group members not merely as members of the group but also as unique individuals, prompting the development of strong relational as well as collective ties within the group. In local fusion, people develop relational ties to members of relatively small groups (e.g., families or work teams) with whom they have personal relationships. In extended fusion, people project relational ties onto relatively large collectives composed of many individuals with whom they may have no personal relationships. The research literature indicates that measures of fusion are exceptionally strong predictors of extreme pro-group behavior. Moreover, fusion effects are amplified by augmenting individual agency, either directly (by increasing physiological arousal) or indirectly (by activating personal or social identities). The effects of fusion on pro-group actions are mediated by perceptions of arousal and invulnerability. Possible causes of identity fusion—ranging from relatively distal, evolutionary, and cultural influences to more proximal, contextual influences—are discussed. Finally, implications and future directions are considered.

504 citations


Journal ArticleDOI
TL;DR: The entropy model of uncertainty (EMU), an integrative theoretical framework that applies the idea of entropy to the human information system, is proposed to explain uncertainty-related anxiety: uncertainty is experienced subjectively as anxiety.
Abstract: Entropy, a concept derived from thermodynamics and information theory, describes the amount of uncertainty and disorder within a system. Self-organizing systems engage in a continual dialogue with the environment and must adapt themselves to changing circumstances to keep internal entropy at a manageable level. We propose the entropy model of uncertainty (EMU), an integrative theoretical framework that applies the idea of entropy to the human information system to understand uncertainty-related anxiety. Four major tenets of EMU are proposed: (a) Uncertainty poses a critical adaptive challenge for any organism, so individuals are motivated to keep it at a manageable level; (b) uncertainty emerges as a function of the conflict between competing perceptual and behavioral affordances; (c) adopting clear goals and belief structures helps to constrain the experience of uncertainty by reducing the spread of competing affordances; and (d) uncertainty is experienced subjectively as anxiety and is associated with activity in the anterior cingulate cortex and with heightened noradrenaline release. By placing the discussion of uncertainty management, a fundamental biological necessity, within the framework of information theory and self-organizing systems, our model helps to situate key psychological processes within a broader physical, conceptual, and evolutionary context.

405 citations
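A minimal sketch of tenet (b), treating uncertainty as Shannon entropy over competing perceptual and behavioral affordances. The distributions and function below are illustrative, not taken from the paper:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a distribution over competing affordances."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Four affordances in equal conflict: uncertainty is maximal (2 bits).
print(entropy([0.25, 0.25, 0.25, 0.25]))
# A clear goal concentrates the distribution, constraining uncertainty (~0.24 bits).
print(entropy([0.97, 0.01, 0.01, 0.01]))
```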


Journal ArticleDOI
TL;DR: An alternative is presented in which referent selection is an online process independent of long-term learning; it suggests that association learning buttressed by dynamic competition can account for much of the literature and points to more sophisticated ways of describing the interaction between situation- and developmental-time processes.
Abstract: Classic approaches to word learning emphasize referential ambiguity: In naming situations, a novel word could refer to many possible objects, properties, actions, and so forth. To solve this, researchers have posited constraints and inference strategies but assume that determining the referent of a novel word is isomorphic to learning. We present an alternative in which referent selection is an online process and independent of long-term learning. We illustrate this theoretical approach with a dynamic associative model in which referent selection emerges from real-time competition between referents and learning is associative (Hebbian). This model accounts for a range of findings including the differences in expressive and receptive vocabulary, cross-situational learning under high degrees of ambiguity, accelerating (vocabulary explosion) and decelerating (power law) learning, fast mapping by mutual exclusivity (and differences in bilinguals), improvements in familiar word recognition with development, and correlations between speed of processing and learning. Together, this suggests that (a) association learning buttressed by dynamic competition can account for much of the literature; (b) familiar word recognition is subserved by the same processes that identify the referents of novel words (fast mapping); (c) online competition may allow children to leverage information available in the task to augment performance despite slow learning; (d) in complex systems, associative learning is highly multifaceted; and (e) learning and referent selection, though logically distinct, can be subtly related. This suggests more sophisticated ways of describing the interaction between situation- and developmental-time processes and points to the need for considering such interactions as a primary determinant of development.

305 citations
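The separation of online referent selection (real-time competition) from slow Hebbian learning can be illustrated with a toy sketch. The architecture and parameters below are our own simplification, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_refs = 8, 8
W = np.full((n_words, n_refs), 0.01)  # slowly learned word-referent associations

def naming_trial(word, referents, lr=0.1, temp=0.05):
    """One ambiguous naming situation: referent selection is an online
    competition (softmax) over current associations; learning is a small
    Hebbian update to the selected pairing only."""
    act = W[word, referents]
    probs = np.exp(act / temp)
    probs /= probs.sum()
    chosen = referents[rng.choice(len(referents), p=probs)]
    W[word, chosen] += lr * (1 - W[word, chosen])

# Cross-situational exposure: word i always co-occurs with referent i plus 2 foils.
for _ in range(300):
    w = int(rng.integers(n_words))
    foils = rng.choice(np.delete(np.arange(n_refs), w), size=2, replace=False)
    naming_trial(w, np.concatenate(([w], foils)))

print(np.round(W.diagonal(), 2))  # correct pairings dominate despite ambiguity
```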


Journal ArticleDOI
TL;DR: Evidence for local structure in memory search and patch depletion preceding dynamic local-to-global transitions between patches is found, and dynamic models significantly outperformed nondynamic models.
Abstract: Do humans search in memory using dynamic local-to-global search strategies similar to those that animals use to forage between patches in space? If so, do their dynamic memory search policies correspond to optimal foraging strategies seen for spatial foraging? Results from a number of fields suggest these possibilities, including the shared structure of the search problems (searching in patchy environments) and recent evidence supporting a domain-general cognitive search process. To investigate these questions directly, we asked participants to recover from memory as many animal names as they could in 3 min. Memory search was modeled over a representation of the semantic search space generated from the BEAGLE memory model of Jones and Mewhort (2007), via a search process similar to models of associative memory search (e.g., Raaijmakers & Shiffrin, 1981). We found evidence for local structure (i.e., patches) in memory search and patch depletion preceding dynamic local-to-global transitions between patches. Dynamic models also significantly outperformed nondynamic models. The timing of dynamic local-to-global transitions was consistent with optimal search policies in space, specifically the marginal value theorem (Charnov, 1976), and participants who were more consistent with this policy recalled more items.

280 citations
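The optimal policy referenced here, the marginal value theorem, says to leave a depleting patch when its instantaneous yield falls to the long-run average rate across patches. A sketch with made-up gain and switch-cost parameters:

```python
import numpy as np
from scipy.optimize import brentq

G, tau, travel = 10.0, 3.0, 2.0                 # patch yield, depletion, switch cost
gain = lambda t: G * (1 - np.exp(-t / tau))     # cumulative items recalled in a patch
mgain = lambda t: (G / tau) * np.exp(-t / tau)  # instantaneous (marginal) rate

# MVT: the optimal residence time t* equates the marginal rate with the
# overall average rate, gain(t) / (t + travel), including the switch cost.
t_star = brentq(lambda t: mgain(t) - gain(t) / (t + travel), 0.01, 50)
print(round(t_star, 2))  # dwell longer in patches when switching is costlier
```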


Journal ArticleDOI
TL;DR: It is proposed that recurrent similarity computation, a process that facilitates the discovery of higher-order relationships between a set of related experiences, expands the scope of classical exemplar-based models of memory and allows the hippocampus to support generalization through interactions that unfold within a dynamically created memory space.
Abstract: In this article, we present a perspective on the role of the hippocampal system in generalization, instantiated in a computational model called REMERGE (recurrency and episodic memory results in generalization). We expose a fundamental, but neglected, tension between prevailing computational theories that emphasize the function of the hippocampus in pattern separation (Marr, 1971; McClelland, McNaughton, & O’Reilly, 1995), and empirical support for its role in generalization and flexible relational memory (Cohen & Eichenbaum, 1993; Eichenbaum, 1999). Our account provides a means by which to resolve this conflict, by demonstrating that the basic representational scheme envisioned by complementary learning systems theory (McClelland et al., 1995), which relies upon orthogonalized codes in the hippocampus, is compatible with efficient generalization—as long as there is recurrence rather than unidirectional flow within the hippocampal circuit or, more widely, between the hippocampus and neocortex. We propose that recurrent similarity computation, a process that facilitates the discovery of higher-order relationships between a set of related experiences, expands the scope of classical exemplar-based models of memory (e.g., Nosofsky, 1984) and allows the hippocampus to support generalization through interactions that unfold within a dynamically created memory space.

274 citations
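A toy illustration of recurrent similarity computation (our own simplification, not REMERGE's actual equations): recirculating exemplar activations back into the retrieval cue lets episodes that merely share elements co-activate, which is the basis of generalization across experiences that never co-occurred:

```python
import numpy as np

# Stored episodes as feature vectors over elements A, B, C, D.
E = np.array([[1, 1, 0, 0],    # episode AB
              [0, 1, 1, 0],    # episode BC
              [0, 0, 1, 1]])   # episode CD

def probe(cue, steps=5, k=0.5):
    """Each cycle, exemplar activations are recirculated as part of the
    cue, so episodes linked only through shared elements (AB and BC
    share B) become active together."""
    act = np.zeros(len(E))
    for _ in range(steps):
        inp = cue + k * act @ E   # external cue plus recirculated content
        act = E @ inp             # similarity-driven exemplar activation
        act = act / act.max()     # normalization keeps dynamics bounded
    return act

# Cueing with element A alone activates BC (and, weakly, CD) via recurrence.
print(np.round(probe(np.array([1.0, 0, 0, 0])), 2))
```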


Journal ArticleDOI
TL;DR: A force-field theory of motivated cognition is presented and applied to a broad variety of phenomena in social judgment and self-regulation and has implications for choice of means to achieve one's cognitive goals as well as for successful goal attainment under specific force-field constellations.
Abstract: A force-field theory of motivated cognition is presented and applied to a broad variety of phenomena in social judgment and self-regulation. Purposeful cognitive activity is assumed to be propelled by a driving force and opposed by a restraining force. Potential driving force represents the maximal amount of energy an individual is prepared to invest in a cognitive activity. Effective driving force corresponds to the amount of energy he or she actually invests in an attempt to match the restraining force. Magnitude of the potential driving force derives from a combination of goal importance and the pool of available mental resources, whereas magnitude of the restraining force derives from an individual's inclination to conserve resources, current task demands, and competing goals. The present analysis has implications for choice of means to achieve one's cognitive goals as well as for successful goal attainment under specific force-field constellations. Empirical evidence for these effects is considered, and the underlying theory's integrative potential is highlighted.

258 citations
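The quantitative core of the theory fits in a few lines; the multiplicative combination and the simple cap below are our illustrative operationalization, not the article's formal statement:

```python
def effective_driving_force(goal_importance, mental_resources, restraining_force):
    """Potential driving force (from goal importance and available
    resources) caps investment; the effective driving force rises only
    as far as needed to match the restraining force."""
    potential = goal_importance * mental_resources
    return min(potential, restraining_force)

# Strong restraint with scarce resources: investment is capped at potential,
# so the cognitive goal is not attained under this force-field constellation.
print(effective_driving_force(0.8, 0.5, 0.9))  # -> 0.4
```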


Journal ArticleDOI
Asher Koriat1
TL;DR: Simulation and empirical results suggest that response speed is a frugal cue for self-consistency, and its validity depends on the validity of self-consistency in predicting performance.
Abstract: How do people monitor the correctness of their answers? A self-consistency model is proposed for the process underlying confidence judgments and their accuracy. In answering a 2-alternative question, participants are assumed to retrieve a sample of representations of the question and base their confidence on the consistency with which the chosen answer is supported across representations. Confidence is modeled by analogy to the calculation of statistical level of confidence (SLC) in testing hypotheses about a population and represents the participant's assessment of the likelihood that a new sample will yield the same choice. Assuming that participants draw representations from a commonly shared item-specific population of representations, predictions were derived regarding the function relating confidence to inter-participant consensus and intra-participant consistency for the more preferred (majority) and the less preferred (minority) choices. The predicted pattern was confirmed for several different tasks. The confidence-accuracy relationship was shown to be a by-product of the consistency-correctness relationship: It is positive because the answers that are consistently chosen are generally correct, but negative when the wrong answers tend to be favored. The overconfidence bias stems from the reliability-validity discrepancy: Confidence monitors reliability (or self-consistency), but its accuracy is evaluated in calibration studies against correctness. Simulation and empirical results suggest that response speed is a frugal cue for self-consistency, and its validity depends on the validity of self-consistency in predicting performance. Another mnemonic cue, accessibility (the overall amount of information that comes to mind), makes an added, independent contribution. Self-consistency and accessibility may correspond to the 2 parameters that affect SLC: sample variance and sample size.

221 citations
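A minimal simulation of the sampling idea (toy parameters of our choosing): confidence tracks the self-consistency of a small sample of representations, so when the item-specific population favors the wrong answer, the same mechanism produces confidently held errors:

```python
import numpy as np

rng = np.random.default_rng(1)

def answer_with_confidence(p_majority=0.7, n_reps=7):
    """Draw n_reps representations of a 2-alternative question from an
    item-specific population in which proportion p_majority supports
    answer A. The response is the sample majority; confidence is the
    sample's self-consistency (the majority proportion)."""
    votes = rng.random(n_reps) < p_majority
    chose_a = votes.mean() >= 0.5
    confidence = max(votes.mean(), 1 - votes.mean())
    return chose_a, confidence

# When A is correct, consistency predicts accuracy (positive relation);
# when the population favors a wrong answer, identical consistency yields
# high-confidence errors, reversing the relation as the model predicts.
print(answer_with_confidence())
```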


Journal ArticleDOI
TL;DR: The theoretical foundation of the sociocultural self model lays the groundwork for a more complete understanding of behavior and provides new tools for developing interventions that will reduce social class disparities in health and education.
Abstract: The literature on social class disparities in health and education contains 2 underlying, yet often opposed, models of behavior: the individual model and the structural model. These models refer to largely unacknowledged assumptions about the sources of human behavior that are foundational to research and interventions. Our review and theoretical integration proposes that, in contrast to how the 2 models are typically represented, they are not opposed, but instead they are complementary sets of understandings that inform and extend each other. Further, we elaborate the theoretical rationale and predictions for a third model: the sociocultural self model of behavior. This model incorporates and extends key tenets of the individual and structural models. First, the sociocultural self model conceptualizes individual characteristics (e.g., skills) and structural conditions (e.g., access to resources) as interdependent forces that mutually constitute each other and that are best understood together. Second, the sociocultural self model recognizes that both individual characteristics and structural conditions indirectly influence behavior through the selves that emerge in the situation. These selves are malleable psychological states that are a product of the ongoing mutual constitution of individuals and structures and serve to guide people's behavior by systematically shaping how people construe situations. The theoretical foundation of the sociocultural self model lays the groundwork for a more complete understanding of behavior and provides new tools for developing interventions that will reduce social class disparities in health and education. The model predicts that intervention efforts will be more effective at producing sustained behavior change when (a) current selves are congruent, rather than incongruent, with the desired behavior and (b) individual characteristics and structural conditions provide ongoing support for the selves that are necessary to support the desired behavior.

199 citations


Journal ArticleDOI
TL;DR: The basic proposal is that the brain, within an identifiable network of cortical and subcortical structures, implements a probabilistic generative model of reward, and that goal-directed decision making is effected through Bayesian inversion of this model.
Abstract: Recent work has given rise to the view that reward-based decision making is governed by two key controllers: a habit system, which stores stimulus-response associations shaped by past reward, and a goal-oriented system that selects actions based on their anticipated outcomes. The current literature provides a rich body of computational theory addressing habit formation, centering on temporal-difference learning mechanisms. Less progress has been made toward formalizing the processes involved in goal-directed decision making. We draw on recent work in cognitive neuroscience, animal conditioning, cognitive and developmental psychology, and machine learning to outline a new theory of goal-directed decision making. Our basic proposal is that the brain, within an identifiable network of cortical and subcortical structures, implements a probabilistic generative model of reward, and that goal-directed decision making is effected through Bayesian inversion of this model. We present a set of simulations implementing the account, which address benchmark behavioral and neuroscientific findings, and give rise to a set of testable predictions. We also discuss the relationship between the proposed framework and other models of decision making, including recent models of perceptual choice, to which our theory bears a direct connection.

184 citations
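The "Bayesian inversion" proposal can be made concrete with a two-action, two-outcome toy generative model (structure and numbers are illustrative only): condition on obtaining reward and infer which action was taken:

```python
import numpy as np

# Hypothetical generative model of reward: rows are actions, columns outcomes.
P_outcome_given_action = np.array([[0.8, 0.2],   # action 0
                                   [0.3, 0.7]])  # action 1
reward = np.array([0.0, 1.0])                    # only outcome 1 is rewarded

# Goal-directed selection as inversion: P(action | reward obtained).
prior_action = np.array([0.5, 0.5])
p_reward_given_action = P_outcome_given_action @ reward
posterior = prior_action * p_reward_given_action
posterior /= posterior.sum()
print(posterior)  # [0.22, 0.78]: the posterior favors action 1, so select it
```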


Journal ArticleDOI
TL;DR: A model of short-term memory and episodic memory is presented, with the core assumptions that people parse their continuous experience into episodic clusters and that items are clustered together in memory as episodes by binding information within an episode to a common temporal context.
Abstract: A model of short-term memory and episodic memory is presented, with the core assumptions that (a) people parse their continuous experience into episodic clusters and (b) items are clustered together in memory as episodes by binding information within an episode to a common temporal context. Along with the additional assumption that information within a cluster is serially ordered, the model accounts for a number of phenomena from short-term memory (with a focus on serial recall) and episodic memory (with a focus on free recall). The model also accounts for the effects of aging on serial and free recall, apparent temporal isolation effects in short- and long-term memory, and the relation between individual differences in working memory and episodic memory performance.
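A generic temporal-context sketch in the spirit of the model's two core assumptions (our parameters, not the authors' implementation): a slowly drifting context vector is bound to each studied item, and an abrupt shift marks an episode boundary:

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 50

def drift(ctx, rho):
    """Context evolves gradually (high rho) or shifts abruptly (low rho);
    items are bound to the context active at study, so temporally close
    items share similar contexts and cluster as an episode."""
    new = rho * ctx + np.sqrt(1 - rho**2) * rng.standard_normal(dim)
    return new / np.linalg.norm(new)

ctx = rng.standard_normal(dim)
ctx /= np.linalg.norm(ctx)
contexts = []
for item in range(8):
    if item == 4:                    # event boundary: abrupt contextual shift
        ctx = drift(ctx, rho=0.2)
    ctx = drift(ctx, rho=0.95)
    contexts.append(ctx)

sim = np.array(contexts) @ np.array(contexts).T
# Within-episode context similarity is typically far higher than across-episode.
print(round(sim[0, 1], 2), round(sim[0, 5], 2))
```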

Journal ArticleDOI
TL;DR: This paper showed that interactive activation and competition can indeed account for the complex pattern of reversals and revealed a core computational principle that determines whether neighbor effects are facilitative or inhibitory: strongly active neighbors exert a net inhibitory effect, and weakly active neighbors exert a net facilitative effect.
Abstract: One of the core principles of how the mind works is the graded, parallel activation of multiple related or similar representations. Parallel activation of multiple representations has been particularly important in the development of theories and models of language processing, where coactivated representations (neighbors) have been shown to exhibit both facilitative and inhibitory effects on word recognition and production. Researchers generally ascribe these effects to interactive activation and competition, but there is no unified explanation for why the effects are facilitative in some cases and inhibitory in others. We present a series of simulations of a simple domain-general interactive activation and competition model that is broadly consistent with more specialized domain-specific models of lexical processing. The results showed that interactive activation and competition can indeed account for the complex pattern of reversals. Critically, the simulations revealed a core computational principle that determines whether neighbor effects are facilitative or inhibitory: strongly active neighbors exert a net inhibitory effect, and weakly active neighbors exert a net facilitative effect.
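The crossover principle can be shown with a deliberately tiny toy model (the functional forms are ours, chosen only to illustrate the principle): facilitation via shared sublexical units grows roughly linearly with neighbor activation, while lateral inhibition compounds as the neighbor strengthens:

```python
def net_neighbor_effect(a):
    """Net effect on a target of one neighbor at activation a: linear
    shared-feature facilitation minus competition that compounds with
    the neighbor's own activation. Illustrative coefficients."""
    facilitation = 0.5 * a
    inhibition = 0.8 * a ** 2
    return facilitation - inhibition

for a in (0.2, 0.5, 0.8):
    print(a, round(net_neighbor_effect(a), 3))
# 0.2 -> +0.068 (weak neighbor: net facilitative)
# 0.8 -> -0.112 (strong neighbor: net inhibitory)
```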

Journal ArticleDOI
TL;DR: A model of differential amygdala activation, in which the basolateral amygdala is underactive while the activity of the central amygdala is of average to above-average levels, is proposed to provide a more accurate and up-to-date account of the specific cognitive and emotional deficits found in psychopathy.
Abstract: This article introduces a novel hypothesis regarding amygdala function in psychopathy. The first part of this article introduces the concept of psychopathy and describes the main cognitive and affective impairments demonstrated by this population; that is, a deficit in fear recognition, lower conditioned fear responses, and poor performance in passive-avoidance and response-reversal learning tasks. Evidence for amygdala dysfunction in psychopathy is considered with regard to these deficits; however, the idea of unified amygdala function is untenable. A model of differential amygdala activation in which the basolateral amygdala (BLA) is underactive while the activity of the central amygdala (CeA) is of average to above average levels is proposed to provide a more accurate and up-to-date account of the specific cognitive and emotional deficits found in psychopathy. In addition, the model provides a mechanism by which attentional-based models and emotion-based models of psychopathy can coexist. Data to support the differential amygdala activation model are provided from both human and animal studies. Supporting evidence concerning some of the neurochemicals implicated in psychopathy is then reviewed. Implications of the model and areas of future research are discussed.

Journal ArticleDOI
TL;DR: A series of simulation studies and analyses is described, designed to understand the different learning mechanisms posited by the 2 classes of models (hypothesis-testing and associative models) and their relation to each other.
Abstract: Both adults and young children possess powerful statistical computation capabilities—they can infer the referent of a word from highly ambiguous contexts involving many words and many referents by aggregating cross-situational statistical information across contexts. This ability has been explained by models of hypothesis testing and by models of associative learning. This article describes a series of simulation studies and analyses designed to understand the different learning mechanisms posited by the 2 classes of models and their relation to each other. Variants of a hypothesis-testing model and a simple or dumb associative mechanism were examined under different specifications of information selection, computation, and decision. Critically, these 3 components of the models interact in complex ways. The models illustrate a fundamental tradeoff between amount of data input and powerful computations: With the selection of more information, dumb associative models can mimic the powerful learning that is accomplished by hypothesis-testing models with fewer data. However, because of the interactions among the component parts of the models, the associative model can mimic various hypothesis-testing models, producing the same learning patterns but through different internal components. The simulations argue for the importance of a compositional approach to human statistical learning: the experimental decomposition of the processes that contribute to statistical learning in human learners and models with the internal components that can be evaluated independently and together.

Journal ArticleDOI
TL;DR: An ideal observer analysis of human VWM is developed by deriving the expected behavior of an optimally performing but limited-capacity memory system and offers a principled reinterpretation of existing models of VWM.
Abstract: Limits in visual working memory (VWM) strongly constrain human performance across many tasks. However, the nature of these limits is not well understood. In this article we develop an ideal observer analysis of human VWM by deriving the expected behavior of an optimally performing but limited-capacity memory system. This analysis is framed around rate–distortion theory, a branch of information theory that provides optimal bounds on the accuracy of information transmission subject to a fixed information capacity. The result of the ideal observer analysis is a theoretical framework that provides a task-independent and quantitative definition of visual memory capacity and yields novel predictions regarding human performance. These predictions are subsequently evaluated and confirmed in 2 empirical studies. Further, the framework is general enough to allow the specification and testing of alternative models of visual memory (e.g., how capacity is distributed across multiple items). We demonstrate that a simple model developed on the basis of the ideal observer analysis—one that allows variability in the number of stored memory representations but does not assume the presence of a fixed item limit—provides an excellent account of the empirical data and further offers a principled reinterpretation of existing models of VWM.
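The key quantitative ingredient is the standard Gaussian rate-distortion bound: with R bits allocated to an item, the smallest achievable error variance is σ²·2^(−2R). Distributing a fixed capacity across N items then predicts smoothly degrading precision rather than a hard item limit (the capacity and stimulus values below are illustrative):

```python
def recall_sd(n_items, total_bits=6.0, stimulus_sd=20.0):
    """Gaussian rate-distortion bound: SD of recall error = sigma * 2**(-R)
    when R = total_bits / n_items bits are spent per item."""
    bits_per_item = total_bits / n_items
    return stimulus_sd * 2 ** (-bits_per_item)

for n in (1, 2, 4, 8):
    print(n, round(recall_sd(n), 1))  # error rises smoothly with set size
```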

Journal ArticleDOI
TL;DR: This article uses the E-Z Reader model of eye-movement control in reading to simulate eye-movement behavior in several reading and nonreading tasks, including z-string reading, target-word search, and visual search of Landolt Cs arranged in both linear and circular arrays.
Abstract: Nonreading tasks that share some (but not all) of the task demands of reading have often been used to make inferences about how cognition influences when the eyes move during reading. In this article, we use variants of the E-Z Reader model of eye-movement control in reading to simulate eye-movement behavior in several of these tasks, including z-string reading, target-word search, and visual search of Landolt Cs arranged in both linear and circular arrays. These simulations demonstrate that a single computational framework is sufficient to simulate eye movements in both reading and nonreading tasks but also suggest that there are task-specific differences in both saccadic targeting (i.e., decisions about where to move the eyes) and the coupling between saccadic programming and the movement of attention (i.e., decisions about when to move the eyes). These findings suggest that some aspects of the eye-mind link are flexible and can be configured in a manner that supports efficient task performance.

Journal ArticleDOI
TL;DR: A model-based approach that allows capacity to be assessed despite other important processing contributions is advanced, starting with a psychological-process model of WM capacity developed to understand visual arrays and arriving at a more unified and complete model.
Abstract: Theories of working memory (WM) capacity limits will be more useful when we know what aspects of performance are governed by the limits and what aspects are governed by other memory mechanisms. Whereas considerable progress has been made on models of WM capacity limits for visual arrays of separate objects, less progress has been made in understanding verbal materials, especially when words are mentally combined to form multiword units or chunks. Toward a more comprehensive theory of capacity limits, we examined models of forced-choice recognition of words within printed lists, using materials designed to produce multiword chunks in memory (e.g., leather brief case). Several simple models were tested against data from a variety of list lengths and potential chunk sizes, with test conditions that only imperfectly elicited the interword associations. According to the most successful model, participants retained about 3 chunks on average in a capacity-limited region of WM, with some chunks being only subsets of the presented associative information (e.g., leather brief case retained with leather as one chunk and brief case as another). The addition to the model of an activated long-term memory component unlimited in capacity was needed. A fixed-capacity limit appears critical to account for immediate verbal recognition and other forms of WM. We advance a model-based approach that allows capacity to be assessed despite other important processing contributions. Starting with a psychological-process model of WM capacity developed to understand visual arrays, we arrive at a more unified and complete model.

Journal ArticleDOI
TL;DR: A computational model is developed that can accurately simulate lexical decision data from the lexicon projects in English, French, and Dutch, along with masked priming data that have been taken as evidence for specialized orthographic representations.
Abstract: The goal of research on how letter identity and order are perceived during reading is often characterized as one of "cracking the orthographic code." Here, we suggest that there is no orthographic code to crack: Words are perceived and represented as sequences of letters, just as in a dictionary. Indeed, words are perceived and represented in exactly the same way as other visual objects. The phenomena that have been taken as evidence for specialized orthographic representations can be explained by assuming that perception involves recovering information that has passed through a noisy channel: the early stages of visual perception. The noisy channel introduces uncertainty into letter identity, letter order, and even whether letters are present or absent. We develop a computational model based on this simple principle and show that it can accurately simulate lexical decision data from the lexicon projects in English, French, and Dutch, along with masked priming data that have been taken as evidence for specialized orthographic representations.
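The noisy-channel account of letter order can be sketched by scoring a percept against a word under Gaussian position noise; the scoring rule and parameter below are our simplification, far cruder than the paper's model:

```python
from scipy.stats import norm

def position_likelihood(percept, word, pos_sd=1.0):
    """Score a percept against a word, letting each letter's position
    pass through a Gaussian channel: evidence for a letter comes from
    any matching letter in the word, weighted by positional distance."""
    score = 1.0
    for i, letter in enumerate(percept):
        p = sum(norm.pdf(i - j, scale=pos_sd)
                for j, w in enumerate(word) if w == letter)
        score *= p + 1e-6   # small floor for letters absent from the word
    return score

# A transposition ("jugde") remains a far likelier percept of "judge"
# than a substitution ("jumpe"), as transposed-letter priming suggests.
print(position_likelihood("jugde", "judge") / position_likelihood("jumpe", "judge"))
```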

Journal ArticleDOI
TL;DR: A new modeling framework for recognition memory and repetition priming based on signal detection theory is presented; measures of overall model fit favored the single-system (SS) model over the others, illustrating a new, formal approach to testing theories of explicit and implicit memory.
Abstract: We present a new modeling framework for recognition memory and repetition priming based on signal detection theory. We use this framework to specify and test the predictions of 4 models: (a) a single-system (SS) model, in which one continuous memory signal drives recognition and priming; (b) a multiple-systems-1 (MS1) model, in which completely independent memory signals (such as explicit and implicit memory) drive recognition and priming; (c) a multiple-systems-2 (MS2) model, in which there are also 2 memory signals, but some degree of dependence is allowed between these 2 signals (and this model subsumes the SS and MS1 models as special cases); and (d) a dual-process signal detection (DPSD1) model, 1 possible extension of a dual-process theory of recognition (Yonelinas, 1994) to priming, in which a signal detection model is augmented by an independent recollection process. The predictions of the models are tested in a continuous-identification-with-recognition paradigm in both normal adults (Experiments 1-3) and amnesic individuals (using data from Conroy, Hopkins, & Squire, 2005). The SS model predicted numerous results in advance. These were not predicted by the MS1 model, though could be accommodated by the more flexible MS2 model. Importantly, measures of overall model fit favored the SS model over the others. These results illustrate a new, formal approach to testing theories of explicit and implicit memory.
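The single-system claim has a simple generative reading (our toy parameterization, not the paper's fitted model): one latent strength per item drives both the recognition decision and the speeded identification measure:

```python
import numpy as np

rng = np.random.default_rng(5)

def single_system(n=10_000, mu=1.0, b=50.0, criterion=0.5):
    """One memory-strength signal f per studied item feeds both tasks:
    recognition ('old' if f plus task noise exceeds a criterion) and
    priming (identification RT = base - b*f plus its own noise)."""
    f = rng.standard_normal(n) + mu
    judged_old = f + rng.standard_normal(n) > criterion
    ident_rt = 600.0 - b * f + 30.0 * rng.standard_normal(n)
    return judged_old, ident_rt

old, rt = single_system()
# Items judged old are identified faster: recognition and priming covary
# because a single signal underlies both.
print(rt[old].mean() < rt[~old].mean())
```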

Journal ArticleDOI
TL;DR: A nonparametric statistic capable of simultaneously taking into account accuracy as well as RTs would be highly useful; such a statistic is developed here for two important decisional stopping rules.
Abstract: Measures of human efficiency under increases in mental workload or attentional limitations are vital in studying human perception, cognition, and action. Assays of efficiency as workload changes have typically been confined to either reaction times (RTs) or accuracy alone. Within the realm of RTs, a nonparametric measure called the workload capacity coefficient has been employed in many studies (Townsend & Nozawa, 1995). However, the contribution of correct versus incorrect responses has been unavailable in that context. A nonparametric statistic that is capable of simultaneously taking into account accuracy as well as RTs would be highly useful. This theoretical study develops such a tool for two important decisional stopping rules. Preliminary data from a simple visual identification study illustrate one potential application.
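For context, the RT-only workload capacity coefficient of Townsend and Nozawa (1995) for a first-terminating (OR) rule compares integrated hazard functions, with C(t) = 1 as the unlimited-capacity parallel benchmark; the article's new statistic extends this logic to fold in accuracy. A simple empirical estimator of the classic coefficient:

```python
import numpy as np

def cumulative_hazard(rts, t):
    """Empirical cumulative hazard H(t) = -log S(t) from a sample of RTs."""
    survival = np.mean(np.asarray(rts) > t)
    return -np.log(max(survival, 1e-9))

def capacity_or(rt_double, rt_single_a, rt_single_b, t):
    """Classic RT-only coefficient: C(t) = H_AB(t) / (H_A(t) + H_B(t)).
    C(t) > 1 indicates super capacity, C(t) < 1 limited capacity."""
    denom = cumulative_hazard(rt_single_a, t) + cumulative_hazard(rt_single_b, t)
    return cumulative_hazard(rt_double, t) / max(denom, 1e-9)
```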

Journal ArticleDOI
TL;DR: This work introduces Bayesian analogy with relational transformations (BART) and applies the model to the task of learning first-order comparative relations from a set of animal pairs, providing a proof-of-concept that structured analogies can be solved with representations induced from unstructured feature vectors by mechanisms that operate in a largely bottom-up fashion.
Abstract: How can humans acquire relational representations that enable analogical inference and other forms of high-level reasoning? Using comparative relations as a model domain, we explore the possibility that bottom-up learning mechanisms applied to objects coded as feature vectors can yield representations of relations sufficient to solve analogy problems. We introduce Bayesian analogy with relational transformations (BART) and apply the model to the task of learning first-order comparative relations (e.g., larger, smaller, fiercer, meeker) from a set of animal pairs. Inputs are coded by vectors of continuous-valued features, based either on human magnitude ratings, normed feature ratings (De Deyne et al., 2008), or outputs of the topics model (Griffiths, Steyvers, & Tenenbaum, 2007). Bootstrapping from empirical priors, the model is able to induce first-order relations represented as probabilistic weight distributions, even when given positive examples only. These learned representations allow classification of novel instantiations of the relations and yield a symbolic distance effect of the sort obtained with both humans and other primates. BART then transforms its learned weight distributions by importance-guided mapping, thereby placing distinct dimensions into correspondence. These transformed representations allow BART to reliably solve 4-term analogies (e.g., larger:smaller::fiercer:meeker), a type of reasoning that is arguably specific to humans. Our results provide a proof-of-concept that structured analogies can be solved with representations induced from unstructured feature vectors by mechanisms that operate in a largely bottom-up fashion. We discuss potential implications for algorithmic and neural models of relational thinking, as well as for the evolution of abstract thought.

Journal ArticleDOI
TL;DR: This paper showed that laboratory choice behavior among stimuli of a classical "intransitivity" paradigm is, in fact, consistent with variable strict weak order preferences, and that decision makers act in accordance with a restrictive mathematical model that, for the behavioral sciences, is extraordinarily parsimonious.
Abstract: Theories of rational choice often make the structural consistency assumption that every decision maker's binary strict preference among choice alternatives forms a strict weak order. Likewise, the very concept of a utility function over lotteries in normative, prescriptive, and descriptive theory is mathematically equivalent to strict weak order preferences over those lotteries, while intransitive heuristic models violate such weak orders. Using new quantitative interdisciplinary methodologies, we dissociate the variability of choices from the structural inconsistency of preferences. We show that laboratory choice behavior among stimuli of a classical "intransitivity" paradigm is, in fact, consistent with variable strict weak order preferences. We find that decision makers act in accordance with a restrictive mathematical model that, for the behavioral sciences, is extraordinarily parsimonious. Our findings suggest that the best place to invest future behavioral decision research is not in the development of new intransitive decision models but rather in the specification of parsimonious models consistent with strict weak order(s), as well as heuristics and other process models that explain why preferences appear to be weakly ordered.

Journal ArticleDOI
TL;DR: The present study proposes and examines the multidimensional vector (MDV) model framework as a modeling schema for choice response times, demonstrating the adequacy of the framework as a general schema for modeling the latency of choice performance.
Abstract: The present study proposes and examines the multidimensional vector (MDV) model framework as a modeling schema for choice response times. MDV extends the Thurstonian model, as well as signal detection theory, to classification tasks by taking into account the influence of response properties on stimulus discrimination. It is capable of accounting for stimulus–response compatibility, which is known to be an influential task variable determining choice-reaction performance but has not been considered in previous mathematical modeling efforts. Specific MDV models were developed for 5 experiments using the Simon task, for which stimulus location is task irrelevant, to examine the validity of model assumptions and illustrate characteristic behaviors of model parameters. The MDV models accounted for the experimental data to a remarkable degree, demonstrating the adequacy of the framework as a general schema for modeling the latency of choice performance. Some modeling issues involved in the MDV model framework are discussed.

Journal ArticleDOI
TL;DR: The stochastic detection and retrieval model (SDRM) determined that a reduction in variance during the confidence process is the most likely explanation of the delayed-JOL effect, and that a stronger relation between the information underlying JOLs and recall is the most likely explanation of the testing-JOL effect.
Abstract: We present a signal detection-like model termed the stochastic detection and retrieval model (SDRM) for use in studying metacognition. Focusing on paradigms that relate retrieval (e.g., recall or recognition) and confidence judgments, the SDRM measures (1) variance in the retrieval process, (2) variance in the confidence process, (3) the extent to which different sources of information underlie each response, (4) simple bias (i.e., increasing or decreasing confidence criteria across conditions), and (5) metacognitive bias (i.e., contraction or expansion of the confidence criteria across conditions). In the metacognition literature, gamma correlations have been used to measure the accuracy of confidence judgments. However, gamma cannot distinguish between the first 3 attributes, and it cannot measure either form of bias. In contrast, the SDRM can distinguish among the attributes, and it can measure both forms of bias. In this way, the SDRM can be used to test competing process theories by determining the attribute that best accounts for a change across conditions. To demonstrate the SDRM's usefulness, we investigated judgments of learning (JOLs) followed by cued-recall. Through a series of nested and non-nested model comparisons applied to a new experiment, the SDRM determined that a reduction in variance during the confidence process is the most likely explanation of the delayed-JOL effect, and a stronger relation between information underlying JOLs and recall is the most likely explanation of the testing-JOL effect. Following a brief discussion of implications for JOL theories, we conclude with a broader discussion of how the SDRM can benefit metacognition research.

Journal ArticleDOI
TL;DR: The planning and control model (PCM) of motorvisual priming proposes that action planning binds categorical representations of action features, inhibiting their availability for perceptual processing; as a result, the perception of categorically action-consistent stimuli is impaired during action planning.
Abstract: Previous research on dual-tasks has shown that, under some circumstances, actions impair the perception of action-consistent stimuli, whereas, under other conditions, actions facilitate the perception of action-consistent stimuli. We propose a new model to reconcile these contrasting findings. The planning and control model (PCM) of motorvisual priming proposes that action planning binds categorical representations of action features so that their availability for perceptual processing is inhibited. Thus, the perception of categorically action-consistent stimuli is impaired during action planning. Movement control processes, on the other hand, integrate multi-sensory spatial information about the movement and, therefore, facilitate perceptual processing of spatially movement-consistent stimuli. We show that the PCM is consistent with a wider range of empirical data than previous models on motorvisual priming. Furthermore, the model yields previously untested empirical predictions. We also discuss how the PCM relates to motorvisual research paradigms other than dual-tasks.

Journal ArticleDOI
TL;DR: It is argued that people share some sense of where the burden of social proof lies in situations where opinions or choices are in conflict, suggesting a family of models sharing two key parameters: one corresponding to the location of the influence threshold and the other reflecting its clarity.
Abstract: Social influence rises with the number of influence sources, but the proposed relationship varies across theories, situations, and research paradigms. To clarify this relationship, I argue that people share some sense of where the “burden of social proof” lies in situations where opinions or choices are in conflict. This suggests a family of models sharing 2 key parameters, one corresponding to the location of the influence threshold, and the other reflecting its clarity—a factor that explains why discrete “tipping points” are not observed more frequently. The plausibility and implications of this account are examined using Monte Carlo and cellular automata simulations and the relative fit of competing models across classic data sets in the conformity, group deliberation, and social diffusion literatures.
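One member of such a two-parameter family is a logistic function of the number of sources; the functional form and the numbers here are illustrative, not the article's fitted models:

```python
import numpy as np

def p_yield(n_sources, threshold=3.0, clarity=2.0):
    """Probability of yielding to n influence sources. 'threshold' locates
    the burden of social proof; 'clarity' sets how sharp the transition
    is (low clarity smooths away discrete tipping points)."""
    return 1.0 / (1.0 + np.exp(-clarity * (n_sources - threshold)))

for n in range(1, 7):
    print(n, round(float(p_yield(n)), 2))  # influence rises steeply near the threshold
```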

Journal ArticleDOI
TL;DR: The authors proposed a neural network model of learning and processing the English past tense that is based on the notion that experience-dependent cortical development is a core aspect of cognitive development, and showed that a functional processing architecture develops through interactions between experience-dependent brain development and the structure of the environment, in this case, the statistical properties of verbs in the language.
Abstract: We present a neural network model of learning and processing the English past tense that is based on the notion that experience-dependent cortical development is a core aspect of cognitive development. During learning the model adds and removes units and connections to develop a task-specific final architecture. The model provides an integrated account of characteristic errors during learning the past tense, adult generalization to pseudoverbs, and dissociations between verbs observed after brain damage in aphasic patients. We put forward a theory of verb inflection in which a functional processing architecture develops through interactions between experience-dependent brain development and the structure of the environment, in this case, the statistical properties of verbs in the language. The outcome of this process is a structured processing system giving rise to graded dissociations between verbs that are easy and verbs that are hard to learn and process. In contrast to dual-mechanism accounts of inflection, we argue that describing dissociations as a dichotomy between regular and irregular verbs is a post hoc abstraction and is not linked to underlying processing mechanisms. We extend current single-mechanism accounts of inflection by highlighting the role of structural adaptation in development and in the formation of the adult processing system. In contrast to some single-mechanism accounts, we argue that the link between irregular inflection and verb semantics is not causal and that existing data can be explained on the basis of phonological representations alone. This work highlights the benefit of taking brain development seriously in theories of cognitive development.

Journal ArticleDOI
TL;DR: It is shown that for both models, truncation may be avoided by assuming a baseline activity for each accumulator, which allows the LCA to approximate the DDM and the FFI to be identical to the DDM.
Abstract: In their influential Psychological Review article, Bogacz, Brown, Moehlis, Holmes, and Cohen (2006) discussed optimal decision making as accomplished by the drift diffusion model (DDM). The authors showed that neural inhibition models, such as the leaky competing accumulator model (LCA) and the feedforward inhibition model (FFI), can mimic the DDM and accomplish optimal decision making. Here we show that these conclusions depend on how the models handle negative activation values and (for the LCA) across-trial variability in response conservativeness. Negative neural activations are undesirable for both neurophysiological and mathematical reasons. However, when negative activations are truncated to 0, the equivalence to the DDM is lost. Simulations show that this concern has practical ramifications: the DDM generally outperforms truncated versions of the LCA and the FFI, and the parameter estimates from the neural models can no longer be mapped onto those of the DDM in a simple fashion. We show that for both models, truncation may be avoided by assuming a baseline activity for each accumulator. This solution allows the LCA to approximate the DDM and the FFI to be identical to the DDM.
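A sketch of the issue (illustrative parameters, not the article's simulations): truncation at zero binds on the losing accumulator and breaks the DDM correspondence, while letting leak and inhibition operate around a positive baseline keeps activations off the floor:

```python
import numpy as np

rng = np.random.default_rng(3)

def lca_trial(baseline=0.0, drift=(0.6, 0.5), leak=0.1, inhib=0.4,
              thresh=1.0, dt=0.01, noise=0.3, max_t=20.0):
    """Two-accumulator leaky competing accumulator, truncated at zero.
    Leak and lateral inhibition act on deviations from baseline, so a
    positive baseline keeps the loser away from the floor."""
    x = np.full(2, float(baseline))
    for step in range(int(max_t / dt)):
        d = x - baseline
        dx = (np.asarray(drift) - leak * d - inhib * d[::-1]) * dt
        x = np.maximum(x + dx + noise * np.sqrt(dt) * rng.standard_normal(2), 0.0)
        if x.max() >= baseline + thresh:
            return (step + 1) * dt, int(np.argmax(x))
    return max_t, None

print(lca_trial(baseline=0.0))  # truncation frequently binds on the loser
print(lca_trial(baseline=2.0))  # baseline activity: the floor is rarely reached
```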

Journal ArticleDOI
TL;DR: A neurodynamic model for Visual Selection and Awareness (ViSA), ViSA supports the view that neural representations for conscious access and visuo-spatial working memory are globally distributed and are based on recurrent interactions between perceptual and access control processors.
Abstract: Two separate lines of study have clarified the role of selectivity in conscious access to visual information. Both involve presenting multiple targets and distracters: one simultaneously in a spatially distributed fashion, the other sequentially at a single location. To understand their findings in a unified framework, we propose a neurodynamic model for Visual Selection and Awareness (ViSA). ViSA supports the view that neural representations for conscious access and visuo-spatial working memory are globally distributed and are based on recurrent interactions between perceptual and access control processors. Its flexible global workspace mechanisms enable a unitary account of a broad range of effects: It accounts for the limited storage capacity of visuo-spatial working memory, attentional cueing, and efficient selection with multi-object displays, as well as for the attentional blink and associated sparing and masking effects. In particular, the speed of consolidation for storage in visuo-spatial working memory in ViSA is not fixed but depends adaptively on the input and recurrent signaling. Slowing down of consolidation due to weak bottom-up and recurrent input as a result of brief presentation and masking leads to the attentional blink. Thus, ViSA goes beyond earlier 2-stage and neuronal global workspace accounts of conscious processing limitations.

Journal ArticleDOI
TL;DR: A reanalysis of Benjamin et al.'s (2009) data sets, together with results from a new experimental method, indicates that the different forms of criterion noise proposed in the recognition memory literature are of very low magnitude and do not provide a significant improvement over the account already given by traditional SDT without criterion noise.
Abstract: Traditional approaches within the framework of signal detection theory (SDT; Green & Swets, 1966), especially in the field of recognition memory, assume that the positioning of response criteria is not a noisy process. Recent work (Benjamin, Diaz, & Wee, 2009; Mueller & Weidemann, 2008) has challenged this assumption, arguing not only for the existence of criterion noise but also for its large magnitude and substantive contribution to individuals' performance. A review of these recent approaches for the measurement of criterion noise in SDT identifies several shortcomings and confoundings. A reanalysis of Benjamin et al.'s (2009) data sets, together with the results from a new experimental method, indicates that the different forms of criterion noise proposed in the recognition memory literature are of very low magnitudes, and they do not provide a significant improvement over the account already given by traditional SDT without criterion noise.
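The quantity at issue is easy to simulate: in an equal-variance SDT sketch, trial-to-trial Gaussian jitter on the decision criterion dilutes measured discrimination, and the argument here is that the jitter needed to fit recognition data is in fact very small (simulation parameters are ours):

```python
import numpy as np

rng = np.random.default_rng(4)

def hit_fa_rates(d_prime=1.0, criterion=0.5, crit_sd=0.0, n=100_000):
    """Equal-variance SDT with optional criterion noise: the criterion is
    jittered trial to trial by a Gaussian of SD crit_sd, which lowers
    effective sensitivity to d' / sqrt(1 + crit_sd**2)."""
    old = rng.standard_normal(n) + d_prime
    new = rng.standard_normal(n)
    c = criterion + crit_sd * rng.standard_normal(n)
    return np.mean(old > c), np.mean(new > c)  # hit rate, false-alarm rate

print(hit_fa_rates(crit_sd=0.0))
print(hit_fa_rates(crit_sd=1.5))  # heavy criterion noise visibly flattens performance
```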