
Showing papers in "Psychological Review in 2014"


Journal ArticleDOI
TL;DR: It is argued that deficits in executive functioning, theory of mind, and central coherence can all be understood as the consequence of a core deficit in the flexibility with which people with autism spectrum disorder can process violations to their expectations.
Abstract: There have been numerous attempts to explain the enigma of autism, but existing neurocognitive theories often provide merely a refined description of 1 cluster of symptoms. Here we argue that deficits in executive functioning, theory of mind, and central coherence can all be understood as the consequence of a core deficit in the flexibility with which people with autism spectrum disorder can process violations to their expectations. More formally we argue that the human mind processes information by making and testing predictions and that the errors resulting from violations to these predictions are given a uniform, inflexibly high weight in autism spectrum disorder. The complex, fluctuating nature of regularities in the world and the stochastic and noisy biological system through which people experience it require that, in the real world, people not only learn from their errors but also need to (meta-)learn to sometimes ignore errors. Especially when situations (e.g., social) or stimuli (e.g., faces) become too complex or dynamic, people need to tolerate a certain degree of error in order to develop a more abstract level of representation. Starting from an inability to flexibly process prediction errors, a number of seemingly core deficits become logically secondary symptoms. Moreover, an insistence on sameness or the acting out of stereotyped and repetitive behaviors can be understood as attempts to provide a reassuring sense of predictive success in a world otherwise filled with error. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

542 citations


Journal ArticleDOI
TL;DR: A general race model is presented that extends the independent race model to account for the role of choice in go and stop processes, and a special race model that assumes each runner is a stochastic accumulator governed by a diffusion process is applied.
Abstract: Response inhibition is an important act of control in many domains of psychology and neuroscience. It is often studied in a stop-signal task that requires subjects to inhibit an ongoing action in response to a stop signal. Performance in the stop-signal task is understood as a race between a go process that underlies the action and a stop process that inhibits the action. Responses are inhibited if the stop process finishes before the go process. The finishing time of the stop process is not directly observable; a mathematical model is required to estimate its duration. Logan and Cowan (1984) developed an independent race model that is widely used for this purpose. We present a general race model that extends the independent race model to account for the role of choice in go and stop processes, and a special race model that assumes each runner is a stochastic accumulator governed by a diffusion process. We apply the models to 2 data sets to test assumptions about selective influence of capacity limitations on drift rates and strategies on thresholds, which are largely confirmed. The model provides estimates of distributions of stop-signal response times, which previous models could not estimate. We discuss implications of viewing cognitive control as the result of a repertoire of acts of control tailored to different tasks and situations.
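
As a rough illustration of the independent-race logic that the general and special race models build on, the sketch below simulates stop-signal trials with assumed normal finishing-time distributions; the stop-signal delay and all parameter values are illustrative and are not the paper's diffusion-based runners.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_stop_trials(n_trials=100_000, ssd=0.25,
                         go_mean=0.50, go_sd=0.10,
                         stop_mean=0.20, stop_sd=0.04):
    """Independent race: the response is inhibited whenever the stop process
    (launched ssd seconds after the go stimulus) finishes before the go
    process. Finishing times are drawn from assumed normal distributions
    purely for illustration."""
    go_rt = rng.normal(go_mean, go_sd, n_trials)
    stop_finish = ssd + rng.normal(stop_mean, stop_sd, n_trials)
    inhibited = stop_finish < go_rt
    return 1.0 - inhibited.mean(), go_rt[~inhibited]

p_respond, signal_respond_rt = simulate_stop_trials()
print(f"P(respond | stop signal) = {p_respond:.3f}")
print(f"mean signal-respond RT   = {signal_respond_rt.mean():.3f} s")
```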

517 citations


Journal ArticleDOI
TL;DR: A novel algorithmic model expands the classical actor-critic architecture to include fundamental interactive properties of neural circuit models, incorporating both incentive and learning effects into a single theoretical framework that simultaneously captures documented effects of dopamine on both learning and choice incentive.
Abstract: The striatal dopaminergic system has been implicated in reinforcement learning (RL), motor performance, and incentive motivation. Various computational models have been proposed to account for each of these effects individually, but a formal analysis of their interactions is lacking. Here we present a novel algorithmic model expanding the classical actor-critic architecture to include fundamental interactive properties of neural circuit models, incorporating both incentive and learning effects into a single theoretical framework. The standard actor is replaced by a dual opponent actor system representing distinct striatal populations, which come to differentially specialize in discriminating positive and negative action values. Dopamine modulates the degree to which each actor component contributes to both learning and choice discriminations. In contrast to standard frameworks, this model simultaneously captures documented effects of dopamine on both learning and choice incentive-and their interactions-across a variety of studies, including probabilistic RL, effort-based choice, and motor skill learning.
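
A loose sketch of the dual opponent actor idea is given below. The softmax choice rule, the multiplicative updates, and every constant are simplified stand-ins of my own, not the paper's exact equations; the dopamine parameter simply weights how much the "Go" versus "NoGo" actor contributes at choice time.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative constants (not the paper's fitted parameters).
alpha, dopamine = 0.05, 0.5
reward_prob = np.array([0.8, 0.2])   # two actions with different payoff rates

V = 0.0                              # critic's value estimate
G = np.ones(2)                       # "Go" actor: learns positive action values
N = np.ones(2)                       # "NoGo" actor: learns negative action values

def choose(G, N, dopamine, temp=1.0):
    # Dopamine tips the balance between the opponent actors at choice time.
    act = dopamine * G - (1.0 - dopamine) * N
    p = np.exp(act / temp)
    p /= p.sum()
    return rng.choice(2, p=p)

for _ in range(5000):
    a = choose(G, N, dopamine)
    r = float(rng.random() < reward_prob[a])
    delta = r - V                    # critic's reward prediction error
    V += alpha * delta
    G[a] += alpha * G[a] * delta     # actors update in opposite directions,
    N[a] -= alpha * N[a] * delta     # scaled by their own weights

print("G (positive action values):", G.round(2))
print("N (negative action values):", N.round(2))
```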

324 citations


Journal ArticleDOI
TL;DR: This work considers every possible combination of previously proposed answers to three questions about visual working memory limitations and finds strong evidence against all 6 previously tested models.
Abstract: Three questions have been prominent in the study of visual working memory limitations: (a) What is the nature of mnemonic precision (e.g., quantized or continuous)? (b) How many items are remembered? (c) To what extent do spatial binding errors account for working memory failures? Modeling studies have typically focused on comparing possible answers to a single one of these questions, even though the result of such a comparison might depend on the assumed answers to both others. Here, we consider every possible combination of previously proposed answers to the individual questions. Each model is then a point in a 3-factor model space containing a total of 32 models, of which only 6 have been tested previously. We compare all models on data from 10 delayed-estimation experiments from 6 laboratories (for a total of 164 subjects and 131,452 trials). Consistently across experiments, we find that (a) mnemonic precision is not quantized but continuous and not equal but variable across items and trials; (b) the number of remembered items is likely to be variable across trials, with a mean of 6.4 in the best model (median across subjects); (c) spatial binding errors occur but explain only a small fraction of responses (16.5% at set size 8 in the best model). We find strong evidence against all 6 documented models. Our results demonstrate the value of factorial model comparison in working memory.
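
For concreteness, here is a toy generator for delayed-estimation errors under two of the winning factors (continuous, variable precision plus a limited, variable number of remembered items). The gamma/von Mises choices and all numbers are illustrative assumptions, and the spatial binding-error factor is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_delayed_estimation(n_trials=20_000, set_size=8, capacity=6.4,
                                mean_kappa=10.0, kappa_scale=2.0):
    """Toy generator: each remembered item gets its own precision
    (gamma-distributed kappa), and items beyond the capacity limit
    produce uniform guesses. All numbers are illustrative, not fitted."""
    p_remembered = min(1.0, capacity / set_size)
    remembered = rng.random(n_trials) < p_remembered
    kappa = rng.gamma(mean_kappa / kappa_scale, kappa_scale, n_trials)
    error = np.where(remembered,
                     rng.vonmises(0.0, kappa),
                     rng.uniform(-np.pi, np.pi, n_trials))
    return error

err = simulate_delayed_estimation()
print(f"fraction of large errors (>pi/2): {(np.abs(err) > np.pi / 2).mean():.3f}")
```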

255 citations


Journal ArticleDOI
TL;DR: The multiattribute linear ballistic accumulator (MLBA) is proposed, a new dynamic model that provides a quantitative account of all 3 major context effects and makes a new prediction about the relationship between deliberation time and the magnitude of the similarity effect, which is confirmed experimentally.
Abstract: Context effects occur when a choice between 2 options is altered by adding a 3rd alternative. Three major context effects--similarity, compromise, and attraction--have wide-ranging implications across applied and theoretical domains, and have driven the development of new dynamic models of multiattribute and multialternative choice. We propose the multiattribute linear ballistic accumulator (MLBA), a new dynamic model that provides a quantitative account of all 3 context effects. Our account applies not only to traditional paradigms involving choices among hedonic stimuli, but also to recent demonstrations of context effects with nonhedonic stimuli. Because of its computational tractability, the MLBA model is more easily applied than previous dynamic models. We show that the model also accounts for a range of other phenomena in multiattribute, multialternative choice, including time pressure effects, and that it makes a new prediction about the relationship between deliberation time and the magnitude of the similarity effect, which we confirm experimentally.
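
The sketch below simulates only the linear ballistic accumulator race that the MLBA builds on, with assumed drift means, start-point range, and threshold; the multiattribute front end that actually produces the context effects is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(3)

def lba_race(drift_means, n_trials=20_000, b=1.0, A=0.5, s=0.25, t0=0.2):
    """Linear ballistic accumulator race: each accumulator starts at a uniform
    point in [0, A], grows linearly at a rate drawn from a normal distribution,
    and the first to reach threshold b determines the choice and the RT."""
    n_acc = len(drift_means)
    starts = rng.uniform(0.0, A, (n_trials, n_acc))
    drifts = rng.normal(drift_means, s, (n_trials, n_acc))
    drifts = np.clip(drifts, 1e-6, None)       # keep rates positive for simplicity
    times = (b - starts) / drifts
    choice = times.argmin(axis=1)
    rt = t0 + times.min(axis=1)
    return choice, rt

# Three alternatives whose drift rates would, in the MLBA, be derived from
# attribute comparisons; here they are just assumed numbers.
choice, rt = lba_race([1.0, 0.9, 0.7])
print("choice proportions:", np.round(np.bincount(choice) / len(choice), 3))
print("mean RT:", round(float(rt.mean()), 3), "s")
```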

198 citations


Journal ArticleDOI
TL;DR: A more focused understanding of homeostasis and allostasis is provided by explaining how both play a role in physiological regulation, and a critical analysis of regulation suggests how homeostasis and allostasis can be distinguished.
Abstract: Homeostasis, the dominant explanatory framework for physiological regulation, has undergone significant revision in recent years, with contemporary models differing significantly from the original formulation. Allostasis, an alternative view of physiological regulation, goes beyond its homeostatic roots, offering novel insights relevant to our understanding and treatment of several chronic health conditions. Despite growing enthusiasm for allostasis, the concept remains diffuse, due in part to ambiguity as to how the term is understood and used, impeding meaningful translational and clinical research on allostasis. Here we provide a more focused understanding of homeostasis and allostasis by explaining how both play a role in physiological regulation, and a critical analysis of regulation suggests how homeostasis and allostasis can be distinguished. Rather than focusing on changes in the value of a regulated variable (e.g., body temperature, body adiposity, or reward), research investigating the activity and relationship among the multiple regulatory loops that influence the value of these regulated variables may be the key to distinguishing homeostasis and allostasis. The mechanisms underlying physiological regulation and dysregulation are likely to have important implications for health and disease.

189 citations


Journal ArticleDOI
TL;DR: A signal-detection-based model of eyewitness identification is proposed, one that encourages the use of (and helps to conceptualize) receiver operating characteristic (ROC) analysis to measure discriminability, and a diagnostic feature-detection hypothesis is proposed to account for the finding that simultaneous lineups yield higher discriminability than faces presented in isolation.
Abstract: The theoretical understanding of eyewitness identifications made from a police lineup has long been guided by the distinction between absolute and relative decision strategies. In addition, the accuracy of identifications associated with different eyewitness memory procedures has long been evaluated using measures like the diagnosticity ratio (the correct identification rate divided by the false identification rate). Framed in terms of signal-detection theory, both the absolute/relative distinction and the diagnosticity ratio are mainly relevant to response bias while remaining silent about the key issue of diagnostic accuracy, or discriminability (i.e., the ability to tell the difference between innocent and guilty suspects in a lineup). Here, we propose a signal-detection-based model of eyewitness identification, one that encourages the use of (and helps to conceptualize) receiver operating characteristic (ROC) analysis to measure discriminability. Recent ROC analyses indicate that the simultaneous presentation of faces in a lineup yields higher discriminability than the presentation of faces in isolation, and we propose a diagnostic feature-detection hypothesis to account for that result. According to this hypothesis, the simultaneous presentation of faces allows the eyewitness to appreciate that certain facial features (viz., those that are shared by everyone in the lineup) are non-diagnostic of guilt. To the extent that those non-diagnostic features are discounted in favor of potentially more diagnostic features, the ability to discriminate innocent from guilty suspects will be enhanced.
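
A minimal numerical illustration of why ROC analysis is preferred over the diagnosticity ratio: under an assumed equal-variance signal-detection model with a fixed d', the correct-ID/false-ID ratio still shifts with the response criterion, so it confounds discriminability with response bias. The d' value and criteria below are arbitrary.

```python
import math

def hit_fa_rates(d_prime, criterion):
    """Equal-variance Gaussian signal detection: guilty suspects have memory
    strength N(d', 1), innocent suspects N(0, 1); an identification is made
    whenever strength exceeds the criterion."""
    sf = lambda x: 0.5 * math.erfc(x / math.sqrt(2.0))   # Gaussian survival function
    return sf(criterion - d_prime), sf(criterion)

d_prime = 1.5                                  # fixed discriminability
for c in (0.5, 1.0, 1.5, 2.0):                 # increasingly conservative criteria
    hit, fa = hit_fa_rates(d_prime, c)
    print(f"criterion={c:.1f}  correct-ID rate={hit:.3f}  "
          f"false-ID rate={fa:.3f}  diagnosticity ratio={hit / fa:.1f}")

# The ratio rises as responding becomes more conservative even though d' never
# changes; tracing the (false-ID, correct-ID) pairs across criteria instead
# yields the ROC, which isolates discriminability from response bias.
```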

134 citations


Journal ArticleDOI
TL;DR: It is suggested that incorporating research insights from non-Western sociocultural contexts can significantly enhance attitude theorizing, and an additional model, a normative-contextual model of attitudes, is proposed.
Abstract: Attitudes, theorized as behavioral guides, have long been a central focus of research in the social sciences. However, this theorizing reflects primarily Western philosophical views and empirical findings emphasizing the centrality of personal preferences. As a result, the prevalent psychological model of attitudes is a person-centric one. We suggest that incorporating research insights from non-Western sociocultural contexts can significantly enhance attitude theorizing. To this end, we propose an additional model-a normative-contextual model of attitudes. The currently dominant person-centric model emphasizes the centrality of personal preferences, their stability and internal consistency, and their possible interaction with externally imposed norms. In contrast, the normative-contextual model emphasizes that attitudes are always context-contingent and incorporate the views of others and the norms of the situation. In this model, adjustment to norms does not involve an effortful struggle between the authentic self and exogenous forces. Rather, it is the ongoing and reassuring integration of others' views into one's attitudes. According to the normative-contextual model, likely to be a good fit in contexts that foster interdependence and holistic thinking, attitudes need not be personal or necessarily stable and internally consistent and are only functional to the extent that they help one to adjust automatically to different contexts. The fundamental shift in focus offered by the normative-contextual model generates novel hypotheses and highlights new measurement criteria for studying attitudes in non-Western sociocultural contexts. We discuss these theoretical and measurement implications as well as practical implications for health and well-being, habits and behavior change, and global marketing. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

117 citations


Journal ArticleDOI
TL;DR: This work presents a new overarching theoretical account as an alternative, one that can simultaneously account for prior findings, generate new predictions, and encompass a wide range of phenomena, and that underscores that the relationship between affect and cognition is not fixed but, instead, is highly malleable.
Abstract: Despite decades of research demonstrating a dedicated link between positive and negative affect and specific cognitive processes, not all research is consistent with this view. We present a new overarching theoretical account as an alternative-one that can simultaneously account for prior findings, generate new predictions, and encompass a wide range of phenomena. According to our proposed affect-as-cognitive-feedback account, affective reactions confer value on accessible information processing strategies (e.g., global vs. local processing) and other responses, goals, concepts, and thoughts that happen to be accessible at the time. This view underscores that the relationship between affect and cognition is not fixed but, instead, is highly malleable. That is, the relationship between affect and cognitive processing can be altered, and often reversed, by varying the mental context in which it is experienced. We present evidence that supports this account, along with implications for specific affective states and other subjective experiences. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

116 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose that people reason about probability according to probability theory but are subject to random noise in the reasoning process, and show that this account explains 4 reliable biases in human probabilistic reasoning (conservatism, subadditivity, and the conjunction and disjunction fallacies).
Abstract: The systematic biases seen in people’s probability judgments are typically taken as evidence that people do not use the rules of probability theory when reasoning about probability but instead use heuristics, which sometimes yield reasonable judgments and sometimes yield systematic biases. This view has had a major impact in economics, law, medicine, and other fields; indeed, the idea that people cannot reason with probabilities has become a truism. We present a simple alternative to this view, where people reason about probability according to probability theory but are subject to random variation or noise in the reasoning process. In this account the effect of noise is canceled for some probabilistic expressions. Analyzing data from 2 experiments, we find that, for these expressions, people’s probability judgments are strikingly close to those required by probability theory. For other expressions, this account produces systematic deviations in probability estimates. These deviations explain 4 reliable biases in human probabilistic reasoning (conservatism, subadditivity, conjunction, and disjunction fallacies). These results suggest that people’s probability judgments embody the rules of probability theory and that biases in those judgments are due to the effects of random noise.
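
A toy simulation in the spirit of this account (the flip-noise read-out mechanism and all parameter values are my own illustrative assumptions): noisy counting regresses individual estimates toward 0.5, producing conservatism, yet the noise cancels in the probabilistic identity P(A) + P(B) - P(A and B) - P(A or B) = 0.

```python
import numpy as np

rng = np.random.default_rng(4)

def noisy_estimate(flags, d, rng):
    """Read each stored instance with probability d of misclassifying it,
    then report the proportion that appear to have the property."""
    flips = rng.random(flags.shape) < d
    return float(np.mean(flags ^ flips))

# Hypothetical world: properties A and B over a sample of stored episodes.
n_episodes, d, n_queries = 200, 0.15, 5_000
A = rng.random(n_episodes) < 0.8
B = rng.random(n_episodes) < 0.4

totals = np.zeros(4)
for _ in range(n_queries):
    totals += [noisy_estimate(A, d, rng), noisy_estimate(B, d, rng),
               noisy_estimate(A & B, d, rng), noisy_estimate(A | B, d, rng)]
pA, pB, pAB, pAorB = totals / n_queries

print("conservatism: mean estimate of P(A) =", round(pA, 3),
      "vs. true proportion", round(float(A.mean()), 3))
print("noise cancels in P(A)+P(B)-P(A and B)-P(A or B):",
      round(pA + pB - pAB - pAorB, 3))   # close to 0, as probability theory requires
```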

100 citations


Journal ArticleDOI
TL;DR: It is found that only a few simple properties of empirical data are necessary predictions of the diffusion and LBA models, and the predictions are translated into the Grice modeling framework, in which accumulation processes are deterministic and thresholds are random variables.
Abstract: [Correction Notice: An Erratum for this article was reported in Vol 121(1) of Psychological Review (see record 2014-03591-005). The link to supplemental material was missing. All versions of this article have been corrected.] Much current research on speeded choice utilizes models in which the response is triggered by a stochastic process crossing a deterministic threshold. This article focuses on 2 such model classes, 1 based on continuous-time diffusion and the other on linear ballistic accumulation (LBA). Both models assume random variability in growth rates and in other model components across trials. We show that if the form of this variability is unconstrained, the models can exactly match any possible pattern of response probabilities and response time distributions. Thus, the explanatory or predictive content of these models is determined not by their structural assumptions but, rather, by distributional assumptions (e.g., Gaussian distributions) that are traditionally regarded as implementation details. Selective influence assumptions (i.e., which experimental manipulations affect which model parameters) are shown to have no restrictive effect, except for the theoretically questionable assumption that speed-accuracy instructions do not affect growth rates. The 2nd contribution of this article concerns translation of falsifiable models between universal modeling languages. Specifically, we translate the predictions of the diffusion and LBA models (with their parametric and selective influence assumptions intact) into the Grice modeling framework, in which accumulation processes are deterministic and thresholds are random variables. The Grice framework is also known to reproduce any possible pattern of response probabilities and times, and hence it can be used as a common language for comparing models. It is found that only a few simple properties of empirical data are necessary predictions of the diffusion and LBA models.

Journal ArticleDOI
TL;DR: Building on prior relevant conceptions that include, among others, animal learning models and personality approaches, a general theory of motivational readiness is presented, and the concept of incentive is conceptualized in terms of a Match between the contents of the Want and perceived situational affordances.
Abstract: The construct of motivational readiness is introduced and explored. Motivational readiness is the willingness or inclination, whether or not ultimately realized, to act in the service of a desire. Building on prior relevant conceptions that include, among others, animal learning models (Hull, 1943; Spence, 1956; Tolman, 1955) and personality approaches (e.g., Atkinson, 1964; Lewin, 1935), a general theory of motivational readiness is presented. Major parameters of this theory include the magnitude of a Want state (i.e., individual's desire of some sort) and the Expectancy of being able to satisfy it. The Want is assumed to be the essential driver of readiness: Whereas some degree of readiness may exist in the absence of Expectancy, all readiness is abolished in the absence of desire (Want). The concept of incentive is conceptualized in terms of a Match between the contents of the Want and perceived situational affordances. Whereas in classic models incentive was portrayed as a first-order determinant of motivational readiness, it is depicted here as a second-order factor that affects readiness via its impact on the Want and/or the Expectancy. A heterogeneous body of evidence for the present theory is reviewed, converging from different domains of psychological research. The theory's relation to its predecessors and its unique implications for new research hypotheses are also discussed.

Journal ArticleDOI
TL;DR: By linking stereopsis to a generic perceptual attribute, rather than a specific cue, the proposed hypothesis provides a potentially more unified account of the variation of stereopsis in real scenes and pictures and a basis for understanding why people can perceive depth in pictures despite conflicting visual signals.
Abstract: Humans can obtain an unambiguous perception of depth and 3-dimensionality with 1 eye or when viewing a pictorial image of a 3-dimensional scene. However, the perception of depth when viewing a real scene with both eyes is qualitatively different: There is a vivid impression of tangible solid form and immersive negative space. This perceptual phenomenon, referred to as "stereopsis," has been among the central puzzles of perception since the time of da Vinci. After Wheatstone's invention of the stereoscope in 1838, stereopsis has conventionally been explained as a byproduct of binocular vision or visual parallax. However, this explanation is challenged by the observation that the impression of stereopsis can be induced in single pictures under monocular viewing. Here I propose an alternative hypothesis that stereopsis is a qualitative visual experience related to the perception of egocentric spatial scale. Specifically, the primary phenomenal characteristic of stereopsis (the impression of "real" separation in depth) is proposed to be linked to the precision with which egocentrically scaled depth (absolute depth) is derived. Since conscious awareness of this precision could help guide the planning of motor action, the hypothesis provides a functional account for the important secondary phenomenal characteristics associated with stereopsis: the impression of interactability and realness. By linking stereopsis to a generic perceptual attribute, rather than a specific cue, it provides a potentially more unified account of the variation of stereopsis in real scenes and pictures and a basis for understanding why we can perceive depth in pictures despite conflicting visual signals.

Journal ArticleDOI
TL;DR: The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points, and suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making.
Abstract: We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making.
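
As a toy illustration of intermittent, step-like belief updating (not the authors' two-parameter change-point encoding model), the observer sketched below holds its estimate fixed until the outcomes since its last step are explained markedly better by some new rate, and only then steps to a new estimate.

```python
import numpy as np
from math import lgamma, log

rng = np.random.default_rng(6)

def beta_binom_logml(k, n):
    """Log marginal likelihood of k successes in n Bernoulli trials
    under a uniform prior on the success probability."""
    return lgamma(k + 1) + lgamma(n - k + 1) - lgamma(n + 2)

def stepwise_estimates(outcomes, log_odds_threshold=2.0):
    """Toy intermittent updater: hold the current probability estimate fixed
    until the outcomes since the last step are explained markedly better by
    some new rate than by the current estimate, then step."""
    estimates, window, current = [], [], 0.5
    for o in outcomes:
        window.append(int(o))
        n, k = len(window), sum(window)
        keep = k * log(current) + (n - k) * log(1 - current)
        change = beta_binom_logml(k, n)
        if change - keep > log_odds_threshold:
            current = (k + 1) / (n + 2)   # step to the new estimate
            window = []                   # start a fresh segment
        estimates.append(current)
    return estimates

# Hidden Bernoulli parameter steps from .2 to .8 halfway through the session.
p_true = np.r_[np.full(150, 0.2), np.full(150, 0.8)]
outcomes = rng.random(300) < p_true
est = stepwise_estimates(outcomes)
print("estimate at trial 150:", round(est[149], 2),
      "| at trial 300:", round(est[299], 2))
print("number of distinct steps:", len(set(est)))
```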

Journal ArticleDOI
TL;DR: A dynamical framework for sequential sensorimotor behavior based on the sequential composition of basic behavioral units is outlined and illustrated with a functional architecture for handwriting as proof of concept, and the implications of the framework for motor control are discussed.
Abstract: We outline a dynamical framework for sequential sensorimotor behavior based on the sequential composition of basic behavioral units. Basic units are conceptualized as temporarily existing low-dimensional dynamical objects, or structured flows, emerging from a high-dimensional system, referred to as structured flows on manifolds. Theorems from dynamical system theory allow for the unambiguous classification of behaviors as represented by structured flows, and thus provide a means to define and identify basic units. The ensemble of structured flows available to an individual defines his or her dynamical repertoire. We briefly review experimental evidence that has identified a few basic elements likely to contribute to each individual's repertoire. Complex behavior requires the involvement of a (typically high-dimensional) dynamics operating at a time scale slower than that of the elements in the dynamical repertoire. At any given time, in the competition between units of the repertoire, the slow dynamics temporarily favor the dominance of one element over others in a sequential fashion, binding together the units and generating complex behavior. The time scale separation between the elements of the repertoire and the slow dynamics define a time scale hierarchy, and their ensemble defines a functional architecture. We illustrate the approach with a functional architecture for handwriting as proof of concept and discuss the implications of the framework for motor control.

Journal ArticleDOI
TL;DR: This model provides the first comprehensive computational account of the effects of stimulus factors on compound generalization, including spatial and temporal contiguity between components, which have posed long-standing problems for rational theories of associative and causal learning.
Abstract: How do we apply learning from one situation to a similar, but not identical, situation? The principles governing the extent to which animals and humans generalize what they have learned about certain stimuli to novel compounds containing those stimuli vary depending on a number of factors. Perhaps the best studied among these factors is the type of stimuli used to generate compounds. One prominent hypothesis is that different generalization principles apply depending on whether the stimuli in a compound are similar or dissimilar to each other. However, the results of many experiments cannot be explained by this hypothesis. Here, we propose a rational Bayesian theory of compound generalization that uses the notion of consequential regions, first developed in the context of rational theories of multidimensional generalization, to explain the effects of stimulus factors on compound generalization. The model explains a large number of results from the compound generalization literature, including the influence of stimulus modality and spatial contiguity on the summation effect, the lack of influence of stimulus factors on summation with a recovered inhibitor, the effect of spatial position of stimuli on the blocking effect, the asymmetrical generalization decrement in overshadowing and external inhibition, and the conditions leading to a reliable external inhibition effect. By integrating rational theories of compound and dimensional generalization, our model provides the first comprehensive computational account of the effects of stimulus factors on compound generalization, including spatial and temporal contiguity between components, which have posed long-standing problems for rational theories of associative and causal learning.

Journal ArticleDOI
TL;DR: It is shown that in tonal space, melodies are grouped by their tonal rather than timbral properties, whereas the reverse is true for the periodicity pitch representation.
Abstract: Listeners' expectations for melodies and harmonies in tonal music are perhaps the most studied aspect of music cognition. Long debated has been whether faster response times (RTs) to more strongly primed events (in a music theoretic sense) are driven by sensory or cognitive mechanisms, such as repetition of sensory information or activation of cognitive schemata that reflect learned tonal knowledge, respectively. We analyzed over 300 stimuli from 7 priming experiments comprising a broad range of musical material, using a model that transforms raw audio signals through a series of plausible physiological and psychological representations spanning a sensory-cognitive continuum. We show that RTs are modeled, in part, by information in periodicity pitch distributions, chroma vectors, and activations of tonal space--a representation on a toroidal surface of the major/minor key relationships in Western tonal music. We show that in tonal space, melodies are grouped by their tonal rather than timbral properties, whereas the reverse is true for the periodicity pitch representation. While tonal space variables explained more of the variation in RTs than did periodicity pitch variables, suggesting a greater contribution of cognitive influences to tonal expectation, a stepwise selection model contained variables from both representations and successfully explained the pattern of RTs across stimulus categories in 4 of the 7 experiments. The addition of closure--a cognitive representation of a specific syntactic relationship--succeeded in explaining results from all 7 experiments. We conclude that multiple representational stages along a sensory-cognitive continuum combine to shape tonal expectations in music.

Journal ArticleDOI
TL;DR: The structure induction model of diagnostic reasoning takes into account the uncertainty regarding the underlying causal structure and predicts that diagnostic judgments should not only reflect the empirical probability of cause given effect but should also depend on the reasoner's beliefs about the existence and strength of the link between cause and effect, a prediction confirmed in 2 studies.
Abstract: Our research examines the normative and descriptive adequacy of alternative computational models of diagnostic reasoning from single effects to single causes. Many theories of diagnostic reasoning are based on the normative assumption that inferences from an effect to its cause should reflect solely the empirically observed conditional probability of cause given effect. We argue against this assumption, as it neglects alternative causal structures that may have generated the sample data. Our structure induction model of diagnostic reasoning takes into account the uncertainty regarding the underlying causal structure. A key prediction of the model is that diagnostic judgments should not only reflect the empirical probability of cause given effect but should also depend on the reasoner’s beliefs about the existence and strength of the link between cause and effect. We confirmed this prediction in 2 studies and showed that our theory better accounts for human judgments than alternative theories of diagnostic reasoning. Overall, our findings support the view that in diagnostic reasoning people go “beyond the information given” and use the available data to make inferences on the (unobserved) causal rather than on the (observed) data level.
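
A compact sketch of the model-averaging idea: compute the diagnostic probability under a structure with a causal link and under one without, then weight the two by how well each structure explains the data. The noisy-OR parameterization, the coarse grid approximation, and the contingency counts are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Parameter grid: base rate of the cause (b), background strength (w0),
# and causal strength (w1), each with an (implicitly) uniform prior.
g = np.linspace(0.01, 0.99, 50)
b, w0, w1 = np.meshgrid(g, g, g, indexing="ij")

def loglik(b, w0, w1, data):
    """Noisy-OR likelihood of 2x2 contingency counts
    data = (N_ce, N_c~e, N_~ce, N_~c~e)."""
    n_ce, n_cne, n_nce, n_ncne = data
    p_e_c = w0 + w1 - w0 * w1            # P(e | c)
    p_e_nc = w0                          # P(e | ~c)
    return (n_ce * np.log(b * p_e_c) + n_cne * np.log(b * (1 - p_e_c)) +
            n_nce * np.log((1 - b) * p_e_nc) + n_ncne * np.log((1 - b) * (1 - p_e_nc)))

def diag(b, w0, w1):
    """P(c | e) implied by a given parameter setting."""
    p_e_c = w0 + w1 - w0 * w1
    return b * p_e_c / (b * p_e_c + (1 - b) * w0)

data = (16, 4, 8, 12)                    # hypothetical observed counts

lik1 = np.exp(loglik(b, w0, w1, data))                      # structure with a causal link
lik0 = np.exp(loglik(b[:, :, 0], w0[:, :, 0], 0.0, data))   # structure without a link

post1 = (lik1 * diag(b, w0, w1)).sum() / lik1.sum()                      # parameter-averaged P(c|e)
post0 = (lik0 * diag(b[:, :, 0], w0[:, :, 0], 0.0)).sum() / lik0.sum()   # equals the expected base rate

w_link = lik1.mean() / (lik1.mean() + lik0.mean())          # posterior probability of the link
print("P(link | data)             =", round(float(w_link), 3))
print("diagnostic judgment P(c|e) =", round(float(w_link * post1 + (1 - w_link) * post0), 3))
```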

Journal ArticleDOI
TL;DR: Independent support for the dual-recollection hypothesis is provided by some surprising effects that it predicts, such as release from recollection rejection, false persistence, negative relations between false alarm rates and target remember/know judgments, and recollection without remembering.
Abstract: Recollection is currently modeled as a univariate retrieval process in which memory probes provoke conscious awareness of contextual details of earlier target presentations. However, that conception cannot explain why some manipulations that increase recollection in recognition experiments suppress false memory in false memory experiments, whereas others increase false memory. Such contrasting effects can be explained if recollection is bivariate-if memory probes can provoke conscious awareness of target items per se, separately from awareness of contextual details, with false memory being suppressed by the former but increased by the latter. Interestingly, these 2 conceptions of recollection have coexisted for some time in different segments of the memory literature. Independent support for the dual-recollection hypothesis is provided by some surprising effects that it predicts, such as release from recollection rejection, false persistence, negative relations between false alarm rates and target remember/know judgments, and recollection without remembering. We implemented the hypothesis in 3 bivariate recollection models, which differ in the degree to which recollection is treated as a discrete or a graded process: a pure multinomial model, a pure signal detection model, and a mixed multinomial/signal detection model. The models were applied to a large corpus of conjoint recognition data, with fits being satisfactory when both recollection processes were present and unsatisfactory when either was deleted. Factor analyses of the models' parameter spaces showed that target and context recollection never loaded on a common factor, and the 3 models converged on the same process loci for the effects of important experimental manipulations. Thus, a variety of results were consistent with bivariate recollection. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

Journal ArticleDOI
TL;DR: A computational model is proposed based on three key hypotheses: trial-and-error learning processes drive the progressive development of reaching; the control of movements based on equilibrium points allows the model to quickly find an initial approximate solution to the problem of gaining contact with target objects; and the demand for precision of the end movement in the presence of muscular noise drives the progressive refinement of the reaching behavior.
Abstract: Despite the huge literature on reaching behavior, a clear idea about the motor control processes underlying its development in infants is still lacking. This article contributes to overcoming this gap by proposing a computational model based on three key hypotheses: (a) trial-and-error learning processes drive the progressive development of reaching; (b) the control of the movements based on equilibrium points allows the model to quickly find the initial approximate solution to the problem of gaining contact with the target objects; (c) the request of precision of the end movement in the presence of muscular noise drives the progressive refinement of the reaching behavior. The tests of the model, based on a two degrees of freedom simulated dynamical arm, show that it is capable of reproducing a large number of empirical findings, most deriving from longitudinal studies with children: the developmental trajectory of several dynamical and kinematic variables of reaching movements, the time evolution of submovements composing reaching, the progressive development of a bell-shaped speed profile, and the evolution of the management of redundant degrees of freedom. The model also produces testable predictions on several of these phenomena. Most of these empirical data have never been investigated by previous computational models and, more important, have never been accounted for by a unique model. In this respect, the analysis of the model functioning reveals that all these results are ultimately explained, sometimes in unexpected ways, by the same developmental trajectory emerging from the interplay of the three mentioned hypotheses: The model first quickly learns to perform coarse movements that assure a contact of the hand with the target (an achievement with great adaptive value) and then slowly refines the detailed control of the dynamical aspects of movement to increase accuracy.

Journal ArticleDOI
TL;DR: The accuracy of Δ-inference can be understood as an approach-avoidance conflict between the decreasing usefulness of the first cue and the increasing usefulness of subsequent cues as Δ grows larger, resulting in a single-peaked function relating accuracy to Δ.
Abstract: In a lexicographic semiorders model for preference, cues are searched in a subjective order, and an alternative is preferred if its value on a cue exceeds those of other alternatives by a threshold Δ, akin to a just noticeable difference in perception. We generalized this model from preference to inference and refer to it as Δ-inference. Unlike with preference, where accuracy is difficult to define, the problem a mind faces when making an inference is to select a Δ that can lead to accurate judgments. To find a solution to this problem, we applied Clyde Coombs's theory of single-peaked preference functions. We show that the accuracy of Δ-inference can be understood as an approach-avoidance conflict between the decreasing usefulness of the first cue and the increasing usefulness of subsequent cues as Δ grows larger, resulting in a single-peaked function between accuracy and Δ. The peak of this function varies with the properties of the task environment: The more redundant the cues and the larger the differences in their information quality, the smaller the Δ. An analysis of 39 real-world task environments led to the surprising result that the best inferences are made when Δ is 0, which implies relying almost exclusively on the best cue and ignoring the rest. This finding provides a new perspective on the take-the-best heuristic. Overall, our study demonstrates the potential of integrating and extending established concepts, models, and theories from perception and preference to improve our understanding of how the mind makes inferences.
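
A minimal sketch of the Δ-inference rule as described here (the cue profiles below are made up for illustration); with Δ = 0 it reduces to deciding on the first cue on which the alternatives differ at all, which is essentially take-the-best.

```python
def delta_inference(cue_values_a, cue_values_b, delta):
    """Search cues in the subjective order; infer that the alternative whose
    value on the current cue exceeds the other's by more than delta scores
    higher on the criterion. With delta = 0 this reduces to take-the-best."""
    for va, vb in zip(cue_values_a, cue_values_b):
        if va - vb > delta:
            return "A"
        if vb - va > delta:
            return "B"
    return "guess"

# Cue profiles ordered from the subjectively best cue downward (toy values).
a = [0.9, 0.2, 0.7]
b = [0.8, 0.6, 0.1]
for delta in (0.0, 0.3):
    print(f"delta={delta}: choose {delta_inference(a, b, delta)}")
# delta=0 decides on the first cue (A); delta=0.3 skips it and decides on
# the second cue (B), illustrating how the threshold shifts which cues matter.
```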

Journal ArticleDOI
TL;DR: It is shown that Jones and Dzhafarov's attempt to include the diffusion model in a class of models that also includes deterministic growth models misrepresents and trivializes it and conveys a misleading picture of cognitive decision-making research.
Abstract: Jones and Dzhafarov (2014) claim that several current models of speeded decision making in cognitive tasks, including the diffusion model, can be viewed as special cases of other general models or model classes. The general models can be made to match any set of response time (RT) distribution and accuracy data exactly by a suitable choice of parameters and so are unfalsifiable. The implication of their claim is that models like the diffusion model are empirically testable only by artificially restricting them to exclude unfalsifiable instances of the general model. We show that Jones and Dzhafarov’s argument depends on enlarging the class of “diffusion” models to include models in which there is little or no diffusion. The unfalsifiable models are deterministic or near-deterministic growth models, from which the effects of within-trial variability have been removed or in which they are constrained to be negligible. These models attribute most or all of the variability in RT and accuracy to across-trial variability in the rate of evidence growth, which is permitted to be distributed arbitrarily and to vary freely across experimental conditions. In contrast, in the standard diffusion model, within-trial variability in evidence is the primary determinant of variability in RT. Across-trial variability, which determines the relative speed of correct responses and errors, is theoretically and empirically constrained. Jones and Dzhafarov’s attempt to include the diffusion model in a class of models that also includes deterministic growth models misrepresents and trivializes it and conveys a misleading picture of cognitive decision-making research.

Journal ArticleDOI
TL;DR: It is shown that the IDM reproduces the most important basic aspects of two-choice response time data and predicts a variety of observed psychophysical relations, such as Piéron's law, the van der Molen-Keuss effect, and Weber's law.
Abstract: The Ising Decision Maker (IDM) is a new formal model for speeded two-choice decision making derived from the stochastic Hopfield network or dynamic Ising model. On a microscopic level, it consists of 2 pools of binary stochastic neurons with pairwise interactions. Inside each pool, neurons excite each other, whereas between pools, neurons inhibit each other. The perceptual input is represented by an external excitatory field. Using methods from statistical mechanics, the high-dimensional network of neurons (microscopic level) is reduced to a two-dimensional stochastic process, describing the evolution of the mean neural activity per pool (macroscopic level). The IDM can be seen as an abstract, analytically tractable multiple attractor network model of information accumulation. In this article, the properties of the IDM are studied, the relations to existing models are discussed, and it is shown that the most important basic aspects of two-choice response time data can be reproduced. In addition, the IDM is shown to predict a variety of observed psychophysical relations such as Pieron's law, the van der Molen-Keuss effect, and Weber's law. Using Bayesian methods, the model is fitted to both simulated and real data, and its performance is compared to the Ratcliff diffusion model.

Journal ArticleDOI
TL;DR: A dynamic Thurstonian item response theory (IRT) model is described that builds on dynamic system theories of motivation, theorizing on the PSE response process, and recent advancements in Thurstonian IRT modeling of choice data; the results suggest that PSE motive measures have long been reliable, which increases the scientific value of extant evidence from motivational research using PSE motive measures.
Abstract: The measurement of implicit or unconscious motives using the picture story exercise (PSE) has long been a target of debate in the psychological literature. Most debates have centered on the apparent paradox that PSE measures of implicit motives typically show low internal consistency reliability on common indices like Cronbach's alpha but nevertheless predict behavioral outcomes. I describe a dynamic Thurstonian item response theory (IRT) model that builds on dynamic system theories of motivation, theorizing on the PSE response process, and recent advancements in Thurstonian IRT modeling of choice data. To assess the models' capability to explain the internal consistency paradox, I first fitted the model to archival data (Gurin, Veroff, & Feld, 1957) and then simulated data based on bias-corrected model estimates from the real data. Simulation results revealed that the average squared correlation reliability for the motives in the Thurstonian IRT model was .74 and that Cronbach's alpha values were similar to the real data (<.35). These findings suggest that PSE motive measures have long been reliable and increase the scientific value of extant evidence from motivational research using PSE motive measures.

Journal ArticleDOI
TL;DR: It is suggested that the superposition constraint plays a role in explaining the existence of selective codes in cortex, and it is shown that a recurrent parallel distributed processing network trained to code for multiple words at the same time over the same set of units learns localist letter and word codes.
Abstract: A key insight from 50 years of neurophysiology is that some neurons in cortex respond to information in a highly selective manner. Why is this? We argue that selective representations support the coactivation of multiple "things" (e.g., words, objects, faces) in short-term memory, whereas nonselective codes are often unsuitable for this purpose. That is, the coactivation of nonselective codes often results in a blend pattern that is ambiguous; the so-called superposition catastrophe. We show that a recurrent parallel distributed processing network trained to code for multiple words at the same time over the same set of units learns localist letter and word codes, and the number of localist codes scales with the level of the superposition. Given that many cortical systems are required to coactivate multiple things in short-term memory, we suggest that the superposition constraint plays a role in explaining the existence of selective codes in cortex.
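
A small demonstration of the superposition problem the abstract refers to, using toy codes chosen to make the collision obvious: summing two localist (one-unit-per-word) codes always yields a recoverable blend, whereas summing dense distributed codes can yield identical blends for different word pairs.

```python
import numpy as np
from itertools import combinations

words = ["cat", "dog", "sun", "sky"]

# Localist codes: one dedicated unit per word.
localist = {w: np.eye(4, dtype=int)[i] for i, w in enumerate(words)}

# Toy distributed codes, hand-picked so the collision is easy to see.
distributed = {"cat": np.array([1, 1, 0, 0]), "dog": np.array([0, 0, 1, 1]),
               "sun": np.array([1, 0, 1, 0]), "sky": np.array([0, 1, 0, 1])}

def collisions(codes):
    """Which distinct word pairs produce the very same blend (sum) pattern?"""
    blends = {}
    for pair in combinations(words, 2):
        key = tuple(codes[pair[0]] + codes[pair[1]])
        blends.setdefault(key, []).append(pair)
    return [pairs for pairs in blends.values() if len(pairs) > 1]

print("localist blends that collide:   ", collisions(localist))     # []
print("distributed blends that collide:", collisions(distributed))
# The blend (1, 1, 1, 1) cannot tell "cat & dog" apart from "sun & sky":
# the superposition catastrophe that selective codes avoid.
```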

Journal ArticleDOI
TL;DR: Jones and Dzhafarov (2014) provided a useful service in pointing out that some assumptions of modern decision-making models require additional scrutiny, but their main result is not surprising: If an infinitely complex model was created by assigning its parameters arbitrarily flexible distributions, this new model would be able to fit any observed data perfectly.
Abstract: Jones and Dzhafarov (2014) provided a useful service in pointing out that some assumptions of modern decision-making models require additional scrutiny. Their main result, however, is not surprising: If an infinitely complex model was created by assigning its parameters arbitrarily flexible distributions, this new model would be able to fit any observed data perfectly. Such a hypothetical model would be unfalsifiable. This is exactly why such models have never been proposed in over half a century of model development in decision making. Additionally, the main conclusion drawn from this result—that the success of existing decision-making models can be attributed to assumptions about parameter distributions— is wrong.

Journal ArticleDOI
TL;DR: A good qualitative account of word similarities may be obtained by adjusting the cosine between word vectors from latent semantic analysis for vector lengths in a manner analogous to the quantum geometric model of similarity.
Abstract: A good qualitative account of word similarities may be obtained by adjusting the cosine between word vectors from latent semantic analysis for vector lengths in a manner analogous to the quantum geometric model of similarity.

Journal ArticleDOI
TL;DR: State-of-the-art frequentist and Bayesian "order-constrained" inference suggest that PRAM accounts poorly for individual subject laboratory data from 67 participants; this conclusion is robust across 7 different utility functions for money and remains largely unaltered when considering a prior unpublished version of PRAM that featured an additional free parameter in the perception function for probabilities.
Abstract: Loomes (2010, Psychological Review) proposed the Perceived Relative Argument Model (PRAM) as a novel descriptive theory for risky choice. PRAM differs from models like prospect theory in that decision makers do not compare 2 prospects by first assigning each prospect an overall utility and then choosing the prospect with the higher overall utility. Instead, the decision maker determines the relative argument for one or the other prospect separately for outcomes and probabilities, before reaching an overall pairwise preference. Loomes (2010) did not model variability in choice behavior. We consider 2 types of "stochastic specification" of PRAM. In one, a decision maker has a fixed preference, and choice variability is caused by occasional errors/trembles. In the other, the parameters of the perception functions for outcomes and for probabilities are random, with no constraints on their joint distribution. State-of-the-art frequentist and Bayesian "order-constrained" inference suggest that PRAM accounts poorly for individual subject laboratory data from 67 participants. This conclusion is robust across 7 different utility functions for money and remains largely unaltered also when considering a prior unpublished version of PRAM (Loomes, 2006) that featured an additional free parameter in the perception function for probabilities. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

Journal ArticleDOI
TL;DR: A simulation of the DM is summarized that retains the growth-rate invariance assumption, requires the growth-rate distribution to be unimodal, and maintains a contribution of diffusion as large as in past fits of the standard model.
Abstract: Jones and Dzhafarov (2014) proved the linear ballistic accumulator (LBA) and diffusion model (DM) of speeded choice become unfalsifiable if 2 assumptions are removed: that growth rate variability between trials follows a Gaussian distribution and that this distribution is invariant under certain experimental manipulations. The former assumption is purely technical and has never been claimed as a theoretical commitment, and the latter is logically and empirically suspect. Heathcote, Wagenmakers, and Brown (2014) questioned the distinction between theoretical and technical assumptions and argued that only the predictions of the whole model matter. We respond that it is valuable to understand how a model's predictions depend on each of its assumptions to know what is critical to an explanation and to generalize principles across phenomena or domains. Smith, Ratcliff, and McKoon (2014) claimed unfalsifiability of the generalized DM relies on parameterizations with negligible diffusion and proposed a theoretical commitment to simple growth-rate distributions. We respond that a lower bound on diffusion would be a new, ad hoc assumption, and restrictions on growth-rate distributions are only theoretically justified if one supplies a model of what determines growth-rate variability. Finally, we summarize a simulation of the DM that retains the growth-rate invariance assumption, requires the growth-rate distribution to be unimodal, and maintains a contribution of diffusion as large as in past fits of the standard model. The simulation demonstrates mimicry between models with different theoretical assumptions, showing the problems of excess flexibility are not limited to the cases to which Smith et al. objected. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

Journal ArticleDOI
TL;DR: Guo and Regenwetter (2014) took the perceived relative argument model, added various auxiliary assumptions of their own about the utility of money, made assumptions about possible stochastic specifications, and tested the various combined models against data from an experiment they conducted.
Abstract: Guo and Regenwetter (2014) took the perceived relative argument model, added various auxiliary assumptions of their own about the utility of money, made assumptions about possible stochastic specifications, and tested the various combined models against data from an experiment they conducted. However, their modeling assumptions were questionable and their experiment was unsatisfactory: The stimuli omitted crucial information, the incentives were weak, and the task load was excessive. These shortcomings undermine the quality of the data, and the study provides no new information about the scope and limitations of the perceived relative argument model or its performance relative to other models of risky choice. (PsycINFO Database Record (c) 2014 APA, all rights reserved).