
Showing papers in "Psychological Review in 2016"


Journal ArticleDOI
TL;DR: It is speculated that default passivity and the compensating detection and expectation of control may have substantial implications for how to treat depression.
Abstract: Learned helplessness, the failure to escape shock induced by uncontrollable aversive events, was discovered half a century ago. Seligman and Maier (1967) theorized that animals learned that outcomes were independent of their responses-that nothing they did mattered-and that this learning undermined trying to escape. The mechanism of learned helplessness is now very well-charted biologically, and the original theory got it backward. Passivity in response to shock is not learned. It is the default, unlearned response to prolonged aversive events and it is mediated by the serotonergic activity of the dorsal raphe nucleus, which in turn inhibits escape. This passivity can be overcome by learning control, with the activity of the medial prefrontal cortex, which subserves the detection of control leading to the automatic inhibition of the dorsal raphe nucleus. So animals learn that they can control aversive events, but the passive failure to learn to escape is an unlearned reaction to prolonged aversive stimulation. In addition, alterations of the ventromedial prefrontal cortex-dorsal raphe pathway can come to subserve the expectation of control. We speculate that default passivity and the compensating detection and expectation of control may have substantial implications for how to treat depression.

423 citations


Journal ArticleDOI
TL;DR: The Causal Attitude Network (CAN) model is introduced, which conceptualizes attitudes as networks consisting of evaluative reactions and interactions between these reactions, and is argued that the CAN model provides a realistic formalized measurement model of attitudes and therefore fills a crucial gap in the attitude literature.
Abstract: This article introduces the Causal Attitude Network (CAN) model, which conceptualizes attitudes as networks consisting of evaluative reactions and interactions between these reactions. Relevant evaluative reactions include beliefs, feelings, and behaviors toward the attitude object. Interactions between these reactions arise through direct causal influences (e.g., the belief that snakes are dangerous causes fear of snakes) and mechanisms that support evaluative consistency between related contents of evaluative reactions (e.g., people tend to align their belief that snakes are useful with their belief that snakes help maintain ecological balance). In the CAN model, the structure of attitude networks conforms to a small-world structure: evaluative reactions that are similar to each other form tight clusters, which are connected by a sparser set of "shortcuts" between them. We argue that the CAN model provides a realistic formalized measurement model of attitudes and therefore fills a crucial gap in the attitude literature. Furthermore, the CAN model provides testable predictions for the structure of attitudes and how they develop, remain stable, and change over time. Attitude strength is conceptualized in terms of the connectivity of attitude networks and we show that this provides a parsimonious account of the differences between strong and weak attitudes. We discuss the CAN model in relation to possible extensions, implications for the assessment of attitudes, and possibilities for further study.
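
The small-world claim above lends itself to a brief illustration. The sketch below (Python, assuming the networkx package; the number of nodes, neighborhood size, and rewiring probability are arbitrary choices, not values from the article) builds a Watts-Strogatz network standing in for a set of evaluative reactions and reports the two small-world signatures the CAN model appeals to, high clustering and short path lengths, with edge density as a crude proxy for the connectivity the model ties to attitude strength.

```python
# Illustrative sketch only, not the authors' implementation: a small-world
# network standing in for an attitude network of evaluative reactions.
import networkx as nx

n_reactions, k_neighbors, rewire_p = 20, 4, 0.1   # arbitrary illustrative values
attitude_net = nx.connected_watts_strogatz_graph(
    n=n_reactions, k=k_neighbors, p=rewire_p, seed=1
)

# Small-world signature: tight local clusters plus a sparse set of "shortcuts".
print("average clustering:", nx.average_clustering(attitude_net))
print("average shortest path:", nx.average_shortest_path_length(attitude_net))

# The CAN model ties attitude strength to network connectivity; edge density
# is used here only as a crude proxy for that connectivity.
print("edge density:", nx.density(attitude_net))
```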

217 citations


Journal ArticleDOI
TL;DR: It is argued that depictions are physical scenes that people stage for others to use in imagining the scenes they are depicting, and this theory accounts for a diverse set of features of everyday depictions.
Abstract: In everyday discourse, people describe and point at things, but they also depict things with their hands, arms, head, face, eyes, voice, and body, with and without props. Examples are iconic gestures, facial gestures, quotations of many kinds, full-scale demonstrations, and make-believe play. Depicting, it is argued, is a basic method of communication. It is on a par with describing and pointing, but it works by different principles. The proposal here, called staging theory, is that depictions are physical scenes that people stage for others to use in imagining the scenes they are depicting. Staging a scene is the same type of act that is used by children in make-believe play and by the cast and crew in stage plays. This theory accounts for a diverse set of features of everyday depictions. Although depictions are integral parts of everyday utterances, they are absent from standard models of language processing. To be complete, these models will have to account for depicting as well as describing and pointing.

202 citations


Journal ArticleDOI
TL;DR: It is shown that there are no current examples of neuroscience motivating new and effective teaching methods, and it is argued that neuroscience is unlikely to improve teaching in the future.
Abstract: The core claim of educational neuroscience is that neuroscience can improve teaching in the classroom. Many strong claims are made about the successes and the promise of this new discipline. By contrast, I show that there are no current examples of neuroscience motivating new and effective teaching methods, and argue that neuroscience is unlikely to improve teaching in the future. The reasons are twofold. First, in practice, it is easier to characterize the cognitive capacities of children on the basis of behavioral measures than on the basis of brain measures. As a consequence, neuroscience rarely offers insights into instruction above and beyond psychology. Second, in principle, the theoretical motivations underpinning educational neuroscience are misguided, and this makes it difficult to design or assess new teaching methods on the basis of neuroscience. Regarding the design of instruction, it is widely assumed that remedial instruction should target the underlying deficits associated with learning disorders, and neuroscience is used to characterize the deficit. However, the most effective forms of instruction may often rely on developing compensatory (nonimpaired) skills. Neuroscience cannot determine whether instruction should target impaired or nonimpaired skills. More importantly, regarding the assessment of instruction, the only relevant issue is whether the child learns, as reflected in behavior. Evidence that the brain changed in response to instruction is irrelevant. At the same time, an important goal for neuroscience is to characterize how the brain changes in response to learning, and this includes learning in the classroom. Neuroscientists cannot help educators, but educators can help neuroscientists.

156 citations


Journal ArticleDOI
TL;DR: The reasoning-based approach seems promising for understanding the current literature, even if it is not fully satisfactory, because a certain number of findings are easier to interpret under the manipulation-based approach.
Abstract: Tool use is a defining feature of the human species. Therefore, a fundamental issue is to understand the cognitive bases of human tool use. Given that people cannot use tools without manipulating them, proponents of the manipulation-based approach have argued that tool use might be supported by the simulation of past sensorimotor experiences, also sometimes called affordances. In the meantime, however, evidence has accumulated demonstrating the critical role of mechanical knowledge in tool use (i.e., the reasoning-based approach). The major goal of the present article is to examine the validity of the assumptions derived from the manipulation-based versus the reasoning-based approach. To do so, we identified 3 key issues on which the 2 approaches differ, namely, (a) the reference frame issue, (b) the intention issue, and (c) the action domain issue. These different issues will be addressed in light of studies in experimental psychology and neuropsychology that have provided valuable contributions to the topic (i.e., tool-use interaction, orientation effect, object-size effect, utilization behavior and anarchic hand, tool use and perception, apraxia of tool use, transport vs. use actions). To anticipate our conclusions, the reasoning-based approach seems to be promising for understanding the current literature, even if it is not fully satisfactory, because a certain number of findings are easier to interpret within the manipulation-based approach. A new avenue for future research might be to develop a framework accommodating both approaches, thereby shedding new light on the cognitive bases of human tool use and affordances.

149 citations


Journal ArticleDOI
TL;DR: A dual process model is used to understand the psychology underlying magical thinking, highlighting features of System 1 that generate magical intuitions and features of the person or situation that prompt System 2 to correct them and suggesting that the model can be improved by decoupling the detection of errors from their correction and recognizing acquiescence as a possible System 2 response.
Abstract: Traditionally, research on superstition and magical thinking has focused on people's cognitive shortcomings, but superstitions are not limited to individuals with mental deficits. Even smart, educated, emotionally stable adults have superstitions that are not rational. Dual process models--such as the corrective model advocated by Kahneman and Frederick (2002, 2005), which suggests that System 1 generates intuitive answers that may or may not be corrected by System 2--are useful for illustrating why superstitious thinking is widespread, why particular beliefs arise, and why they are maintained even though they are not true. However, to understand why superstitious beliefs are maintained even when people know they are not true requires that the model be refined. It must allow for the possibility that people can recognize--in the moment--that their belief does not make sense, but act on it nevertheless. People can detect an error, but choose not to correct it, a process I refer to as acquiescence. The first part of the article will use a dual process model to understand the psychology underlying magical thinking, highlighting features of System 1 that generate magical intuitions and features of the person or situation that prompt System 2 to correct them. The second part of the article will suggest that we can improve the model by decoupling the detection of errors from their correction and recognizing acquiescence as a possible System 2 response. I suggest that refining the theory will prove useful for understanding phenomena outside of the context of magical thinking.

142 citations


Journal ArticleDOI
TL;DR: The argument that death by suicide among humans is an exemplar of psychopathology and is due to a derangement of the self-sacrificial behavioral suite found among eusocial species is made.
Abstract: Building upon the idea that humans may be a eusocial species (i.e., rely on multigenerational and cooperative care of young, utilize division of labor for successful survival), we conjecture that suicide among humans represents a derangement of the self-sacrificial aspect of eusociality. In this article, we outline the characteristics of eusociality, particularly the self-sacrificial behavior seen among other eusocial species (e.g., insects, shrimp, mole rats). We then discuss parallels between eusocial self-sacrificial behavior in nonhumans and suicide in humans, particularly with regard to overarousal states, withdrawal phenomena, and perceptions of burdensomeness. In so doing, we make the argument that death by suicide among humans is an exemplar of psychopathology and is due to a derangement of the self-sacrificial behavioral suite found among eusocial species. Implications and future directions for research are also presented.

121 citations


Journal ArticleDOI
TL;DR: This work shows how different sets of LOT primitives, embedded in a psychologically realistic approximate Bayesian inference framework, systematically predict distinct learning curves in rule-based concept learning experiments, and shows how specific LOT theories can be distinguished empirically.
Abstract: The notion of a compositional language of thought (LOT) has been central in computational accounts of cognition from earliest attempts (Boole, 1854; Fodor, 1975) to the present day (Feldman, 2000; Penn, Holyoak, & Povinelli, 2008; Fodor, 2008; Kemp, 2012; Goodman, Tenenbaum, & Gerstenberg, 2015). Recent modeling work shows how statistical inferences over compositionally structured hypothesis spaces might explain learning and development across a variety of domains. However, the primitive components of such representations are typically assumed a priori by modelers and theoreticians rather than determined empirically. We show how different sets of LOT primitives, embedded in a psychologically realistic approximate Bayesian inference framework, systematically predict distinct learning curves in rule-based concept learning experiments. We use this feature of LOT models to design a set of large-scale concept learning experiments that can determine the most likely primitives for psychological concepts involving Boolean connectives and quantification. Subjects' inferences are most consistent with a rich (nonminimal) set of Boolean operations, including first-order, but not second-order, quantification. Our results more generally show how specific LOT theories can be distinguished empirically.
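
To make concrete how the choice of primitives changes what a LOT model predicts, here is a toy sketch (Python; the two primitive sets, the example formulas, and the exponential length prior are illustrative assumptions, not the grammar or inference procedure used in the article). The same concept, "exactly one of two features," gets a short description when an xor primitive is available and a longer one when only and/or/not are, so a length-based prior assigns it different learnability under the two sets, which is the kind of difference the reported learning-curve experiments exploit.

```python
# Toy illustration, not the authors' model: description length of one concept
# under two hypothetical primitive sets, with a simple exponential length prior.
import math

formulas = {
    "rich primitives (xor available)": "xor(f1, f2)",
    "minimal primitives (and/or/not)": "or(and(f1, not(f2)), and(not(f1), f2))",
}

def n_tokens(formula: str) -> int:
    # Crude description length: count primitive and feature tokens.
    return len(formula.replace("(", " ").replace(")", " ").replace(",", " ").split())

for name, formula in formulas.items():
    length = n_tokens(formula)
    prior = math.exp(-length)          # shorter formulas get higher prior weight
    print(f"{name:35s} length={length:2d}  prior weight={prior:.4f}")
```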

117 citations


Journal ArticleDOI
TL;DR: A novel, computationally explicit, theory of age-related memory change within the framework of the context maintenance and retrieval (CMR2) model of memory search is developed and the magnitude of age differences in recognition memory accuracy is predicted.
Abstract: We develop a novel, computationally explicit, theory of age-related memory change within the framework of the context maintenance and retrieval (CMR2) model of memory search. We introduce a set of benchmark findings from the free recall and recognition tasks that include aspects of memory performance that show both age-related stability and decline. We test aging theories by lesioning the corresponding mechanisms in a model fit to younger adult free recall data. When effects are considered in isolation, many theories provide an adequate account, but when all effects are considered simultaneously, the existing theories fail. We develop a novel theory by fitting the full model (i.e., allowing all parameters to vary) to individual participants and comparing the distributions of parameter values for older and younger adults. This theory implicates 4 components: (a) the ability to sustain attention across an encoding episode, (b) the ability to retrieve contextual representations for use as retrieval cues, (c) the ability to monitor retrievals and reject intrusions, and (d) the level of noise in retrieval competitions. We extend CMR2 to simulate a recognition memory task using the same mechanisms the free recall model uses to reject intrusions. Without fitting any additional parameters, the 4-component theory that accounts for age differences in free recall predicts the magnitude of age differences in recognition memory accuracy. Confirming a prediction of the model, free recall intrusion rates correlate positively with recognition false alarm rates. Thus, we provide a 4-component theory of a complex pattern of age differences across 2 key laboratory tasks.

106 citations


Journal ArticleDOI
TL;DR: How researchers can use the TRI Model to achieve a more sophisticated view of personality's impact on life outcomes, developmental trajectories, genetic origins, person-situation interactions, and stereotyped judgments is discussed.
Abstract: Personality and social psychology have historically been divided between personality researchers who study the impact of traits and social-cognitive researchers who study errors in trait judgments. However, a broader view of personality incorporates not only individual differences in underlying traits but also individual differences in the distinct ways a person's personality is construed by oneself and by others. Such unique insights are likely to appear in the idiosyncratic personality judgments that raters make and are likely to have etiologies and causal force independent of trait perceptions shared across raters. Drawing on the logic of the Johari window (Luft & Ingham, 1955), the Self-Other Knowledge Asymmetry Model (Vazire, 2010), and Socioanalytic Theory (Hogan, 1996; Hogan & Blickle, 2013), we present a new model that separates personality variance into consensus about underlying traits (Trait), unique self-perceptions (Identity), and impressions conveyed to others that are distinct from self-perceptions (Reputation). We provide three demonstrations of how this Trait-Reputation-Identity (TRI) Model can be used to understand (a) consensus and discrepancies across rating sources, (b) personality's links with self-evaluation and self-presentation, and (c) gender differences in traits. We conclude by discussing how researchers can use the TRI Model to achieve a more sophisticated view of personality's impact on life outcomes, developmental trajectories, genetic origins, person-situation interactions, and stereotyped judgments.

99 citations


Journal ArticleDOI
TL;DR: It is argued that Bowers' assertions misrepresent the nature and aims of the work in Educational Neuroscience, and it is suggested that, by contrast, psychological and neural levels of explanation complement rather than compete with each other.
Abstract: In his recent critique of Educational Neuroscience, Bowers argues that neuroscience has no role to play in informing education, which he equates with classroom teaching. Neuroscience, he suggests, adds nothing to what we can learn from psychology. In this commentary, we argue that Bowers' assertions misrepresent the nature and aims of the work in this new field. We suggest that, by contrast, psychological and neural levels of explanation complement rather than compete with each other. Bowers' analysis also fails to include a role for educational expertise – a guiding principle of our new field. On this basis, we conclude that his critique is potentially misleading. We set out the well-documented goals of research in Educational Neuroscience, and show how, in collaboration with educators, significant progress has already been achieved, with the prospect of even greater progress in the future.

Journal ArticleDOI
TL;DR: It is found that all simple rules tested fail to describe human behavior, that the optimal rule accounts well for the data, and that several complex suboptimal rules are indistinguishable from the optimal one.
Abstract: Two prominent ideas in the study of decision making have been that organisms behave near-optimally, and that they use simple heuristic rules. These principles might be operating in different types of tasks, but this possibility cannot be fully investigated without a direct, rigorous comparison within a single task. Such a comparison was lacking in most previous studies, because (a) the optimal decision rule was simple, (b) no simple suboptimal rules were considered, (c) it was unclear what was optimal, or (d) a simple rule could closely approximate the optimal rule. Here, we used a perceptual decision-making task in which the optimal decision rule is well-defined and complex, and makes qualitatively distinct predictions from many simple suboptimal rules. We find that all simple rules tested fail to describe human behavior, that the optimal rule accounts well for the data, and that several complex suboptimal rules are indistinguishable from the optimal one. Moreover, we found evidence that the optimal model is close to the true model: First, the better the trial-to-trial predictions of a suboptimal model agree with those of the optimal model, the better that suboptimal model fits; second, our estimate of the Kullback-Leibler divergence between the optimal model and the true model is not significantly different from zero. When observers receive no feedback, the optimal model still describes behavior best, suggesting that sensory uncertainty is implicitly represented and taken into account. Beyond the task and models studied here, our results have implications for best practices of model comparison.
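
One piece of the model-comparison logic above, measuring how far one model's trial-to-trial choice predictions sit from another's, can be illustrated with a short sketch (Python with NumPy; the prediction vectors are simulated placeholders, not the models or data from the study). For 2-choice predictions, the per-trial Kullback-Leibler divergence between two models' choice probabilities is the Bernoulli KL, averaged over trials.

```python
# Hedged sketch of the divergence measure, not the authors' estimator or data:
# average per-trial KL divergence between two models' 2-choice predictions.
import numpy as np

rng = np.random.default_rng(0)
p_model_a = rng.uniform(0.05, 0.95, size=500)                          # placeholder predictions
p_model_b = np.clip(p_model_a + rng.normal(0, 0.05, size=500), 0.01, 0.99)

def mean_bernoulli_kl(p, q):
    """Mean KL(p || q) across trials for binary choice probabilities."""
    return np.mean(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q)))

print("mean per-trial KL divergence:", mean_bernoulli_kl(p_model_a, p_model_b))
```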

Journal ArticleDOI
TL;DR: Findings indicate that educational neuroscience, at a minimum, has provided novel insights into the possibilities of individualized education for students, rather than the current practice of discovering, only after a student fails, that a curriculum did not support that student.
Abstract: Bowers (2016) argues that there are practical and principled problems with how educational neuroscience may contribute to education, including a lack of direct influence on teaching in the classroom. Some of the arguments made are convincing, including the critique of unsubstantiated claims about the impact of educational neuroscience and the reminder that the primary outcomes of education are behavioral, such as skill in reading or mathematics. Bowers' analysis falls short in 3 major respects. First, educational neuroscience is a basic science that has made unique contributions to basic education research; it is not part of applied classroom instruction. Second, educational neuroscience contributes to ideas about education practices and policies beyond classroom curriculum that are important for helping vulnerable students. Third, educational neuroscience studies using neuroimaging have not only revealed for the first time the brain basis of neurodevelopmental differences that have profound influences on educational outcomes, but have also identified individual brain differences that predict which students learn more or learn less from various curricula. In several cases, the brain measures significantly improved or vastly outperformed conventional behavioral measures in predicting what works for individual children. These findings indicate that educational neuroscience, at a minimum, has provided novel insights into the possibilities of individualized education for students, rather than the current practice of discovering, only after a student fails, that a curriculum did not support that student. In the best approach to improving education, educational neuroscience ought to contribute to basic research addressing the needs of students and teachers.

Journal ArticleDOI
TL;DR: The 2-dimensional model provides a process account of working memory precision and its relationship with the diffusion model, and a new way to investigate the properties of working memory, via the distributions of decision times.
Abstract: I present a diffusion model for decision making in continuous report tasks, in which a continuous, circularly distributed, stimulus attribute in working memory is matched to a representation of the attribute in the stimulus display. Memory retrieval is modeled as a 2-dimensional diffusion process with vector-valued drift on a disk, whose bounding circle represents the decision criterion. The direction and magnitude of the drift vector describe the identity of the stimulus and the quality of its representation in memory, respectively. The point at which the diffusion exits the disk determines the reported value of the attribute and the time to exit the disk determines the decision time. Expressions for the joint distribution of decision times and report outcomes are obtained by means of the Girsanov change-of-measure theorem, which allows the properties of the nonzero-drift diffusion process to be characterized as a function of a Euclidean-distance Bessel process. Predicted report precision is equal to the product of the decision criterion and the drift magnitude and follows a von Mises distribution, in agreement with the treatment of precision in the working memory literature. Trial-to-trial variability in criterion and drift rate leads, respectively, to direct and inverse relationships between report accuracy and decision times, in agreement with, and generalizing, the standard diffusion model of 2-choice decisions. The 2-dimensional model provides a process account of working memory precision and its relationship with the diffusion model, and a new way to investigate the properties of working memory, via the distributions of decision times.
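
The geometry of the model can be conveyed with a simple simulation rather than the closed-form Girsanov/Bessel analysis the article derives. The sketch below (Python with NumPy; the step size, drift, and noise values are arbitrary illustrations) runs a 2-dimensional random walk with a fixed drift vector until it crosses a circular boundary: the exit angle plays the role of the reported attribute value and the exit time the decision time, with reports clustering around the drift direction.

```python
# Illustrative simulation only (the article derives closed-form results):
# a 2-D drift-diffusion process run until it exits a disk-shaped boundary.
import numpy as np

rng = np.random.default_rng(1)
criterion = 1.0                      # radius of the circular decision boundary
drift = np.array([0.03, 0.0])        # drift vector; direction 0 = "true" attribute
sigma, dt, n_trials = 0.05, 1.0, 2000

reports, decision_times = [], []
for _ in range(n_trials):
    x, t = np.zeros(2), 0.0
    while np.linalg.norm(x) < criterion:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
        t += dt
    reports.append(np.arctan2(x[1], x[0]))   # exit angle = reported value
    decision_times.append(t)                 # exit time  = decision time

reports = np.array(reports)
print("mean reported angle (true value is 0):", reports.mean())
print("report dispersion:", reports.std())
print("mean decision time:", np.mean(decision_times))
```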

Journal ArticleDOI
TL;DR: It is demonstrated that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals.
Abstract: Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types-including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations.

Journal ArticleDOI
TL;DR: The approach identifies on a trial-by-trial basis where brief sinusoidal peaks (called bumps) are added to the ongoing electroencephalographic signal and proposes that these bumps mark the onset of critical cognitive stages in processing.
Abstract: We introduce a method for measuring the number and durations of processing stages from the electroencephalographic signal and apply it to the study of associative recognition. Using an extension of past research that combines multivariate pattern analysis with hidden semi-Markov models, the approach identifies, on a trial-by-trial basis, where brief sinusoidal peaks (called bumps) are added to the ongoing electroencephalographic signal. We propose that these bumps mark the onset of critical cognitive stages in processing. The results of the analysis can be used to guide the development of detailed process models. Applied to the associative recognition task, the hidden semi-Markov model multivariate pattern analysis method indicates that the effects of associative strength and probe type are localized to a memory retrieval stage and a decision stage. This is in line with a previously developed adaptive control of thought-rational (ACT-R) process model of the task. As a test of the generalization of our method, we also apply it to a data set on the Sternberg working memory task collected by Jacobs, Hwang, Curran, and Kahana (2006). The analysis generalizes robustly, and localizes the typical set size effect in a late comparison/decision stage. In addition to providing information about the number and durations of stages in associative recognition, our analysis sheds light on the event-related potential components implicated in the study of recognition memory.

Journal ArticleDOI
TL;DR: A unified theory of decision making and timing is supported in terms of a common, underlying spike-counting process, compactly represented as a diffusion process that predicts time-scale invariant decision time distributions in perceptual decision making, and time-scale invariant response time distributions in interval timing.
Abstract: Weber's law is the canonical scale-invariance law in psychology: when the intensities of 2 stimuli are scaled by any value k, the just-noticeable-difference between them also scales by k. A diffusion model that approximates a spike-counting process accounts for Weber's law (Link, 1992), but there exist surprising corollaries of this account that have not yet been described or tested. We show that (a) this spike-counting diffusion model predicts time-scale invariant decision time distributions in perceptual decision making, and time-scale invariant response time (RT) distributions in interval timing; (b) for 2-choice perceptual decisions, the model predicts equal accuracy but faster responding for stimulus pairs with equally scaled-up intensities; (c) the coefficient of variation (CV) of decision times should remain constant across average intensity scales, but should otherwise decrease as a specific function of stimulus discriminability and speed-accuracy trade-off; and (d) for timing tasks, RT CVs should be constant for all durations, and RT skewness should always equal 3 times the CV. We tested these predictions using visual, auditory and vibrotactile decision tasks and visual interval timing tasks in humans. The data conformed closely to the predictions in all modalities. These results support a unified theory of decision making and timing in terms of a common, underlying spike-counting process, compactly represented as a diffusion process.
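
Corollary (d) above, that response-time skewness should equal 3 times the coefficient of variation, is a known property of the Wald (inverse Gaussian) first-passage time distribution of a single-boundary diffusion and can be checked numerically. The sketch below (Python with NumPy/SciPy; the drift, boundary, and noise values are arbitrary illustrations, not parameters fitted to the experiments) samples first-passage times and compares the sample skewness with 3 times the sample CV.

```python
# Numerical check of the skewness = 3 * CV corollary for Wald-distributed
# first-passage times; parameter values are arbitrary, not fitted to the data.
import numpy as np
from scipy import stats

drift, boundary, sigma = 0.2, 10.0, 1.0
mu = boundary / drift                 # mean first-passage time of the diffusion
lam = boundary**2 / sigma**2          # Wald shape parameter

# SciPy's invgauss(m, scale) with m = mu/lam and scale = lam is Wald(mu, lam).
rt = stats.invgauss.rvs(mu / lam, scale=lam, size=200_000, random_state=1)

cv = rt.std() / rt.mean()
print(f"CV       = {cv:.3f}")
print(f"skewness = {stats.skew(rt):.3f}")
print(f"3 * CV   = {3 * cv:.3f}")     # should match the skewness closely
```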

Journal ArticleDOI
TL;DR: In this article, the authors use a formal theory of teaching, validated through experiments in other domains, as the basis for a detailed analysis of whether IDS is well designed for teaching phonetic categories.
Abstract: Infant-directed speech (IDS) has distinctive properties that differ from adult-directed speech (ADS). Why it has these properties, and whether they are intended to facilitate language learning, is a matter of contention. We argue that much of this disagreement stems from the lack of a formal, guiding theory of how phonetic categories should best be taught to infantlike learners. In the absence of such a theory, researchers have relied on intuitions about learning to guide the argument. We use a formal theory of teaching, validated through experiments in other domains, as the basis for a detailed analysis of whether IDS is well designed for teaching phonetic categories. Using the theory, we generate ideal data for teaching phonetic categories in English. We qualitatively compare the simulated teaching data with human IDS, finding that the teaching data exhibit many features of IDS, including some that have been taken as evidence that IDS is not for teaching. The simulated data reveal potential pitfalls for experimentalists exploring the role of IDS in language learning. Focusing on different formants and phoneme sets leads to different conclusions, and the benefit of the teaching data to learners is not apparent until a sufficient number of examples have been provided. Finally, we investigate transfer of IDS to learning ADS. The teaching data improve classification of ADS data but only for the learner they were generated to teach, not universally across all classes of learners. This research offers a theoretically grounded framework that empowers experimentalists to systematically evaluate whether IDS is for teaching.

Journal ArticleDOI
TL;DR: This article proposes an alternative approach to the estimation of choice RT models that elegantly bypasses the specification of the nondecision time distribution by means of an unconventional convolution of data and decision model distributions (hence called the D*M approach).
Abstract: Choice reaction time (RT) experiments are an invaluable tool in psychology and neuroscience. A common assumption is that the total choice response time is the sum of a decision and a nondecision part (time spent on perceptual and motor processes). While the decision part is typically modeled very carefully (commonly with diffusion models), a simple and ad hoc distribution (mostly uniform) is assumed for the nondecision component. Nevertheless, it has been shown that the misspecification of the nondecision time can severely distort the decision model parameter estimates. In this article, we propose an alternative approach to the estimation of choice RT models that elegantly bypasses the specification of the nondecision time distribution by means of an unconventional convolution of data and decision model distributions (hence called the D*M approach). Once the decision model parameters have been estimated, it is possible to compute a nonparametric estimate of the nondecision time distribution. The technique is tested on simulated data, and is shown to systematically remove traditional estimation bias related to misspecified nondecision time, even for a relatively small number of observations. The shape of the actual underlying nondecision time distribution can also be recovered. Next, the D*M approach is applied to a selection of existing diffusion model application articles. For all of these studies, substantial quantitative differences with the original analyses are found. For one study, these differences radically alter its final conclusions, underlining the importance of our approach. Additionally, we find that strongly right skewed nondecision time distributions are not at all uncommon.
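
A short sketch can make the convolution assumption concrete, without attempting the D*M estimator itself (Python with NumPy; the gamma-shaped components are placeholders chosen for illustration, including a deliberately right-skewed, non-uniform nondecision distribution of the kind the article reports finding). The observed RT is simulated as the sum of a decision time and a nondecision time, which is exactly the convolution structure the D*M approach exploits.

```python
# Sketch of the RT = decision + nondecision structure (not the D*M estimator):
# the observed RT distribution is the convolution of the two components.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

decision = rng.gamma(shape=3.0, scale=0.12, size=n)            # placeholder decision times (s)
nondecision = 0.15 + rng.gamma(shape=2.0, scale=0.05, size=n)  # right-skewed, non-uniform

rt = decision + nondecision   # what the experimenter actually observes

print("mean decision, nondecision:", decision.mean(), nondecision.mean())
print("mean observed RT          :", rt.mean())
print("variance check            :", rt.var(), "~", decision.var() + nondecision.var())
```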

Journal ArticleDOI
TL;DR: This work develops a computational model framework realizing componential representations of multisymbol numbers, evaluates its validity by simulating standard empirical effects of number magnitude comparison, and provides evidence that the model framework can be integrated into the more general context of multiattribute decision making.
Abstract: Different models have been proposed for the processing of multisymbol numbers like two- and three-digit numbers but also for negative numbers and decimals. However, these multisymbol numbers are assembled from the same set of Arabic digits and comply with the place-value structure of the Arabic number system. Considering these shared properties, we suggest that the processing of multisymbol numbers can be described in one general model framework. Accordingly, we first developed a computational model framework realizing componential representations of multisymbol numbers and evaluated its validity by simulating standard empirical effects of number magnitude comparison. We observed that the model framework successfully accounted for most of these effects. Moreover, our simulations provided first evidence supporting the notion of a fully componential processing of multisymbol numbers for the specific case of comparing two negative numbers. Thus, our general model framework indicates that the processing of different kinds of multisymbol integer and decimal numbers shares common characteristics (e.g., componential representation). The relevance and applicability of our model go beyond the case of basic number processing. In particular, we also successfully simulated effects from applied marketing and consumer research by accounting for the left-digit effect found in the processing of prices. Finally, we provide evidence that our model framework can be integrated into the more general context of multiattribute decision making. In sum, this indicates that our model framework captures a general scheme of separate processing of different attributes weighted by their saliency for the task at hand.
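
To illustrate what a componential, place-value representation amounts to (a sketch only; the decomposition and the salience weights below are hypothetical choices, not the parameters of the authors' framework), the following code compares two 2-digit numbers digit by digit, weighting each digit by its place value and a salience factor. In pairs where the decade and unit comparisons agree, the components pull in the same direction; where they conflict, the unit component partially opposes the decade component, which is the kind of unit-decade compatibility pattern such componential models are used to simulate.

```python
# Hedged sketch of a componential place-value comparison; the salience weights
# are hypothetical and not taken from the article's model framework.
def components(n: int, width: int = 2):
    """Decompose a nonnegative integer into (digit, place value) components."""
    digits = [int(d) for d in str(n).zfill(width)]
    return [(d, 10 ** (width - 1 - i)) for i, d in enumerate(digits)]

def componential_compare(a: int, b: int, saliency=(1.0, 0.5)) -> float:
    """Positive score -> a judged larger; each digit contributes separately."""
    score = 0.0
    for (da, place), (db, _), s in zip(components(a), components(b), saliency):
        score += s * place * (da - db)
    return score

# Compatible pair: decade and unit comparisons agree (57 is larger on both digits).
print(componential_compare(42, 57))   # decade and unit contributions share the same sign
# Incompatible pair: decade favors 62 but unit favors 47, so the components conflict.
print(componential_compare(47, 62))   # unit contribution partially opposes the decade contribution
```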

Journal ArticleDOI
TL;DR: The model of vigilant care is proposed: a flexible framework within which parents adjust their level of involvement to the warning signals they detect. The model offers a unified solution to the ongoing controversy and generates theoretical hypotheses as well as a practice-oriented research program.
Abstract: Parental monitoring was once considered to be the approved way of preventing risk behaviors by children and adolescents. In recent years, however, the concept has been the target of cogent criticism questioning the interpretation of findings that support the traditional view of monitoring. After reviewing the various criticisms and the resulting fragmentation of theory and practice, we propose the model of vigilant care as an integrative solution. Vigilant care is a flexible framework within which parents adjust their level of involvement to the warning signals they detect. By justifying moves to higher levels of vigilance with safety considerations and expressing their duty to do so in a decided but noncontrolling manner, parents legitimize their increased involvement both to the child and to themselves. The model offers a unified solution to the ongoing controversy and generates theoretical hypotheses as well as a practice-oriented research program.

Journal ArticleDOI
TL;DR: It is shown that phonetic segments are articulatory and dynamic and that coarticulation does not eliminate them, which suggests that speech is well-adapted to public communication in facilitating, not creating a barrier to, exchange of language forms.
Abstract: We revisit an article, "Perception of the Speech Code" (PSC), published in this journal 50 years ago (Liberman, Cooper, Shankweiler, & Studdert-Kennedy, 1967), and address one of its legacies concerning the status of phonetic segments, which persists in theories of speech today. In the perspective of PSC, segments both exist (in language as known) and do not exist (in articulation or the acoustic speech signal). Findings interpreted as showing that speech is not a sound alphabet, but, rather, phonemes are encoded in the signal, coupled with findings that listeners perceive articulation, led to the motor theory of speech perception, a highly controversial legacy of PSC. However, a second legacy, the paradoxical perspective on segments, has been mostly unquestioned. We remove the paradox by offering an alternative supported by converging evidence that segments exist in language both as known and as used. We support the existence of segments both in language knowledge and in production by showing that phonetic segments are articulatory and dynamic and that coarticulation does not eliminate them. We show that segments leave an acoustic signature that listeners can track. This suggests that speech is well-adapted to public communication in facilitating, not creating a barrier to, exchange of language forms.

Journal ArticleDOI
TL;DR: This work uses visually guided braking as a representative behavior and constructs a novel dynamical model that demonstrates the possibility of understanding visually guided action as respecting the limits of the actor's capabilities, while still being guided by informational variables associated with desired states of affairs.
Abstract: Behavioral dynamics is a framework for understanding adaptive behavior as arising from the self-organizing interaction between animal and environment. The methods of nonlinear dynamics provide a language for describing behavior that is both stable and flexible. Behavioral dynamics has been criticized for ignoring the animal's sensitivity to its own capabilities, leading to the development of an alternative framework: affordance-based control. Although it is theoretically sound and empirically motivated, affordance-based control has resisted characterization in terms of nonlinear dynamics. Here, we provide a dynamical description of affordance-based control, extending behavioral dynamics to meet its criticisms. We propose a general modeling strategy consistent with both theories. We use visually guided braking as a representative behavior and construct a novel dynamical model. This model demonstrates the possibility of understanding visually guided action as respecting the limits of the actor's capabilities, while still being guided by informational variables associated with desired states of affairs. In addition to such "hard" constraints on behavior, our framework allows for the influence of "soft" constraints such as preference and comfort, opening a new area of inquiry in perception-action dynamics.
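
The idea of action guided by informational variables but bounded by the actor's capabilities can be sketched in a few lines (Python; the function names and numbers are hypothetical, and this is not the dynamical model developed in the article). For braking, the deceleration currently required to stop at an obstacle is v^2 / (2z) for speed v and distance z; comparing that quantity with the actor's maximum available deceleration gives a "hard" capability limit of the sort affordance-based control emphasizes.

```python
# Minimal sketch of capability-limited braking; not the article's model, and
# the names and values below are hypothetical illustrations.
def required_deceleration(speed: float, distance: float) -> float:
    """Constant deceleration needed to stop exactly at the obstacle (m/s^2)."""
    return speed ** 2 / (2.0 * distance)

def braking_command(speed: float, distance: float, max_decel: float) -> float:
    """Track the required deceleration, but never demand more than the actor can produce."""
    needed = required_deceleration(speed, distance)
    return min(needed, max_decel)    # the capability limit acts as a hard constraint

print(braking_command(speed=20.0, distance=50.0, max_decel=8.0))  # 4.0: stopping is afforded
print(braking_command(speed=20.0, distance=20.0, max_decel=8.0))  # 10.0 needed, capped at 8.0
```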

Journal ArticleDOI
TL;DR: The small-number advantage and the log effect are enhanced in a dual-task setting and are further enhanced when the delay between the 2 tasks is shortened, suggesting that these effects originate from a central stage of quantification and decision making.
Abstract: The number-to-position task, in which children and adults are asked to place numbers on a spatial number line, has become a classic measure of number comprehension. We present a detailed experimental and theoretical dissection of the processing stages that underlie this task. We used a continuous finger-tracking technique, which provides detailed information about the time course of processing stages. When adults map the position of 2-digit numbers onto a line, their final mapping is essentially linear, but intermediate finger locations show a transient logarithmic mapping. We identify the origins of this log effect: Small numbers are processed faster than large numbers, so the finger deviates toward the target position earlier for small numbers than for large numbers. When the trajectories are aligned on the finger deviation onset, the log effect disappears. The small-number advantage and the log effect are enhanced in a dual-task setting and are further enhanced when the delay between the 2 tasks is shortened, suggesting that these effects originate from a central stage of quantification and decision making. We also report cases of logarithmic mapping--by children and by a brain-injured individual--which cannot be explained by faster responding to small numbers. We show that these findings are captured by an ideal-observer model of the number-to-position mapping task, comprising 3 distinct stages: a quantification stage, whose duration is influenced by both exact and approximate representations of numerical quantity; a Bayesian accumulation-of-evidence stage, leading to a decision about the target location; and a pointing stage.
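
The contrast between the final linear mapping and the transient logarithmic one can be seen with two toy mapping functions (Python with NumPy; the line length and the specific log form are illustrative conventions, not the article's three-stage ideal-observer model). Under the compressive log mapping, small numbers are placed much further to the right than under the linear mapping, which is the signature picked up in the intermediate finger positions.

```python
# Toy linear vs logarithmic number-to-position mappings; illustrative only,
# not the article's quantification/accumulation/pointing model.
import numpy as np

numbers = np.array([5, 20, 50, 90])
line_length, n_max = 100.0, 100.0          # a 0-100 number line, arbitrary units

linear_pos = line_length * numbers / n_max
log_pos = line_length * np.log(numbers) / np.log(n_max)

for n, lin, lg in zip(numbers, linear_pos, log_pos):
    print(f"number {n:2d}: linear position {lin:5.1f}, logarithmic position {lg:5.1f}")
```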

Journal ArticleDOI
TL;DR: It is explained how latent variables can be understood simply as parsimonious summaries of data, and how statistical inference can be based on choosing those summaries that minimize information required to represent the data using the model.
Abstract: In their recent article, "How Functionalist and Process Approaches to Behavior Can Explain Trait Covariation," Wood, Gardner, and Harms (2015) underscore the need for more process-based understandings of individual differences. At the same time, the article illustrates a common error in the use and interpretation of latent variable models: namely, the misuse of models to arbitrate issues of causation and the nature of latent variables. Here, we explain how latent variables can be understood simply as parsimonious summaries of data, and how statistical inference can be based on choosing those summaries that minimize information required to represent the data using the model. Although Wood, Gardner, and Harms acknowledge this perspective, they underestimate its significance, including its importance to modeling and the conceptualization of psychological measurement. We believe this perspective has important implications for understanding individual differences in a number of domains, including current debates surrounding the role of formative versus reflective latent variables.

Journal ArticleDOI
TL;DR: The main responses raised are considered and it is found that there are still no examples of EN providing new insights to teaching in the classroom, and there is no evidence that EN is useful for the diagnosis of learning difficulties.
Abstract: In Bowers (2016), I argued that there are (a) practical problems with educational neuroscience (EN) that explain why there are no examples of EN improving teaching and (b) principled problems with the logic motivating EN that explain why it is likely that there never will be. In the following article, I consider the main responses raised by both Gabrieli (2016) and Howard-Jones et al. (2016) and find them all unconvincing. Following this exchange, there are still no examples of EN providing new insights to teaching in the classroom, there are still no examples of EN providing new insights to remedial instruction for individuals, and, as I detail in this article, there is no evidence that EN is useful for the diagnosis of learning difficulties. The authors have also failed to address the reasons why EN is unlikely to benefit educational outcomes in the future. Psychology, by contrast, can (and has) made important discoveries that can (and should) be used to improve teaching and diagnostic tests for learning difficulties. This is not a debate about whether science is relevant to education; rather, it is about what sort of science is relevant.

Journal ArticleDOI
TL;DR: It is argued that explanatory and conceptual parsimony is more appropriate than statistical parsimony for evaluating the proposed models, and ways in which functionalist and descriptivist approaches can complement one another are discussed.
Abstract: In this response to the commentary offered by Jonas and Markon (2015) on our earlier work, we address points of agreement and disagreement on the nature and utility of functionalist and descriptivist accounts of personality. Specifically, we argue that explanatory and conceptual parsimony is more appropriate than statistical parsimony for evaluating the proposed models, discuss ways in which functionalist and descriptivist approaches can complement one another, and provide some cautions about interpreting latent traits.

Journal ArticleDOI
TL;DR: After reviewing the existing computational models of cognitive control in the Stroop task, a novel, integrated utility-based model of the task is proposed, which covers the basic congruency effects, performance dynamics and adaptation, as well as the effects of manipulations applied to stimulation and responding reported in the extant Stroop literature.
Abstract: Cognitive control allows humans to direct and coordinate their thoughts and actions in a flexible way, in order to reach internal goals regardless of interference and distraction. The hallmark test used to examine cognitive control is the Stroop task, which elicits both the weakly learned but goal-relevant and the strongly learned but goal-irrelevant response tendencies, and requires people to follow the former while ignoring the latter. After reviewing the existing computational models of cognitive control in the Stroop task, a novel, integrated utility-based model of the task is proposed. The model uses 3 crucial control mechanisms: response utility reinforcement learning, utility-based conflict evaluation using the Festinger formula for assessing the conflict level, and top-down adaptation of response utility in service of conflict resolution. Their complex, dynamic interaction led to replication of 18 experimental effects, the largest set of effects explained to date by 1 Stroop model. The simulations cover the basic congruency effects (including the response latency distributions), performance dynamics and adaptation (including EEG indices of conflict), as well as the effects resulting from manipulations applied to stimulation and responding that are reported in the extant Stroop literature.

Journal ArticleDOI
TL;DR: It is argued here that this aspect of the 2-systems theory is inadequate and close attention to the structure of implicit mindreading tasks--for which the theory was specifically designed--indicates that flexible goal attribution is required to succeed.
Abstract: The 2-systems theory developed by Apperly and Butterfill (2009; Butterfill & Apperly, 2013) is an influential approach to explaining the success of infants and young children on implicit false-belief tasks. There is extensive empirical and theoretical work examining many aspects of this theory, but little attention has been paid to the way in which it characterizes goal attribution. We argue here that this aspect of the theory is inadequate. Butterfill and Apperly's characterization of goal attribution is designed to show how goals could be ascribed by infants without representing them as related to other psychological states, and the minimal mindreading system is supposed to operate without employing flexible semantic-executive cognitive processes. But research on infant goal attribution reveals that infants exhibit a high degree of situational awareness that is strongly suggestive of flexible semantic-executive cognitive processing, and infants appear moreover to be sensitive to interrelations between goals, preferences, and beliefs. Further, close attention to the structure of implicit mindreading tasks--for which the theory was specifically designed--indicates that flexible goal attribution is required to succeed. We conclude by suggesting 2 approaches to resolving these problems.

Journal ArticleDOI
TL;DR: This response to the commentary by Michael and Christensen explains how minimal mindreading is compatible with the development of increasingly sophisticated mindreading behaviors that involve both executive functions and general knowledge and sketches 1 approach to a minimal account of goal ascription.
Abstract: In this response to the commentary by Michael and Christensen, we first explain how minimal mindreading is compatible with the development of increasingly sophisticated mindreading behaviors that involve both executive functions and general knowledge and then sketch 1 approach to a minimal account of goal ascription.