
Showing papers in "Psychological Review in 2015"


Journal ArticleDOI
TL;DR: The ideal adapter framework is formalized and can be understood as inference under uncertainty about the appropriate generative model for the current talker, thereby facilitating robust speech perception despite the lack of invariance.
Abstract: Successful speech perception requires that listeners map the acoustic signal to linguistic categories. These mappings are not only probabilistic, but change depending on the situation. For example, one talker's /p/ might be physically indistinguishable from another talker's /b/ (cf. lack of invariance). We characterize the computational problem posed by such a subjectively nonstationary world and propose that the speech perception system overcomes this challenge by (a) recognizing previously encountered situations, (b) generalizing to other situations based on previous similar experience, and (c) adapting to novel situations. We formalize this proposal in the ideal adapter framework: (a) to (c) can be understood as inference under uncertainty about the appropriate generative model for the current talker, thereby facilitating robust speech perception despite the lack of invariance. We focus on 2 critical aspects of the ideal adapter. First, in situations that clearly deviate from previous experience, listeners need to adapt. We develop a distributional (belief-updating) learning model of incremental adaptation. The model provides a good fit against known and novel phonetic adaptation data, including perceptual recalibration and selective adaptation. Second, robust speech recognition requires that listeners learn to represent the structured component of cross-situation variability in the speech signal. We discuss how these 2 aspects of the ideal adapter provide a unifying explanation for adaptation, talker-specificity, and generalization across talkers and groups of talkers (e.g., accents and dialects). The ideal adapter provides a guiding framework for future investigations into speech perception and adaptation, and more broadly language comprehension.
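The distributional (belief-updating) learning model described above can be illustrated with a minimal conjugate Gaussian update, in which a listener's belief about a talker's category mean is revised after each token. This is a sketch of the general idea only: the `update_belief` helper, the VOT values, and all variances are hypothetical stand-ins, not the paper's actual model or parameters.

```python
# Hypothetical sketch of incremental distributional (belief-updating) learning:
# the listener's belief about a talker's /b/ category mean VOT (voice onset
# time) is a Gaussian, updated after each observed token via a conjugate
# normal-normal update with known observation variance. Numbers are illustrative.

def update_belief(prior_mean, prior_var, obs, obs_var):
    """One conjugate Gaussian update of the category-mean belief."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

# Prior from previous talkers: /b/ mean VOT believed to be N(0 ms, 25)
mean, var = 0.0, 25.0
# An atypical talker whose /b/ tokens cluster near 10 ms
for token_vot in [9.0, 11.0, 10.0, 12.0]:
    mean, var = update_belief(mean, var, token_vot, obs_var=16.0)

# The belief mean shifts toward the talker's values and uncertainty shrinks
print(round(mean, 2), round(var, 2))
```

Each token pulls the category mean toward the new talker's productions while the posterior variance shrinks, which is the sense in which adaptation is "inference under uncertainty about the appropriate generative model."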

429 citations


Journal ArticleDOI
TL;DR: Transactive goal dynamics theory states that relationship partners' goals, pursuit, and outcomes affect each other in a dense network of goal interdependence, ultimately becoming so tightly linked that the 2 partners are most accurately conceptualized as components within a single self-regulating system.
Abstract: Transactive goal dynamics (TGD) theory conceptualizes 2 or more interdependent people as 1 single self-regulating system. Six tenets describe the nature of goal interdependence, predict its emergence, predict when it will lead to positive goal outcomes during and after the relationship, and predict the consequences for the relationship. Both partners in a TGD system possess and pursue self-oriented, partner-oriented, and system-oriented goals, and all of these goals and pursuits are interdependent. TGD theory states that relationship partners' goals, pursuit, and outcomes affect each other in a dense network of goal interdependence, ultimately becoming so tightly linked that the 2 partners are most accurately conceptualized as components within a single self-regulating system.

291 citations


Journal ArticleDOI
TL;DR: It is argued that event memory provides a clearer contrast to semantic memory, which allows for a more comprehensive dimensional account of the structure of explicit memory; and better accounts for laboratory and real-world behavioral and neural results, including those from neuropsychology and neuroimaging, than does episodic memory.
Abstract: An event memory is a mental construction of a scene recalled as a single occurrence. It therefore requires the hippocampus and ventral visual stream needed for all scene construction. The construction need not come with a sense of reliving or be made by a participant in the event, and it can be a summary of occurrences from more than one encoding. The mental construction, or physical rendering, of any scene must be done from a specific location and time; this introduces a ‘self’ located in space and time, which is a necessary, but need not be a sufficient, condition for a sense of reliving. We base our theory on scene construction rather than reliving because this allows the integration of many literatures and because there is more accumulated knowledge about scene construction’s phenomenology, behavior, and neural basis. Event memory differs from episodic memory in that it does not conflate the independent dimensions of whether or not a memory is relived, is about the self, is recalled voluntarily, or is based on a single encoding with whether it is recalled as a single occurrence of a scene. Thus, we argue that event memory provides a clearer contrast to semantic memory, which also can be about the self, be recalled voluntarily, and be from a unique encoding; allows for a more comprehensive dimensional account of the structure of explicit memory; and better accounts for laboratory and real world behavioral and neural results, including those from neuropsychology and neuroimaging, than does episodic memory.

241 citations


Journal ArticleDOI
TL;DR: A computational model of the rodent medial prefrontal cortex is developed that accounts for the behavioral sequelae of ACC damage, unifies many of the cognitive functions attributed to it, and provides a solution to an outstanding question in cognitive control research-how the control system determines and motivates what tasks to perform.
Abstract: The anterior cingulate cortex (ACC) has been the focus of intense research interest in recent years. Although separate theories relate ACC function variously to conflict monitoring, reward processing, action selection, decision making, and more, damage to the ACC mostly spares performance on tasks that exercise these functions, indicating that they are not in fact unique to the ACC. Further, most theories do not address the most salient consequence of ACC damage: impoverished action generation in the presence of normal motor ability. In this study we develop a computational model of the rodent medial prefrontal cortex that accounts for the behavioral sequelae of ACC damage, unifies many of the cognitive functions attributed to it, and provides a solution to an outstanding question in cognitive control research-how the control system determines and motivates what tasks to perform. The theory derives from recent developments in the formal study of hierarchical control and learning that highlight computational efficiencies afforded when collections of actions are represented based on their conjoint goals. According to this position, the ACC utilizes reward information to select tasks that are then accomplished through top-down control over action selection by the striatum. Computational simulations capture animal lesion data that implicate the medial prefrontal cortex in regulating physical and cognitive effort. Overall, this theory provides a unifying theoretical framework for understanding the ACC in terms of the pivotal role it plays in the hierarchical organization of effortful behavior.

174 citations


Journal ArticleDOI
TL;DR: An appraisal theory of vicarious emotional experiences, including empathy, based on appraisal theories of emotion is proposed and it is discussed how this framework can predict empathic emotion matching and also the experience of emotions for others that do not match what they feel.
Abstract: Empathy, feeling what others feel, is regarded as a special phenomenon that is separate from other emotional experiences. Emotion theories say little about feeling emotions for others and empathy theories say little about how feeling emotions for others relates to normal firsthand emotional experience. Current empathy theories focus on how we feel emotions for others who feel the same thing, but not how we feel emotions for others that they do not feel, such as feeling angry for someone who is sad or feeling embarrassed for someone who is self-assured. We propose an appraisal theory of vicarious emotional experiences, including empathy, based on appraisal theories of emotion. According to this theory, emotions for others are based on how we evaluate their situations, just as firsthand emotions are based on how we evaluate our own situations. We discuss how this framework can predict empathic emotion matching and also the experience of emotions for others that do not match what they feel. The theory treats empathy as a normal part of emotional experience.

152 citations


Journal ArticleDOI
TL;DR: A functionalist understanding of trait covariation as arising through functionalist or process variables has implications for many basic issues in personality psychology, such as how personality traits should be measured, mechanisms for personality stability and change, and the nature of personality traits more generally.
Abstract: Factors identified in investigations of trait structure (e.g., the Big Five) are sometimes understood as explanations or sources of the covariation of distinct behavioral traits, as when extraversion is suggested to underlie the covariation of assertiveness and sociability. Here, we detail how trait covariation can alternatively be understood as arising from units common to functionalist and process frameworks, such as self-efficacies, expectancies, values, and goals. Specifically, the expected covariation between 2 behavioral traits should be increased when a specific process variable tends to indicate the functionality of both traits simultaneously. In 2 empirical illustrations, we identify a wide array of specific process variables associated with several Big Five-related behavioral traits simultaneously, and which are thus likely sources of their covariation. Many of these, such as positive interpersonal expectancies, self-regulatory skills, and preference for order, relate similarly to a broad range of trait perceptions in both studies, and across both self- and peer-reports. We also illustrate how this understanding of trait covariation provides a somewhat novel explanation of why some traits are uncorrelated. As we discuss, a functionalist understanding of trait covariation as arising through functionalist or process variables has implications for many basic issues in personality psychology, such as how personality traits should be measured, mechanisms for personality stability and change, and the nature of personality traits more generally.

142 citations


Journal ArticleDOI
TL;DR: A new model for bulimia nervosa is offered that explains both the initial impulsive nature of binge eating and purging, as well as the compulsive quality of the fully developed disorder.
Abstract: This article offers a new model for bulimia nervosa (BN) that explains both the initial impulsive nature of binge eating and purging, as well as the compulsive quality of the fully developed disorder. The model is based on a review of advances in research on BN and advances in relevant basic psychological science. It integrates transdiagnostic personality risk, eating-disorder-specific risk, reinforcement theory, cognitive neuroscience, and theory drawn from the drug addiction literature. We identify both a state-based and a trait-based risk pathway, and we then propose possible state-by-trait interaction risk processes. The state-based pathway emphasizes depletion of self-control. The trait-based pathway emphasizes transactions between the trait of negative urgency (the tendency to act rashly when distressed) and high-risk psychosocial learning. We then describe a process by which initially impulsive BN behaviors become compulsive over time, and we consider the clinical implications of our model.

133 citations


Journal ArticleDOI
TL;DR: This article develops a novel extension of the well-studied drift diffusion model (DDM) that uses single-trial brain activity patterns to inform the behavioral model parameters and uses this model to provide an explanation for how activity in a brain region affects the dynamics of the underlying decision process through mechanisms assumed by the model.
Abstract: Trial-to-trial fluctuations in an observer’s state of mind have a direct influence on their behavior. However, characterizing an observer’s state of mind is difficult to do with behavioral data alone, particularly on a single-trial basis. In this article, we extend a recently developed hierarchical Bayesian framework for integrating neurophysiological information into cognitive models. In so doing, we develop a novel extension of the well-studied drift diffusion model (DDM) that uses single-trial brain activity patterns to inform the behavioral model parameters. We first show through simulation how the model outperforms the traditional DDM in a prediction task with sparse data. We then fit the model to experimental data consisting of a speed-accuracy manipulation on a random dot motion task. We use our cognitive modeling approach to show how prestimulus brain activity can be used to simultaneously predict response accuracy and response time. We use our model to provide an explanation for how activity in a brain region affects the dynamics of the underlying decision process through mechanisms assumed by the model. Finally, we show that our model performs better than the traditional DDM through a cross-validation test. By combining accuracy, response time, and the blood oxygen level-dependent response into a unified model, the link between cognitive abstraction and neuroimaging can be better understood.
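The core mechanism — a single-trial brain measure informing a diffusion model's parameters — can be sketched minimally as follows. The linear drift modulation, all parameter values, and the function name are illustrative assumptions; the authors' actual model is a hierarchical Bayesian joint model, not this simulation.

```python
import random

# Illustrative sketch (not the authors' implementation): a single-trial neural
# covariate shifts the drift rate of a diffusion process, jointly shaping the
# predicted accuracy and response time of that trial.

def simulate_ddm_trial(neural_signal, base_drift=0.8, beta=0.5,
                       bound=1.0, dt=0.001, noise_sd=1.0):
    """Euler simulation of one trial; drift = base + beta * neural covariate."""
    drift = base_drift + beta * neural_signal
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise_sd * (dt ** 0.5) * random.gauss(0.0, 1.0)
        t += dt
    return ("correct" if x >= bound else "error"), t

random.seed(1)
# Higher prestimulus activity (hypothetical covariate) -> stronger drift,
# hence faster and more accurate responses on average.
trials = [simulate_ddm_trial(neural_signal=1.0) for _ in range(200)]
acc = sum(r == "correct" for r, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
print(acc, mean_rt)
```

Because the same drift parameter generates both the choice and its latency, a covariate that raises drift predicts accuracy and response time simultaneously, which is the linking idea the abstract describes.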

131 citations


Journal ArticleDOI
TL;DR: The article pursues the hypothesis that a scale-invariant representation of history could support performance in a variety of learning and memory tasks; a growing body of neural data suggests that neural representations in several brain regions have qualitative properties predicted by this representation of temporal history.
Abstract: This article pursues the hypothesis that a scale-invariant representation of history could support performance in a variety of learning and memory tasks. This representation maintains a conjunctive representation of what happened when that grows continuously less accurate for events further and further in the past. Simple behavioral models using a few operations, including scanning, matching and a "jump back in time" that recovers previous states of the history, describe a range of behavioral phenomena. These behavioral applications include canonical results from the judgment of recency task over short and long scales, the recency and contiguity effect across scales in episodic recall, and temporal mapping phenomena in conditioning. A growing body of neural data suggests that neural representations in several brain regions have qualitative properties predicted by the representation of temporal history. Taken together, these results suggest that a scale-invariant representation of temporal history may serve as a cornerstone of a physical model of cognition in learning and memory.

123 citations


Journal ArticleDOI
TL;DR: This work presents a substantially revised theory in which memory accumulates across multiple experimental lists, and temporal context is used both to focus retrieval on a target list, and to censor retrieved information when its match to the current context indicates that it was learned in a nontarget list.
Abstract: The human memory system is remarkable in its capacity to focus its search on items learned in a given context. This capacity can be so precise that many leading models of human memory assume that only those items learned in the context of a recently studied list compete for recall. We sought to extend the explanatory scope of these models to include not only intralist phenomena, such as primacy and recency effects, but also interlist phenomena such as proactive and retroactive interference. Building on retrieved temporal context models of memory search (e.g., Polyn, Norman, & Kahana, 2009), we present a substantially revised theory in which memory accumulates across multiple experimental lists, and temporal context is used both to focus retrieval on a target list, and to censor retrieved information when its match to the current context indicates that it was learned in a nontarget list. We show how the resulting model can simultaneously account for a wide range of intralist and interlist phenomena, including the pattern of prior-list intrusions observed in free recall, build-up of and release from proactive interference, and the ability to selectively target retrieval of items on specific prior lists (Jang & Huber, 2008; Shiffrin, 1970). In a new experiment, we verify that subjects' error monitoring processes are consistent with those predicted by the model.

120 citations


Journal ArticleDOI
TL;DR: The complementary processes account developed in this article acknowledges early, gradual development of the ability to form, retain, and later retrieve memories of personally relevant past events, as well as an accelerated rate of forgetting in childhood relative to adulthood.
Abstract: Personal-episodic or autobiographical memories are an important source of evidence for continuity of self over time. Numerous studies conducted with adults have revealed a relative paucity of personal-episodic or autobiographical memories of events from the first 3 to 4 years of life, with a seemingly gradual increase in the number of memories until approximately age 7 years, after which an adult distribution has been assumed. Historically, this so-called infantile amnesia or childhood amnesia has been attributed either to late development of personal-episodic or autobiographical memory (implying its absence in the early years of life) or to an emotional, cognitive, or linguistic event that renders early autobiographical memories inaccessible to later recollection. However, neither type of explanation alone can fully account for the shape of the distribution of autobiographical memories early in life. In contrast, the complementary processes account developed in this article acknowledges early, gradual development of the ability to form, retain, and later retrieve memories of personally relevant past events, as well as an accelerated rate of forgetting in childhood relative to adulthood. The adult distribution of memories is achieved as (a) the quality of memory traces increases, through addition of more, better elaborated, and more tightly integrated personal-episodic or autobiographical features; and (b) the vulnerability of mnemonic traces decreases, as a result of more efficient and effective neural, cognitive, and specifically mnemonic processes, thus slowing the rate of forgetting. The perspective brings order to an array of findings from the adult and developmental literatures.

Journal ArticleDOI
TL;DR: This study shows that across wide classes of dynamic binary choice environments, focusing only on experiences that followed the same sequence of outcomes preceding the current task is more effective than focusing on the most recent experiences.
Abstract: Many behavioral phenomena, including underweighting of rare events and probability matching, can be the product of a tendency to rely on small samples of experiences. Why would small samples be used, and which experiences are likely to be included in these samples? Previous studies suggest that a cognitively efficient reliance on the most recent experiences can be very effective. We explore a very different and more cognitively demanding process explaining the tendency to rely on small samples: exploitation of environmental regularities. The first part of our study shows that across wide classes of dynamic binary choice environments, focusing only on experiences that followed the same sequence of outcomes preceding the current task is more effective than focusing on the most recent experiences. The second part of our study examines the psychological significance of these sequence-based rules. It shows that these tractable rules reproduce well-known indications of sensitivity to sequences and predict a nontrivial wavy recency effect of rare events. Analysis of published data supports this wavy recency prediction, but suggests an even wavier effect than these sequence-based rules predict. This pattern, and the main behavioral phenomena documented in basic decisions from experience and probability learning tasks, can be captured with a similarity-based model assuming that people follow sequences of outcomes most of the time but sometimes respond to trends. We conclude with theoretical notes on similarity-based learning.
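The sequence-based rule described above can be sketched as a simple contingent-sampling procedure: to predict the next outcome, consult only those past trials that followed the same recent outcome sequence as the current one. The function below is a toy illustration under that assumption, not the paper's fitted model.

```python
# Hypothetical sketch of a sequence-based ("same sequence of outcomes") rule:
# the sample used for prediction is restricted to past trials preceded by the
# same k most recent outcomes as the current trial.

def predict_next(outcomes, k=2):
    """Mean of what followed past occurrences of the current k-outcome
    sequence; falls back to the overall mean for an unseen sequence."""
    pattern = tuple(outcomes[-k:])
    matches = [outcomes[i + k]
               for i in range(len(outcomes) - k)
               if tuple(outcomes[i:i + k]) == pattern]
    if not matches:
        return sum(outcomes) / len(outcomes)
    return sum(matches) / len(matches)

# A toy alternating environment in which sequences are highly informative:
# after the sequence (1, 0), the next outcome has always been 1.
outcomes = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
print(predict_next(outcomes, k=2))
```

In an environment with sequential regularities like this one, the matched small sample is far more predictive than an equally small sample of the most recent trials, which mixes outcomes from different points in the cycle.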

Journal ArticleDOI
TL;DR: It is shown that the blocked-input model accounts for behavioral data as accurately as the original interactive race model and predicts aspects of the physiological data more accurately, and a model in which fixation activity is boosted when a stop signal occurs fits as well as the blocked-input model but predicts very high steady-state fixation activity after the response is inhibited.
Abstract: The interactive race model of saccadic countermanding assumes that response inhibition results from an interaction between a go unit, identified with gaze-shifting neurons, and a stop unit, identified with gaze-holding neurons, in which activation of the stop unit inhibits the growth of activation in the go unit to prevent it from reaching threshold. The interactive race model accounts for behavioral data and predicts physiological data in monkeys performing the stop-signal task. We propose an alternative model that assumes that response inhibition results from blocking the input to the go unit. We show that the blocked-input model accounts for behavioral data as accurately as the original interactive race model and predicts aspects of the physiological data more accurately. We extend the models to address the steady-state fixation period before the go stimulus is presented and find that the blocked-input model fits better than the interactive race model. We consider a model in which fixation activity is boosted when a stop signal occurs and find that it fits as well as the blocked-input model but predicts very high steady-state fixation activity after the response is inhibited. We discuss the alternative linking propositions that connect computational models to neural mechanisms, the lessons to be learned from model mimicry, and generalization from countermanding saccades to countermanding other kinds of responses.

Journal ArticleDOI
TL;DR: There was little effect of PM demands on evidence accumulation rates, but PM demands consistently increased the evidence required for ongoing task response selection (response thresholds), consistent with a delay theory account of costs.
Abstract: Event-based prospective memory (PM) requires a deferred action to be performed when a target event is encountered in the future. Individuals are often slower to perform a concurrent ongoing task when they have PM task requirements relative to performing the ongoing task in isolation. Theories differ in their detailed interpretations of this PM cost, but all assume that the PM task shares limited-capacity resources with the ongoing task. In what was interpreted as support of this core assumption, diffusion model fits reported by Boywitt and Rummel (2012) and Horn, Bayen, and Smith (2011) indicated that PM demands reduced the rate of accumulation of evidence about ongoing task choices. We reevaluate this support by fitting both the diffusion and linear ballistic accumulator (Brown & Heathcote, 2008) models to these same data sets and 2 new data sets better suited to model fitting. There was little effect of PM demands on evidence accumulation rates, but PM demands consistently increased the evidence required for ongoing task response selection (response thresholds). A further analysis of data reported by Lourenco, White, and Maylor (2013) found that participants differentially adjusted their response thresholds to slow responses associated with stimuli potentially containing PM targets. These findings are consistent with a delay theory account of costs, which contends that individuals slow ongoing task responses to allow more time for PM response selection to occur. Our results call for a fundamental reevaluation of current capacity-sharing theories of PM cost that until now have dominated the PM literature.

Journal ArticleDOI
TL;DR: A computational model to account for morpheme combination at the meaning level is introduced, which is data-driven, theoretically sound, and empirically supported, and it makes predictions that open new research avenues in the domain of semantic processing.
Abstract: The present work proposes a computational model of morpheme combination at the meaning level. The model moves from the tenets of distributional semantics, and assumes that word meanings can be effectively represented by vectors recording their co-occurrence with other words in a large text corpus. Given this assumption, affixes are modeled as functions (matrices) mapping stems onto derived forms. Derived-form meanings can be thought of as the result of a combinatorial procedure that transforms the stem vector on the basis of the affix matrix (e.g., the meaning of nameless is obtained by multiplying the vector of name with the matrix of -less). We show that this architecture accounts for the remarkable human capacity of generating new words that denote novel meanings, correctly predicting semantic intuitions about novel derived forms. Moreover, the proposed compositional approach, once paired with a whole-word route, provides a new interpretative framework for semantic transparency, which is here partially explained in terms of ease of the combinatorial procedure and strength of the transformation brought about by the affix. Model-based predictions are in line with the modulation of semantic transparency on explicit intuitions about existing words, response times in lexical decision, and morphological priming. In conclusion, we introduce a computational model to account for morpheme combination at the meaning level. The model is data-driven, theoretically sound, and empirically supported, and it makes predictions that open new research avenues in the domain of semantic processing.
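The stem-vector-times-affix-matrix operation the abstract describes can be sketched in a few lines. The 3-dimensional space, the toy "-less" matrix, and all numeric values below are invented for illustration; real affix matrices in this approach are estimated from (stem, derived-form) pairs in a large corpus.

```python
import numpy as np

# Minimal sketch of the compositional idea: word meanings are co-occurrence
# vectors, and an affix is a matrix mapping a stem vector onto a derived-form
# vector (e.g., vector("name") -> vector("nameless")). Toy values throughout.

v_name = np.array([0.9, 0.2, 0.1])          # hypothetical "name" vector

# A toy "-less" matrix; in the actual approach such a matrix would be learned
# by regression from training pairs like (hope, hopeless), (use, useless), ...
M_less = np.array([[0.8, 0.0, 0.0],
                   [0.0, -0.5, 0.0],
                   [0.0, 0.0, 1.0]])

v_nameless = M_less @ v_name                # predicted derived-form meaning

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantic transparency can be probed by comparing stem and derived vectors:
# a weak transformation leaves the derived form close to its stem.
print(v_nameless, cosine(v_name, v_nameless))
```

The same machinery predicts meanings for novel derived forms (any stem vector can be multiplied by the affix matrix), which is how the model captures productive word formation.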

Journal ArticleDOI
TL;DR: The issue of attitude-behavior relations is revisited in light of recent work on motivation and the psychology of goals and a set of mediating processes that transpire between attitudes and behavior are specified.
Abstract: The issue of attitude-behavior relations is revisited in light of recent work on motivation and the psychology of goals. It is suggested that for object-attitudes to drive a specific behavior, a chain of contingencies must be realized: Liking must be transmuted into wanting, wanting must evolve into a goal, the goal must be momentarily dominant, and the specific behavior must be chosen as means of goal pursuit. Our model thus specifies a set of mediating processes that transpire between attitudes and behavior. Prior theories of attitude-behavior relations are examined from the present perspective, and its conceptual and empirical implications are noted.

Journal ArticleDOI
TL;DR: The II theory provides a unified theoretical framework for understanding psychopathic dysfunction, integrates principal tenets of affective and cognitive perspectives, and accommodates evidence regarding connectivity abnormalities in psychopathy through its network theoretical perspective.
Abstract: This article introduces a novel theoretical framework for psychopathy that bridges dominant affective and cognitive models. According to the proposed impaired integration (II) framework of psychopathic dysfunction, topographical irregularities and abnormalities in neural connectivity in psychopathy hinder the complex process of information integration. Central to the II theory is the notion that psychopathic individuals are "'wired up' differently" (Hare, Williamson, & Harpur, 1988, p. 87). Specific theoretical assumptions include decreased functioning of the Salience and Default Mode Networks, normal functioning in executive control networks, and less coordination and flexible switching between networks. Following a review of dominant models of psychopathy, we introduce our II theory as a parsimonious account of behavioral and brain irregularities in psychopathy. The II theory provides a unified theoretical framework for understanding psychopathic dysfunction and integrates principal tenets of affective and cognitive perspectives. Moreover, it accommodates evidence regarding connectivity abnormalities in psychopathy through its network theoretical perspective.

Journal ArticleDOI
TL;DR: It is concluded that theoretical reliance on articulatory rehearsal as a causative agent in memory may be unwise and that explanatory appeals to rehearsal are insufficient unless buttressed by quantitative modeling.
Abstract: We examine the explanatory roles that have been ascribed to various forms of rehearsal or refreshing in short-term memory (STM) and working memory paradigms, usually in conjunction with the assumption that memories decay over time if they are not rehearsed. Notwithstanding the popularity of the rehearsal notion, there have been few detailed examinations of its underlying mechanisms. We explicitly implemented rehearsal in a decay model and explored its role by simulation in several benchmark paradigms ranging from immediate serial recall to complex span and delayed recall. The results show that articulatory forms of rehearsal often fail to counteract temporal decay. Rapid attentional refreshing performs considerably better, but so far there is scant empirical evidence that people engage in refreshing during STM tasks. Combining articulatory rehearsal and refreshing as 2 independent maintenance processes running in parallel leads to worse performance than refreshing alone. We conclude that theoretical reliance on articulatory rehearsal as a causative agent in memory may be unwise and that explanatory appeals to rehearsal are insufficient unless buttressed by quantitative modeling.
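The logic of implementing rehearsal inside a decay model can be sketched with a toy serial-rehearsal simulation: while one item is being restored, every other item keeps decaying, so longer lists fare disproportionately worse. The parameter values and the exponential-decay form below are illustrative assumptions, not the authors' fitted model.

```python
import math

# Toy sketch of why serial articulatory rehearsal can fail to counteract
# decay: restoring one item takes time during which all items decay, so with
# more items each one waits longer between refreshes. Illustrative values only.

def simulate_rehearsal(n_items, rehearse_time=0.5, decay_rate=1.0, cycles=3):
    strengths = [1.0] * n_items
    for _ in range(cycles):
        for i in range(n_items):            # rehearse items in serial order
            for j in range(n_items):        # everything decays meanwhile
                strengths[j] *= math.exp(-decay_rate * rehearse_time)
            strengths[i] = 1.0              # rehearsed item restored
    return strengths

short_list = simulate_rehearsal(n_items=2)
long_list = simulate_rehearsal(n_items=6)
# The weakest item in the longer list ends up far weaker than in the short one
print(min(short_list), min(long_list))
```

The simulation makes the trade-off concrete: rehearsal only helps if a full pass through the list is fast relative to the decay rate, which is the kind of quantitative constraint the abstract argues verbal appeals to rehearsal ignore.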

Journal ArticleDOI
TL;DR: This work contends on a theoretical level that the metrics employed to measure observer sensitivity in modern vigilance tasks (derived from signal detection theory) are inappropriate and largely uninterpretable and presents the results of an experiment that demonstrates that shifts in response bias over time can masquerade as a loss in sensitivity.
Abstract: It is well known that when human observers must monitor for rare but critical events, probability of detection tends to wane over time, a phenomenon known as the "vigilance decrement." Over 60 years of empirical study on this topic has culminated in the general consensus that performance suffers due to a loss in observers' ability to distinguish signal from noise (a loss in sensitivity) provided that the task loads memory and stimuli are presented at a relatively high rate. We challenge this assertion on 2 fronts: First, we contend on a theoretical level that the metrics employed to measure observer sensitivity in modern vigilance tasks (derived from signal detection theory) are inappropriate and largely uninterpretable. This contention is supported by an evaluation of recent empirical work in the vigilance domain. Second, we present the results of an experiment that demonstrates that shifts in response bias (the observer's "willingness to respond") over time can masquerade as a loss in sensitivity. Consequently, the basic underlying cause of the vigilance decrement is actually unclear, and may simply reflect a shift in response criterion rather than sensitivity. The theoretical, as well as practical, implications of these conclusions are discussed with respect to sustained attention in general, and vigilance in particular.
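The signal detection quantities at issue, sensitivity d' and criterion c, are standard and can be computed from hit and false-alarm rates. The hit/false-alarm numbers below are invented to illustrate the masquerading effect the abstract describes: hits fall over the vigil while d' stays roughly constant, because the criterion has merely become more conservative.

```python
from statistics import NormalDist

# Standard equal-variance signal detection metrics:
#   d' = z(hit rate) - z(false-alarm rate)   (sensitivity)
#   c  = -0.5 * (z(hit rate) + z(false-alarm rate))   (criterion / bias)

z = NormalDist().inv_cdf

def d_prime(hit_rate, fa_rate):
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate, fa_rate):
    return -0.5 * (z(hit_rate) + z(fa_rate))

early = (0.80, 0.20)   # hypothetical early-vigil hit and false-alarm rates
late = (0.60, 0.08)    # later: fewer hits AND fewer false alarms

print(d_prime(*early), criterion(*early))
print(d_prime(*late), criterion(*late))
# Hits dropped from .80 to .60, yet d' barely changes; the criterion grew
# more conservative -- a bias shift that can masquerade as a sensitivity loss.
```

This is why examining hit rates alone (or a mis-specified sensitivity index) cannot distinguish a true sensitivity decrement from a criterion shift.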

Journal ArticleDOI
TL;DR: The authors presented a model that directly parameterizes the matches and mismatches to the item and context cues, which enables estimation of the magnitude of each interference contribution (item noise, context noise, and background noise).
Abstract: A powerful theoretical framework for exploring recognition memory is the global matching framework, in which a cue's memory strength reflects the similarity of the retrieval cues being matched against the contents of memory simultaneously. Contributions at retrieval can be categorized as matches and mismatches to the item and context cues, including the self match (match on item and context), item noise (match on context, mismatch on item), context noise (match on item, mismatch on context), and background noise (mismatch on item and context). We present a model that directly parameterizes the matches and mismatches to the item and context cues, which enables estimation of the magnitude of each interference contribution (item noise, context noise, and background noise). The model was fit within a hierarchical Bayesian framework to 10 recognition memory datasets that use manipulations of strength, list length, list strength, word frequency, study-test delay, and stimulus class in item and associative recognition. Estimates of the model parameters revealed at most a small contribution of item noise that varies by stimulus class, with virtually no item noise for single words and scenes. Despite the unpopularity of background noise in recognition memory models, background noise estimates dominated at retrieval across nearly all stimulus classes with the exception of high frequency words, which exhibited equivalent levels of context noise and background noise. These parameter estimates suggest that the majority of interference in recognition memory stems from experiences acquired before the learning episode.
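The global matching computation described here can be sketched in a few lines. This toy version (with an assumed Gaussian-kernel similarity and a single shared list context, not the authors' parameterization) shows how a probe's strength decomposes into a self match plus the noise terms:

```python
import math
import random

random.seed(1)
DIM = 20

def randvec():
    return [random.gauss(0, 1) for _ in range(DIM)]

def similarity(a, b):
    # Exponential decay with squared distance (an assumed similarity kernel).
    return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)))

study_items = [randvec() for _ in range(8)]
list_context = randvec()  # one shared study context, for simplicity

def global_match(probe_item, probe_context):
    """Summed item-similarity x context-similarity over all stored traces:
    the self match, item noise, and background terms in one sum."""
    return sum(similarity(probe_item, item) * similarity(probe_context, list_context)
               for item in study_items)

old_strength = global_match(study_items[0], list_context)
new_strength = global_match(randvec(), list_context)
print(old_strength, new_strength)
```

The studied probe's strength is dominated by the self match (similarity 1 on both item and context), while an unstudied probe accumulates only small interference terms, so old items yield higher strengths than new ones.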

Journal ArticleDOI
TL;DR: A definition of "temporal window" that applies across different paradigms for measuring its width is proposed based on the TWIN model; reanalysis confirmed the authors' hypothesis that the temporal window in an RT task tends to be wider than in a temporal-order judgment (TOJ) task.
Abstract: Even though visual and auditory information of one and the same event often do not arrive at the sensory receptors at the same time, due to different physical transmission times of the modalities, the brain maintains a unitary perception of the event, at least within a certain range of sensory arrival time differences. The properties of this "temporal window of integration" (TWIN), its recalibration due to task requirements, attention, and other variables, have recently been investigated intensively. Up to now, however, there has been no consistent definition of "temporal window" across different paradigms for measuring its width. Here we propose such a definition based on our TWIN model (Colonius & Diederich, 2004). It applies to judgments of temporal order (or simultaneity) as well as to reaction time (RT) paradigms. Reanalyzing data from Megevand, Molholm, Nayak, & Foxe (2013) by fitting the TWIN model to data from both paradigms, we confirmed the authors' hypothesis that the temporal window in an RT task tends to be wider than in a temporal-order judgment (TOJ) task. This first step toward a unified concept of TWIN should be a valuable tool in guiding investigations of the neural and cognitive bases of this so-far-somewhat elusive concept.
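The notion of a temporal window can be made concrete with a small Monte Carlo. The sketch below is a simplification, not the fitted TWIN model: peripheral processing times are drawn from exponential distributions with illustrative rates, and "integration" is simply declared whenever the two peripheral processes terminate within the window of each other.

```python
import random

random.seed(0)

def p_integration(window, soa=0.0, rate_v=1 / 50, rate_a=1 / 30, n=100_000):
    """Monte Carlo estimate of the probability that the visual and auditory
    peripheral processes terminate within `window` ms of each other.
    Rates and SOA here are illustrative assumptions, not fitted values."""
    hits = 0
    for _ in range(n):
        v = random.expovariate(rate_v)        # visual peripheral time
        a = soa + random.expovariate(rate_a)  # auditory time, onset shifted by SOA
        if abs(v - a) < window:
            hits += 1
    return hits / n

narrow = p_integration(window=50)
wide = p_integration(window=200)
print(narrow, wide)
```

Widening the window raises the probability of integration, which illustrates why paradigms that imply different window widths (RT vs. TOJ) can nonetheless be compared on the same model parameter.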

Journal ArticleDOI
TL;DR: Evidence indicative of optimal foraging policies in memory search that mirror search in physical environments was found, and it is discussed how these patterns could also emerge from a random walk applied to a network representation of memory based on human free-association norms.
Abstract: In recent work exploring the semantic fluency task, we found evidence indicative of optimal foraging policies in memory search that mirror search in physical environments. We determined that a 2-stage cue-switching model applied to a memory representation from a semantic space model best explained the human data. Abbott, Austerweil, and Griffiths demonstrate how these patterns could also emerge from a random walk applied to a network representation of memory based on human free-association norms. However, a major representational issue limits any conclusions that can be drawn about the process model comparison: Our process model operated on a memory space constructed from a learning model, whereas their model used human behavioral data from a task that is quite similar to the behavior they attempt to explain. Predicting semantic fluency (e.g., how likely it is to say cat after dog in a sequence of animals) from free association (how likely it is to say cat when given dog as a cue) should be possible with a relatively simple retrieval mechanism. The 2 tasks both tap memory, but they also share a common process of retrieval. Assuming that semantic memory is a network from free-association behavior embeds variance due to the shared retrieval process directly into the representation. A simple process mechanism is then sufficient to simulate semantic fluency because much of the requisite process complexity may already be hidden in the representation.
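The random-walk mechanism at issue can be sketched directly. The toy network and weights below are invented for illustration; the walk reports a node only on its first visit, the censoring step that makes raw walk output resemble fluency sequences.

```python
import random

random.seed(3)

# A toy weighted association network (weights are assumed, for illustration).
network = {
    "dog": {"cat": 3, "bone": 1, "wolf": 1},
    "cat": {"dog": 3, "mouse": 2, "milk": 1},
    "mouse": {"cat": 2, "cheese": 2},
    "bone": {"dog": 2},
    "wolf": {"dog": 2, "moon": 1},
    "cheese": {"mouse": 2, "milk": 1},
    "milk": {"cat": 2, "cheese": 1},
    "moon": {"wolf": 1},
}

def censored_walk(start, n_responses):
    """Weighted random walk that reports a node only on its first visit,
    the mechanism Abbott, Austerweil, and Griffiths applied to
    free-association networks to mimic semantic fluency data."""
    current, seen, out = start, {start}, [start]
    while len(out) < n_responses:
        nbrs = list(network[current])
        wts = [network[current][n] for n in nbrs]
        current = random.choices(nbrs, weights=wts)[0]
        if current not in seen:
            seen.add(current)
            out.append(current)
    return out

seq = censored_walk("dog", 6)
print(seq)
```

The representational critique in this abstract is that the edge weights of such a network are themselves behavioral retrieval data, so a simple walk over them can look sophisticated only because the complexity is already baked into the representation.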

Journal ArticleDOI
TL;DR: This commentary relates some correspondence with Miller on his article and concludes with a call to avoid self-censorship of one's less conventional ideas.
Abstract: Miller's (1956) article about storage capacity limits, "The Magical Number Seven Plus or Minus Two . . .," is one of the best-known articles in psychology. Though influential in several ways, for about 40 years it was oddly followed by rather little research on the numerical limit of capacity in working memory, or on the relation between 3 potentially related phenomena that Miller described. Given that the article was written in a humorous tone and was framed around a tongue-in-cheek premise (persecution by an integer), I argue that it may have inadvertently stymied progress on these topics as researchers attempted to avoid ridicule. This commentary relates some correspondence with Miller on his article and concludes with a call to avoid self-censorship of our less conventional ideas.

Journal ArticleDOI
TL;DR: It is argued that XSL is not just a mechanism for word-to-meaning mapping, but that it provides strong cues for proto-lexical word segmentation, and results from simulations show that the model is not only capable of replicating behavioral data on word learning in artificial languages, but also shows effective learning of word segments and their meanings from continuous speech.
Abstract: Human infants learn meanings for spoken words in complex interactions with other people, but the exact learning mechanisms are unknown. Among researchers, a widely studied learning mechanism is called cross-situational learning (XSL). In XSL, word meanings are learned when learners accumulate statistical information between spoken words and co-occurring objects or events, allowing the learner to overcome referential uncertainty after having sufficient experience with individually ambiguous scenarios. Existing models in this area have mainly assumed that the learner is capable of segmenting words from speech before grounding them to their referential meaning, while segmentation itself has been treated relatively independently of the meaning acquisition. In this article, we argue that XSL is not just a mechanism for word-to-meaning mapping, but that it provides strong cues for proto-lexical word segmentation. If a learner directly solves the correspondence problem between continuous speech input and the contextual referents being talked about, segmentation of the input into word-like units emerges as a by-product of the learning. We present a theoretical model for joint acquisition of proto-lexical segments and their meanings without assuming a priori knowledge of the language. We also investigate the behavior of the model using a computational implementation, making use of transition probability-based statistical learning. Results from simulations show that the model is not only capable of replicating behavioral data on word learning in artificial languages, but also shows effective learning of word segments and their meanings from continuous speech. Moreover, when augmented with a simple familiarity preference during learning, the model shows a good fit to human behavioral data in XSL tasks. 
These results support the idea of simultaneous segmentation and meaning acquisition and show that comprehensive models of early word segmentation should take referential word meanings into account.
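The cross-situational statistics at the heart of XSL reduce to co-occurrence counting, which the following sketch illustrates on an invented four-word mini-language. This is the classic word-to-referent version of XSL, not the authors' joint segmentation model, which operates on continuous speech.

```python
from collections import defaultdict
import random

random.seed(7)

# Ground-truth lexicon the learner must recover (a hypothetical mini-language).
lexicon = {"ba": "DOG", "gu": "BALL", "ti": "CAR", "mo": "CUP"}
words = list(lexicon)

# Each scene pairs 2 spoken words with their 2 referents, so any single
# scene is ambiguous about which word maps to which referent.
cooc = defaultdict(lambda: defaultdict(int))
for _ in range(200):
    pair = random.sample(words, 2)
    referents = {lexicon[w] for w in pair}
    for w in pair:
        for r in referents:
            cooc[w][r] += 1

# Across scenes, only the correct pairing co-occurs every single time,
# so the accumulated counts resolve the referential uncertainty.
learned = {w: max(cooc[w], key=cooc[w].get) for w in words}
print(learned)
```

Any one scene is referentially ambiguous, but because a word co-occurs with its true referent in every scene that contains it and with other referents only sporadically, the counts disambiguate after enough scenes.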

Journal ArticleDOI
TL;DR: It is shown that signal detection and threshold accounts can be compared on the basis of critical tests that invoke only minimal assumptions, and that confidence-rating judgments are consistent with a discrete-state account.
Abstract: An ongoing discussion in the recognition-memory literature concerns the question of whether recognition judgments reflect a direct mapping of graded memory representations (a notion that is instantiated by signal detection theory) or whether they are mediated by a discrete-state representation with the possibility of complete information loss (a notion that is instantiated by threshold models). These 2 accounts are usually evaluated by comparing their (penalized) fits to receiver operating characteristic data, a procedure that is predicated on substantial auxiliary assumptions, which if violated can invalidate results. We show that the 2 accounts can be compared on the basis of critical tests that invoke only minimal assumptions. Using previously published receiver operating characteristic data, we show that confidence-rating judgments are consistent with a discrete-state account.

Journal ArticleDOI
TL;DR: A new model is presented and it is shown that it accounts better for human inferences than several alternative models and provides an accurate account of both mean human judgments and the judgments of individuals.
Abstract: When people want to identify the causes of an event, assign credit or blame, or learn from their mistakes, they often reflect on how things could have gone differently. In this kind of reasoning, one considers a counterfactual world in which some events are different from their real-world counterparts and considers what else would have changed. Researchers have recently proposed several probabilistic models that aim to capture how people do (or should) reason about counterfactuals. We present a new model and show that it accounts better for human inferences than several alternative models. Our model builds on the work of Pearl (2000), and extends his approach in a way that accommodates backtracking inferences and that acknowledges the difference between counterfactual interventions and counterfactual observations. We present 6 new experiments and analyze data from 4 experiments carried out by Rips (2010), and the results suggest that the new model provides an accurate account of both mean human judgments and the judgments of individuals.
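Pearl's (2000) three-step recipe for counterfactuals (abduction, action, prediction), which the model described here builds on, can be shown on a minimal deterministic example. The match-and-oxygen structural model below is a textbook illustration, not one of the article's experimental scenarios.

```python
def model(u_struck, u_oxygen, do=None):
    """Tiny structural causal model: fire = match struck AND oxygen present.
    `do` overrides an endogenous variable (Pearl's intervention operator)."""
    struck = u_struck if do is None or "struck" not in do else do["struck"]
    oxygen = u_oxygen
    fire = struck and oxygen
    return {"struck": struck, "oxygen": oxygen, "fire": fire}

# Step 1 (abduction): infer exogenous settings consistent with what we saw.
observed = {"struck": True, "fire": True}
candidates = [(u1, u2) for u1 in (True, False) for u2 in (True, False)
              if all(model(u1, u2)[k] == v for k, v in observed.items())]

# Steps 2-3 (action + prediction): rerun with the intervention do(struck=False).
counterfactuals = [model(u1, u2, do={"struck": False})["fire"]
                   for u1, u2 in candidates]
print(candidates, counterfactuals)
```

Abduction keeps only the exogenous settings consistent with the observed world; the intervention do(struck=False) then severs "struck" from its normal causes, and rerunning the model answers "would the fire still have occurred?" (here: no). Backtracking counterfactuals, which the article's model also accommodates, would instead revise the exogenous settings themselves rather than intervene.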

Journal ArticleDOI
TL;DR: It is shown that the 3 context effects are very fragile and that only a small subset of people shows all 3 simultaneously; the predictions that Tsetsos et al. generated from the MLBA model turn out to match the real data in a new experiment closely.
Abstract: Trueblood, Brown, and Heathcote (2014) developed a new model, called the multiattribute linear ballistic accumulator (MLBA), to explain contextual preference reversals in multialternative choice. MLBA was shown to provide good accounts of human behavior through both qualitative analyses and quantitative fitting of choice data. Tsetsos, Chater, and Usher (2015) investigated the ability of MLBA to simultaneously capture 3 prominent context effects (attraction, compromise, and similarity). They concluded that MLBA must set a "fine balance" of competing forces to account for all 3 effects simultaneously and that its predictions are sensitive to the position of the stimuli in the attribute space. Through a new experiment, we show that the 3 effects are very fragile and that only a small subset of people shows all 3 simultaneously. Thus, the predictions that Tsetsos et al. generated from the MLBA model turn out to match the real data closely. Support for these predictions provides strong evidence for the MLBA. A corollary is that a model that can "robustly" capture all 3 effects simultaneously is not necessarily a good model. Rather, a good model captures patterns found in human data, but cannot accommodate patterns that are not found.

Journal ArticleDOI
TL;DR: Because BHG provides a principled quantification of the plausibility of grouping interpretations over a wide range of grouping problems, it is argued that it provides an appealing unifying account of the elusive Gestalt notion of Prägnanz.
Abstract: We propose a novel framework for perceptual grouping based on the idea of mixture models, called Bayesian hierarchical grouping (BHG). In BHG, we assume that the configuration of image elements is generated by a mixture of distinct objects, each of which generates image elements according to some generative assumptions. Grouping, in this framework, means estimating the number and the parameters of the mixture components that generated the image, including estimating which image elements are "owned" by which objects. We present a tractable implementation of the framework, based on the hierarchical clustering approach of Heller and Ghahramani (2005). We illustrate it with examples drawn from a number of classical perceptual grouping problems, including dot clustering, contour integration, and part decomposition. Our approach yields an intuitive hierarchical representation of image elements, giving an explicit decomposition of the image into mixture components, along with estimates of the probability of various candidate decompositions. We show that BHG accounts well for a diverse range of empirical data drawn from the literature. Because BHG provides a principled quantification of the plausibility of grouping interpretations over a wide range of grouping problems, we argue that it provides an appealing unifying account of the elusive Gestalt notion of Prägnanz.
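The mixture-model idea behind BHG can be conveyed with a one-dimensional dot-clustering sketch. The code below is a plain EM fit of a two-component Gaussian mixture, a deliberately simplified stand-in for BHG's hierarchical Bayesian clustering: the component count, variances, and mixing weights are assumed here rather than inferred.

```python
import math
import random

random.seed(42)

# Two "objects" generating dots around different centers (assumed setup).
dots = [random.gauss(0.0, 0.5) for _ in range(40)] + \
       [random.gauss(5.0, 0.5) for _ in range(40)]

def norm_pdf(x, mu, sigma=0.5):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# EM for a 2-component mixture: "grouping" = posterior ownership of each dot.
mu = [min(dots), max(dots)]  # crude initialization at the extremes
for _ in range(25):
    # E-step: responsibility of component 0 for each dot
    resp = [norm_pdf(x, mu[0]) / (norm_pdf(x, mu[0]) + norm_pdf(x, mu[1]))
            for x in dots]
    # M-step: re-estimate each object's center from the dots it "owns"
    mu[0] = sum(r * x for r, x in zip(resp, dots)) / sum(resp)
    mu[1] = sum((1 - r) * x for r, x in zip(resp, dots)) / sum(1 - r for r in resp)

print(mu)
```

The responsibilities computed in the E-step are exactly the "ownership" estimates described above: each dot's posterior probability of having been generated by each object.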

Journal ArticleDOI
TL;DR: An evolutionary model of risk-sensitive behavior is built that predicts that risk preferences should be both path dependent and affected by the decision maker's current state, and shows that the fourfold pattern of risk attitudes may be adaptive in an environment in which conditions vary stochastically but are autocorrelated in time.
Abstract: A striking feature of human decision making is the fourfold pattern of risk attitudes, involving risk-averse behavior in situations of unlikely losses and likely gains, but risk-seeking behavior in response to likely losses and unlikely gains. Current theories to explain this pattern assume particular psychological processes to reproduce empirical observations, but do not address whether it is adaptive for the decision maker to respond to risk in this way. Here, drawing on insights from behavioral ecology, we build an evolutionary model of risk-sensitive behavior, to investigate whether particular types of environmental conditions could favor a fourfold pattern of risk attitudes. We consider an individual foraging in a changing environment, where energy is needed to prevent starvation and build up reserves for reproduction. The outcome, in terms of reproductive value (a rigorous measure of evolutionary success), of a one-off choice between a risky and a safe gain, or between a risky and a safe loss, determines the risk-sensitive behavior we should expect to see in this environment. Our results show that the fourfold pattern of risk attitudes may be adaptive in an environment in which conditions vary stochastically but are autocorrelated in time. In such an environment the current options provide information about the likely environmental conditions in the future, which affect the optimal pattern of risk sensitivity. Our model predicts that risk preferences should be both path dependent and affected by the decision maker's current state.
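The state dependence of optimal risk attitudes can be demonstrated with a tiny dynamic program. The sketch below strips the model to its core (no starvation, no autocorrelated environment; the payoffs and reproduction threshold are assumed numbers), yet it already produces the classic flip between risk aversion and risk seeking as a function of the decision maker's state.

```python
from functools import lru_cache

REQUIREMENT = 10  # reserves needed at the end to reproduce (assumed units)

@lru_cache(maxsize=None)
def fitness(reserves, steps_left):
    """Expected reproductive success under the optimal policy: each step,
    choose a safe gain (+1 for sure) or a risky gain (+2 or 0, p = 1/2)."""
    if steps_left == 0:
        return 1.0 if reserves >= REQUIREMENT else 0.0
    safe = fitness(reserves + 1, steps_left - 1)
    risky = 0.5 * fitness(reserves + 2, steps_left - 1) + \
            0.5 * fitness(reserves, steps_left - 1)
    return max(safe, risky)

def best_choice(reserves, steps_left):
    safe = fitness(reserves + 1, steps_left - 1)
    risky = 0.5 * fitness(reserves + 2, steps_left - 1) + \
            0.5 * fitness(reserves, steps_left - 1)
    return "safe" if safe >= risky else "risky"

# One step to go: the same animal is risk-averse when the safe option
# suffices, and risk-seeking when only a lucky gamble can reach the target.
print(best_choice(9, 1))   # safe +1 already meets the requirement of 10
print(best_choice(8, 1))   # only the risky +2 can reach 10
```

With reserves of 9, the safe +1 guarantees reaching the threshold, so gambling can only hurt; with reserves of 8, only the risky +2 can succeed, so the same fitness objective makes risk seeking optimal. This is the sense in which the model predicts state-dependent (and, with autocorrelated environments, path-dependent) risk preferences.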

Journal ArticleDOI
TL;DR: It is shown that reaction-time (RT) distributions are highly unlikely to have the same shape across conditions or groups if the RT is the sum of the stochastically independent durations of 2 or more stages that are influenced selectively by different factors, or 1 of which is influenced selectively by some factor.
Abstract: It is sometimes suggested that reaction-time (RT) distributions have the same shape across conditions or groups. In this note we show that this is highly unlikely if the RT is the sum of the stochastically independent durations of two or more stages (sequential processes) (a) that are influenced selectively by different factors, or (b) one of which is influenced selectively by some factor. We provide an example of substantial shape differences in RT data from a flash-detection experiment, data that have been shown to satisfy the requirements of type (a). We also note a large range of instances reviewed elsewhere (Matzke & Wagenmakers; Eckert, 2011; Myerson et al., 2003a; Myerson et al., 2003b; Salthouse, 1996; Sleiman-Malkoun et al., 2013) in which, in effect, with increasing age, time runs more slowly. Rouder et al. (2010) discuss other considerations that lead to shape invariance. Ratcliff and McKoon (2008) use the approximate linearity of Q-Q plots to argue that for diffusion-model predictions and some data sets, the shapes of RT distributions are approximately invariant across experimental conditions and experiments (p. 895). And, according to Ratcliff and Smith (2010, p. 90), "Invariance of distribution shape is one of the most powerful constraints on models of RT distributions. . . . That the diffusion model predicts this invariance is a strong argument in support of its use in performing process decomposition of RT data." The primary purpose of this note is to show that for a process organized in stages that have stochastically independent durations and are selectively influenced by experimental factors, it is highly unlikely that the distributions of RTs in several conditions in an experiment can have the same shape.
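The shape argument can be checked analytically for the simplest case: an exponential stage plus an independent Gaussian stage (the ex-Gaussian). Third central moments of independent variables add, so a factor that selectively prolongs only the exponential stage changes the standardized skewness, i.e., the shape, of the whole RT distribution. The parameter values below are arbitrary illustrations, not estimates from the flash-detection data.

```python
def exgaussian_skewness(tau, sigma):
    """Skewness of the sum of an exponential stage (mean tau) and an
    independent Gaussian stage (sd sigma). For independent variables the
    third central moments add: skew = 2*tau**3 / (tau**2 + sigma**2)**1.5."""
    return 2 * tau ** 3 / (tau ** 2 + sigma ** 2) ** 1.5

# A factor that selectively prolongs only the exponential stage changes
# the standardized skewness of the summed RT distribution:
baseline = exgaussian_skewness(tau=100, sigma=50)
prolonged = exgaussian_skewness(tau=150, sigma=50)
print(baseline, prolonged)
```

By contrast, multiplying the entire RT by a constant, as in the "time runs more slowly" account of aging, leaves standardized skewness, and hence shape, unchanged; it is selective influence on one stage that rules out shape invariance.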