
Showing papers in "Psychonomic Bulletin & Review in 2012"


Journal ArticleDOI
TL;DR: It is demonstrated that collecting data from uncompensated, anonymous, unsupervised, self-selected participants need not reduce data quality, even for demanding cognitive and perceptual experiments.
Abstract: With the increasing sophistication and ubiquity of the Internet, behavioral research is on the cusp of a revolution that will do for population sampling what the computer did for stimulus control and measurement. It remains a common assumption, however, that data from self-selected Web samples must involve a trade-off between participant numbers and data quality. Concerns about data quality are heightened for performance-based cognitive and perceptual measures, particularly those that are timed or that involve complex stimuli. In experiments run with uncompensated, anonymous participants whose motivation for participation is unknown, reduced conscientiousness or lack of focus could produce results that would be difficult to interpret due to decreased overall performance, increased variability of performance, or increased measurement noise. Here, we addressed the question of data quality across a range of cognitive and perceptual tests. For three key performance metrics—mean performance, performance variance, and internal reliability—the results from self-selected Web samples did not differ systematically from those obtained from traditionally recruited and/or lab-tested samples. These findings demonstrate that collecting data from uncompensated, anonymous, unsupervised, self-selected participants need not reduce data quality, even for demanding cognitive and perceptual experiments.

509 citations


Journal ArticleDOI
TL;DR: A default Bayesian hypothesis test for the presence of a correlation or a partial correlation is proposed, which can quantify evidence in favor of the null hypothesis and allows researchers to monitor the test results as the data come in.
Abstract: We propose a default Bayesian hypothesis test for the presence of a correlation or a partial correlation. The test is a direct application of Bayesian techniques for variable selection in regression models. The test is easy to apply and yields practical advantages that the standard frequentist tests lack; in particular, the Bayesian test can quantify evidence in favor of the null hypothesis and allows researchers to monitor the test results as the data come in. We illustrate the use of the Bayesian correlation test with three examples from the psychological literature. Computer code and example data are provided in the journal archives.

442 citations
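Since the test is described as a direct application of Bayesian variable-selection techniques for regression, its logic can be sketched as a JZS-style Bayes factor for a single-predictor regression (where R² equals the squared correlation). The prior setup below follows the common Zellner–Siow default and is an illustration of the approach, not the authors' published code:

```python
import numpy as np
from scipy import integrate, special

def jzs_bf10(r, n):
    """JZS-style Bayes factor for a correlation r from n observations,
    via the one-predictor regression formulation (R^2 = r^2).
    Values > 1 favor a nonzero correlation; values < 1 favor the null."""
    r2 = r ** 2

    def integrand(g):
        # (1+g)^((n-2)/2) * (1 + g(1-R^2))^(-(n-1)/2): marginal likelihood ratio
        log_lik = (0.5 * (n - 2) * np.log1p(g)
                   - 0.5 * (n - 1) * np.log1p(g * (1 - r2)))
        # sqrt(n/2)/Gamma(1/2) * g^(-3/2) * exp(-n/(2g)): Zellner-Siow prior on g
        log_prior = (0.5 * np.log(n / 2) - special.gammaln(0.5)
                     - 1.5 * np.log(g) - n / (2 * g))
        return np.exp(log_lik + log_prior)

    bf10, _ = integrate.quad(integrand, 0, np.inf)
    return bf10
```

Because the Bayes factor can be recomputed after every new observation, this is also the sense in which researchers can monitor evidence as the data come in.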


Journal ArticleDOI
TL;DR: Self-testing, rereading, and scheduling of study play important roles in real-world student achievement; use of self-testing and rereading were both positively associated with GPA.
Abstract: Previous studies, such as those by Kornell and Bjork (Psychonomic Bulletin & Review, 14:219-224, 2007) and Karpicke, Butler, and Roediger (Memory, 17:471-479, 2009), have surveyed college students' use of various study strategies, including self-testing and rereading. These studies have documented that some students do use self-testing (but largely for monitoring memory) and rereading, but the researchers did not assess whether individual differences in strategy use were related to student achievement. Thus, we surveyed 324 undergraduates about their study habits as well as their college grade point average (GPA). Importantly, the survey included questions about self-testing, scheduling one's study, and a checklist of strategies commonly used by students or recommended by cognitive research. Use of self-testing and rereading were both positively associated with GPA. Scheduling of study time was also an important factor: Low performers were more likely to engage in late-night studying than were high performers; massing (vs. spacing) of study was associated with the use of fewer study strategies overall; and all students, but especially low performers, were driven by impending deadlines. Thus, self-testing, rereading, and scheduling of study play important roles in real-world student achievement.

336 citations


Journal ArticleDOI
TL;DR: A new computational model for the complex-span task, the most popular task for studying working memory, is introduced, which accounts for benchmark findings in four areas, including effects of processing pace, processing difficulty, and number of processing steps.
Abstract: This article introduces a new computational model for the complex-span task, the most popular task for studying working memory. SOB-CS is a two-layer neural network that associates distributed item representations with distributed, overlapping position markers. Memory capacity limits are explained by interference from a superposition of associations. Concurrent processing interferes with memory through involuntary encoding of distractors. Free time in between distractors is used to remove irrelevant representations, thereby reducing interference. The model accounts for benchmark findings in four areas: (1) effects of processing pace, processing difficulty, and number of processing steps; (2) effects of serial position and error patterns; (3) effects of different kinds of item-distractor similarity; and (4) correlations between span tasks. The model makes several new predictions in these areas, which were confirmed experimentally.

282 citations
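The core encoding idea, distributed item vectors bound to overlapping position markers by superimposing associations in a single weight matrix, can be illustrated with a toy Hebbian sketch. The dimensionality, overlap strength, and all names below are illustrative assumptions, not SOB-CS itself:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 256          # vector dimensionality
n_items = 4

def unit(v):
    return v / np.linalg.norm(v)

items = np.array([unit(rng.standard_normal(d)) for _ in range(n_items)])
base = np.array([unit(rng.standard_normal(d)) for _ in range(n_items)])
# overlapping position markers: each position blends in its neighbor
positions = np.array([unit(base[i] + 0.4 * base[(i + 1) % n_items])
                      for i in range(n_items)])

# encoding: superimpose all item-position outer products in ONE matrix
M = sum(np.outer(items[i], positions[i]) for i in range(n_items))

def recall(pos_idx):
    """Cue with a position marker; match output against all items."""
    out = M @ positions[pos_idx]
    return int(np.argmax(items @ out))

recalled = [recall(i) for i in range(n_items)]
```

Retrieval here still succeeds because the cross-talk from overlapping positions is smaller than the matched signal, but that interference grows with every stored association, which is the superposition account of capacity limits in miniature.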


Journal ArticleDOI
TL;DR: There is indeed an active advantage in spatial learning, which manifests itself in the task-dependent acquisition of route and survey knowledge; idiothetic information appears necessary to reveal it, which may explain the mixed results in desktop virtual reality.
Abstract: It seems intuitively obvious that active exploration of a new environment will lead to better spatial learning than will passive exposure. However, the literature on this issue is decidedly mixed—in part, because the concept itself is not well defined. We identify five potential components of active spatial learning and review the evidence regarding their role in the acquisition of landmark, route, and survey knowledge. We find that (1) idiothetic information in walking contributes to metric survey knowledge, (2) there is little evidence as yet that decision making during exploration contributes to route or survey knowledge, (3) attention to place–action associations and relevant spatial relations contributes to route and survey knowledge, although landmarks and boundaries appear to be learned without effort, (4) route and survey information are differentially encoded in subunits of working memory, and (5) there is preliminary evidence that mental manipulation of such properties facilitates spatial learning. Idiothetic information appears to be necessary to reveal the influence of attention and, possibly, decision making in survey learning, which may explain the mixed results in desktop virtual reality. Thus, there is indeed an active advantage in spatial learning, which manifests itself in the task-dependent acquisition of route and survey knowledge.

191 citations


Journal ArticleDOI
TL;DR: Application of this test reveals evidence of publication bias in two prominent investigations from experimental psychology that have purported to reveal evidence of extrasensory perception and to indicate severe limitations of the scientific method.
Abstract: Empirical replication has long been considered the final arbiter of phenomena in science, but replication is undermined when there is evidence for publication bias. Evidence for publication bias in a set of experiments can be found when the observed number of rejections of the null hypothesis exceeds the expected number of rejections. Application of this test reveals evidence of publication bias in two prominent investigations from experimental psychology that have purported to reveal evidence of extrasensory perception and to indicate severe limitations of the scientific method. The presence of publication bias suggests that those investigations cannot be taken as proper scientific studies of such phenomena, because critical data are not available to the field. Publication bias could partly be avoided if experimental psychologists started using Bayesian data analysis techniques.

168 citations


Journal ArticleDOI
TL;DR: This article shows how an investigation of the effect sizes from reported experiments can test for publication bias by looking for too much successful replication, and demonstrates that using Bayesian methods of data analysis can reduce (and in some cases, eliminate) the occurrence of publication bias.
Abstract: Replication of empirical findings plays a fundamental role in science. Among experimental psychologists, successful replication enhances belief in a finding, while a failure to replicate is often interpreted to mean that one of the experiments is flawed. This view is wrong. Because experimental psychology uses statistics, empirical findings should appear with predictable probabilities. In a misguided effort to demonstrate successful replication of empirical findings and avoid failures to replicate, experimental psychologists sometimes report too many positive results. Rather than strengthen confidence in an effect, too much successful replication actually indicates publication bias, which invalidates entire sets of experimental findings. Researchers cannot judge the validity of a set of biased experiments because the experiment set may consist entirely of type I errors. This article shows how an investigation of the effect sizes from reported experiments can test for publication bias by looking for too much successful replication. Simulated experiments demonstrate that the publication bias test is able to discriminate biased experiment sets from unbiased experiment sets, but it is conservative about reporting bias. The test is then applied to several studies of prominent phenomena that highlight how publication bias contaminates some findings in experimental psychology. Additional simulated experiments demonstrate that using Bayesian methods of data analysis can reduce (and in some cases, eliminate) the occurrence of publication bias. Such methods should be part of a systematic process to remove publication bias from experimental psychology and reinstate the important role of replication as a final arbiter of scientific findings.

163 citations
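The test's logic can be sketched numerically: estimate each experiment's power from the pooled effect size, and note that the probability of an all-significant set is simply the product of those powers. The two-sample t design, effect size, and sample sizes below are illustrative assumptions, not figures from the paper:

```python
from scipy import stats

def power_two_sample_t(d, n_per_group, alpha=0.05):
    """Power of a two-sided, two-sample t test at effect size d."""
    df = 2 * n_per_group - 2
    nc = d * (n_per_group / 2) ** 0.5          # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # probability of landing beyond either critical value
    return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

def p_all_significant(d, ns):
    """Chance that EVERY experiment in a set rejects the null, given the
    pooled effect size; a small value for an all-significant published
    set is the signature of publication bias."""
    p = 1.0
    for n in ns:
        p *= power_two_sample_t(d, n)
    return p

# ten reported experiments, all significant, each with 30 per group
p_set = p_all_significant(d=0.5, ns=[30] * 10)
```

With moderate power per experiment, ten successes out of ten is far less likely than it looks, which is why the observed rejection count exceeding the expected count is evidence of bias.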


Journal ArticleDOI
TL;DR: This review consolidates prior findings concerning specific auditory–gustatory mappings, with special attention given to highlighting any conflicts in the existing experimental evidence and any potential caveats with regard to the most appropriate interpretation of prior studies.
Abstract: In this article, the rapidly growing body of research that has been published recently on the topic of crossmodal correspondences that involve auditory and gustatory/flavor stimuli is critically reviewed. The evidence demonstrates that people reliably match different tastes/flavors to auditory stimuli varying in both their psychoacoustic (e.g., pitch) and musical (e.g., timbre) properties. In order to stimulate further progress in this relatively young research field, the present article aims at consolidating prior findings concerning specific auditory–gustatory mappings, whereby special attention is given to highlighting (1) any conflicts in the existing experimental evidence and (2) any potential caveats with regard to the most appropriate interpretation of prior studies. Next, potential mechanisms underlying auditory–gustatory crossmodal correspondences are discussed. Finally, a number of potentially fruitful avenues for future research are outlined.

156 citations


Journal ArticleDOI
TL;DR: This result suggests that to resist capture, a specific target template must be accompanied by experience-dependent attentional tuning to distractor properties.
Abstract: Irrelevant salient distractors often capture attention, but given a sufficiently specific search template, these salient items no longer capture attention. In the present experiments, we investigated whether specific target templates are sufficient to resist capture, or whether experience with the salient distractors is also necessary. To test this hypothesis, observers completed four blocks of trials, each with a different-colored irrelevant singleton present on half of the trials. Color singletons captured attention early within a block, but after sufficient experience with the irrelevant singletons, those singletons no longer captured attention in the second halves of the blocks. This result suggests that to resist capture, a specific target template must be accompanied by experience-dependent attentional tuning to distractor properties.

150 citations


Journal ArticleDOI
TL;DR: It was found that a higher degree of media multitasking was correlated with better multisensory integration, and the fact that heavy media multitaskers are not deficient in all kinds of cognitive tasks suggests that media multitasking does not always hurt.
Abstract: Heavy media multitaskers have been found to perform poorly in certain cognitive tasks involving task switching, selective attention, and working memory. An account for this is that with a breadth-biased style of cognitive control, multitaskers tend to pay attention to various information available in the environment, without sufficient focus on the information most relevant to the task at hand. This cognitive style, however, may not cause a general deficit in all kinds of tasks. We tested the hypothesis that heavy media multitaskers would perform better in a multisensory integration task than would others, due to their extensive experience in integrating information from different modalities. Sixty-three participants filled out a questionnaire about their media usage and completed a visual search task with and without synchronous tones (pip-and-pop paradigm). It was found that a higher degree of media multitasking was correlated with better multisensory integration. The fact that heavy media multitaskers are not deficient in all kinds of cognitive tasks suggests that media multitasking does not always hurt.

128 citations


Journal ArticleDOI
TL;DR: Investigation of eye movement measures of first-language and second-language paragraph reading showed that amount of current L2 exposure is a key determinant of FEs and, thus, lexical activation, in both the L1 and L2.
Abstract: We used eye movement measures of first-language (L1) and second-language (L2) paragraph reading to investigate whether the degree of current L2 exposure modulates the relative size of L1 and L2 frequency effects (FEs). The results showed that bilinguals displayed larger L2 than L1 FEs during both early- and late-stage eye movement measures, which are taken to reflect initial lexical access and postlexical access, respectively. Moreover, the magnitude of L2 FEs was inversely related to current L2 exposure, such that lower levels of L2 exposure led to larger L2 FEs. In contrast, during early-stage reading measures, bilinguals with higher levels of current L2 exposure showed larger L1 FEs than did bilinguals with lower levels of L2 exposure, suggesting that increased L2 experience modifies the earliest stages of L1 lexical access. Taken together, the findings are consistent with implicit learning accounts (e.g., Monsell, 1991), the weaker links hypothesis (Gollan, Montoya, Cera, Sandoval, Journal of Memory and Language, 58:787-814, 2008), and current bilingual visual word recognition models (e.g., the bilingual interactive activation model plus [BIA+]; Dijkstra & van Heuven, Bilingualism: Language and Cognition, 5:175-197, 2002). Thus, amount of current L2 exposure is a key determinant of FEs and, thus, lexical activation, in both the L1 and L2.

Journal ArticleDOI
TL;DR: This work provides a simple, intuitive generalization of the Loftus and Masson method that allows for assessment of the circularity assumption in the repeated measures ANOVA.
Abstract: Repeated measures designs are common in experimental psychology. Because of the correlational structure in these designs, the calculation and interpretation of confidence intervals is nontrivial. One solution was provided by Loftus and Masson (Psychonomic Bulletin & Review 1:476–490, 1994). This solution, although widely adopted, has the limitation of implying same-size confidence intervals for all factor levels, and therefore does not allow for the assessment of variance homogeneity assumptions (i.e., the circularity assumption, which is crucial for the repeated measures ANOVA). This limitation and the method’s perceived complexity have sometimes led scientists to use a simplified variant, based on a per-subject normalization of the data (Bakeman & McArthur, Behavior Research Methods, Instruments, & Computers 28:584–589, 1996; Cousineau, Tutorials in Quantitative Methods for Psychology 1:42–45, 2005; Morey, Tutorials in Quantitative Methods for Psychology 4:61–64, 2008; Morrison & Weaver, Behavior Research Methods, Instruments, & Computers 27:52–56, 1995). We show that this normalization method leads to biased results and is uninformative with regard to circularity. Instead, we provide a simple, intuitive generalization of the Loftus and Masson method that allows for assessment of the circularity assumption.
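The per-subject normalization variant being critiqued subtracts each subject's mean and restores the grand mean before computing condition-wise error bars; a minimal sketch, including Morey's (2008) correction factor, is below. Note the abstract's point that this procedure can be biased and says nothing about circularity; the sketch shows the mechanics, not an endorsement:

```python
import numpy as np

def normalized_within_sem(data):
    """Cousineau-style per-subject normalization with Morey's correction.
    `data` is a subjects x conditions array; returns one SEM per condition."""
    n_subj, n_cond = data.shape
    # remove between-subject variation: center each subject, restore grand mean
    normalized = data - data.mean(axis=1, keepdims=True) + data.mean()
    sem = normalized.std(axis=0, ddof=1) / np.sqrt(n_subj)
    # correction for the correlation the normalization induces
    return sem * np.sqrt(n_cond / (n_cond - 1))
```

When subjects differ only by an additive offset, the normalized error bars collapse to zero, which illustrates both the appeal of the method and why it cannot reveal violations of circularity.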

Journal ArticleDOI
TL;DR: In four experiments, the impact of nonprobative information on truthiness was examined and it was shown that photos and verbal information similarly inflated truthiness, suggesting that the effect is not peculiar to photographs per se.
Abstract: When people evaluate claims, they often rely on what comedian Stephen Colbert calls “truthiness,” or subjective feelings of truth. In four experiments, we examined the impact of nonprobative information on truthiness. In Experiments 1A and 1B, people saw familiar and unfamiliar celebrity names and, for each, quickly responded “true” or “false” to the (between-subjects) claim “This famous person is alive” or “This famous person is dead.” Within subjects, some of the names appeared with a photo of the celebrity engaged in his or her profession, whereas other names appeared alone. For unfamiliar celebrity names, photos increased the likelihood that the subjects would judge the claim to be true. Moreover, the same photos inflated the subjective truth of both the “alive” and “dead” claims, suggesting that photos did not produce an “alive bias” but rather a “truth bias.” Experiment 2 showed that photos and verbal information similarly inflated truthiness, suggesting that the effect is not peculiar to photographs per se. Experiment 3 demonstrated that nonprobative photos can also enhance the truthiness of general knowledge claims (e.g., “Giraffes are the only mammals that cannot jump”). These effects add to a growing literature on how nonprobative information can inflate subjective feelings of truth.

Journal ArticleDOI
TL;DR: An associative model is presented that accounts for both results using competing familiarity and uncertainty biases and indicates that learners quickly infer new pairs in late training on the basis of their knowledge of pretrained pairs, exhibiting ME.
Abstract: People can learn word–referent pairs over a short series of individually ambiguous situations containing multiple words and referents (Yu & Smith, 2007, Cognition 106: 1558–1568). Cross-situational statistical learning relies on the repeated co-occurrence of words with their intended referents, but simple co-occurrence counts cannot explain the findings. Mutual exclusivity (ME: an assumption of one-to-one mappings) can reduce ambiguity by leveraging prior experience to restrict the number of word–referent pairings considered but can also block learning of non-one-to-one mappings. The present study first trained learners on one-to-one mappings with varying numbers of repetitions. In late training, a new set of word–referent pairs were introduced alongside pretrained pairs; each pretrained pair consistently appeared with a new pair. Results indicate that (1) learners quickly infer new pairs in late training on the basis of their knowledge of pretrained pairs, exhibiting ME; and (2) learners also adaptively relax the ME bias and learn two-to-two mappings involving both pretrained and new words and objects. We present an associative model that accounts for both results using competing familiarity and uncertainty biases.
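The baseline co-occurrence-counting strategy that the abstract takes as its starting point can be sketched in a few lines. The trial structure below is a made-up illustration, not the study's actual design:

```python
import numpy as np

n_words = 4
# each trial shows two words together with their two referents, unordered,
# so any single trial is ambiguous about which word maps to which referent
trials = [({0, 1}, {0, 1}), ({2, 3}, {2, 3}), ({0, 2}, {0, 2}),
          ({1, 3}, {1, 3}), ({0, 3}, {0, 3}), ({1, 2}, {1, 2})]

counts = np.zeros((n_words, n_words))      # word x referent co-occurrences
for words, referents in trials:
    for w in words:
        for r in referents:
            counts[w, r] += 1

# across trials the true pairings co-occur most often, so an argmax over
# accumulated counts recovers the one-to-one mapping
learned = counts.argmax(axis=1)
```

Raw counts succeed here because the design is one-to-one; the paper's point is that such counts cannot capture the mutual-exclusivity inferences and their adaptive relaxation that human learners show.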

Journal ArticleDOI
TL;DR: For theoretical, practical, and empirical reasons, having half of the trials be congruent in a four-alternative task aimed at providing unambiguous evidence of sequential modulation should be avoided.
Abstract: Sequential modulation is the finding that the sizes of several selective-attention phenomena--namely, the Simon, flanker, and Stroop effects--are larger following congruent trials than following incongruent trials. In order to rule out relatively uninteresting explanations of sequential modulation that are based on a variety of stimulus- and response-repetition confounds, a four-alternative forced choice task must be used, such that all trials with any kind of repetition can be omitted from the analysis. When a four-alternative task is used, the question arises as to whether to have the proportions of congruent and incongruent trials be set by chance (and, therefore, be 25% congruent and 75% incongruent) or to raise the proportion of congruent trials to 50%, so that it matches the proportion of incongruent trials. In this observation, it is argued that raising the proportion of congruent trials to 50% should not be done. For theoretical, practical, and empirical reasons, having half of the trials be congruent in a four-alternative task aimed at providing unambiguous evidence of sequential modulation should be avoided.
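The 25%/75% split the abstract mentions follows from simple counting: with four alternatives, a randomly chosen target–distractor pairing matches one time in four. A two-line enumeration makes this concrete:

```python
from itertools import product

alternatives = range(4)
# pair every possible target with every possible distractor value;
# a trial is congruent when the two match
pairs = list(product(alternatives, repeat=2))
congruent = sum(t == d for t, d in pairs)
proportion = congruent / len(pairs)   # 4 of the 16 pairings
```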

Journal ArticleDOI
TL;DR: Any task requires construction of a mental control program that aids in segregating and assembling multiple task parts and their controlling rules, and fluid intelligence is linked closely to the efficiency of constructing such programs, especially when behavior is complex and novel.
Abstract: Many varieties of working memory have been linked to fluid intelligence. In Duncan et al. (Journal of Experimental Psychology:General 137:131–148, 2008), we described limited working memory for new task rules: When rules are complex, some may fail in their control of behavior, though they are often still available for explicit recall. Unlike other kinds of working memory, load is determined in this case not by real-time performance demands, but by the total complexity of the task instructions. Here, we show that the correlation with fluid intelligence is stronger for this aspect of working memory than for several other, more traditional varieties—including simple and complex spans and a test of visual short-term memory. Any task, we propose, requires construction of a mental control program that aids in segregating and assembling multiple task parts and their controlling rules. Fluid intelligence is linked closely to the efficiency of constructing such programs, especially when behavior is complex and novel.

Journal ArticleDOI
TL;DR: This novel result suggests that visuospatial training not only can impact performance on measures of spatial functioning, but also can affect performance in content areas in which these abilities are utilized.
Abstract: Although previous research has demonstrated that performance on visuospatial assessments can be enhanced through relevant experience, an unaddressed question is whether such experience also produces a similar increase in target domains (such as science learning) where visuospatial abilities are directly relevant for performance. In the present study, participants completed either spatial or nonspatial training via interaction with video games and were then asked to read and learn about the geologic topic of plate tectonics. Results replicate the benefit of playing appropriate video games in enhancing visuospatial performance and demonstrate that this facilitation also manifests itself in learning science topics that are visuospatial in nature. This novel result suggests that visuospatial training not only can impact performance on measures of spatial functioning, but also can affect performance in content areas in which these abilities are utilized.

Journal ArticleDOI
TL;DR: A model including sensory and decisional parameters that places these tasks in a common framework that allows studying their implications on observed performance is presented, and the fit is satisfactory, although model parameters are more accurately estimated with SJ tasks.
Abstract: Research on the perception of temporal order uses either temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks. In both, two stimuli are presented with some temporal delay; observers judge which stimulus came first in TOJ tasks and whether the stimuli were simultaneous in SJ tasks. Results generally differ across tasks, raising concerns about whether they measure the same processes. We present a model including sensory and decisional parameters that places these tasks in a common framework that allows studying their implications on observed performance. TOJ tasks imply specific decisional components that explain the discrepancy of results obtained with TOJ and SJ tasks. The model is also tested against published data on audiovisual temporal-order judgments, and the fit is satisfactory, although model parameters are more accurately estimated with SJ tasks. Measures of latent point of subjective simultaneity and latent sensitivity are defined that are invariant across tasks by isolating the sensory parameters governing observed performance, whereas decisional parameters vary across tasks and account for observed differences across them. Our analyses concur with other evidence advising against the use of TOJ tasks in research on perception of temporal order.
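One common way to formalize a shared sensory stage with task-specific decision rules, in the spirit of such models though not necessarily with the paper's exact equations, is to treat the perceived asynchrony as Gaussian and apply a different decision rule per task. All parameter values below are illustrative assumptions:

```python
from scipy.stats import norm

# shared sensory parameters: processing-time shift and noise, in ms
tau, sigma = 20.0, 60.0

def p_toj(soa, c=0.0):
    """TOJ task: probability of an 'A first' response at a given SOA,
    i.e. the chance the perceived asynchrony exceeds criterion c."""
    return norm.sf(c, loc=soa + tau, scale=sigma)

def p_sj(soa, delta=80.0):
    """SJ task: probability of a 'synchronous' response, i.e. the chance
    the perceived asynchrony falls inside the window [-delta, +delta]."""
    d = soa + tau
    return norm.cdf(delta, loc=d, scale=sigma) - norm.cdf(-delta, loc=d, scale=sigma)
```

The sensory parameters (tau, sigma) are shared across tasks, while the decisional parameters (c, delta) are task-specific, so the two tasks can disagree even when the underlying timing mechanism is identical.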

Journal ArticleDOI
TL;DR: The results document that attentional capture by WM contents is partly, but not fully, malleable by top-down control, which appears to adjust the state of the WM contents to optimize search behavior.
Abstract: Across many studies, researchers have found that representations in working memory (WM) can guide visual attention toward items that match the features of the WM contents. While some researchers have contended that this occurs involuntarily, others have suggested that the impact of WM contents on attention can be strategically controlled. Here, we varied the probability that WM items would coincide with either targets or distractors in a visual search task to examine (1) whether participants could intentionally enhance or inhibit the influence of WM items on attention and (2) whether cognitive control over WM biases would also affect access to the memory contents in a surprise recognition test. We found visual search to be faster when the WM item coincided with the search target, and this effect was enhanced when the memory item reliably predicted the location of the target. Conversely, visual search was slowed when the memory item coincided with a search distractor, and this effect was diminished, but not abolished, when the memory item was reliably associated with distractors. This strategic dampening of the influence of WM items on attention came at a price to memory, however, as participants were slowest to perform WM recognition tests on blocks in which the WM contents were consistently invalid. These results document that attentional capture by WM contents is partly, but not fully, malleable by top-down control, which appears to adjust the state of the WM contents to optimize search behavior. These data illustrate the role of cognitive control in modulating the strength of WM biases of selection, and they support a tight coupling between WM and attention.

Journal ArticleDOI
TL;DR: It is posited that this bias may arise due to principles of object perception, and it is shown how it has downstream implications for decision making.
Abstract: Perhaps the most common method of depicting data, in both scientific communication and popular media, is the bar graph. Bar graphs often depict measures of central tendency, but they do so asymmetrically: A mean, for example, is depicted not by a point, but by the edge of a bar that originates from a single axis. Here we show that this graphical asymmetry gives rise to a corresponding cognitive asymmetry. When viewers are shown a bar depicting a mean value and are then asked to judge the likelihood of a particular data point being part of its underlying distribution, viewers judge points that fall within the bar as being more likely than points equidistant from the mean, but outside the bar, as if the bar somehow "contained" the relevant data. This "within-the-bar bias" occurred (a) for graphs with and without error bars, (b) for bars that originated from both lower and upper axes, (c) for test points with equally extreme numeric labels, (d) both from memory (when the bar was no longer visible) and in online perception (while the bar was visible during the judgment), (e) both within and between subjects, and (f) in populations including college students, adults from the broader community, and online samples. We posit that this bias may arise due to principles of object perception, and we show how it has downstream implications for decision making.

Journal ArticleDOI
TL;DR: It is argued that the general approach of using psychological theory to guide the specification of informative prior distributions is widely applicable and should be routinely used in psychological modeling.
Abstract: Formal models in psychology are used to make theoretical ideas precise and allow them to be evaluated quantitatively against data. We focus on one important, but under-used and incorrectly maligned, method for building theoretical assumptions into formal models, offered by the Bayesian statistical approach. This method involves capturing theoretical assumptions about the psychological variables in models by placing informative prior distributions on the parameters representing those variables. We demonstrate this approach of casting basic theoretical assumptions in an informative prior by considering a case study that involves the generalized context model (GCM) of category learning. We capture existing theorizing about the optimal allocation of attention in an informative prior distribution to yield a model that is higher in psychological content and lower in complexity than the standard implementation. We also highlight that formalizing psychological theory within an informative prior distribution allows standard Bayesian model selection methods to be applied without concerns about the sensitivity of results to the prior. We then use Bayesian model selection to test the theoretical assumptions about optimal allocation formalized in the prior. We argue that the general approach of using psychological theory to guide the specification of informative prior distributions is widely applicable and should be routinely used in psychological modeling.
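The general recipe, encoding theory in an informative prior and letting marginal likelihoods arbitrate, can be shown in miniature with a beta-binomial example. The numbers are illustrative; the paper's actual case study is the GCM, not this toy:

```python
from math import comb
import numpy as np
from scipy.special import betaln

def log_marginal(k, n, a, b):
    """Log marginal likelihood of k successes in n trials under a
    Beta(a, b) prior on the success probability (beta-binomial)."""
    return np.log(comb(n, k)) + betaln(a + k, b + n - k) - betaln(a, b)

k, n = 8, 10
informative = log_marginal(k, n, a=8, b=2)   # theory: success rate near .8
vague = log_marginal(k, n, a=1, b=1)         # flat, theory-free prior
bayes_factor = np.exp(informative - vague)
```

When the data agree with the theory baked into the prior, the informative model wins the comparison despite being "smaller": it spends its prior mass where the theory says the data should be, which is the precise sense in which informative priors raise psychological content while lowering complexity.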

Journal ArticleDOI
TL;DR: The results suggest that bilingual advantages for word learning may be rooted, at least in part, in bilinguals’ greater sensitivity to semantic information during learning.
Abstract: Previous studies have demonstrated that bilingualism can facilitate novel-word learning. However, the mechanisms behind this bilingual advantage remain unknown. Here, we examined whether bilinguals may be more sensitive to semantic information associated with the novel words. To that end, we manipulated the concreteness of the referent in the word-learning paradigm, since concrete words have been shown to activate the semantic system more robustly than abstract words do. The results revealed that the bilingual advantage was stronger for novel words learned in association with concrete rather than abstract referents. These findings suggest that bilingual advantages for word learning may be rooted, at least in part, in bilinguals’ greater sensitivity to semantic information during learning.

Journal ArticleDOI
TL;DR: Five classes of well-established latency mechanisms are described and analyzed that are consistent with nDPs—exhaustive processing models, correlated stage models, mixture models, cascade models, and parallel channels models—and the implications of these analyses for the interpretation of DPs are discussed.
Abstract: Delta plots (DPs) graphically compare reaction time (RT) quantiles obtained under two experimental conditions. In some research areas (e.g., Simon effects), decreasing delta plots (nDPs) have consistently been found, indicating that the experimental effect is largest at low quantiles and decreases for higher quantiles. nDPs are unusual and intriguing: They imply that RT in the faster condition is more variable, a pattern predicted by few standard RT models. We describe and analyze five classes of well-established latency mechanisms that are consistent with nDPs—exhaustive processing models, correlated stage models, mixture models, cascade models, and parallel channels models—and discuss the implications of our analyses for the interpretation of DPs. DPs generally do not imply any specific processing model; therefore, it is more fruitful to start from a specific quantitative model and to compare the DP it predicts with empirical data.
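A delta plot itself is simple to compute: take matched quantiles of the RT distributions in the two conditions and plot the quantile difference (the effect) against the quantile mean. The sketch below uses simulated data; the Gaussian RT distributions and their parameters are invented, with the faster condition given the larger variance so that a decreasing delta plot emerges, matching the pattern the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical RT samples (ms) for two conditions of a Simon-like task.
# The incompatible condition is slower on average but LESS variable,
# which is the configuration that produces a decreasing delta plot.
rt_compatible   = rng.normal(loc=450, scale=80, size=2000)
rt_incompatible = rng.normal(loc=490, scale=55, size=2000)

# At each quantile, pair the across-condition mean RT (x-axis)
# with the quantile difference, i.e. the effect size (y-axis).
probs = np.linspace(0.1, 0.9, 9)
q_comp = np.quantile(rt_compatible, probs)
q_incomp = np.quantile(rt_incompatible, probs)

delta = q_incomp - q_comp            # effect at each quantile
mean_rt = (q_incomp + q_comp) / 2    # x-axis of the delta plot

for m, d in zip(mean_rt, delta):
    print(f"mean RT {m:6.1f} ms   effect {d:6.1f} ms")
```

The printed effect shrinks from the lowest to the highest quantile, illustrating the abstract's point that a decreasing delta plot implies greater RT variability in the faster condition.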

Journal ArticleDOI
TL;DR: Fits of the diffusion model showed that criteria differences persisted in the fixed-time condition, suggesting that age differences are not solely based on differences in task goals.
Abstract: In two-choice decision tasks, Starns and Ratcliff (Psychology and Aging 25: 377–390, 2010) showed that older adults are farther from the optimal speed–accuracy trade-off than young adults. They suggested that the age effect resulted from differences in task goals, with young participants focused on balancing speed and accuracy and older participants focused on minimizing errors. We compared speed–accuracy criteria with a standard procedure (blocks that had a fixed number of trials) to a condition in which blocks lasted a fixed amount of time and participants were instructed to get as many correct responses as possible within the time limit—a goal that explicitly required balancing speed and accuracy. Fits of the diffusion model showed that criteria differences persisted in the fixed-time condition, suggesting that age differences are not solely based on differences in task goals. Also, both groups produced more conservative criteria in difficult conditions when it would have been optimal to be more liberal.
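The diffusion model referred to here treats a two-choice decision as noisy evidence accumulation toward one of two boundaries, with boundary separation playing the role of the speed–accuracy criterion. Below is a minimal random-walk sketch, with all parameter values invented for illustration, showing that widening the boundaries (a more conservative criterion) slows responses but raises accuracy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Crude Euler discretization of a diffusion process with symmetric
# boundaries at +boundary and -boundary; drift favors the correct (+) side.
# Parameter values are invented for illustration only.
def simulate(drift, boundary, n_trials=500, dt=0.005, sigma=1.0):
    rts, correct = [], []
    sqrt_dt = np.sqrt(dt)
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + sigma * sqrt_dt * rng.standard_normal()
            t += dt
        rts.append(t)
        correct.append(x > 0)
    return np.mean(rts), np.mean(correct)

rt_liberal, acc_liberal = simulate(drift=1.0, boundary=0.8)
rt_conservative, acc_conservative = simulate(drift=1.0, boundary=1.6)
print(f"liberal:      RT {rt_liberal:.3f} s, accuracy {acc_liberal:.3f}")
print(f"conservative: RT {rt_conservative:.3f} s, accuracy {acc_conservative:.3f}")
```

Fitting the model to data recovers each group's boundary separation, which is how the study can ask whether older adults' conservative criteria persist even when the task goal explicitly rewards balancing speed and accuracy.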

Journal ArticleDOI
TL;DR: The results indicate that high memory ability increases habituation rate, and they support theories proposing a role for cognitive control in habituation and in some forms of auditory distraction.
Abstract: Habituation of the orienting response is a pivotal part of selective attention, and previous research has related working memory capacity (WMC) to attention control. Against this background, the purpose of this study was to investigate whether individual differences in WMC contribute to habituation rate. The participants categorized visual targets across six blocks of trials. Each target was preceded either by a standard sound or, on rare trials, by a deviant. The magnitude of the deviation effect (i.e., prolonged response time when the deviant was presented) was relatively large in the beginning but attenuated toward the end. There was no relationship between WMC and the deviation effect at the beginning, but there was at the end, and greater WMC was associated with greater habituation. These results indicate that high memory ability increases habituation rate, and they support theories proposing a role for cognitive control in habituation and in some forms of auditory distraction.

Journal ArticleDOI
TL;DR: In two studies with large samples, the relationship between multiple working memory measures and inattentional blindness was tested and individual differences in working memory predicted the ability to perform an attention-demanding tracking task, but did not predict the likelihood of noticing an unexpected object present during the task.
Abstract: Individual differences in working memory predict many aspects of cognitive performance, especially for tasks that demand focused attention. One negative consequence of focused attention is inattentional blindness, the failure to notice unexpected objects when attention is engaged elsewhere. Yet, the relationship between individual differences in working memory and inattentional blindness is unclear; some studies have found that higher working memory capacity is associated with greater noticing, but others have found no direct association. Given the theoretical and practical significance of such individual differences, more definitive tests are needed. In two studies with large samples, we tested the relationship between multiple working memory measures and inattentional blindness. Individual differences in working memory predicted the ability to perform an attention-demanding tracking task, but did not predict the likelihood of noticing an unexpected object present during the task. We discuss the reasons why we might not expect such individual differences in noticing and why other studies may have found them.

Journal ArticleDOI
TL;DR: The results suggest that privileged knowledge does shape language use, but crucially, that the degree to which the addressee’s perspective is considered is shaped by the relevance of theAddressee's perspective to the utterance goals.
Abstract: We examined the extent to which speakers take into consideration the addressee’s perspective in language production. Previous research on this process had revealed clear deficits (Horton & Keysar, Cognition 59:91–117, 1996; Wardlow Lane & Ferreira, Journal of Experimental Psychology: Learning, Memory, and Cognition 34:1466–1481, 2008). Here, we evaluated a new hypothesis—that the relevance of the addressee’s perspective depends on the speaker’s goals. In two experiments, Korean speakers described a target object in situations in which the perspective status of a competitor object (e.g., a large plate when describing a smaller plate) was manipulated. In Experiment 1, we examined whether speakers would use scalar-modified expressions even when the competitor was hidden from the addressee. The results demonstrated that information from both the speaker’s and the addressee’s perspectives influenced production. In Experiment 2, we examined whether utterance goals modulate this process. The results indicated that when a speaker makes a request, the addressee’s perspective has a stronger influence than it does when the speaker informs the addressee. These results suggest that privileged knowledge does shape language use, but crucially, that the degree to which the addressee’s perspective is considered is shaped by the relevance of the addressee’s perspective to the utterance goals.

Journal ArticleDOI
TL;DR: On a final test assessing their JRD performance, the participants who learned through test outperformed those who learnedthrough study and when the final test assessed memory from vantage points that had never been practiced during the initial JRD.
Abstract: Many studies have reported that tests are beneficial for learning (e.g., Roediger & Karpicke, 2006a). However, the majority of studies on the testing effect have been limited to a combination of relatively simple verbal tasks and final tests that assessed memory for the same material that had originally been tested. The present study explored whether testing is beneficial for complex spatial memory and whether these benefits hold for both retention and transfer. After encoding a three-dimensional layout of objects presented in a virtual environment, participants completed a judgment-of-relative-direction (JRD) task in which they imagined standing at one object, facing a second object, and pointed to a third object from the imagined perspective. Some participants completed this task by relying on memory for the previously encoded layout (i.e., the test conditions), whereas for others the location of the third object was identified ahead of time, so that retrieval was not required (i.e., the study condition). On a final test assessing their JRD performance, the participants who learned through test outperformed those who learned through study. This was true even when corrective feedback was not provided on the initial JRD task and when the final test assessed memory from vantage points that had never been practiced during the initial JRD.

Journal ArticleDOI
TL;DR: The study suggests that the reliability of the latency–confidence association in problem solving depends on the strength of the inverse relationship between latency and accuracy in the particular task.
Abstract: Confidence in answers is known to be sensitive to the fluency with which answers come to mind. One aspect of fluency is response latency. Latency is often a valid cue for accuracy, showing an inverse relationship with both accuracy rates and confidence. The present study examined the independent latency–confidence association in problem-solving tasks. The tasks were ecologically valid situations in which latency showed no validity, moderate validity, and high validity as a predictor of accuracy. In Experiment 1, misleading problems, which often elicit initial wrong solutions, were answered in open-ended and multiple-choice test formats. Under the open-ended test format, latency was absolutely not valid in predicting accuracy: Quickly and slowly provided solutions had a similar chance of being correct. Under the multiple-choice test format, latency predicted accuracy better. In Experiment 2, nonmisleading problems were used; here, latency was highly valid in predicting accuracy. A breakdown into correct and incorrect solutions allowed examination of the independent latency–confidence relationship when latency necessarily had no validity in predicting accuracy. In all conditions, regardless of latency’s validity in predicting accuracy, confidence was persistently sensitive to latency: The participants were more confident in solutions provided quickly than in those that involved lengthy thinking. The study suggests that the reliability of the latency–confidence association in problem solving depends on the strength of the inverse relationship between latency and accuracy in the particular task.

Journal ArticleDOI
TL;DR: Judges made under instructions to respond intuitively were influenced by the base rates and took the same length of time in the two conditions, suggesting that the use of base rates is routine and effortless and that base rate “neglect” is really a mixture of two strategies.
Abstract: We tested models of base rate “neglect” using a novel paradigm. Participants (N = 62) judged the probability that a hypothetical person belonged to one of two categories (e.g., nurse/doctor) on the basis of either a personality description alone (NoBR) or the personality description and a base rate probability (BR). When base rates and descriptions were congruent, judgments in the BR condition were higher and more uniform than those in the NoBR condition. In contrast, base rates had a polarizing effect on judgments when they were incongruent with the descriptions, such that estimates were either consistent with the base rates or discrepant with them. These data suggest that the form of base rate use (i.e., whether base rates will be integrated with diagnostic information) is context dependent. In addition, judgments made under instructions to respond intuitively were influenced by the base rates and took the same length of time in the two conditions. These data suggest that the use of base rates is routine and effortless and that base rate “neglect” is really a mixture of two strategies, one that is informed primarily by the base rate and the other by the personality description.
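How much an incongruent base rate should move a judgment under full Bayesian integration can be illustrated with the odds form of Bayes' rule. The likelihood ratio and base-rate values below are invented for illustration; they are not taken from the study's materials.

```python
# Odds-form Bayes' rule for the nurse/doctor judgment task.
def posterior(base_rate, likelihood_ratio):
    """P(doctor | description), given the base rate P(doctor) and the
    likelihood ratio P(description | doctor) / P(description | nurse)."""
    prior_odds = base_rate / (1 - base_rate)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# A description moderately suggestive of "doctor" (assumed LR = 3).
lr = 3.0

# Neglecting the base rate amounts to assuming P(doctor) = 0.5 ...
print(posterior(0.5, lr))    # 0.75
# ... whereas integrating an incongruent base rate pulls the estimate down.
print(posterior(0.1, lr))    # approx. 0.25
```

The two printed values mirror the polarization the abstract describes: a judge relying on the description alone lands near the first estimate, while one integrating the incongruent base rate lands near the second.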