Author

Albert-Georg Lang

Bio: Albert-Georg Lang is an academic researcher from the University of Düsseldorf. The author has contributed to research on the topics of reference tone and sound localization, has an h-index of 5, and has co-authored 7 publications receiving 14,990 citations.

Papers
Journal Article
TL;DR: In the new version, procedures to analyze the power of tests based on single-sample tetrachoric correlations, comparisons of dependent correlations, bivariate linear regression, multiple linear regression based on the random predictor model, logistic regression, and Poisson regression are added.
Abstract: G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
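The multiple regression case gives a feel for the underlying computation. The Python sketch below reproduces the textbook power calculation for the overall F test under the fixed-predictor model, assuming Cohen's f² as the effect size and the conventional noncentrality parameter λ = f²·N; it illustrates the general approach, not G*Power's implementation.

```python
from scipy import stats

def multiple_regression_power(f2, n, n_predictors, alpha=0.05):
    """Power of the overall F test in multiple linear regression
    (fixed-predictor model), with effect size f^2 = R^2 / (1 - R^2)."""
    df1 = n_predictors
    df2 = n - n_predictors - 1
    ncp = f2 * n  # noncentrality parameter under H1
    f_crit = stats.f.ppf(1 - alpha, df1, df2)  # critical F under H0
    return 1 - stats.ncf.cdf(f_crit, df1, df2, ncp)  # P(F > f_crit | H1)

# A medium effect (f^2 = 0.15) with 3 predictors and N = 77
print(multiple_regression_power(0.15, 77, 3))  # roughly 0.80
```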

20,778 citations

Journal Article
TL;DR: This work shows that disruption by steady-state distractors is significantly reduced after preexposure to the distractor item, directly confirming a central assumption of attentional explanations of auditory distraction, paralleling what has been shown earlier for changing-state sounds.
Abstract: Sound disrupts short-term retention in working memory even when the sound is completely irrelevant and has to be ignored. The dominant view in the literature is that this type of disruption is essentially limited to so-called changing-state distractor sequences with acoustic changes between successive distractor objects (e.g., "ABABABAB") and does not occur with so-called steady-state distractor sequences that are composed of a single repeated distractor object (e.g., "AAAAAAAA"). Here we show that this view can no longer be maintained. What is more, disruption by steady-state distractors is significantly reduced after preexposure to the distractor item, directly confirming a central assumption of attentional explanations of auditory distraction and paralleling what has been shown earlier for changing-state sounds. Taken together, the findings reported here are compatible with a graded attentional account of auditory disruption, and they are incompatible with the duplex-mechanism account.

23 citations

Journal Article
TL;DR: Evidence is provided that the different trading ratios may be an effect of a shift of attention toward the to-be-adjusted cue, and shifts of attention between the cues are proposed as a mechanism that could account for this finding.
Abstract: When interaural time differences and interaural intensity differences are set into opposition, the measured trading ratio depends on which of the cues is adjusted by the listener. This paper provides some evidence that the different trading ratios may be an effect of a shift of attention toward the to-be-adjusted cue. The experiments consisted of two phases. In the compensation phase, participants canceled out the effect of one preset binaural cue by adjusting a compensatory value of the other cue until the sound was located in the center. In the localization phase, participants assessed the virtual location of the sounds, again using the preset values of the fixed cue, but using the values of the other cue as previously adjusted. The sounds were no longer perceived as originating from the center. Instead, their perceived location was shifted back toward the location from which they appeared to originate before the adjustment. These findings suggest that during the compensation task the to-be-adjusted sound localization cue received an increased weight compared to the other cue. We propose shifts of attention between the cues as a mechanism that could account for this finding.

19 citations

Journal Article
TL;DR: With sufficient statistical power it can be shown that disruption increases not only when the distractor token set size increases from 1 to 2, but also when it increases from 2 to 8, both for one-syllable words and for brief instrumental sounds.
Abstract: Sequences of auditory objects such as one-syllable words or brief sounds disrupt serial recall of visually presented targets even when the auditory objects are completely irrelevant for the task at hand. The token set size effect is a label for the claim that disruption increases only when moving from a 1-token distractor sequence (e.g., "AAAAAAAA") to a token set size of 2 (e.g., "ABABABAB") but remains constant when moving from a token set size of 2 to a larger token set size (e.g., "ABCABCAB" or "DAGCFBEH"). Here we show that this claim was incorrect and based on experiments with insufficient statistical power. With sufficient statistical power it can be shown that disruption increases not only when the distractor token set size increases from 1 to 2, but also when it increases from 2 to 8, for one-syllable words (Experiment 1) and brief instrumental sounds (Experiment 2). These findings have implications for theories of auditory distraction that differ in their predictions about whether the distractor-induced performance decrement should (a) be determined only by acoustic differences between immediately adjacent distractor tokens (duplex-mechanism account) or (b) gradually increase as a function of the variability in the distractor set (attentional account). The present data are inconsistent with the duplex-mechanism account and support the attentional account.

16 citations

Journal Article
TL;DR: The use of broadband stimuli raises the question of whether the "shift-back effect" was caused by attentional shifts to the effect of the to-be-adjusted binaural cue or by attentional shifts to the particular frequency range that is most important for localization based on the to-be-adjusted cue.
Abstract: When interaural time differences and interaural intensity differences are set into opposition, the measured trading ratio depends on which cue is adjusted by the listener. In an earlier article [Lang, A.-G., and Buchner, A., J. Acoust. Soc. Am. 124, 3120–3131 (2008)], four experiments showed that when differences in one cue were compensated by differences in the other cue such that a broadband sound seemed to originate from a central position, the perceived location of the sound shifted back toward the location from which it appeared to originate before the adjustment. It was argued that attention shifted toward the effect of the to-be-adjusted cue during the compensation task, leading to an increased weighting of the to-be-adjusted cue. The use of broadband stimuli raises the question of whether the “shift-back effect” was caused by attentional shifts to the effect of the to-be-adjusted binaural cue or by attentional shifts to the particular frequency range that is most important for localization based on the to-be-adjusted cue.

12 citations


Cited by
Book
01 Jun 2015
TL;DR: A practical primer on how to calculate and report effect sizes for t-tests and ANOVAs such that effect sizes can be used in a priori power analyses and meta-analyses, along with a detailed overview of the similarities and differences between within- and between-subjects designs.
Abstract: Effect sizes are the most important outcome of empirical studies. Most articles on effect sizes highlight their importance for communicating the practical significance of results. For scientists themselves, effect sizes are most useful because they facilitate cumulative science. Effect sizes can be used to determine the sample size for follow-up studies, or to examine effects across studies. This article aims to provide a practical primer on how to calculate and report effect sizes for t-tests and ANOVAs such that effect sizes can be used in a priori power analyses and meta-analyses. Whereas many articles about effect sizes focus on between-subjects designs and address within-subjects designs only briefly, I provide a detailed overview of the similarities and differences between within- and between-subjects designs. I suggest that some research questions in experimental psychology examine inherently intra-individual effects, which makes effect sizes that incorporate the correlation between measures the best summary of the results. Finally, a supplementary spreadsheet is provided to make it as easy as possible for researchers to incorporate effect size calculations into their workflow.
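To make the distinction concrete, here is a minimal Python sketch of the two families of standardized mean differences most relevant to the primer's point: a pooled-SD d for independent groups and a d computed from difference scores for paired measures, the latter incorporating the correlation between the two measures. Function names are illustrative, not from the article.

```python
import numpy as np

def cohens_d_between(x, y):
    # d for two independent groups: mean difference over the pooled SD.
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) +
                  (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

def cohens_d_within(x, y):
    # d_z for paired measures: mean difference over the SD of the
    # difference scores, which builds the correlation between the
    # two measures into the effect size.
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.mean(diff) / np.std(diff, ddof=1)
```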

5,374 citations

Journal Article
TL;DR: It is argued that Bayes factors allow theory to be linked to data in a way that overcomes the weaknesses of the other approaches, providing a coherent approach to determining whether non-significant results support a null hypothesis over a theory or whether the data are just insensitive.
Abstract: No scientific conclusion follows automatically from a statistically non-significant result, yet people routinely use non-significant results to guide conclusions about the status of theories (or the effectiveness of practices). To know whether a non-significant result counts against a theory, or if it just indicates data insensitivity, researchers must use one of: power, intervals (such as confidence or credibility intervals), or else an indicator of the relative evidence for one theory over another, such as a Bayes factor. I argue Bayes factors allow theory to be linked to data in a way that overcomes the weaknesses of the other approaches. Specifically, Bayes factors use the data themselves to determine their sensitivity in distinguishing theories (unlike power), and they make use of those aspects of a theory’s predictions that are often easiest to specify (unlike power and intervals, which require specifying the minimal interesting value in order to address theory). Bayes factors provide a coherent approach to determining whether non-significant results support a null hypothesis over a theory, or whether the data are just insensitive. They allow accepting and rejecting the null hypothesis to be put on an equal footing. Concrete examples are provided to indicate the range of application of a simple online Bayes calculator, which reveal both the strengths and weaknesses of Bayes factors.
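As an illustration of the general logic (not the author's online calculator), a Bayes factor of this kind compares the likelihood of the observed effect under a theory's predictions with its likelihood under the point null. The Python sketch below assumes a directional theory whose plausible effect sizes are modeled as a half-normal distribution; that parameterization is an assumption for illustration.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def bayes_factor(obs_mean, obs_se, h1_sd):
    """Bayes factor for H1 (effect sizes ~ half-normal(0, h1_sd),
    a directional theory) against H0 (a point null at zero).
    Values above 1 favor H1; values below 1 favor H0."""
    def integrand(delta):
        # likelihood of the data at effect delta, weighted by H1's prior
        return (stats.norm.pdf(obs_mean, delta, obs_se)
                * 2 * stats.norm.pdf(delta, 0, h1_sd))
    like_h1, _ = quad(integrand, 0, np.inf)
    like_h0 = stats.norm.pdf(obs_mean, 0, obs_se)
    return like_h1 / like_h0

# Observed effect of 5 units (SE = 4); theory predicts effects around 10
print(bayes_factor(5, 4, 10))
```

Note how the data's sensitivity falls out of the computation: a large standard error flattens both likelihoods and pushes the ratio toward 1, signaling an insensitive result rather than support for either hypothesis.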

1,496 citations

Journal Article
TL;DR: The R package rmcorr is introduced, and its use for inferential statistics and visualization is demonstrated with two example datasets that illustrate research questions at different levels of analysis: intra-individual and inter-individual.
Abstract: Repeated measures correlation (rmcorr) is a statistical technique for determining the common within-individual association for paired measures assessed on two or more occasions for multiple individuals. Simple regression/correlation is often applied to non-independent observations or aggregated data; this may produce biased, specious results due to violation of independence and/or differing patterns between participants versus within participants. Unlike simple regression/correlation, rmcorr does not violate the assumption of independence of observations. Also, rmcorr tends to have much greater statistical power because neither averaging nor aggregation is necessary for an intra-individual research question. Rmcorr estimates the common regression slope, the association shared among individuals. To make rmcorr accessible, we provide background information on its assumptions and equations, visualization, power, and the tradeoffs of rmcorr compared with multilevel modeling. We introduce the R package (rmcorr) and demonstrate its use for inferential statistics and visualization with two example datasets. The examples are used to illustrate research questions at different levels of analysis: intra-individual and inter-individual. Rmcorr is well suited for research questions regarding the common linear association in paired repeated measures data. All results are fully reproducible.
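The core of the point estimate fits in a few lines. The Python sketch below assumes the package's ANCOVA formulation reduces to within-participant centering, which it does for the correlation itself; the R package additionally reports p-values and confidence intervals on degrees of freedom reduced to account for the repeated measures.

```python
import numpy as np
import pandas as pd

def rm_corr(df, x, y, subject):
    # Remove each participant's mean from both measures, then correlate
    # the within-participant deviations. This isolates the common
    # within-individual association and ignores between-participant
    # differences entirely.
    x_dev = df[x] - df.groupby(subject)[x].transform("mean")
    y_dev = df[y] - df.groupby(subject)[y].transform("mean")
    return np.corrcoef(x_dev, y_dev)[0, 1]
```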

1,135 citations

Journal Article
TL;DR: This paper provides psychophysiological researchers with recommendations and practical advice concerning experimental design, data analysis, and data reporting, so that researchers starting a project on HRV and cardiac vagal tone are well informed about the methodological considerations that allow their findings to contribute to knowledge advancement in their field.
Abstract: Psychophysiological research integrating heart rate variability (HRV) has increased during the last two decades, particularly given that HRV is able to index cardiac vagal tone. Vagal tone, which represents the activity of the parasympathetic system, is acknowledged to be linked with many phenomena relevant for psychophysiological research, including self-regulation at the cognitive, emotional, social, and health levels. The ease of HRV collection and measurement, coupled with the fact that it is relatively affordable, non-invasive, and pain free, makes it widely accessible to many researchers. This ease of access should not obscure the difficulty of interpreting HRV findings, which can easily be misconstrued; however, this can be controlled to some extent through correct methodological processes. Standards of measurement were developed two decades ago by a Task Force within HRV research, and recent reviews have updated several aspects of the Task Force paper. However, many methodological aspects of HRV in psychophysiological research must be considered if one aims to draw sound conclusions, and these issues have mainly been discussed in separate outlets, making it difficult to get a grasp on them, to interpret findings, and to compare results across laboratories. This paper aims to address that gap. It provides psychophysiological researchers with recommendations and practical advice concerning experimental designs, data analysis, and data reporting, ensuring that researchers starting a project with HRV and cardiac vagal tone are well informed regarding methodological considerations so that their findings can contribute to knowledge advancement in their field.
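Of the many indices discussed in this literature, the time-domain ones are simple enough to state directly. As a minimal example (a standard formula, not code from the paper), RMSSD, commonly used to quantify vagally mediated HRV, is computed from a series of inter-beat intervals as follows:

```python
import numpy as np

def rmssd(ibi_ms):
    """Root mean square of successive differences between inter-beat
    intervals (in milliseconds): a standard time-domain HRV index
    commonly used to quantify vagally mediated heart rate variability."""
    ibi = np.asarray(ibi_ms, dtype=float)
    return np.sqrt(np.mean(np.diff(ibi) ** 2))

# A short series of inter-beat intervals in milliseconds
print(rmssd([812, 790, 835, 801, 826, 799]))
```

As the paper's methodological discussion implies, such an index is only as good as the preprocessing behind it: ectopic beats and artifacts must be handled before the successive differences are taken.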

1,096 citations

Journal Article
TL;DR: The very reason such tasks produce robust and easily replicable experimental effects – low between-participant variability – makes their use as correlational tools problematic, and it is demonstrated that taking reliability estimates into account has the potential to qualitatively change theoretical conclusions.
Abstract: Individual differences in cognitive paradigms are increasingly employed to relate cognition to brain structure, chemistry, and function. However, such efforts are often unfruitful, even with the most well established tasks. Here we offer an explanation for failures in the application of robust cognitive paradigms to the study of individual differences. Experimental effects become well established – and thus those tasks become popular – when between-subject variability is low. However, low between-subject variability causes low reliability for individual differences, destroying replicable correlations with other factors and potentially undermining published conclusions drawn from correlational relationships. Though these statistical issues have a long history in psychology, they are widely overlooked in cognitive psychology and neuroscience today. In three studies, we assessed test-retest reliability of seven classic tasks: Eriksen Flanker, Stroop, stop-signal, go/no-go, Posner cueing, Navon, and Spatial-Numerical Association of Response Code (SNARC). Reliabilities ranged from 0 to .82, being surprisingly low for most tasks given their common use. As we predicted, this emerged from low variance between individuals rather than high measurement variance. In other words, the very reason such tasks produce robust and easily replicable experimental effects – low between-participant variability – makes their use as correlational tools problematic. We demonstrate that taking such reliability estimates into account has the potential to qualitatively change theoretical conclusions. The implications of our findings are that well-established approaches in experimental psychology and neuropsychology may not directly translate to the study of individual differences in brain structure, chemistry, and function, and alternative metrics may be required.
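The statistical argument can be made concrete with two textbook formulas (an illustration, not the authors' analysis): reliability as the share of observed-score variance that is true between-person variance, and Spearman's attenuation formula for the correlation observable between two imperfectly reliable measures.

```python
def reliability(var_between, var_error):
    # Share of observed-score variance that is true between-person
    # variance: shrinking between-participant variability drives
    # reliability down even when measurement error stays constant.
    return var_between / (var_between + var_error)

def observed_correlation(true_r, rel_x, rel_y):
    # Spearman's attenuation formula: the observable correlation is the
    # true correlation scaled by the square root of the product of the
    # two measures' reliabilities.
    return true_r * (rel_x * rel_y) ** 0.5

# Even a perfect true association looks weak with one unreliable task:
print(observed_correlation(1.0, reliability(1.0, 4.0), 0.8))  # 0.4
```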

869 citations