
Showing papers in "Music Perception in 2021"


Journal ArticleDOI
TL;DR: The results are consistent with the VSH; however, they suggest that a composite model, incorporating both harmonicity and spectral interference as predictors, would best account for variance in consonance judgments.
Abstract: Recently, Bowling, Purves, and Gill (2018a) found that individuals perceive chords with spectra resembling a harmonic series as more consonant. This is consistent with their vocal similarity hypothesis (VSH), the notion that the experience of consonance is based on an evolved preference for sounds that resemble human vocalizations. To rule out confounding between harmonicity and familiarity, we extended Bowling et al.’s (2018a) procedure to chords from the unconventional Bohlen-Pierce chromatic just (BPCJ) scale. We also assessed whether the association between harmonicity and consonance was moderated by timbre by presenting chords generated from either piano or clarinet samples. Results failed to straightforwardly replicate this association; however, evidence of a positive correlation between harmonicity and consonance did emerge across timbres following post hoc exclusion of chords containing intervals that were particularly similar to conventional equal-tempered dyads. Supplementary regression analyses using a more comprehensive measure of harmonicity confirmed its positive association with consonance ratings of BPCJ chords, yet also showed that spectral interference independently contributed to these ratings. In sum, our results are consistent with the VSH; however, they also suggest that a composite model, incorporating both harmonicity and spectral interference as predictors, would best account for variance in consonance judgments.
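To make the "composite model" idea concrete, here is a minimal sketch of the kind of joint regression the abstract describes, predicting mean consonance ratings from harmonicity and spectral interference together. The column names and values are hypothetical placeholders, not the authors' data.

```python
# Minimal sketch of a "composite" consonance model: regress mean consonance
# ratings on harmonicity and spectral interference jointly.
# All data below are invented placeholders, not the study's dataset.
import pandas as pd
import statsmodels.formula.api as smf

chords = pd.DataFrame({
    "consonance":   [4.2, 3.1, 5.0, 2.4, 3.8, 4.5, 2.9, 3.3],  # mean rating per chord
    "harmonicity":  [0.81, 0.62, 0.90, 0.45, 0.70, 0.85, 0.55, 0.60],
    "interference": [0.20, 0.41, 0.12, 0.55, 0.30, 0.18, 0.47, 0.38],
})

model = smf.ols("consonance ~ harmonicity + interference", data=chords).fit()
print(model.summary())  # each predictor may contribute independent variance
```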

11 citations


Journal ArticleDOI
TL;DR: The authors compared the retrieval characteristics, content, and emotions of MEAMs to television-evoked autobiographical memories (TEAMs) in an online sample of 657 participants who were representative of the British adult population on age, gender, income, and education.
Abstract: Music can be a potent cue for autobiographical memories in both everyday and clinical settings. Understanding the extent to which music may have privileged access to aspects of our personal histories requires critical comparisons to other types of memories and exploration of how music-evoked autobiographical memories (MEAMs) vary across individuals. We compared the retrieval characteristics, content, and emotions of MEAMs to television-evoked autobiographical memories (TEAMs) in an online sample of 657 participants who were representative of the British adult population on age, gender, income, and education. Each participant reported details of a recent MEAM and a recent TEAM experience. MEAMs exhibited significantly greater episodic reliving, personal significance, and social content than TEAMs, and elicited more positive and intense emotions. The majority of these differences between MEAMs and TEAMs persisted in an analysis of a subset of responses in which the music and television cues were matched on familiarity. Age and gender effects were smaller, and consistent across both MEAMs and TEAMs. These results indicate phenomenological differences in naturally occurring memories cued by music as compared to television that are maintained across adulthood. Findings are discussed in the context of theoretical accounts of autobiographical memory, functions of music, and healthy aging.

11 citations


Journal ArticleDOI
TL;DR: The authors investigated factors contributing to listeners' narrative engagement with music, comparing the narrative experiences of Western and Chinese instrumental music for listeners in two suburban locations in the United States with those of listeners living in a remote rural village in China with different patterns of musical exposure.
Abstract: Although people across multiple cultures have been shown to experience music narratively, it has proven difficult to disentangle whether narrative dimensions of music derive from learned extramusical associations within a culture or from less experience-dependent elements of the music, such as musical contrast. Toward this end, two experiments investigated factors contributing to listeners’ narrative engagement with music, comparing the narrative experiences of Western and Chinese instrumental music for listeners in two suburban locations in the United States with those of listeners living in a remote rural village in China with different patterns of musical exposure. Supporting an enculturation perspective in which learned extramusical associations (i.e., Topicality) play an important role in narrative perceptions of music, results from the first experiment show that for Western listeners, greater Topicality, rather than greater Contrast, increases narrative engagement, as long as listeners have sufficient exposure to its patterns of use within a culture. Strengthening this interpretation, results from the second experiment, which directly manipulated Topicality and Contrast, show that reducing an excerpt’s Topicality, but not its Contrast, reduces listeners’ narrative engagement.

7 citations


Journal ArticleDOI
TL;DR: In this article, the Experience of Groove Questionnaire (EGQ) was translated from English to German and a validation with a German sample (N = 455) was conducted.
Abstract: In recent empirical research, the experience of groove (i.e., the pleasant sense of wanting to move along with the music) has come into focus. By developing the new Experience of Groove Questionnaire (EGQ), Senn et al. (2020) have provided a standardized and validated research instrument for future studies, consisting of the two correlated factors Urge to Move and Pleasure. The present study reports the translation of the English version into German and a validation with a German sample (N = 455). The original version’s factor structure was confirmed by the German data. Test-retest reliability was found to be high (r_tt > .85) for both factors. To determine convergent validity, two other scales were included: the Drum Pattern Quality Scale (Frühauf, Kopiez, & Platz, 2013) and the Aesthetic Emotions Scale (Schindler et al., 2017) showed high correlations (.78 < r < .87) with the two factors of the EGQ and therefore indicated convergent validity. We conclude that the German version shows good psychometric properties and recommend its use for future research on the experience of groove.

7 citations


Journal ArticleDOI
TL;DR: In this paper, the authors designed two speeded classification experiments to investigate whether timbres commonly perceived as "bright-dark" facilitate or interfere with visual perception (darkness-brightness), where participants were presented consecutive images of slightly varying (or the same) brightness along with task-irrelevant auditory primes (bright or dark tones).
Abstract: Musical timbre is often described using terms from non-auditory senses, mainly vision and touch; but it is not clear whether crossmodality in timbre semantics reflects multisensory processing or simply linguistic convention. If multisensory processing is involved in timbre perception, the mechanism governing the interaction remains unknown. To investigate whether timbres commonly perceived as “bright-dark” facilitate or interfere with visual perception (darkness-brightness), we designed two speeded classification experiments. Participants were presented consecutive images of slightly varying (or the same) brightness along with task-irrelevant auditory primes (“bright” or “dark” tones) and asked to quickly identify whether the second image was brighter/darker than the first. Incongruent prime-stimulus combinations produced significantly more response errors compared to congruent combinations but choice reaction time was unaffected. Furthermore, responses in a deceptive identical-image condition indicated subtle semantically congruent response bias. Additionally, in Experiment 2 (which also incorporated a spatial texture task), measures of reaction time (RT) and accuracy were used to construct speed-accuracy tradeoff functions (SATFs) in order to critically compare two hypothesized mechanisms for timbre-based crossmodal interactions, sensory response change vs. shift in response criterion. Results of the SATF analysis are largely consistent with the response criterion hypothesis, although without conclusively ruling out sensory change.
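For readers unfamiliar with SATFs, the sketch below illustrates one common way such functions are fit: a shifted exponential in which accuracy rises from chance toward an asymptote. The functional form, parameter values, and data points are illustrative assumptions, not the authors' analysis pipeline.

```python
# Sketch of fitting a speed-accuracy tradeoff function (SATF): accuracy (d')
# rises from zero at intercept delta toward asymptote lam at rate beta.
# Data points are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def satf(t, lam, beta, delta):
    # shifted-exponential SATF; np.clip keeps accuracy at zero before delta
    return lam * (1.0 - np.exp(-beta * np.clip(t - delta, 0.0, None)))

t = np.array([0.3, 0.5, 0.8, 1.2, 1.8, 2.5])       # processing time (s)
dprime = np.array([0.1, 0.8, 1.6, 2.1, 2.4, 2.5])  # sensitivity estimates

(lam, beta, delta), _ = curve_fit(satf, t, dprime, p0=[2.5, 2.0, 0.2])
# Under this parameterization, a sensory change should move the asymptote
# lam, whereas a pure shift in response criterion should leave lam intact.
print(f"asymptote={lam:.2f}, rate={beta:.2f}/s, intercept={delta:.2f}s")
```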

6 citations


Journal ArticleDOI
TL;DR: This paper investigated the influence of observers' musical expertise and instrumental motor expertise on their affective and cognitive responses to complex and unfamiliar classical piano performances of works by Scriabin and Hanson presented in audio and audio-visual formats.
Abstract: Effective audience engagement with musical performance involves social, cognitive, and affective elements. We investigate the influence of observers' musical expertise and instrumental motor expertise on their affective and cognitive responses to complex and unfamiliar classical piano performances of works by Scriabin and Hanson presented in audio and audio-visual formats. Observers gave their felt affect (arousal and valence) and their action understanding responses continuously while observing the performances. Liking and familiarity were rated after each excerpt. As hypothesized: visual information enhanced observers' action understanding and liking ratings; observers with music training rated their action understanding, liking, and familiarity higher than did nonmusicians; observers' felt affect did not vary according to their musical or motor expertise. Contrary to our hypotheses: visual information had only a slight effect on observers' arousal felt affect responses and none on valence; musicians' specific instrumental motor expertise did not influence action understanding responses. We also observed a significant negative relationship between action understanding and felt affect responses. Ideas of empathy in musical interactions motivated the research; the empathy framework in relation to musical performance is discussed. Nonmusician audiences might be sensitized to challenging musical performances through multimodal strategies to build the performer-observer connection and increase understanding of performance.

5 citations


Journal ArticleDOI
TL;DR: The authors investigated the role of functional re-evaluation of phrasal prosody in the perception of the speech-to-song illusion, which was facilitated in high-sonority sentences and in listeners' non-native languages.
Abstract: Listeners usually have no difficulties telling the difference between speech and song. Yet when a spoken phrase is repeated several times, they often report a perceptual transformation that turns speech into song. There is a great deal of variability in the perception of the speech-to-song illusion (STS). It may result partly from linguistic properties of spoken phrases and partly from individual processing differences among listeners exposed to STS. To date, existing evidence is insufficient to predict who is most likely to experience the transformation, and which sentences may be more conducive to the transformation once spoken repeatedly. The present study investigates these questions with French and English listeners, testing the hypothesis that the transformation is achieved by means of functional re-evaluation of phrasal prosody during repetition. Such prosodic re-analysis places demands on the phonological structure of sentences and the language proficiency of listeners. Two experiments show that STS is facilitated in high-sonority sentences and in listeners’ non-native languages, and support the hypothesis that STS involves a switch between musical and linguistic perception modes.

4 citations


Journal ArticleDOI
TL;DR: In this paper, a dual-task paradigm was employed, in which participants undertook a phonological task once while hearing music, and then again in silence following its presentation, and they predicted that the music would be maintained in working memory, interfering with the task.
Abstract: Music that gets "stuck" in the head is commonly conceptualized as an intrusive "thought"; however, we argue that this experience is better characterized as automatic mental singing without an accompanying sense of agency. In two experiments, a dual-task paradigm was employed, in which participants undertook a phonological task once while hearing music, and then again in silence following its presentation. We predicted that the music would be maintained in working memory, interfering with the task. Experiment 1 (N = 30) used songs predicted to be more or less catchy; half of the sample heard truncated versions. Performance was indeed poorer following catchier songs, particularly if the songs were unfinished. Moreover, the effect was stronger for songs rated higher in terms of the desire to sing along. Experiment 2 (N = 50) replicated the effect using songs with which the participants felt compelled to sing along. Additionally, results from a lexical decision task indicated that many participants' keystrokes synchronized with the tempo of the song just heard. Together, these findings suggest that an earworm results from an unconscious desire to sing along to a familiar song.

4 citations


Journal ArticleDOI
TL;DR: The probe tone technique was used to investigate the tonal hierarchy in classical and rock music, and as predicted, the observed profiles for these two styles were structurally similar, reflecting a shared underlying Western tonal structure.
Abstract: Krumhansl and Kessler’s (1982) pioneering experiments on tonal hierarchies in Western music have long been considered the gold standard for researchers interested in the mental representation of musical pitch structure. The current experiment used the probe tone technique to investigate the tonal hierarchy in classical and rock music. As predicted, the observed profiles for these two styles were structurally similar, reflecting a shared underlying Western tonal structure. Most interestingly, however, the rock profile was significantly less differentiated than the classical profile, reflecting theoretical work that describes pitch organization in rock music as more permissive and less hierarchical than in classical music. This line of research contradicts the idea that music from the common-practice era is representative of all Western musics, and challenges music cognition researchers to explore style-appropriate stimuli and models of pitch structure for their experiments.
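As a rough illustration of the probe-tone comparison, the sketch below correlates a profile against the published Krumhansl and Kessler (1982) major-key profile and quantifies how "differentiated" each profile is via its spread. Only the K&K values are real; the flatter "rock" numbers are invented for illustration and are not the study's data.

```python
# Sketch: compare a probe-tone profile against the Krumhansl & Kessler (1982)
# major-key profile, and quantify differentiation by each profile's spread.
import numpy as np
from scipy.stats import pearsonr

kk_major = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                     2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
# Hypothetical, flatter "rock" profile (invented values):
rock = np.array([5.0, 3.0, 3.6, 3.1, 4.2, 4.0, 3.2, 4.6, 3.1, 3.7, 3.0, 3.3])

r, p = pearsonr(kk_major, rock)
print(f"structural similarity: r = {r:.2f} (p = {p:.3f})")
print(f"differentiation (SD): classical = {kk_major.std():.2f}, rock = {rock.std():.2f}")
```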

3 citations


Journal ArticleDOI
TL;DR: In this paper, a corpus analysis of 456 jazz solos using the Weimar Jazz Database is presented, which suggests that most jazz soloists tend to play with only slightly uneven swing eighths (BUR = 1.3:1), while BURs approaching 2:1 and higher are only used occasionally.
Abstract: One of the most recognizable features of the jazz phrasing style known as “swing” is the articulation of tactus beat subdivisions into long-short patterns (known as “swing eighths”). The subdivisions are traditionally assumed to form a 2:1 beat-upbeat ratio (BUR); however, several smaller case studies have suggested that the 2:1 BUR is a gross oversimplification. Here we take a more conclusive approach to the issue, presenting a corpus analysis of 456 jazz solos using the Weimar Jazz Database. Results indicate that most jazz soloists tend to play with only slightly uneven swing eighths (BUR = 1.3:1), while BURs approaching 2:1 and higher are only used occasionally. High BURs are more likely to be used systematically at slow and moderate tempi and in Postbop and Hardbop styles. Overall, the data suggest that a stable 2:1 swing BUR for solos is a conceptual myth, which may be based on various perceptual effects. We suggest that higher BURs are likely saved for specific effect, since higher BURs may maximize entrainment and the sense of groove at the tactus beat level among listeners and performers. Consequently, our results contribute insights relevant to jazz, groove, and microrhythm studies, practical and historical jazz research, and music perception.
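A beat-upbeat ratio is simply the duration of the first ("long") eighth divided by the duration of the second ("short") eighth within each beat. A minimal sketch, with invented onset times:

```python
# Sketch: compute beat-upbeat ratios (BURs) from alternating eighth-note
# onset times (downbeat, upbeat, next downbeat, ...). Onsets are invented.
def beat_upbeat_ratios(onsets):
    burs = []
    for i in range(0, len(onsets) - 2, 2):
        long_eighth = onsets[i + 1] - onsets[i]       # beat -> upbeat
        short_eighth = onsets[i + 2] - onsets[i + 1]  # upbeat -> next beat
        burs.append(long_eighth / short_eighth)
    return burs

# Strict 2:1 "textbook" swing at 120 BPM vs. the gentler ~1.3:1 the corpus shows:
print(beat_upbeat_ratios([0.0, 0.333, 0.5, 0.833, 1.0]))  # ~[2.0, 2.0]
print(beat_upbeat_ratios([0.0, 0.283, 0.5, 0.783, 1.0]))  # ~[1.3, 1.3]
```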

3 citations


Journal ArticleDOI
TL;DR: Among the three primary tonal functions described in modern theory textbooks, the pre-dominant has the highest number of representative chords. One unifying feature of the pre-dominant function is its attraction to V, and the experiment reported here investigates factors that may contribute to this perception.
Abstract: Among the three primary tonal functions described in modern theory textbooks, the pre-dominant has the highest number of representative chords. We posit that one unifying feature of the pre-dominant function is its attraction to V, and the experiment reported here investigates factors that may contribute to this perception. Participants were junior/senior music majors, freshman music majors, and people from the general population recruited on Prolific.co. In each trial, four Shepard-tone sounds in the key of C were presented: 1) the tonic note, 2) one of 31 different chords, 3) the dominant triad, and 4) the tonic note. Participants rated the strength of attraction between the second and third chords. Across all individuals, diatonic and chromatic pre-dominant chords were rated significantly higher than non-pre-dominant chords and bridge chords. Further, music theory training moderated this relationship, with individuals with more theory training rating pre-dominant chords as being more attracted to the dominant. A final data analysis modeled the role of empirical features of the chords preceding the V chord, finding that chords with roots moving to V down by fifth, chords with less acoustical roughness, and chords with more semitones adjacent to V were all significant predictors of attraction ratings.

Journal ArticleDOI
TL;DR: The author discusses three fundamental questions underpinning the study of consonance: 1) What features cause a particular chord to be perceived as consonant? 2) How did humans evolve the ability to perceive these features? 3) Why did humans evolve to attribute particular aesthetic valences to these features (if they did at all)? The author concludes that the present evidence is insufficient to distinguish between candidate answers to the third question, despite what has been claimed in the literature.
Abstract: I discuss three fundamental questions underpinning the study of consonance: 1) What features cause a particular chord to be perceived as consonant? 2) How did humans evolve the ability to perceive these features? 3) Why did humans evolve to attribute particular aesthetic valences to these features (if they did at all)? The first question has been addressed by several recent articles, including Friedman, Kowalewski, Vuvan, and Neill (2021), with the common conclusion that consonance in Western listeners is driven by multiple features such as harmonicity, interference between partials, and familiarity. On this basis, it seems relatively straightforward to answer the second question: each of these consonance features seems to be grounded in fundamental aspects of human auditory perception, such as auditory scene analysis and auditory long-term memory. However, the third question is harder to resolve. I describe several potential answers, and argue that the present evidence is insufficient to distinguish between them, despite what has been claimed in the literature. I conclude by discussing what kinds of future studies might be able to shed light on this problem.

Journal ArticleDOI
TL;DR: Evidence supporting a link between harmonicity and the attractiveness of simultaneous tone combinations has emerged from an experiment designed to mitigate effects of musical enculturation; the author examines the analysis behind this evidence and clarifies its relation to an account of tonal aesthetics based on the biology of auditory-vocal communication.
Abstract: Evidence supporting a link between harmonicity and the attractiveness of simultaneous tone combinations has emerged from an experiment designed to mitigate effects of musical enculturation. I examine the analysis undertaken to produce this evidence and clarify its relation to an account of tonal aesthetics based on the biology of auditory-vocal communication.

Journal ArticleDOI
TL;DR: Findings suggested that music listening could strengthen components of the inhibitory descending pain pathways operating at the dorsal spinal cord level.
Abstract: Passive music listening has shown its capacity to soothe pain in several clinical and experimental studies. This phenomenon—known as music-induced analgesia—could partly be explained by the modulation of pain signals in response to the stimulation of brain and brainstem centers. We hypothesized that music-induced analgesia may involve inhibitory descending pain systems. We assessed pain-related responses to endogenous pain control mechanisms known to depend on descending pain modulation: peak of first pain (PP), temporal summation (TS), and diffuse noxious inhibitory control (DNIC). Twenty-seven healthy participants (14 men, 13 women) were exposed to a conditioned pain modulation paradigm during a 20-minute relaxing music session and a silence condition. Pain was continually measured with a visual analogue scale. Pain ratings were significantly lower with music listening (p < .02). Repeated measures ANOVA indicated significant differences between conditions within PP and TS (p < .05) but not in DNIC. Those findings suggested that music listening could strengthen components of the inhibitory descending pain pathways operating at the dorsal spinal cord level.
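The within-subject comparison reported here (each participant rated pain under both music and silence) is the kind of design a repeated-measures ANOVA handles; below is a minimal sketch using statsmodels' AnovaRM with invented ratings.

```python
# Sketch of the repeated-measures comparison described above: pain ratings
# per participant under music vs. silence. Values are invented placeholders.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

ratings = pd.DataFrame({
    "subject":   [1, 1, 2, 2, 3, 3, 4, 4],
    "condition": ["music", "silence"] * 4,
    "pain":      [3.1, 4.0, 2.8, 3.5, 4.2, 4.9, 3.6, 4.4],
})

res = AnovaRM(ratings, depvar="pain", subject="subject",
              within=["condition"]).fit()
print(res)  # F test for the music vs. silence difference
```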

Journal ArticleDOI
TL;DR: Smit, Milne, Dean, and Weidemann correct some interpretations in Friedman et al.'s (2021) discussion of their earlier paper and express some concerns regarding the statistical methods used.
Abstract: In the article “Consonance preferences within an unconventional tuning system,” Friedman and colleagues (2021) examine consonance ratings of a large range of dyads and triads from the Bohlen-Pierce chromatic just (BPCJ) scale. The study is designed as a replication of a recent paper by Bowling, Purves, and Gill (2018), which proposes that perception of consonance in dyads, triads, and tetrads can be predicted by their harmonic similarity to human vocalisations. In this commentary, we would like to correct some interpretations regarding Friedman et al.’s (2021) discussion of our paper (Smit, Milne, Dean, & Weidemann, 2019), as well as express some concerns regarding the statistical methods used. We also propose a stronger emphasis on the use of what Friedman et al. term composite models, as a range of recent evidence strongly suggests that no single acoustic measure can fully predict the complex experience of consonance.

Journal ArticleDOI
TL;DR: This paper found that listeners rate highly and have stronger expectations about chord progressions that occur frequently and behave consistently within tonal corpora, and that listeners' harmonic expectations are sensitive to both bass patterns and pitch-class content.
Abstract: This study tests the respective roles of pitch-class content and bass patterns within harmonic expectation using a mix of behavioral and computational experiments. In our first two experiments, participants heard a paradigmatic chord progression derived from music theory textbooks and were asked to rate how well different target endings completed that progression. The candidate endings included the progression’s paradigmatic target, different inversions of that chord (i.e., different members of the harmony were heard in the lowest voice), and a “mismatched” target, a triad that shared its lowest pitch with the paradigmatic ending but altered other pitch-class content. Participants generally rated the paradigmatic target most highly, followed by other inversions of that chord, with lowest ratings generally elicited by the mismatched target. This suggests that listeners’ harmonic expectations are sensitive to both bass patterns and pitch-class content. However, these results did not hold in all cases. A final computational experiment was run to determine whether variations in behavioral responses could be explained by corpus statistics. To this end, n-gram chord-transition models and frequency measurements were compiled for each progression. Our findings suggest that listeners rate highly and have stronger expectations about chord progressions that occur frequently and behave consistently within tonal corpora.
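To illustrate what an n-gram chord-transition model does, here is a minimal bigram sketch over an invented toy corpus of Roman-numeral progressions; frequent, consistent transitions such as V-I receive high conditional probability, mirroring the stronger expectations reported. The corpus content is a placeholder, not the study's training data.

```python
# Sketch of a bigram chord-transition model: estimate P(next | current)
# from counts over a toy corpus of Roman-numeral progressions (invented).
from collections import Counter, defaultdict

corpus = [
    ["I", "IV", "V", "I"],
    ["I", "ii6", "V", "I"],
    ["I", "V", "vi", "IV", "V", "I"],
]

counts = defaultdict(Counter)
for progression in corpus:
    for prev, nxt in zip(progression, progression[1:]):
        counts[prev][nxt] += 1

def transition_prob(prev, nxt):
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

print(transition_prob("V", "I"))   # frequent continuation -> strong expectation
print(transition_prob("V", "IV"))  # rare continuation -> weak expectation
```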

Journal ArticleDOI
TL;DR: In this paper, the authors evaluated how simultaneity perception and temporal acuity are influenced by music training in auditory, visual, and audio-visual conditions.
Abstract: Considerable evidence converges on the plasticity of attention and the possibility that it can be modulated through regular training. Music training, for instance, has been correlated with modulations of early perceptual and attentional processes. However, the extent to which music training can modulate mechanisms involved in processing information (i.e., perception and attention) is still widely unknown, particularly between sensory modalities. If training in one sensory modality can lead to concomitant enhancements in different sensory modalities, then this could be taken as evidence of a supramodal attentional system. Additionally, if trained musicians exhibit improved perceptual skills outside of the domain of music, this could be taken as evidence for the notion of far-transfer, where training in one domain can lead to improvements in another. To investigate this further, we evaluated the effects of music training using tasks designed to measure simultaneity perception and temporal acuity, and how these are influenced by music training in auditory, visual, and audio-visual conditions. Trained musicians showed significant enhancements for simultaneity perception in the visual modality, as well as generally improved temporal acuity, although not in all conditions. Visual cues directing attention influenced simultaneity perception for musicians for visual discrimination and temporal accuracy in auditory discrimination, suggesting that musicians have selective enhancements in temporal discrimination, arguably due to increased attentional efficiency when compared to nonmusicians. Implications for theory and future training studies are discussed.

Journal ArticleDOI
TL;DR: The findings support the robustness of the online tool for objectively measuring singing pitch accuracy beyond a controlled laboratory environment and its potential application in large-scale investigations of singing and music ability.
Abstract: In this study, the robustness of an online tool for objectively assessing singing ability was examined by: (1) determining the internal consistency and test-retest reliability of the tool; (2) comparing the task performance of web-based participants (n = 285) with a group (n = 52) completing the tool in a controlled laboratory setting, and then determining the convergent validity between settings; and (3) comparing participants' task performance with previous research using similar singing tasks and populations. Results indicated that the online singing tool exhibited high internal consistency (Cronbach's alpha = .92) and moderate-to-high test-retest reliabilities (.65-.80) across an average 4.5-year span. Task performance for web- and laboratory-based participants (n = 82) matched on age, sex, and music training was not significantly different. Moderate-to-large correlations (|r| = .31-.59) were found between self-rated singing ability and the various singing tasks, supporting convergent validity. Finally, task performance of the web-based sample was not significantly different from previously reported findings. Overall, the findings support the robustness of the online tool for objectively measuring singing pitch accuracy beyond a controlled laboratory environment and its potential application in large-scale investigations of singing and music ability.
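Internal consistency of the kind reported (Cronbach's alpha = .92) can be computed directly from a participants-by-items score matrix using the standard formula; the sketch below uses invented scores.

```python
# Sketch: Cronbach's alpha for internal consistency, computed from a
# participants-by-items score matrix (values invented for illustration).
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = participants, columns = items/tasks."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

scores = np.array([[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3], [1, 2, 2]])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```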

Journal ArticleDOI
Manda Fischer, Kit Soden, Etienne Thoret, Marcel Montrey, Stephen McAdams
TL;DR: In this article, the authors investigated the roles of timbre and auditory grouping in the perceptual segmentation of orchestral music, and found that timbral differences enhanced segregation between streams.
Abstract: Timbre perception and auditory grouping principles can provide a theoretical basis for aspects of orchestration. In Experiment 1, 36 excerpts contained two streams and 12 contained one stream as determined by music analysts. Streams—the perceptual connecting of successive events—comprised either single instruments or blended combinations of instruments from the same or different families. Musicians and nonmusicians rated the degree of segregation perceived in the excerpts. Heterogeneous instrument combinations between streams yielded greater segregation than did homogeneous ones. Experiment 2 presented the individual streams from each two-stream excerpt. Blend ratings on isolated individual streams from the two-stream excerpts did not predict global segregation between streams. In Experiment 3, Experiment 1 excerpts were reorchestrated with only string instruments to determine the relative contribution of timbre to segregation beyond other musical cues. Decreasing timbral differences reduced segregation ratings. Acoustic and score-based descriptors were extracted from the recordings and scores, respectively, to statistically quantify the factors involved in these effects. Instrument family, part crossing, consonance, spectral factors related to timbre, and onset synchrony all played a role, providing evidence of how timbral differences enhance segregation in orchestral music.


Journal ArticleDOI
TL;DR: The authors conclude that the Berlin Gehoerbildung Scale (BGS) is an adequate measurement instrument for assessing individual differences in music expertise, especially at high levels of expertise.
Abstract: We introduce the Berlin Gehoerbildung Scale (BGS), a multidimensional assessment of music expertise in amateur musicians and music professionals. The BGS is informed by music theory and uses a variety of testing methods in the ear-training tradition, with items covering four different dimensions of music expertise: (1) intervals and scales, (2) dictation, (3) chords and cadences, and (4) complex listening. We validated the test in a sample of amateur musicians, aspiring professional musicians, and students attending a highly competitive music conservatory (n = 59). Using structural equation modeling, we compared two factor models: a unidimensional model postulating a single factor of music expertise; and a hierarchical model, according to which four first-order subscale factors load on a second-order factor of general music expertise. The hierarchical model showed better fit to the data than the unidimensional model, indicating that the four subscales capture reliable variance above and beyond the general factor of music expertise. There were reliable group differences on both the second-order general factor and the four subscales, with music students outperforming aspiring professionals and amateur musicians. We conclude that the BGS is an adequate measurement instrument for assessing individual differences in music expertise, especially at high levels of expertise.

Journal ArticleDOI
TL;DR: The authors compared a model of medieval perceptions of "sweetness" based on writings of medieval music theorists with modern day listeners' aesthetic responses through two experiments: implicit associations and explicit associations.
Abstract: Historical listening has long been a topic of interest for musicologists. Yet, little attention has been given to the systematic study of historical listening practices before the common practice era (c. 1700-present). In the first study of its kind, this research compared a model of medieval perceptions of "sweetness" based on writings of medieval music theorists with modern-day listeners' aesthetic responses. Responses were collected through two experiments. In an implicit associations experiment, participants were primed with a more or less consonant musical excerpt, then presented with a sweet or bitter target word, or a non-word, on which to make lexical decisions. In the explicit associations experiment, participants were asked to rate on a three-point Likert scale the perceived sweetness of short musical excerpts that varied in consonance and sound quality (male, female, organ). The results from these experiments were compared to predictions from a medieval perception model to investigate whether early and modern listeners have similar aesthetic responses. Results from the implicit association test were not consistent with the predictions of the model; however, results from the explicit associations experiment were. These findings indicate the metaphor of sweetness may be useful for comparing the aesthetic responses of medieval and modern listeners.


Journal ArticleDOI
TL;DR: In this article, the authors presented simple nine-note auditory rhythms to 100 college students, who attempted to reproduce those rhythms by tapping; reproductions were more accurate for monotone sequences, even though a plurality of participants judged the melodically presented rhythms easier to remember.
Abstract: Some researchers and study participants have expressed an intuition that novel rhythmic sequences are easier to recall and reproduce if they have a melody, implying that melodicity (the presence of musical pitch variation) fundamentally enhances perception and/or representation of rhythm. But the psychoacoustics literature suggests that pitch variation often impairs perception of temporal information. To examine the effect of melodicity on rhythm reproduction accuracy, we presented simple nine-note auditory rhythms to 100 college students, who attempted to reproduce those rhythms by tapping. Reproductions tended to be more accurate when the presented notes all had the same pitch than when the presented notes had a melody. Nonetheless, a plurality of participants judged that the melodically presented rhythms were easier to remember. We also found that sequences containing a Scotch snap (a sixteenth note at a quarter note beat position followed by a dotted eighth note) were reproduced less accurately than other sequences in general, and less accurately than other sequences containing a dotted eighth note.
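One simple way to score reproduction accuracy of the kind measured here is to compare tempo-normalized inter-onset intervals between target and tapped sequences. The sketch below is an illustrative assumption, not the authors' scoring method; onsets are invented.

```python
# Sketch: score a tapped reproduction against a target rhythm by comparing
# tempo-normalized inter-onset interval (IOI) patterns.
import numpy as np

def ioi_error(target_onsets, tapped_onsets):
    """Mean absolute difference between tempo-normalized IOI patterns."""
    t = np.diff(target_onsets)
    r = np.diff(tapped_onsets)
    return float(np.abs(t / t.sum() - r / r.sum()).mean())

# The 0.0 -> 0.25 -> 1.0 opening is a Scotch-snap figure (sixteenth note on
# the beat followed by a dotted eighth). Onsets are in beats and invented.
target = [0.0, 0.25, 1.0, 1.5, 2.0]
tapped = [0.0, 0.31, 1.02, 1.48, 2.05]
print(f"normalized IOI error = {ioi_error(target, tapped):.3f}")
```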

Journal ArticleDOI
Abstract: PH was supported by a doctoral studentship from the EPSRC and AHRC Centre for Doctoral Training in Media and Arts Technology (EP/L01632X/1).

Journal ArticleDOI
TL;DR: This paper found that pre- and post-central gyri and precuneus were more active during build-up passages in EDM break routines, while the inferior frontal gyrus (IFG) and middle frontal gyrus (MFG) were more active within drop passages.
Abstract: Previous brain-related studies on music-evoked emotions have relied on listening to long music segments, which may reduce the precision of correlating emotional cues to specific brain areas. Break routines in electronic dance music (EDM) are emotive but short musical moments containing three passages: breakdown, build-up, and drop. Within build-ups, musical features intensify toward a peak moment prior to the highly expected drop passage, and peak pleasurable emotions arise when these expectations are fulfilled. The neural correlates of peak pleasurable emotions (such as excitement) in the short seconds of build-up and drop passages in EDM break routines are therefore good candidates for studying brain correlates of emotion. Thirty-six participants listened to break routines while undergoing continuous EEG. Source reconstruction of EEG epochs for one second of build-up and of drop passages showed that pre- and post-central gyri and precuneus were more active during build-ups, and the inferior frontal gyrus (IFG) and middle frontal gyrus (MFG) were more active within drop passages. Importantly, IFG and MFG activity showed a correlation with ratings of subjective excitement during drop passages. The results suggest expectation is important in inducing peak pleasurable experiences and brain activity changes within seconds of reported feelings of excitement during EDM break routines.



Journal ArticleDOI
TL;DR: In this article, the authors compared stimuli with timbre vibrato (TV) to those with loudness vibrato (LV) in matching tasks against reference vibrato tones, and found good, fairly linear sensitivity to vibrato depth regardless of vibrato type, but also some poorly understood relations between the physical signal and the perception of TV.
Abstract: In music, vibrato consists of cyclic variations in pitch, loudness, or spectral envelope (hereafter, “timbre vibrato”—TV) or combinations of these. Here, stimuli with TV were compared with those having loudness vibrato (LV). In Experiment 1, participants chose from tones with different vibrato depth to match a reference vibrato tone. When matching to tones with the same vibrato type, 70% of the variance was explained by linear matching of depth. Less variance (40%) was explained when matching dissimilar vibrato types. Fluctuations in loudness were perceived as approximately the same depth as fluctuations in spectral envelope (i.e., about 1.3 times deeper than fluctuations in spectral centroid). In Experiment 2, participants matched a reference with test stimuli of varying depths and types. When the depths of the test and reference tones were similar, the same type was usually selected, over the range of vibrato depths. For very disparate depths, matches were made by type only about 50% of the time. The study revealed good, fairly linear sensitivity to vibrato depth regardless of vibrato type, but also some poorly understood findings between physical signal and perception of TV, suggesting that more research is needed in TV perception.
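To clarify the distinction between loudness vibrato and spectral-envelope ("timbre") vibrato, the sketch below synthesizes a harmonic tone in which either the overall level or the spectral tilt is modulated cyclically. The rates, depths, and synthesis scheme are illustrative assumptions, not the study's stimuli.

```python
# Sketch: a harmonic tone with loudness vibrato (LV) or a simple
# spectral-envelope "timbre vibrato" (TV) made by cyclically tilting the
# harmonic amplitudes. Parameter values are invented for illustration.
import numpy as np

sr, dur, f0, rate = 44100, 1.0, 220.0, 5.0           # Hz, s, Hz, vibrato Hz
t = np.linspace(0, dur, int(sr * dur), endpoint=False)
mod = np.sin(2 * np.pi * rate * t)                    # vibrato modulator

def tone(depth_lv=0.0, depth_tv=0.0, n_harm=10):
    y = np.zeros_like(t)
    for k in range(1, n_harm + 1):
        # TV: modulate the spectral tilt (higher harmonics swing more),
        # shifting the spectral centroid up and down cyclically
        amp = (1.0 / k) * (1.0 + depth_tv * mod) ** (k - 1)
        y += amp * np.sin(2 * np.pi * k * f0 * t)
    return y * (1.0 + depth_lv * mod)                 # LV: overall level swings

lv_tone = tone(depth_lv=0.2)    # loudness vibrato only
tv_tone = tone(depth_tv=0.05)   # spectral-envelope vibrato only
```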

Journal ArticleDOI
TL;DR: The authors found that the correlations between harmonicity and spectral interference (SI) were only moderate at best for their unconventional chord stimuli, with Variance Inflation Factors under 1.26, close to their lower bound.
Abstract: The origins of tonal consonance—the tendency to perceive some simultaneously sounded combinations of musical tones as more pleasant than others—is arguably among the most fundamental questions in music perception. For more than a century, the issue has been the subject of vigorous debate, undoubtedly fueled by the formidable complexities involved in investigating music-induced affective qualia that are not directly observable and often ineffable. The challenge of drawing definitive conclusions in this area of inquiry is well exemplified by the markedly divergent, yet equally thoughtful, responses offered in these commentaries.

According to Bowling, our findings are an important source of converging evidence for his Vocal Similarity Hypothesis (VSH), the notion that consonance derives from an evolved preference for harmonic vocal sounds (Bowling, Purves, & Gill, 2018). However, he suggests that our interpretation of the results may cast a less favorable light on the VSH than is warranted. For example, he is skeptical of our contention that spectral interference (SI) accounts for greater variance in consonance judgments than harmonicity, arguing that the high correlation between these predictors "present[s] a problem for their separation via regression." Yet, upon examination, the correlations between the harmonicity and SI measures that we used in our regression analyses were only moderate at best for our unconventional chord stimuli (-.54). Moreover, a Variance Inflation Factor analysis (Chatterjee & Price, 2012) for all four relevant regressions yields values under 1.26, close to their lower bound. This suggests that the precision of our regression coefficients was not likely to have been diminished due to multicollinearity. Our conclusion regarding the relative strength of the impact of SI on consonance ratings gains further credence from the work of Harrison and Pearce (2020), who reported analogous findings based on a reanalysis of four different behavioral datasets using conventional chords. Nevertheless, we agree with Bowling that consonance researchers should be wary of multicollinearity when comparing the predictive utility of different musical features, as certain harmonicity or SI metrics may indeed share substantial variance (see, e.g., Bowling, this issue, Figure 2).

Whereas Bowling suggests that our analysis and study design may have sold the VSH short by underweighting the contribution of harmonicity to consonance, both Smit and Milne as well as Harrison argue the opposite, proposing that we may have oversold the extent to which our findings support the VSH. Indeed, Harrison argues that our results leave open at least two alternative hypotheses: first, harmonicity may be preferred, not due to an evolved preference for voice-like sounds, but because harmonicity facilitates the identification of distinct auditory sources in the environment. Second, a preference for harmonic sounds may have evolved not because it reinforced attention to conspecific vocal communications (as posited by the VSH; Bowling et al., 2018), but because it reinforced social bonding via collective music making. Although critical details of these alternative accounts remain to be clarified, we agree that our results do not "support" the VSH in the strong sense of confirming it empirically. As we noted in our article, the primary goal of our study was to rule out the possibility that the association between consonance and harmonicity shown in Western chords was an artifact of familiarity. Our results suggest that this was unlikely to have been the case. In the absence of such evidence, the viability of the VSH would have been in grave doubt. In line with Harrison's assessment, we concur that it will be enormously challenging to find "positive" evidence of an evolved preference for voice-like sounds, assuming it does exist (cf. McDermott, Schultz, Undurraga, & Godoy, 2016). As noted by Bowling (this issue), "the auditory system receives harmonic stimulation from mother's larynx as soon as it comes on-line," making it difficult to determine whether a preference for harmonic chords derives from our evolutionary
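For readers unfamiliar with the diagnostic mentioned above, a Variance Inflation Factor near 1 indicates that correlated predictors barely inflate the variance of regression coefficients. The sketch below reproduces the logic, not the authors' data, using simulated predictors correlated at about -.54.

```python
# Sketch: Variance Inflation Factor (VIF) check with two simulated predictors
# correlated at roughly r = -.54 (cf. the two-predictor identity
# VIF = 1 / (1 - r^2), about 1.41 at |r| = .54). Data are simulated.
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
harmonicity = rng.normal(size=200)
interference = -0.54 * harmonicity + rng.normal(scale=0.84, size=200)

X = pd.DataFrame({"const": 1.0,
                  "harmonicity": harmonicity,
                  "interference": interference})
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, round(variance_inflation_factor(X.values, i), 2))
# Values near 1 indicate little multicollinearity penalty on the
# precision of the regression coefficients.
```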