Author

Steven R. Livingstone

Bio: Steven R. Livingstone is an academic researcher from Ryerson University. The author has contributed to research in the topics of facial expression and music psychology. The author has an h-index of 20 and has co-authored 27 publications receiving 1,226 citations. Previous affiliations of Steven R. Livingstone include Macquarie University and the University of Queensland.

Papers
Journal ArticleDOI
16 May 2018 - PLOS ONE
TL;DR: The RAVDESS is a validated multimodal database of emotional speech and song consisting of 24 professional actors, vocalizing lexically-matched statements in a neutral North American accent, which shows high levels of emotional validity and test-retest intrarater reliability.
Abstract: The RAVDESS is a validated multimodal database of emotional speech and song. The database is gender balanced consisting of 24 professional actors, vocalizing lexically-matched statements in a neutral North American accent. Speech includes calm, happy, sad, angry, fearful, surprise, and disgust expressions, and song contains calm, happy, sad, angry, and fearful emotions. Each expression is produced at two levels of emotional intensity, with an additional neutral expression. All conditions are available in face-and-voice, face-only, and voice-only formats. The set of 7356 recordings were each rated 10 times on emotional validity, intensity, and genuineness. Ratings were provided by 247 individuals who were characteristic of untrained research participants from North America. A further set of 72 participants provided test-retest data. High levels of emotional validity and test-retest intrarater reliability were reported. Corrected accuracy and composite "goodness" measures are presented to assist researchers in the selection of stimuli. All recordings are made freely available under a Creative Commons license and can be downloaded at https://doi.org/10.5281/zenodo.1188976.

1,036 citations
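
Once downloaded from the Zenodo archive, the recordings described above can be filtered programmatically. The sketch below is a minimal example, assuming the seven-field numeric filename convention (modality, vocal channel, emotion, intensity, statement, repetition, actor) documented with the dataset; the field codes are an assumption to verify against the dataset's own documentation, not something stated in the abstract.

```python
# Illustrative sketch: selecting RAVDESS stimuli by emotion and vocal channel.
# The 7-field filename convention below (modality-channel-emotion-intensity-
# statement-repetition-actor) is assumed from the dataset documentation;
# verify it against the files downloaded from
# https://doi.org/10.5281/zenodo.1188976 before relying on it.
from pathlib import Path

EMOTIONS = {
    "01": "neutral", "02": "calm", "03": "happy", "04": "sad",
    "05": "angry", "06": "fearful", "07": "disgust", "08": "surprised",
}
MODALITIES = {"01": "face-and-voice", "02": "face-only", "03": "voice-only"}


def parse_filename(path: Path) -> dict:
    """Split a filename such as 03-01-05-02-01-01-12.wav into its fields."""
    modality, channel, emotion, intensity, statement, repetition, actor = path.stem.split("-")
    return {
        "modality": MODALITIES.get(modality, modality),
        "channel": "speech" if channel == "01" else "song",
        "emotion": EMOTIONS.get(emotion, emotion),
        "intensity": "normal" if intensity == "01" else "strong",
        "statement": statement,
        "repetition": repetition,
        "actor": int(actor),
    }


def select(root: str, emotion: str, channel: str = "speech") -> list[Path]:
    """Return all voice-only (.wav) recordings matching one emotion and channel."""
    files = Path(root).rglob("*.wav")
    return [f for f in files
            if (m := parse_filename(f))["emotion"] == emotion
            and m["channel"] == channel]


if __name__ == "__main__":
    for f in select("RAVDESS", emotion="angry"):
        print(f)
```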

Journal ArticleDOI
TL;DR: It is demonstrated that musicians assigned as leaders influence other performers more than musicians assigned as followers, and that information sharing in a nonverbal joint action task occurs through both auditory and visual cues.
Abstract: The cultural and technological achievements of the human species depend on complex social interactions. Nonverbal interpersonal coordination, or joint action, is a crucial element of social interaction, but the dynamics of nonverbal information flow among people are not well understood. We used joint music making in string quartets, a complex, naturalistic nonverbal behavior, as a model system. Using motion capture, we recorded body sway simultaneously in four musicians, which reflected real-time interpersonal information sharing. We used Granger causality to analyze predictive relationships among the motion time series of the players to determine the magnitude and direction of information flow among the players. We experimentally manipulated which musician was the leader (followers were not informed who was leading) and whether they could see each other, to investigate how these variables affect information flow. We found that assigned leaders exerted significantly greater influence on others and were less influenced by others compared with followers. This effect was present, whether or not they could see each other, but was enhanced with visual information, indicating that visual as well as auditory information is used in musical coordination. Importantly, performers' ratings of the "goodness" of their performances were positively correlated with the overall degree of body sway coupling, indicating that communication through body sway reflects perceived performance success. These results confirm that information sharing in a nonverbal joint action task occurs through both auditory and visual cues and that the dynamics of information flow are affected by changing group relationships.

83 citations
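
The abstract describes Granger-causality analysis of the players' body-sway time series but does not specify an implementation. The following is a minimal sketch of a pairwise Granger test on synthetic leader/follower sway signals, using the statsmodels library purely as a stand-in for whatever toolchain the authors actually used.

```python
# Illustrative sketch of pairwise Granger-causality analysis on body-sway-like
# time series, in the spirit of the string-quartet study. Signals are synthetic.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 1000

# Synthetic "leader" sway, and a "follower" that tracks it with a short lag plus noise.
leader = np.cumsum(rng.normal(size=n))              # random-walk-like body sway
follower = np.roll(leader, 5) + rng.normal(scale=0.5, size=n)

# Does the leader's past improve prediction of the follower's present?
# grangercausalitytests expects a 2-column array: [predicted, predictor].
data = np.column_stack([follower, leader])
results = grangercausalitytests(data, maxlag=10, verbose=False)

# Report the F-test p-value at each lag; small values indicate information flow.
for lag, (tests, _) in results.items():
    f_stat, p_value, _, _ = tests["ssr_ftest"]
    print(f"lag {lag:2d}: F = {f_stat:6.2f}, p = {p_value:.4f}")
```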

Journal ArticleDOI
TL;DR: CMERS, a Computational Music Emotion Rule System for the control of perceived musical emotion that modifies features at the levels of score and performance in real-time is presented.
Abstract: When communicating emotion in music, composers and performers encode their expressive intentions through the control of basic musical features such as pitch, loudness, timbre, mode, and articulation. The extent to which emotion can be controlled through the systematic manipulation of these features has not been fully examined. In this paper we present CMERS, a Computational Music Emotion Rule System for the control of perceived musical emotion that modifies features at the levels of score and performance in real-time. CMERS performance was evaluated in two rounds of perceptual testing. In Experiment I, 20 participants continuously rated the perceived emotion of 15 music samples generated by CMERS. Three musical works were used, each with five emotional variations (normal, happy, sad, angry, and tender). The emotion intended by CMERS was correctly identified 78% of the time, with significant shifts in valence and arousal also recorded, regardless of the works' original emotion.

72 citations
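
CMERS's actual rules are not reproduced in the abstract. The sketch below is a hypothetical illustration of the general idea of a rule system that maps a target emotion to adjustments of the named score- and performance-level features; the rule table and magnitudes are invented placeholders, not CMERS's rules.

```python
# Hypothetical sketch of a rule system in the spirit of CMERS: map a target
# emotion to adjustments of score-level (mode, pitch) and performance-level
# (tempo, loudness, articulation) features. Values are illustrative only.
from dataclasses import dataclass


@dataclass
class FeatureAdjustment:
    mode: str            # "major" or "minor" (score level)
    pitch_shift: int     # semitones relative to the original register
    tempo_scale: float   # multiplier on the notated tempo
    loudness_db: float   # overall dynamic offset in dB
    articulation: float  # 0.0 = very staccato, 1.0 = very legato


# Placeholder rule table keyed by the emotion labels used in the evaluation;
# "normal" is the unmodified rendition and needs no adjustment.
RULES: dict[str, FeatureAdjustment] = {
    "happy":  FeatureAdjustment("major", +4, 1.15, +3.0, 0.4),
    "sad":    FeatureAdjustment("minor", -3, 0.80, -4.0, 0.9),
    "angry":  FeatureAdjustment("minor", -2, 1.20, +6.0, 0.2),
    "tender": FeatureAdjustment("major", +2, 0.90, -2.0, 0.8),
}


def adjust(emotion: str) -> FeatureAdjustment:
    """Look up the feature adjustments for a target emotion."""
    if emotion == "normal":
        return FeatureAdjustment("original", 0, 1.0, 0.0, 0.5)
    return RULES[emotion]


print(adjust("sad"))
```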

Journal ArticleDOI
TL;DR: The authors challenge the common claim that music originated in human evolution as an adaptation to selective pressures, and present an alternative account in which music arose from a more general evolutionary process.
Abstract: It is commonly argued that music originated in human evolution as an adaptation to selective pressures. In this paper we present an alternative account in which music originated from a more general …

71 citations

Journal ArticleDOI
TL;DR: In this article, seven participants were recorded with motion capture as they watched and imitated phrases of emotional singing, paying close attention to the emotion expressed, and their facial expressions were monitored during four epochs: (a) during the target, (b) prior to their imitation, (c) during their imitation, and (d) after their imitation.
Abstract: Facial expressions are used in music performance to communicate structural and emotional intentions. Exposure to emotional facial expressions also may lead to subtle facial movements that mirror those expressions. Seven participants were recorded with motion capture as they watched and imitated phrases of emotional singing. Four different participants were recorded using facial electromyography (EMG) while performing the same task. Participants saw and heard recordings of musical phrases sung with happy, sad, and neutral emotional connotations. They then imitated the target stimulus, paying close attention to the emotion expressed. Facial expressions were monitored during four epochs: (a) during the target; (b) prior to their imitation; (c) during their imitation; and (d) after their imitation. Expressive activity was observed in all epochs, implicating a role of facial expressions in the perception, planning, production, and post-production of emotional singing.

63 citations
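
As an illustration of the epoch-based analysis described above, the sketch below segments a facial-activity trace (for example, rectified EMG or marker displacement) into the four epochs named in the abstract and compares mean activity per epoch. The sampling rate, epoch boundaries, and signal are hypothetical.

```python
# Illustrative sketch: segment a facial-activity trace into the four epochs
# named in the study and compare mean activity. All numbers are hypothetical.
import numpy as np

fs = 100  # samples per second (assumed)
rng = np.random.default_rng(1)
signal = np.abs(rng.normal(size=30 * fs))  # 30 s of rectified facial activity

# Hypothetical epoch boundaries in seconds: target playback, pre-imitation
# pause, imitation, post-imitation.
epochs = {
    "during target":      (0, 8),
    "prior to imitation": (8, 11),
    "during imitation":   (11, 22),
    "after imitation":    (22, 30),
}

for name, (start, stop) in epochs.items():
    segment = signal[start * fs:stop * fs]
    print(f"{name:>19s}: mean activity = {segment.mean():.3f}")
```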


Cited by

Journal ArticleDOI
TL;DR: The requirements that context modelling and reasoning techniques should meet are discussed, including the modelling of a variety of context information types and their relationships, of situations as abstractions of context information facts, of histories of context information, and of uncertainty of context information.

1,201 citations
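
As a reader's illustration of the requirements listed in this summary (typed context facts and their relationships, situations as abstractions over facts, histories, and uncertainty), the sketch below shows one minimal data model that would satisfy them; it is not the model proposed in the paper.

```python
# Minimal illustrative data model for the requirements named in the summary:
# typed context facts, per-entity history, explicit uncertainty, and situations
# as abstractions over facts. A reader's sketch, not the paper's own model.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class ContextFact:
    entity: str                 # e.g. "user42"
    attribute: str              # e.g. "location"
    value: object               # e.g. "meeting-room-3"
    timestamp: datetime
    confidence: float = 1.0     # uncertainty of the observation, 0..1


@dataclass
class ContextHistory:
    facts: list[ContextFact] = field(default_factory=list)

    def latest(self, entity: str, attribute: str) -> ContextFact | None:
        matches = [f for f in self.facts
                   if f.entity == entity and f.attribute == attribute]
        return max(matches, key=lambda f: f.timestamp, default=None)


@dataclass
class Situation:
    """An abstraction over context facts, e.g. 'in a meeting'."""
    name: str

    def holds(self, history: ContextHistory, entity: str) -> bool:
        loc = history.latest(entity, "location")
        return loc is not None and loc.confidence > 0.5 and "meeting" in str(loc.value)
```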

