
Steven R. Livingstone

Researcher at Ryerson University

Publications -  27
Citations -  1951

Steven R. Livingstone is an academic researcher from Ryerson University. The author has contributed to research in topics: Facial expression & Music psychology. The author has an h-index of 20, co-authored 27 publications receiving 1226 citations. Previous affiliations of Steven R. Livingstone include Macquarie University & University of Queensland.

Papers
Journal ArticleDOI

The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English

TL;DR: The RAVDESS is a validated multimodal database of emotional speech and song consisting of 24 professional actors, vocalizing lexically-matched statements in a neutral North American accent, which shows high levels of emotional validity and test-retest intrarater reliability.
Journal ArticleDOI

Body sway reflects leadership in joint music performance

TL;DR: It is demonstrated that musicians assigned as leaders affect other performers more than musicians assigned as followers, and that information sharing in a nonverbal joint action task occurs through both auditory and visual cues.
Journal ArticleDOI

Changing musical emotion: A computational rule system for modifying score and performance

TL;DR: CMERS, a Computational Music Emotion Rule System for the control of perceived musical emotion, is presented; it modifies features at the levels of score and performance in real time.
Journal ArticleDOI

The emergence of music from the Theory of Mind

TL;DR: It is commonly argued that music originated in human evolution as an adaptation to selective pressures; in this paper, the authors present an alternative account in which music emerged from a more general evolutionary process.
Journal ArticleDOI

Facial expressions and emotional singing: a study of perception and production with motion capture and electromyography

TL;DR: In this article, seven participants were recorded with motion capture as they watched and imitated phrases of emotional singing, paying close attention to the emotion expressed; facial expressions were monitored during four epochs: (a) during the target, (b) prior to their imitation, (c) during their imitation, and (d) after their imitation.