Open Access
On the Correlation between Perceptual and Contextual Aspects of Laughter in Meetings
Kornel Laskowski, Susanne Burger, et al.
TL;DR
This article analyzed over 13,000 bouts of laughter, in over 65 hours of unscripted, naturally occurring multiparty meetings, to identify discriminative contexts of voiced and unvoiced laughter.
Abstract
We have analyzed over 13,000 bouts of laughter, in over 65 hours of unscripted, naturally occurring multiparty meetings, to identify discriminative contexts of voiced and unvoiced laughter. Our results show that, in meetings, laughter is quite frequent, accounting for almost 10% of all vocal activity effort by time. Approximately a third of all laughter is unvoiced, but meeting participants vary extensively in how often they employ voicing during laughter. In spite of this variability, laughter appears to exhibit robust temporal characteristics. Voiced laughs are on average longer than unvoiced laughs, and appear to correlate with temporally adjacent voiced laughter from other participants, as well as with speech from the laugher. Unvoiced laughter appears to occur independently of vocal activity from other participants.
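The kinds of summary statistics reported in the abstract (fraction of unvoiced laughter, mean durations of voiced versus unvoiced bouts) can be illustrated with a minimal sketch. The bout list below is entirely hypothetical toy data, not values from the paper; each bout is assumed to be annotated as a (duration in seconds, voiced) pair.

```python
# Hypothetical laughter-bout annotations: (duration_sec, voiced) pairs.
# These numbers are illustrative only, not drawn from the study.
bouts = [
    (1.8, True), (0.6, False), (2.1, True), (0.9, False),
    (1.5, True), (0.7, False), (2.4, True), (1.2, True),
]

voiced = [d for d, v in bouts if v]
unvoiced = [d for d, v in bouts if not v]

# Fraction of bouts that are unvoiced, and mean durations per class.
frac_unvoiced = len(unvoiced) / len(bouts)
mean_voiced = sum(voiced) / len(voiced)
mean_unvoiced = sum(unvoiced) / len(unvoiced)

print(f"unvoiced fraction: {frac_unvoiced:.2f}")
print(f"mean voiced duration:   {mean_voiced:.2f} s")
print(f"mean unvoiced duration: {mean_unvoiced:.2f} s")
```

On real data, the same per-participant statistics would expose the cross-speaker variability in voicing that the abstract describes.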
Citations
Laughter annotations in conversational speech corpora - possibilities and limitations for phonetic analysis
Khiet P. Truong, Jürgen Trouvain, et al.
TL;DR: It is found that overlapping laughs are longer in duration and generally more voiced than non-overlapping laughs; for a finer-grained acoustic analysis, a manual re-labeling of the laughs adhering to a more standardized laughter annotation protocol would be optimal.
Proceedings Article
Contrasting emotion-bearing laughter types in multiparticipant vocal activity detection for meetings
TL;DR: Evidence is presented suggesting that ignoring unvoiced laughter improves the prediction of emotional involvement in collocated speech, making a case for distinguishing between voiced and unvoiced laughter during laughter detection.
Book Chapter
Detection of Laughter-in-Interaction in Multichannel Close-Talk Microphone Recordings of Meetings
Kornel Laskowski, Tanja Schultz, et al.
TL;DR: A system is presented for the detection of laughter, and its attribution to specific participants, which relies on simultaneously decoding the vocal activity of all participants given multi-channel recordings; this allows laughter and speech to be disambiguated not only acoustically, but also by constraining the number of simultaneous speakers.
Journal Article
Developing a social functional account of laughter
Laughter in Social Robotics – no laughing matter
TL;DR: In this paper, the authors investigated the effects of laughter when combined with an android's motion and presented to uninformed participants during playful interaction with another human. They found that the social effect of laughter heavily depends on at least the following three factors: first, the situational context, which is determined not only by the task at hand, but also by linguistic content as well as non-verbal expressions; second, the type and quality of laughter synthesis in combination with an artificial laugher's outer appearance; and third, the interaction dynamics, which partly depend on a perce…
References
Journal Article
A Coefficient of agreement for nominal Scales
TL;DR: In this article, the authors present a procedure for having two or more judges independently categorize a sample of units, together with a coefficient for measuring the degree and significance of their agreement, i.e., the extent to which these judgments are reproducible, or reliable.
Journal Article
Induction of Decision Trees
TL;DR: This paper summarizes an approach to synthesizing decision trees that has been used in a variety of systems, describes one such system, ID3, in detail, and discusses a reported shortcoming of the basic algorithm.
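ID3 grows a decision tree by repeatedly splitting on the attribute with the highest information gain, i.e., the largest reduction in label entropy. The sketch below computes that splitting criterion only, on a hypothetical toy dataset (voicing and overlap attributes invented for illustration), not the full tree-building algorithm.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr_index):
    """Information gain of splitting on one attribute (ID3's criterion)."""
    total = entropy(labels)
    n = len(rows)
    remainder = 0.0
    for value in set(r[attr_index] for r in rows):
        subset = [l for r, l in zip(rows, labels) if r[attr_index] == value]
        remainder += len(subset) / n * entropy(subset)
    return total - remainder

# Hypothetical bouts: (voicing, overlap) attributes with a duration-class label.
rows = [("voiced", "yes"), ("voiced", "no"), ("unvoiced", "no"), ("unvoiced", "no")]
labels = ["long", "long", "short", "short"]
print(round(info_gain(rows, labels, 0), 3))  # splitting on voicing → 1.0
```

Here voicing perfectly separates long from short bouts, so the gain equals the full initial entropy of one bit; ID3 would choose it as the root split.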
Proceedings Article
The ICSI Meeting Corpus
Adam Janin, Don Baron, Jane A. Edwards, Daniel P. W. Ellis, David Gelbart, Nelson Morgan, Barbara Peskin, Thilo Pfau, Elizabeth Shriberg, Andreas Stolcke, Chuck Wooters, et al.
TL;DR: This paper describes a corpus of data collected from natural meetings that occurred at the International Computer Science Institute in Berkeley, California, over a three-year period; the corpus supports work in automatic speech recognition, noise robustness, dialog modeling, prosody, rich transcription, information retrieval, and more.
Book Chapter
The AMI meeting corpus: a pre-announcement
Jean Carletta, Simone Ashby, Sebastien Bourban, Michael J. Flynn, Maël Guillemot, Thomas Hain, Jaroslav Kadlec, Vasilis Karaiskos, Wessel Kraaij, Melissa Kronenthal, Guillaume Lathoud, Mike Lincoln, Agnes Lisowska, Iain McCowan, Wilfried Post, Dennis Reidsma, Pierre Wellner, et al.
TL;DR: The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings, created in the context of a project developing meeting browsing technology; it will eventually be released publicly.