Open Access Proceedings Article

Multimedia Database of Meetings and Informal Interactions for Tracking Participant Involvement and Discourse Flow

TL;DR: The resulting corpus of speech and video data being collected for this research currently includes data from 12 monthly sessions, comprising 71 video and 33 audio modules.
Abstract
At ATR, we are collecting and analysing “meetings” data using a table-top sensor device consisting of a small 360-degree camera surrounded by an array of high-quality directional microphones. This equipment provides a stream of information about the audio and visual events of the meeting, which is then processed to form a representation of the verbal and non-verbal interpersonal activity, or discourse flow, during the meeting. This paper describes the resulting corpus of speech and video data which is being collected for the above research. It currently includes data from 12 monthly sessions, comprising 71 video and 33 audio modules. Collection is continuing monthly and is scheduled to include another ten sessions.
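
To make the fusion step concrete, the following is a minimal, hypothetical sketch (not ATR's actual processing code) of how per-microphone energy from a directional array and coarse motion estimates from a 360-degree camera could be combined into a per-second, per-participant activity timeline; the function name, thresholds, and array shapes are illustrative assumptions.

# Hypothetical sketch, not the ATR pipeline: fuse directional-microphone
# energy and camera-derived motion into a per-second activity label
# for each participant.
import numpy as np

N_PARTICIPANTS = 4   # assumption: one directional microphone aimed at each seat

def fuse_activity(mic_energy, motion_energy, speech_thresh=0.5, move_thresh=0.3):
    """mic_energy, motion_energy: arrays of shape (seconds, participants),
    values normalised to [0, 1]. Returns per-second labels:
    0 = quiet, 1 = moving only, 2 = speaking."""
    labels = np.zeros(mic_energy.shape, dtype=int)
    labels[motion_energy > move_thresh] = 1
    labels[mic_energy > speech_thresh] = 2   # speech overrides movement in this toy scheme
    return labels

# Ten seconds of synthetic "sensor" readings, just to show the shapes involved.
rng = np.random.default_rng(0)
mic = rng.random((10, N_PARTICIPANTS))
motion = rng.random((10, N_PARTICIPANTS))
print(fuse_activity(mic, motion))   # rows = seconds, columns = participants

A real system would work from beamformed audio and tracked head positions rather than random numbers; the sketch only illustrates the shape of a fused "discourse flow" representation.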



Citations
Journal ArticleDOI

Bridging the Gap between Social Animal and Unsocial Machine: A Survey of Social Signal Processing

TL;DR: This is the first survey of the domain that jointly considers its three major aspects, namely modeling, which investigates laws and principles underlying social interaction; analysis, which explores approaches for the automatic understanding of social exchanges recorded with different sensors; and synthesis of social behavior.
Journal ArticleDOI

Automatic nonverbal analysis of social interaction in small groups

TL;DR: This paper reviews the existing literature on automatic analysis of small group conversations using nonverbal communication, and aims at bridging the fragmentation of work in this domain, which is currently split among half a dozen technical communities.
Journal ArticleDOI

Emergent leaders through looking and speaking: from audio-visual data to multimodal recognition

TL;DR: The results indicate that emergent leadership is related, but not equivalent, to dominance; while multimodal features bring a moderate degree of effectiveness in inferring the leader, much simpler features extracted from the audio channel are found to give better performance.
Book ChapterDOI

Robust real time face tracking for the analysis of human behaviour

TL;DR: This system reliably detects more than 97% of the faces across several one-hour videos of unconstrained meetings, both indoor and outdoor, while keeping a very low false-positive rate (<0.05%) and without changes in parameters.

An Audio Visual Corpus for Emergent Leader Analysis

TL;DR: This paper discusses the experience designing and collecting a data corpus called ELEA (Emergent LEader Analysis), and describes the use of a light portable scenario to record small group meetings.
References
Proceedings ArticleDOI

The ICSI Meeting Corpus

TL;DR: A corpus of data collected from natural meetings that occurred at the International Computer Science Institute in Berkeley, California over a three-year period; it supports work in automatic speech recognition, noise robustness, dialog modeling, prosody, rich transcription, information retrieval, and more.
Book ChapterDOI

The AMI meeting corpus: a pre-announcement

TL;DR: The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings, created in the context of a project developing meeting browsing technology; it will eventually be released publicly.
Journal ArticleDOI

Movement coordination in social interaction: Some examples described

Adam Kendon
01 Jan 1970
TL;DR: It is found that the flow of movement in the listener may be rhythmically coordinated with the speech and movements of the speaker, and how the way in which individuals may be in synchrony with one another can vary is shown.

Automatic Analysis of Multimodal Group Actions in Meetings

TL;DR: In this paper, a statistical framework is proposed in which group actions result from the interactions of the individual participants, and the group actions are modelled using different HMM-based approaches, where the observations are provided by a set of audio-visual features monitoring the actions of individuals.
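
The entry above describes HMM-based modelling of group actions from audio-visual observations of individual participants. The following is a minimal, hypothetical sketch (not the cited paper's implementation), assuming the hmmlearn library and synthetic per-frame features; the number of hidden group-action states and the feature dimensionality are arbitrary choices for illustration.

# Hypothetical sketch of HMM-based group-action recognition: each frame is a
# vector of audio-visual features for the participants, and hidden states
# stand for group actions such as monologue or discussion. Data is synthetic.
import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumes hmmlearn is installed

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))          # 500 frames x 8 features (e.g. speech energy, motion)

model = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
model.fit(X)                           # unsupervised fit of four group-action states
states = model.predict(X)              # most likely state sequence (Viterbi decoding)
print(states[:20])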
ReportDOI

The ICSI Meeting Recorder Dialog Act (MRDA) Corpus

TL;DR: A new corpus of over 180,000 hand-annotated dialog act tags and accompanying adjacency pair annotations for roughly 72 hours of speech from 75 naturally-occurring meetings is described.