Toward an infrastructure for data-driven multimodal communication research
Frequently Asked Questions (15)
Q2. What is the way to analyze video recordings?
Video recordings can be submitted to a sketch filter, which removes textures critical to personal identification yet retains the structural elements of multimodal communication (Diemer et al. 2016).
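The idea behind such a filter can be sketched in a few lines: invert the frame, blur it, then dodge-blend the blur against the original, so uniform textured regions wash out to white while strong edges survive. This is a minimal NumPy illustration of the general technique, not the filter Red Hen actually uses; the box blur, radius, and synthetic frame are all assumptions for the demo.

```python
import numpy as np

def sketch_filter(gray, blur_radius=3):
    """Approximate a pencil-sketch effect: invert, blur, dodge-blend.

    `gray` is a 2-D uint8 array; flat regions wash out to near-white
    while sharp edges (structural elements) remain visible.
    """
    inverted = 255.0 - gray.astype(np.float64)
    # Simple box blur via shifted sums (stands in for a Gaussian blur).
    k = 2 * blur_radius + 1
    padded = np.pad(inverted, blur_radius, mode="edge")
    blurred = np.zeros_like(inverted)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    blurred /= k * k
    # Colour-dodge blend: bright wherever the blur tracks the original.
    sketch = gray.astype(np.float64) * 255.0 / np.maximum(255.0 - blurred, 1.0)
    return np.clip(sketch, 0, 255).astype(np.uint8)

# A flat region becomes near-white; a hard edge survives as a darker band.
frame = np.full((32, 32), 200, dtype=np.uint8)
frame[:, 16:] = 40  # hard vertical edge at column 16
result = sketch_filter(frame)
```

Because the dodge blend normalizes each pixel by its local neighbourhood, uniform regions of any brightness map to white; only discontinuities, i.e. the structural outlines, stay dark.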
Q3. What are some of the detectors that are used in Red Hen?
Some of these detectors make use of machine learning models that are learned from data using supervised or unsupervised learning methods.
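As a toy illustration of the supervised case, the snippet below trains a nearest-centroid classifier on labelled feature vectors and uses it as a binary "detector". The feature values and class labels are invented for the example; Red Hen's actual detectors, features, and models are not specified here.

```python
import numpy as np

# Labelled training data: two 2-D feature vectors per class.
train_X = np.array([[0.1, 0.2], [0.2, 0.1],   # class 0: e.g. "no gesture"
                    [0.9, 0.8], [0.8, 0.9]])  # class 1: e.g. "gesture"
train_y = np.array([0, 0, 1, 1])

# Supervised "training": one centroid per class.
centroids = np.array([train_X[train_y == c].mean(axis=0) for c in (0, 1)])

def detect(x):
    """Label a feature vector with the class of its nearest centroid."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

label = detect(np.array([0.85, 0.95]))  # → 1
```

An unsupervised detector would instead discover the centroids itself (e.g. by clustering) without the `train_y` labels.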
Q4. How can you access the metadata and annotations?
The metadata and annotations, along with the video and audio, can be accessed by Red Hen members through the Edge search engine (available via newsscape.library.ucla.edu), which provides an easy and user-friendly web-based user interface.
Q5. Why are captions created on the fly?
Because television captions are typically created on the fly by professional captioners, they lag behind the speech and video stream by a low but variable number of seconds.
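If the lag were roughly constant for a given broadcast, it could be estimated and removed by comparing caption timestamps against independently obtained speech timestamps for the same words. The sketch below does this with a median offset; the index-based pairing of words and the constant-lag assumption are illustrative simplifications, not Red Hen's actual realignment procedure.

```python
from statistics import median

def estimate_caption_lag(caption_times, speech_times):
    """Estimate the caption delay (seconds) as the median offset
    between caption timestamps and matched speech timestamps.

    Assumes both lists cover the same word sequence in order.
    """
    return median(c - s for c, s in zip(caption_times, speech_times))

def realign(caption_times, lag):
    """Shift caption timestamps back by the estimated lag."""
    return [t - lag for t in caption_times]

speech  = [0.0, 0.8, 1.5, 2.4]   # when the words were actually spoken
caption = [3.1, 3.8, 4.6, 5.4]   # when the captioner emitted them
lag = estimate_caption_lag(caption, speech)  # ≈ 3.05 s
aligned = realign(caption, lag)
```

Using the median rather than the mean keeps the estimate robust against the occasional caption that arrives much later than its neighbours.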
Q6. What is the common way to analyze the structure of English text?
The SEMAFOR project (http://www.ark.cs.cmu.edu/SEMAFOR) performs an automatic analysis of the frame-semantic structure of English text, using the FrameNet 1.5 release.
Q7. What is the main idea behind the use of tagged and searchable multimodal big data?
The availability of tagged and searchable multimodal big data opens up new opportunities for linguistics research, extending the utility of large corpora noted by Davies (2015).
Q8. What are the main features of Red Hen’s annotation process?
Red Hen’s infrastructure and tools also permit the incorporation of existing datasets, such as handcrafted collections of experimental data.
Q9. What is the purpose of this paper?
In this paper, the authors describe the Distributed Little Red Hen Lab, a global laboratory and consortium designed to facilitate large-scale collaborative research into multimodal communication.
Q10. Why are the annotations in red hen difficult to detect?
Imperfections arise because the frames in Red Hen's videos are visually complex, which often makes it extremely difficult to detect small motions or body parts precisely.
Q11. What is the way to align the text with the audio?
Red Hen uses the open-source Gentle project (lowerquality.com/gentle) to align the text with the audio, generating precise timestamps for each word.
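Gentle emits its alignment as JSON, with one entry per word carrying the start and end time in the audio. The field names below (`words`, `word`, `case`, `start`, `end`) follow the shape of Gentle's published output but should be treated as assumptions; the excerpt itself is invented.

```python
import json

# A minimal excerpt in the shape of Gentle's JSON output (field names
# are an assumption based on the project's documented format).
gentle_output = """
{
  "words": [
    {"word": "red", "case": "success", "start": 0.42, "end": 0.65},
    {"word": "hen", "case": "success", "start": 0.66, "end": 0.91},
    {"word": "lab", "case": "not-found-in-audio"}
  ]
}
"""

def word_timestamps(raw_json):
    """Return (word, start, end) for every successfully aligned word."""
    doc = json.loads(raw_json)
    return [(w["word"], w["start"], w["end"])
            for w in doc["words"] if w.get("case") == "success"]

aligned = word_timestamps(gentle_output)
# aligned → [("red", 0.42, 0.65), ("hen", 0.66, 0.91)]
```

Words the aligner could not locate in the audio carry no timestamps, so downstream code should filter on the success case before indexing words by time, as above.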
Q12. What is the purpose of the Red Hen search interface?
Red Hen provides a search interface (Figure 1) aligned to this need, developed collaboratively by linguists and computer scientists on their team, an example of the kind of interdisciplinary collaboration common in Red Hen.
Q13. What is the principle of the archive?
In principle, any recording in any format of any human communication is suitable for inclusion in the archive, which consists of networked data across the Red Hen cooperative, either natively digital or converted to digital form.
Q14. What is the name of the archive?
It is an official archive of the University of California, Los Angeles (UCLA) Library, the digital continuation of UCLA’s Communication Studies Archive, initiated by Paul Rosenthal in 1972.
Q15. What is the main purpose of Red Hen?
Access to the Red Hen tools and data is provided through the project website (redhenlab.org), where researchers can both access data and contribute to or provide feedback on the Red Hen project.