
Gunnar A. Sigurdsson

Researcher at Carnegie Mellon University

Publications - 32
Citations - 2119

Gunnar A. Sigurdsson is an academic researcher from Carnegie Mellon University. The author has contributed to research in the topics of Computer science and Asynchronous communication. The author has an h-index of 14 and has co-authored 21 publications receiving 1551 citations. Previous affiliations of Gunnar A. Sigurdsson include Johns Hopkins University.

Papers
Book Chapter (DOI)

Hollywood in Homes: Crowdsourcing Data Collection for Activity Understanding

TL;DR: This work proposes a novel Hollywood in Homes approach to data collection, producing a new dataset, Charades, in which hundreds of people record videos in their own homes while acting out casual everyday activities, and it evaluates and provides baseline results for several tasks, including action recognition and automatic description generation.
Posted Content

Hollywood in Homes: Crowdsourcing Data Collection for Activity Understanding

TL;DR: Charades, as discussed by the authors, is a collection of 9,848 annotated videos with an average length of 30 seconds, showing activities of 267 people from three continents. Each video is annotated with multiple free-text descriptions, action labels, action intervals, and classes of interacted objects.
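To make the annotation structure above concrete, here is a minimal Python sketch of parsing per-video action intervals from a Charades-style CSV. The 'id'/'actions' column names and the 'c092 11.90 21.20;...' encoding follow the publicly released Charades annotation files, but treat the exact format as an assumption if working from a different release.

```python
import csv
from dataclasses import dataclass

@dataclass
class ActionSegment:
    action_class: str   # e.g. "c092"
    start: float        # seconds
    end: float          # seconds

def load_annotations(csv_path):
    """Parse per-video action intervals from a Charades-style CSV.

    Assumes an 'id' column and an 'actions' column encoded as
    'c092 11.90 21.20;c147 0.00 12.60' (class, start, end triples
    separated by semicolons).
    """
    annotations = {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            segments = []
            for triple in filter(None, row.get("actions", "").split(";")):
                cls, start, end = triple.split()
                segments.append(ActionSegment(cls, float(start), float(end)))
            annotations[row["id"]] = segments
    return annotations

# Example usage: count annotated action segments across all videos.
# anns = load_annotations("Charades_v1_train.csv")
# print(sum(len(v) for v in anns.values()), "action segments")
```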
Proceedings Article (DOI)

Asynchronous Temporal Fields for Action Recognition

TL;DR: This work proposes a fully-connected temporal CRF model for reasoning over various aspects of activities that include objects, actions, and intentions, where the potentials are predicted by a deep network.
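As a rough illustration of the kind of model described above, the sketch below runs generic mean-field inference in a fully-connected temporal CRF whose per-frame unary potentials would, in the paper's setting, be predicted by a deep network. It is a simplified stand-in, not the authors' formulation: the Gaussian temporal kernel, the Potts-style compatibility, and all function names are assumptions for illustration.

```python
import numpy as np

def mean_field_temporal_crf(unary, compat, sigma=5.0, iters=10):
    """Approximate marginals for a fully-connected temporal CRF.

    unary:  (T, C) log-potentials per frame (abstract here; in practice
            these would come from a deep network).
    compat: (C, C) label-disagreement penalty matrix.
    Every pair of frames is connected, weighted by a Gaussian temporal kernel.
    """
    T, C = unary.shape
    # Pairwise temporal kernel: closer frames influence each other more.
    t = np.arange(T)
    kernel = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * sigma ** 2))
    np.fill_diagonal(kernel, 0.0)           # no self-connections

    def softmax(x):
        x = x - x.max(axis=1, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=1, keepdims=True)

    Q = softmax(unary)                       # initialize from unaries only
    for _ in range(iters):
        message = kernel @ Q @ compat        # aggregate beliefs from all other frames
        Q = softmax(unary - message)         # mean-field update
    return Q

# Toy usage: 20 frames, 5 action classes, random unaries, Potts compatibility.
# rng = np.random.default_rng(0)
# unary = rng.normal(size=(20, 5))
# compat = 1.0 - np.eye(5)                   # penalize disagreeing labels
# marginals = mean_field_temporal_crf(unary, compat)
```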
Proceedings Article (DOI)

Actor and Observer: Joint Modeling of First and Third-Person Videos

TL;DR: Charades-Ego is introduced, a large-scale dataset of paired first-person and third-person videos, involving 112 people and 4000 paired videos, which enables learning the link between the actor and observer perspectives and addresses one of the biggest bottlenecks facing egocentric vision research.
Posted Content

Charades-Ego: A Large-Scale Dataset of Paired Third and First Person Videos.

TL;DR: Charades-Ego has temporal annotations and textual descriptions, making it suitable for egocentric video classification, localization, captioning, and new tasks utilizing the cross-modal nature of the data.
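To make the paired actor/observer structure of Charades-Ego concrete, here is a small sketch that groups video files into (third-person, first-person) pairs by shared id. The 'EGO' id suffix and the flat directory of .mp4 files are assumptions about the released layout; adjust to the actual dataset organization.

```python
from pathlib import Path

def pair_ego_third_person(video_dir):
    """Group Charades-Ego-style videos into (third-person, first-person) pairs.

    Assumes an egocentric recording shares the id of its third-person
    counterpart with an 'EGO' suffix (e.g. 'ABC12.mp4' and 'ABC12EGO.mp4').
    """
    videos = {p.stem: p for p in Path(video_dir).glob("*.mp4")}
    pairs = []
    for vid_id, third_person in videos.items():
        if vid_id.endswith("EGO"):
            continue                        # only third-person ids seed a pair
        ego = videos.get(vid_id + "EGO")
        if ego is not None:
            pairs.append((third_person, ego))
    return pairs

# pairs = pair_ego_third_person("CharadesEgo_videos")
# print(len(pairs), "actor/observer video pairs")
```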