Topic

Closed captioning

About: Closed captioning is a research topic. Over its lifetime, 3,011 publications have been published within this topic, receiving 64,494 citations. The topic is also known as CC.


Papers
Patent
30 Jan 2014
TL;DR: In this patent, a method for synchronized utilization of an electronic device is presented: the device receives closed captioning data from an audio/video content receiver for a set of audio/video content, retrieves detail for a first event occurring in that content (the event being indicated by the captioning data), and presents content to the user based on the retrieved detail.
Abstract: A method for synchronized utilization of an electronic device is provided. The method receives closed captioning data from an audio/video content receiver for a set of audio/video content; retrieves detail for a first event occurring in the set of audio/video content, wherein the first event is indicated by the received closed captioning data; and presents content to a user, using the electronic device, based on the retrieved detail.

17 citations
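The claimed flow is straightforward to picture in code. Below is a minimal, hypothetical sketch, assuming captions arrive as decoded text lines and that `fetch_detail` and `present` are callables supplied by the companion app; none of these names come from the patent itself.

```python
# Hypothetical sketch of the patent's flow: scan the closed-caption
# stream from an A/V receiver for an event cue, fetch detail about the
# event, and present it on the companion device. Names are illustrative.

EVENT_KEYWORDS = {"touchdown", "goal", "breaking news"}

def follow_captions(caption_lines, fetch_detail, present):
    """caption_lines: iterable of decoded caption strings."""
    for line in caption_lines:
        lowered = line.lower()
        for keyword in EVENT_KEYWORDS:
            if keyword in lowered:
                detail = fetch_detail(keyword, line)  # e.g. query a metadata service
                present(detail)                       # render on the second screen
```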

Proceedings ArticleDOI
01 Oct 2014
TL;DR: Presents a web-based crowdsourcing editor for adding and correcting captions in video lectures; survey results and interviews indicate the tool is effective and efficient for captioning lecture videos and has considerable value in educational practice.
Abstract: Video of a classroom lecture has been shown to be a versatile learning resource comparable to a textbook. Captions in videos are highly valued by students, especially those with hearing disabilities and those whose first language is not English. Captioning by automatic speech recognition (ASR) tools is of limited use because of low and variable accuracy. Manual captioning with existing tools is a slow, tedious, and expensive task. In this work, we present a web-based crowdsourcing editor to add or correct captions for video lectures. The editor allows a group, e.g., students in a class, to correct the captions for different parts of a video lecture simultaneously. Users can review and correct each other's work. The caption editor has been successfully employed to caption STEM coursework videos. Our findings, based on survey results and interviews, indicate that this crowdsourcing tool is effective and efficient for captioning lecture videos and has considerable value in educational practice. The caption editor is integrated with the Indexed Captioned Searchable (ICS) Videos framework at the University of Houston, which has been used by dozens of courses and thousands of students. The ICS Videos framework, including the captioning tool, is open-source software available to educational institutions.

17 citations
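The paper does not publish its data model, but the core of such an editor can be sketched: captions are split into time-stamped segments, a user claims a segment to edit it exclusively, and a correction is accepted after peer review. The segment structure, locking scheme, and two-approval policy below are all assumed for illustration, not taken from the ICS Videos implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Segment:
    start: float                    # segment start time, seconds
    end: float                      # segment end time, seconds
    text: str                       # current caption text (ASR output or corrected)
    editor: Optional[str] = None    # user currently holding the edit lock
    approvals: list = field(default_factory=list)

def claim(segment: Segment, user: str) -> bool:
    """Lock a segment so only one user edits it at a time."""
    if segment.editor is None:
        segment.editor = user
        return True
    return False

def submit(segment: Segment, corrected: str) -> None:
    """Store the correction and release the lock for peer review."""
    segment.text = corrected
    segment.editor = None

def approve(segment: Segment, reviewer: str) -> bool:
    """Accept the correction once two peers have approved it (assumed policy)."""
    if reviewer not in segment.approvals:
        segment.approvals.append(reviewer)
    return len(segment.approvals) >= 2
```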

Posted Content
TL;DR: Describes a scalable method for automatically generating diverse audio for image captioning datasets, used to pretrain a dual encoder that aligns latent representations from both modalities, and shows that a masked margin softmax loss is superior to the standard triplet loss for such models.
Abstract: Systems that can associate images with their spoken audio captions are an important step towards visually grounded language learning. We describe a scalable method to automatically generate diverse audio for image captioning datasets. This supports pretraining deep networks for encoding both audio and images, which we do via a dual encoder that learns to align latent representations from both modalities. We show that a masked margin softmax loss for such models is superior to the standard triplet loss. We fine-tune these models on the Flickr8k Audio Captions Corpus and obtain state-of-the-art results, improving recall in the top 10 from 29.6% to 49.5%. We also obtain human ratings on retrieval outputs to better assess the impact of incidentally matching image-caption pairs that were not associated in the data, finding that automatic evaluation substantially underestimates the quality of the retrieved results.

17 citations
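The margin softmax objective the abstract favors over triplet loss can be sketched compactly. The version below is deliberately simplified: it uses a fixed margin and omits the masking of incidentally matching pairs that gives the paper's loss its name, treating each in-batch pair as the positive and all other rows and columns as negatives, in both retrieval directions.

```python
import torch
import torch.nn.functional as F

def margin_softmax_loss(audio_emb, image_emb, margin=0.001):
    """Simplified sketch: paired rows are positives, all other in-batch
    pairs are negatives; a margin is subtracted from the positives."""
    sim = audio_emb @ image_emb.t()                       # (B, B) similarities
    b = sim.size(0)
    sim = sim - margin * torch.eye(b, device=sim.device)  # penalize positives
    targets = torch.arange(b, device=sim.device)
    # softmax cross-entropy over both retrieval directions
    return F.cross_entropy(sim, targets) + F.cross_entropy(sim.t(), targets)
```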

Journal ArticleDOI
Cong Bai, Anqi Zheng, Yuan Huang, Xiang Pan, Nan Chen
01 Dec 2021 - Displays
TL;DR: Presents a framework that uses a CNN-based generation model trained with conditional generative adversarial training (CGAN) to generate image captions; a multi-modal graph convolution network (MGCN) exploits visual relationships between objects so that the generated captions carry semantic meaning.

17 citations
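Only the TL;DR is available here, but an MGCN presumably performs message passing over detected objects. A generic single graph-convolution step over region features, with a normalized adjacency assumed to encode the detected visual relationships, would look like the sketch below; this is an illustration of the general technique, not the authors' code.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One generic message-passing step over detected objects."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_objects, dim) region features from the CNN detector
        # adj: (num_objects, num_objects) normalized relationship adjacency
        return torch.relu(self.proj(adj @ x))
```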

Posted Content
TL;DR: This work introduces a novel factorization of the latent space, termed context-object split, to model diversity in contextual descriptions across images and texts within the dataset, and extends this to images with novel objects and without paired captions in the training data.
Abstract: Diverse image captioning models aim to learn one-to-many mappings that are innate to cross-domain datasets, such as of images and texts. Current methods for this task are based on generative latent variable models, e.g. VAEs with structured latent spaces. Yet, the amount of multimodality captured by prior work is limited to that of the paired training data; the true diversity of the underlying generative process is not fully captured. To address this limitation, we leverage the contextual descriptions in the dataset that explain similar contexts in different visual scenes. To this end, we introduce a novel factorization of the latent space, termed context-object split, to model diversity in contextual descriptions across images and texts within the dataset. Our framework not only enables diverse captioning through context-based pseudo supervision, but extends this to images with novel objects and without paired captions in the training data. We evaluate our COS-CVAE approach on the standard COCO dataset and on the held-out COCO dataset consisting of images with novel objects, showing significant gains in accuracy and diversity.

17 citations
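The context-object split is, at its core, a factorized latent space. A minimal VAE-style sketch of the idea follows; all class, layer, and dimension names are assumed for illustration and are not taken from the COS-CVAE code.

```python
import torch
import torch.nn as nn

class ContextObjectSplitEncoder(nn.Module):
    """Sketch: encode object evidence and contextual evidence into two
    separate Gaussian latent factors (names and dims are assumptions)."""
    def __init__(self, feat_dim: int, z_dim: int):
        super().__init__()
        self.object_head = nn.Linear(feat_dim, 2 * z_dim)   # -> (mu, logvar)
        self.context_head = nn.Linear(feat_dim, 2 * z_dim)  # -> (mu, logvar)

    @staticmethod
    def sample(stats: torch.Tensor) -> torch.Tensor:
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, object_feats, context_feats):
        z_object = self.sample(self.object_head(object_feats))
        z_context = self.sample(self.context_head(context_feats))
        # a caption decoder (not shown) would condition on both factors
        return torch.cat([z_object, z_context], dim=-1)
```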


Network Information
Related Topics (5)
Feature vector: 48.8K papers, 954.4K citations, 83% related
Object detection: 46.1K papers, 1.3M citations, 82% related
Convolutional neural network: 74.7K papers, 2M citations, 82% related
Deep learning: 79.8K papers, 2.1M citations, 82% related
Unsupervised learning: 22.7K papers, 1M citations, 81% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    536
2022    1,030
2021    504
2020    530
2019    448
2018    334