Topic

Closed captioning

About: Closed captioning is a research topic. Over the lifetime, 3,011 publications have been published within this topic, receiving 64,494 citations. The topic is also known as: CC.


Papers
Book Chapter
13 Oct 2019
TL;DR: A hierarchical Transformer-based medical imaging report generation model that improves BLEU-1 by more than 50% over other state-of-the-art image captioning methods.
Abstract: Computerized medical image report generation is of great significance for automating the workflow of medical diagnosis and treatment and for reducing health disparities. However, the task presents several challenges: the generated report should be precise, coherent, and contain heterogeneous information. Current deep learning based medical image captioning models rely on recurrent neural networks and extract only top-down visual features, which makes them slow and prone to generating incoherent, hard-to-comprehend reports. To tackle this problem, this paper proposes a hierarchical Transformer-based medical imaging report generation model. The model consists of two parts: (1) an Image Encoder that identifies regions of interest via a bottom-up attention module and extracts visual features from them; and (2) a non-recurrent Captioning Decoder that improves computational efficiency through parallel computation and generates a coherent paragraph-length medical imaging report. The model is trained with a self-critical reinforcement learning method and evaluated on the publicly available IU X-ray dataset. Experimental results show that it improves BLEU-1 by more than 50% over other state-of-the-art image captioning methods.

47 citations
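
To make the encoder-decoder split described above concrete, the sketch below shows a minimal non-recurrent captioning decoder attending over pre-extracted bottom-up region features. It is an illustration in PyTorch, not the authors' code: the module names, the feature dimensions (2048-dimensional regions, 512-dimensional model), and the toy tensors are assumptions, and the self-critical reinforcement learning stage is omitted.

```python
# Minimal sketch (not the paper's released code) of a bottom-up region encoder
# feeding a non-recurrent Transformer captioning decoder. Dimensions are assumed.
import torch
import torch.nn as nn

class RegionEncoder(nn.Module):
    """Projects pre-extracted region-of-interest features (e.g. from a
    bottom-up attention detector) into the decoder's embedding space."""
    def __init__(self, region_dim=2048, d_model=512):
        super().__init__()
        self.proj = nn.Linear(region_dim, d_model)

    def forward(self, regions):            # (batch, num_regions, region_dim)
        return self.proj(regions)          # (batch, num_regions, d_model)

class CaptionDecoder(nn.Module):
    """Non-recurrent Transformer decoder: during training all report tokens
    are processed in parallel while attending over the encoded regions."""
    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, memory):
        tgt = self.embed(tokens)
        n = tokens.size(1)
        causal = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
        hidden = self.decoder(tgt, memory, tgt_mask=causal)
        return self.out(hidden)

# Toy usage: random tensors stand in for detector output and report token ids.
regions = torch.randn(2, 36, 2048)
tokens = torch.randint(0, 1000, (2, 20))
memory = RegionEncoder()(regions)
logits = CaptionDecoder(vocab_size=1000)(tokens, memory)
print(logits.shape)  # torch.Size([2, 20, 1000])
```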

Proceedings Article
01 Jan 2015
TL;DR: A new approach to harvesting a large-scale, high-quality image-caption corpus that makes better use of already existing web data with no additional human effort, focusing on Deja Image-Captions: naturally occurring image descriptions that are repeated almost verbatim, by more than one individual, for different images.
Abstract: We present a new approach to harvesting a large-scale, high-quality image-caption corpus that makes better use of already existing web data with no additional human effort. The key idea is to focus on Deja Image-Captions: naturally occurring image descriptions that are repeated almost verbatim, by more than one individual, for different images. The resulting corpus associates 4 million images with 180K unique captions, capturing a rich spectrum of everyday narratives including figurative and pragmatic language. Exploring the use of the new corpus, we also present the new conceptual tasks of visually situated paraphrasing, creative image captioning, and creative visual paraphrasing.

47 citations
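
The harvesting idea, keeping only captions repeated near-verbatim by different individuals for different images, can be illustrated with a few lines of plain Python. This is a hedged sketch of the grouping step only, not the released pipeline; the record format, the normalize() heuristic, and the thresholds are assumptions.

```python
# Illustrative sketch: collect "deja" captions that recur across images
# posted by more than one individual. Record format is assumed.
from collections import defaultdict

def normalize(caption):
    """Light normalization so near-verbatim repeats collapse to one key."""
    return " ".join(caption.lower().split())

def deja_captions(records, min_images=2):
    """records: iterable of (image_id, user_id, caption) tuples.
    Returns {normalized caption: set of image ids} for captions that appear
    on several different images and come from at least two users."""
    images = defaultdict(set)
    users = defaultdict(set)
    for image_id, user_id, caption in records:
        key = normalize(caption)
        images[key].add(image_id)
        users[key].add(user_id)
    return {cap: imgs for cap, imgs in images.items()
            if len(imgs) >= min_images and len(users[cap]) >= 2}

sample = [
    ("img1", "u1", "Sunset over the lake"),
    ("img2", "u2", "sunset over the lake"),
    ("img3", "u1", "My new puppy"),
]
print(deja_captions(sample))  # only the repeated caption survives
```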

Proceedings Article
01 Jul 2019
TL;DR: A framework is introduced that (1) extracts procedures with a cross-modality module, which fuses video content with the entire transcript, and (2) generates captions by encoding video frames together with the snippet of transcript within each extracted procedure.
Abstract: Understanding narrated instructional videos is important for both research and real-world web applications. Motivated by video dense captioning, we propose a model that generates procedure captions from narrated instructional videos, which are sequences of step-wise clips with descriptions. Previous work on video dense captioning learns video segments and generates captions without considering transcripts. We argue that transcripts in narrated instructional videos can enhance video representation by providing fine-grained, complementary semantic textual information. In this paper, we introduce a framework to (1) extract procedures with a cross-modality module, which fuses video content with the entire transcript; and (2) generate captions by encoding video frames as well as the snippet of transcript within each extracted procedure. Experiments show that our model achieves state-of-the-art performance in procedure extraction and captioning, and ablation studies demonstrate that both the video frames and the transcripts are important for the task.

47 citations
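
A minimal sketch of the cross-modality idea follows: per-frame video features attend over transcript sentence embeddings before boundary scores are predicted for procedure segments. This is not the authors' implementation; the feature dimensions, the single attention layer, and the two-way boundary head are assumptions made for illustration.

```python
# Hedged sketch of cross-modality fusion for procedure extraction:
# video frames query the transcript, and the fused features score
# candidate segment boundaries. All dimensions are assumed.
import torch
import torch.nn as nn

class CrossModalityFusion(nn.Module):
    def __init__(self, video_dim=1024, text_dim=768, d_model=512):
        super().__init__()
        self.video_proj = nn.Linear(video_dim, d_model)
        self.text_proj = nn.Linear(text_dim, d_model)
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.boundary = nn.Linear(d_model, 2)   # start/end score per frame

    def forward(self, video_feats, transcript_feats):
        v = self.video_proj(video_feats)               # (B, num_frames, d)
        t = self.text_proj(transcript_feats)           # (B, num_sentences, d)
        fused, _ = self.attn(query=v, key=t, value=t)  # frames attend to transcript
        return self.boundary(fused + v)                # residual fusion

video = torch.randn(1, 200, 1024)        # e.g. 200 sampled frames
transcript = torch.randn(1, 40, 768)     # e.g. 40 transcript sentences
print(CrossModalityFusion()(video, transcript).shape)  # torch.Size([1, 200, 2])
```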

Journal Article
TL;DR: Zhang et al. propose a reconstruction network (RecNet) with a novel encoder-decoder-reconstructor architecture, which leverages both forward and backward flows for video captioning.
Abstract: In this paper, the problem of describing visual contents of a video sequence with natural language is addressed. Unlike previous video captioning work mainly exploiting the cues of video contents to make a language description, we propose a reconstruction network (RecNet) in a novel encoder-decoder-reconstructor architecture, which leverages both forward (video to sentence) and backward (sentence to video) flows for video captioning. Specifically, the encoder-decoder component makes use of the forward flow to produce a sentence based on the encoded video semantic features. Two types of reconstructors are subsequently proposed to employ the backward flow and reproduce the video features from local and global perspectives, respectively, capitalizing on the hidden state sequence generated by the decoder. Moreover, in order to make a comprehensive reconstruction of the video features, we propose to fuse the two types of reconstructors together. The generation loss yielded by the encoder-decoder component and the reconstruction loss introduced by the reconstructor are jointly cast into training the proposed RecNet in an end-to-end fashion. Furthermore, the RecNet is fine-tuned by CIDEr optimization via reinforcement learning, which significantly boosts the captioning performance. Experimental results on benchmark datasets demonstrate that the proposed reconstructor can boost the performance of video captioning consistently.

47 citations
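
The joint objective described above can be sketched schematically: the decoder's hidden states are pooled by a global reconstructor, mapped back to the video feature space, and the reconstruction error is added to the usual cross-entropy generation loss. The dimensions, the mean-pooling reconstructor, the MSE reconstruction term, and the weighting factor are assumptions for illustration, not the published RecNet code, and the CIDEr reinforcement learning fine-tuning is omitted.

```python
# Schematic sketch of an encoder-decoder-reconstructor training objective
# in the spirit of RecNet; details are assumed, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalReconstructor(nn.Module):
    """Maps pooled decoder hidden states back to the video feature space."""
    def __init__(self, hidden_dim=512, video_dim=1024):
        super().__init__()
        self.to_video = nn.Linear(hidden_dim, video_dim)

    def forward(self, decoder_hidden):          # (B, num_words, hidden_dim)
        pooled = decoder_hidden.mean(dim=1)     # global view of the sentence
        return self.to_video(pooled)            # (B, video_dim)

def recnet_loss(logits, targets, decoder_hidden, video_feat, recon, lam=0.2):
    """Generation loss (forward flow) plus reconstruction loss (backward flow)."""
    gen = F.cross_entropy(logits.flatten(0, 1), targets.flatten())
    rec = F.mse_loss(recon(decoder_hidden), video_feat.mean(dim=1))
    return gen + lam * rec                      # trained jointly, end to end

# Toy usage with random tensors in place of real encoder/decoder outputs.
batch, words, frames, hidden, vdim, vocab = 2, 12, 30, 512, 1024, 3000
loss = recnet_loss(torch.randn(batch, words, vocab),
                   torch.randint(0, vocab, (batch, words)),
                   torch.randn(batch, words, hidden),
                   torch.randn(batch, frames, vdim),
                   GlobalReconstructor(hidden, vdim))
print(loss.item())
```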

Patent
06 Mar 2001
TL;DR: In this patent, a speech-to-text processing system coupled to a signal separation processor and a signal combination processor is proposed for providing automated captioning for video broadcasts contained in AV signals.
Abstract: System, method and computer-readable medium containing instructions for providing AV signals with open or closed captioning information. The system includes a speech-to-text processing system coupled to a signal separation processor and a signal combination processor for providing automated captioning for video broadcasts contained in AV signals. The method includes separating an audio signal from an AV signal, converting the audio signal to text data, encoding the original AV signal with the converted text data to produce a captioned AV signal and recording and displaying the captioned AV signal. The system may be mobile and portable and may be used in a classroom environment for producing recorded captioned lectures and used for broadcasting live, captioned lectures. Further, the system may automatically translate spoken words in a first language into words in a second language and include the translated words in the captioning information.

47 citations
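
The patent's workflow (signal separation, speech-to-text conversion, recombination of the captions with the original AV signal) can be sketched as a simple pipeline. In the sketch below, ffmpeg stands in for the separation and combination processors, and transcribe() is a hypothetical placeholder returning timed segments from any speech-to-text engine; none of this is the patented system itself.

```python
# Pipeline sketch of separate -> transcribe -> recombine; ffmpeg is a stand-in
# for the signal processors, and transcribe() is a hypothetical ASR placeholder.
import subprocess

def separate_audio(av_path, wav_path):
    """Extract the audio track from the AV signal."""
    subprocess.run(["ffmpeg", "-y", "-i", av_path, "-vn", wav_path], check=True)

def transcribe(wav_path):
    """Placeholder: return (start_seconds, end_seconds, text) segments
    from whatever speech-to-text engine is available."""
    return [(0.0, 2.5, "Welcome to the lecture."),
            (2.5, 5.0, "Today we discuss captions.")]

def to_srt(segments):
    """Format timed segments as an SRT caption file."""
    def ts(t):
        h, rem = divmod(int(t * 1000), 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"
    lines = []
    for i, (start, end, text) in enumerate(segments, 1):
        lines += [str(i), f"{ts(start)} --> {ts(end)}", text, ""]
    return "\n".join(lines)

def combine(av_path, srt_path, out_path):
    """Mux the caption track back into the original AV signal."""
    subprocess.run(["ffmpeg", "-y", "-i", av_path, "-i", srt_path,
                    "-c", "copy", "-c:s", "mov_text", out_path], check=True)

separate_audio("lecture.mp4", "lecture.wav")
with open("lecture.srt", "w") as f:
    f.write(to_srt(transcribe("lecture.wav")))
combine("lecture.mp4", "lecture.srt", "captioned_lecture.mp4")
```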


Network Information
Related Topics (5)
Feature vector: 48.8K papers, 954.4K citations (83% related)
Object detection: 46.1K papers, 1.3M citations (82% related)
Convolutional neural network: 74.7K papers, 2M citations (82% related)
Deep learning: 79.8K papers, 2.1M citations (82% related)
Unsupervised learning: 22.7K papers, 1M citations (81% related)
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    536
2022    1,030
2021    504
2020    530
2019    448
2018    334