Topic

Closed captioning

About: Closed captioning is a research topic. Over its lifetime, 3,011 publications have been published within this topic, receiving 64,494 citations. The topic is also known as: CC.


Papers
Proceedings ArticleDOI
30 Aug 2019
TL;DR: It is shown that vocabulary coherence between words and the syntactic paradigm of sentences are also important for generating high-quality image captions, and the proposed Reflective Decoding Network (RDN) enhances both the long-sequence dependency and position perception of words in a caption decoder.
Abstract: State-of-the-art image captioning methods mostly focus on improving visual features, while less attention has been paid to utilizing the inherent properties of language to boost captioning performance. In this paper, we show that vocabulary coherence between words and the syntactic paradigm of sentences are also important for generating high-quality image captions. Following the conventional encoder-decoder framework, we propose the Reflective Decoding Network (RDN) for image captioning, which enhances both the long-sequence dependency and position perception of words in a caption decoder. Our model learns to collaboratively attend to both visual and textual features and meanwhile perceive each word's relative position in the sentence to maximize the information delivered in the generated caption. We evaluate the effectiveness of our RDN on the COCO image captioning dataset and achieve superior performance over previous methods. Further experiments reveal that our approach is particularly advantageous for hard cases with complex scenes to describe.
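To make the reflective decoding idea concrete, here is a minimal PyTorch sketch of a caption decoder that attends to image-region features and to its own past hidden states while adding a learned position cue for each word. It is a sketch under assumed shapes and module names (ReflectiveDecoderSketch and all of its layers are illustrative), not the authors' RDN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReflectiveDecoderSketch(nn.Module):
    """Minimal sketch: a caption decoder that attends to visual regions and to
    its own past hidden states, with a learned relative-position cue per word."""

    def __init__(self, vocab_size, embed_dim=512, hidden_dim=512, vis_dim=2048, max_len=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTMCell(embed_dim + vis_dim, hidden_dim)
        self.vis_att = nn.Linear(hidden_dim + vis_dim, 1)      # attention over image regions
        self.txt_att = nn.Linear(hidden_dim * 2, 1)            # attention over past hidden states
        self.pos_embed = nn.Embedding(max_len, hidden_dim)     # relative position of each word
        self.out = nn.Linear(hidden_dim * 2, vocab_size)

    def forward(self, vis_feats, captions):
        # vis_feats: (B, R, vis_dim) region features; captions: (B, T) token ids, T <= max_len
        B, R, _ = vis_feats.shape
        h = vis_feats.new_zeros(B, self.lstm.hidden_size)
        c = vis_feats.new_zeros(B, self.lstm.hidden_size)
        past, logits = [], []
        for t in range(captions.size(1)):
            # visual attention: score each region against the current hidden state
            q = h.unsqueeze(1).expand(-1, R, -1)
            a = F.softmax(self.vis_att(torch.cat([q, vis_feats], -1)).squeeze(-1), dim=1)
            ctx = (a.unsqueeze(-1) * vis_feats).sum(1)
            h, c = self.lstm(torch.cat([self.embed(captions[:, t]), ctx], -1), (h, c))
            past.append(h)
            # "reflective" textual attention over all hidden states produced so far
            H = torch.stack(past, 1)                            # (B, t+1, hidden)
            qh = h.unsqueeze(1).expand(-1, H.size(1), -1)
            b = F.softmax(self.txt_att(torch.cat([qh, H], -1)).squeeze(-1), dim=1)
            txt_ctx = (b.unsqueeze(-1) * H).sum(1)
            pos = self.pos_embed(torch.full((B,), t, dtype=torch.long, device=h.device))
            logits.append(self.out(torch.cat([h + pos, txt_ctx], -1)))
        return torch.stack(logits, 1)                           # (B, T, vocab) word logits
```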

52 citations

Book ChapterDOI
08 Sep 2018
TL;DR: The authors propose to augment paragraph generation techniques with coherence vectors, global topic vectors, and modeling of the inherent ambiguity of associating paragraphs with images via a variational auto-encoder formulation.
Abstract: Paragraph generation from images, which has gained popularity recently, is an important task for video summarization, editing, and support of the disabled. Traditional image captioning methods fall short on this front, since they aren’t designed to generate long informative descriptions. Moreover, the vanilla approach of simply concatenating multiple short sentences, possibly synthesized from a classical image captioning system, doesn’t embrace the intricacies of paragraphs: coherent sentences, globally consistent structure, and diversity. To address those challenges, we propose to augment paragraph generation techniques with “coherence vectors,” “global topic vectors,” and modeling of the inherent ambiguity of associating paragraphs with images, via a variational auto-encoder formulation. We demonstrate the effectiveness of the developed approach on two datasets, outperforming existing state-of-the-art techniques on both.
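The variational formulation can be pictured with a short sketch: a global topic vector is sampled from an image-conditioned posterior via the standard reparameterization trick, and a coherence vector is carried from one sentence state to the next. Everything here (names, dimensions, the single-GRU sentence loop) is an illustrative assumption rather than the authors' model.

```python
import torch
import torch.nn as nn

class ParagraphVAESketch(nn.Module):
    """Minimal sketch: sample a global topic vector with VAE reparameterization,
    then roll out sentence states while passing a coherence vector forward."""

    def __init__(self, img_dim=2048, topic_dim=256, hidden_dim=512, num_sentences=6):
        super().__init__()
        self.to_mu = nn.Linear(img_dim, topic_dim)
        self.to_logvar = nn.Linear(img_dim, topic_dim)
        self.sentence_rnn = nn.GRUCell(topic_dim + hidden_dim, hidden_dim)
        self.coherence = nn.Linear(hidden_dim, hidden_dim)   # summarizes the previous sentence
        self.num_sentences = num_sentences

    def forward(self, img_feat):
        # img_feat: (B, img_dim) pooled image representation
        mu, logvar = self.to_mu(img_feat), self.to_logvar(img_feat)
        topic = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
        # KL term of the VAE objective (added to the captioning loss during training)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
        h = img_feat.new_zeros(img_feat.size(0), self.coherence.in_features)
        coh = torch.zeros_like(h)
        sentence_states = []
        for _ in range(self.num_sentences):
            h = self.sentence_rnn(torch.cat([topic, coh], -1), h)
            coh = torch.tanh(self.coherence(h))   # coherence vector passed to the next sentence
            sentence_states.append(h)             # each state would seed a word-level decoder
        return torch.stack(sentence_states, 1), kl
```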

52 citations

Journal ArticleDOI
TL;DR: A novel sound active attention framework is proposed for more specific caption generation according to the interest of the observer, and it can generate sentences that capture the focus of human attention.
Abstract: Attention mechanism-based image captioning methods have achieved good results in the remote sensing field, but they are driven by tagged sentences, which is called passive attention. However, different observers may give different levels of attention to the same image, so the attention of observers during testing may not be consistent with the attention during training. As a direct and natural form of human–machine interaction, speech is much faster than typing sentences, and sound can represent the attention of different observers; this is called active attention. Active attention can describe the image in a more targeted way; for example, in disaster assessments, the situation can be obtained quickly and the areas related to a specific disaster can be located. A novel sound active attention framework is proposed for more specific caption generation according to the interest of the observer. First, sound is modeled by mel-frequency cepstral coefficients (MFCCs) and the image is encoded by a convolutional neural network (CNN). Then, to handle the continuous nature of sound, a sound module and an attention module are designed based on gated recurrent units (GRUs). Finally, the sound-guided image feature processed by the attention module is passed to the output module to generate a descriptive sentence. Experiments on both fake and real sound datasets show that the proposed method can generate sentences that capture the focus of human attention.
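A minimal sketch of the sound-guided pipeline, assuming torchaudio for MFCC extraction: the observer's speech is converted to MFCCs, summarized by a GRU, and the resulting state is used to attend over CNN region features. Module names and sizes are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio

class SoundGuidedAttentionSketch(nn.Module):
    """Minimal sketch: MFCCs of the observer's speech drive a GRU whose final
    state attends over CNN image region features; the attended feature would
    then feed a caption decoder (output module omitted here)."""

    def __init__(self, n_mfcc=40, sound_dim=256, vis_dim=2048):
        super().__init__()
        self.mfcc = torchaudio.transforms.MFCC(sample_rate=16000, n_mfcc=n_mfcc)
        self.sound_gru = nn.GRU(n_mfcc, sound_dim, batch_first=True)
        self.att = nn.Linear(sound_dim + vis_dim, 1)

    def forward(self, waveform, region_feats):
        # waveform: (B, samples) mono audio at 16 kHz; region_feats: (B, R, vis_dim) CNN features
        mfcc = self.mfcc(waveform)                               # (B, n_mfcc, frames)
        _, sound_state = self.sound_gru(mfcc.transpose(1, 2))    # h_n: (1, B, sound_dim)
        q = sound_state[-1].unsqueeze(1).expand(-1, region_feats.size(1), -1)
        weights = F.softmax(self.att(torch.cat([q, region_feats], -1)).squeeze(-1), dim=1)
        return (weights.unsqueeze(-1) * region_feats).sum(1)     # sound-guided image feature
```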

52 citations

Posted Content
TL;DR: This paper shows how audio and speech modalities may improve a dense video captioning model; it applies an automatic speech recognition system to obtain a temporally aligned textual description of the speech and treats it as a separate input alongside the video frames and the corresponding audio track.
Abstract: Dense video captioning is the task of localizing interesting events in an untrimmed video and producing a textual description (caption) for each localized event. Most previous works in dense video captioning are based solely on visual information and completely ignore the audio track. However, audio, and speech in particular, are vital cues for a human observer in understanding an environment. In this paper, we present a new dense video captioning approach that is able to utilize any number of modalities for event description. Specifically, we show how audio and speech modalities may improve a dense video captioning model. We apply an automatic speech recognition (ASR) system to obtain a temporally aligned textual description of the speech (similar to subtitles) and treat it as a separate input alongside the video frames and the corresponding audio track. We formulate the captioning task as a machine translation problem and utilize the recently proposed Transformer architecture to convert multi-modal input data into textual descriptions. We demonstrate the performance of our model on the ActivityNet Captions dataset. The ablation studies indicate a considerable contribution from the audio and speech components, suggesting that these modalities contain substantial information complementary to the video frames. Furthermore, we provide an in-depth analysis of the ActivityNet Captions results by leveraging the category tags obtained from the original YouTube videos. Code is publicly available: this http URL
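The multi-modal setup can be sketched roughly as follows: each modality (video features, audio features, ASR transcript tokens) is projected or embedded to a common width, concatenated into one encoder sequence, and a vanilla Transformer decodes the caption tokens. This is a hedged sketch with assumed names and dimensions, with positional encodings omitted for brevity; it is not the authors' released code.

```python
import torch
import torch.nn as nn

class MultiModalCaptionerSketch(nn.Module):
    """Minimal sketch: project video, audio, and ASR-token streams to a shared
    width, concatenate them as the encoder input of a vanilla Transformer, and
    decode caption tokens. Positional encodings are omitted for brevity."""

    def __init__(self, vocab_size, video_dim=1024, audio_dim=128, d_model=512):
        super().__init__()
        self.video_proj = nn.Linear(video_dim, d_model)
        self.audio_proj = nn.Linear(audio_dim, d_model)
        self.asr_embed = nn.Embedding(vocab_size, d_model)   # ASR transcript tokens
        self.cap_embed = nn.Embedding(vocab_size, d_model)   # caption tokens
        self.transformer = nn.Transformer(d_model=d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, video_feats, audio_feats, asr_tokens, caption_tokens):
        # video_feats: (B, Tv, video_dim), audio_feats: (B, Ta, audio_dim),
        # asr_tokens: (B, Ts) token ids, caption_tokens: (B, Tc) token ids
        memory_in = torch.cat([self.video_proj(video_feats),
                               self.audio_proj(audio_feats),
                               self.asr_embed(asr_tokens)], dim=1)
        tgt = self.cap_embed(caption_tokens)
        # causal mask so each caption position only sees earlier positions
        mask = self.transformer.generate_square_subsequent_mask(tgt.size(1)).to(tgt.device)
        dec = self.transformer(memory_in, tgt, tgt_mask=mask)
        return self.out(dec)   # (B, Tc, vocab) caption logits
```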

52 citations

Posted Content
TL;DR: Wang et al. proposed a hierarchical LSTM with adjusted temporal attention (hLSTMat) approach for video captioning, which utilizes temporal attention to select specific frames when predicting the related words.
Abstract: Recent progress has been made in using attention-based encoder-decoder frameworks for video captioning. However, most existing decoders apply the attention mechanism to every generated word, including both visual words (e.g., "gun" and "shooting") and non-visual words (e.g., "the", "a"). These non-visual words can be easily predicted using a natural language model without considering visual signals or attention, and imposing the attention mechanism on them can mislead and decrease the overall performance of video captioning. To address this issue, we propose a hierarchical LSTM with adjusted temporal attention (hLSTMat) approach for video captioning. Specifically, the proposed framework utilizes temporal attention to select specific frames for predicting the related words, while the adjusted temporal attention decides whether to depend on the visual information or the language context information. Also, a hierarchical LSTM is designed to simultaneously consider both low-level visual information and high-level language context information to support video caption generation. To demonstrate the effectiveness of our proposed framework, we test our method on two prevalent datasets, MSVD and MSR-VTT, and the experimental results show that our approach outperforms state-of-the-art methods on both datasets.
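The adjusted-attention step can be illustrated with a small sketch: a sigmoid gate computed from the language LSTM state decides how much the word prediction relies on the attended frame features versus the language context alone. All names and sizes below are illustrative assumptions, not the hLSTMat code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdjustedAttentionSketch(nn.Module):
    """Minimal sketch of one adjusted temporal-attention step: a sigmoid gate
    mixes the attended visual context with the language-only hidden state."""

    def __init__(self, hidden_dim=512, frame_dim=1024, vocab_size=10000):
        super().__init__()
        self.temporal_att = nn.Linear(hidden_dim + frame_dim, 1)
        self.gate = nn.Linear(hidden_dim, 1)            # "adjust" gate beta
        self.vis_proj = nn.Linear(frame_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, h_lang, frame_feats):
        # h_lang: (B, hidden_dim) language-LSTM state; frame_feats: (B, T, frame_dim)
        q = h_lang.unsqueeze(1).expand(-1, frame_feats.size(1), -1)
        alpha = F.softmax(self.temporal_att(torch.cat([q, frame_feats], -1)).squeeze(-1), dim=1)
        vis_ctx = self.vis_proj((alpha.unsqueeze(-1) * frame_feats).sum(1))
        beta = torch.sigmoid(self.gate(h_lang))         # near 0 for non-visual words ("the", "a")
        fused = beta * vis_ctx + (1 - beta) * h_lang    # adjusted context
        return self.out(fused)                          # word logits for the current step
```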

52 citations


Network Information
Related Topics (5)
Feature vector: 48.8K papers, 954.4K citations, 83% related
Object detection: 46.1K papers, 1.3M citations, 82% related
Convolutional neural network: 74.7K papers, 2M citations, 82% related
Deep learning: 79.8K papers, 2.1M citations, 82% related
Unsupervised learning: 22.7K papers, 1M citations, 81% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    536
2022    1,030
2021    504
2020    530
2019    448
2018    334