Open Access · Posted Content

Video Summarization with Long Short-term Memory

TL;DR
Long Short-Term Memory (LSTM), a special type of recurrent neural network, is used to model the variable-range dependencies entailed in the task of video summarization, and domain adaptation across auxiliary annotated datasets further improves summarization by reducing the discrepancies in their statistical properties.
Abstract
We propose a novel supervised learning technique for summarizing videos by automatically selecting keyframes or key subshots. Casting video summarization as a structured prediction problem on sequential data, our main idea is to use Long Short-Term Memory (LSTM), a special type of recurrent neural network, to model the variable-range dependencies entailed in the task. Our learning models attain state-of-the-art results on two benchmark video datasets. Detailed analysis justifies the design of the models; in particular, we show that it is crucial to take the sequential structure of videos into account and to model it. Beyond advances in modeling techniques, we introduce techniques to address the need for large amounts of annotated data when training complex learning models. Here, our main idea is to exploit existing auxiliary annotated video datasets, albeit heterogeneous in visual styles and contents. Specifically, we show that domain adaptation techniques can improve summarization by reducing the discrepancies in statistical properties across those datasets.
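
The page does not reproduce the model details, but the core idea, an LSTM that scores frames for keyframe selection, can be illustrated with a minimal PyTorch sketch. Everything below (bidirectionality, layer sizes, the sigmoid scoring head) is an assumption for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed architecture, not the paper's exact model) of an
# LSTM-based keyframe scorer: a bidirectional LSTM reads per-frame CNN
# features and a small head emits a keyframe probability for each frame.
import torch
import torch.nn as nn

class LSTMFrameScorer(nn.Module):
    def __init__(self, feat_dim=1024, hidden=256):
        super().__init__()
        # A bidirectional LSTM captures variable-range temporal dependencies.
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, frames):            # frames: (batch, time, feat_dim)
        h, _ = self.lstm(frames)          # (batch, time, 2 * hidden)
        return self.head(h).squeeze(-1)   # (batch, time) keyframe scores

scores = LSTMFrameScorer()(torch.randn(1, 120, 1024))  # a 120-frame clip
```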


Citations
Proceedings Article

Unsupervised Video Summarization with Adversarial LSTM Networks

TL;DR: This paper addresses unsupervised video summarization, formulated as selecting a sparse subset of video frames that optimally represents the input video, using a novel generative adversarial framework.
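
As a rough sketch of the adversarial idea in this TL;DR (frame selection trained so that a video reconstructed from the selected frames fools a discriminator), consider the toy fragment below; the module shapes and loss wiring are assumptions, not the paper's architecture.

```python
# Toy sketch (assumptions throughout): a selector produces frame weights,
# a decoder reconstructs the video from the weighted features, and a
# discriminator is trained to tell reconstructions from originals, pushing
# the selector toward sparse summaries that preserve the video's content.
import torch
import torch.nn as nn

feat = 512
selector = nn.Sequential(nn.Linear(feat, 1), nn.Sigmoid())  # frame scores
decoder = nn.LSTM(feat, feat, batch_first=True)             # reconstructor
disc = nn.Sequential(nn.Linear(feat, 1), nn.Sigmoid())      # real vs. fake

x = torch.randn(1, 100, feat)              # per-frame features
w = selector(x)                            # (1, 100, 1) selection weights
recon, _ = decoder(w * x)                  # reconstruct from the summary
sparsity_loss = w.mean()                   # encourage selecting few frames
adv_loss = nn.functional.binary_cross_entropy(
    disc(recon).mean(), torch.tensor(1.0)) # summarizer tries to fool disc
```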
Proceedings Article

Towards Automatic Learning of Procedures From Web Instructional Videos

TL;DR: This article proposes a segment-level recurrent network that generates procedure segments by modeling the dependencies across segments; the resulting segments can serve as pre-processing for other tasks, such as dense video captioning and event parsing.
Journal Article

Video Summarization With Attention-Based Encoder–Decoder Networks

TL;DR: This paper proposes a novel video summarization framework named Attentive encoder–decoder networks for Video Summarization (AVS), in which the encoder uses a bidirectional long short-term memory (BiLSTM) network to encode the contextual information among the input video frames.
Posted Content

Deep Reinforcement Learning for Unsupervised Video Summarization with Diversity-Representativeness Reward

TL;DR: In this article, a deep summarization network (DSN) that relies on neither labels nor user interactions is proposed to generate more diverse and representative video summaries.
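
The reward named in the title combines a diversity term with a representativeness term. The sketch below shows one plausible formulation consistent with that description (cosine dissimilarity among selected frames for diversity, nearest-selected-frame coverage for representativeness); the exact definitions should be checked against the paper.

```python
# Sketch (assumed formulation) of a diversity-representativeness reward.
import torch

def dr_reward(feats, picks):
    """feats: (T, D) frame features; picks: indices of selected frames."""
    sel = feats[picks]                                  # (K, D)
    sel_n = torch.nn.functional.normalize(sel, dim=1)
    sim = sel_n @ sel_n.t()                             # cosine similarities
    k = sel.size(0)
    r_div = (1.0 - sim).sum() / (k * (k - 1))           # mean pairwise dissimilarity
    dists = torch.cdist(feats, sel)                     # (T, K) distances
    r_rep = torch.exp(-dists.min(dim=1).values.mean())  # coverage of all frames
    return r_div + r_rep

reward = dr_reward(torch.randn(200, 512), torch.tensor([3, 40, 99, 150]))
```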
Journal Article

Machine Remaining Useful Life Prediction via an Attention-Based Deep Learning Approach

TL;DR: An attention-based deep learning framework is proposed for machine remaining useful life (RUL) prediction; it learns the importance of features and time steps, assigns larger weights to the more important ones, and outperforms state-of-the-art methods.
References
Journal Article

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
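
The "constant error carousel" refers to the additive update of the cell state, which lets error signals pass through many time steps without vanishing. A minimal, self-contained LSTM cell makes the mechanism concrete (dimensions and names are illustrative):

```python
# Minimal LSTM cell sketch highlighting the constant error carousel:
# the cell state c is updated additively, so gradients flow through it
# largely unattenuated across time steps.
import torch
import torch.nn as nn

class LSTMCellSketch(nn.Module):
    def __init__(self, n_in, n_hid):
        super().__init__()
        self.gates = nn.Linear(n_in + n_hid, 4 * n_hid)

    def forward(self, x, h, c):
        z = self.gates(torch.cat([x, h], dim=-1))
        i, f, g, o = z.chunk(4, dim=-1)
        i, f, o = i.sigmoid(), f.sigmoid(), o.sigmoid()
        c = f * c + i * g.tanh()   # additive update: the error carousel
        h = o * c.tanh()
        return h, c

h = c = torch.zeros(1, 32)
h, c = LSTMCellSketch(16, 32)(torch.randn(1, 16), h, c)
```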
Proceedings Article

Going deeper with convolutions

TL;DR: This paper introduces Inception, a deep convolutional neural network architecture that achieved a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
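
The building block behind this architecture is a module that applies 1x1, 3x3, and 5x5 convolutions and pooling in parallel and concatenates the results along the channel dimension. Below is a sketch of such a module; the channel counts are illustrative choices, not necessarily those of any particular GoogLeNet layer.

```python
# Sketch of an Inception-style module with parallel branches (illustrative
# channel counts); 1x1 convolutions reduce dimensionality before the larger
# filters, and all branches preserve the spatial size for concatenation.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, c_in):
        super().__init__()
        self.b1 = nn.Conv2d(c_in, 64, 1)
        self.b3 = nn.Sequential(nn.Conv2d(c_in, 96, 1), nn.ReLU(),
                                nn.Conv2d(96, 128, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(c_in, 16, 1), nn.ReLU(),
                                nn.Conv2d(16, 32, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(c_in, 32, 1))

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], 1)

y = InceptionBlock(192)(torch.randn(1, 192, 28, 28))  # -> (1, 256, 28, 28)
```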
Proceedings Article

Speech recognition with deep recurrent neural networks

TL;DR: This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long-range context that empowers RNNs.
Proceedings Article

Show, Attend and Tell: Neural Image Caption Generation with Visual Attention

TL;DR: An attention-based model that automatically learns to describe the content of images is introduced; it can be trained deterministically using standard backpropagation techniques and stochastically by maximizing a variational lower bound.
Posted Content

Show, Attend and Tell: Neural Image Caption Generation with Visual Attention

TL;DR: This paper proposes an attention-based model that automatically learns to describe the content of images by focusing on salient objects while generating the corresponding words in the output sequence, achieving state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k, and MS COCO.
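
Both versions of this paper center on soft attention: at each decoding step, scores over image regions are softmax-normalized and used to form a weighted context vector that conditions the next word. The sketch below shows the standard additive formulation of that step; the names and dimensions are assumptions for illustration.

```python
# Sketch (assumed, standard additive soft attention) of one attention step.
import torch
import torch.nn as nn

class SoftAttention(nn.Module):
    def __init__(self, feat_dim, hid_dim, att_dim=128):
        super().__init__()
        self.wf = nn.Linear(feat_dim, att_dim)   # projects region features
        self.wh = nn.Linear(hid_dim, att_dim)    # projects decoder state
        self.v = nn.Linear(att_dim, 1)           # scalar score per region

    def forward(self, regions, h):
        # regions: (batch, L, feat_dim) image region features
        # h:       (batch, hid_dim)     current decoder hidden state
        e = self.v(torch.tanh(self.wf(regions) + self.wh(h).unsqueeze(1)))
        alpha = torch.softmax(e, dim=1)          # (batch, L, 1) weights
        context = (alpha * regions).sum(dim=1)   # (batch, feat_dim)
        return context, alpha.squeeze(-1)

ctx, w = SoftAttention(512, 256)(torch.randn(2, 196, 512), torch.randn(2, 256))
```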