Lattice Long Short-Term Memory for Human Action Recognition
Lin Sun, Kui Jia, Kevin Chen, Dit-Yan Yeung, Bertram E. Shi, Silvio Savarese
ICCV 2017, pp. 2166–2175
TLDR
Lattice-LSTM (L2STM) extends LSTM by learning independent hidden state transitions of memory cells for individual spatial locations, which enhances the ability to model dynamics across time and addresses the non-stationary nature of long-term motion dynamics without significantly increasing model complexity.

Abstract
Human actions captured in video sequences are three-dimensional signals characterizing visual appearance and motion dynamics. To learn action patterns, existing methods adopt Convolutional and/or Recurrent Neural Networks (CNNs and RNNs). CNN based methods are effective in learning spatial appearances, but are limited in modeling long-term motion dynamics. RNNs, especially Long Short-Term Memory (LSTM), are able to learn temporal motion dynamics. However, naively applying RNNs to video sequences in a convolutional manner implicitly assumes that motions in videos are stationary across different spatial locations. This assumption is valid for short-term motions but invalid when the duration of the motion is long. In this work, we propose Lattice-LSTM (L2STM), which extends LSTM by learning independent hidden state transitions of memory cells for individual spatial locations. This method effectively enhances the ability to model dynamics across time and addresses the non-stationary issue of long-term motion dynamics without significantly increasing the model complexity. Additionally, we introduce a novel multi-modal training procedure for training our network. Unlike traditional two-stream architectures which use RGB and optical flow information as input, our two-stream model leverages both modalities to jointly train both input gates and both forget gates in the network rather than treating the two streams as separate entities with no information about the other. We apply this end-to-end system to benchmark datasets (UCF-101 and HMDB-51) of human action recognition. Experiments show that on both datasets, our proposed method outperforms all existing ones that are based on LSTM and/or CNNs of similar model complexities.
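The core idea of the abstract — untying the hidden state transition so that each spatial location keeps its own recurrent parameters — can be sketched in a few lines. The following is a minimal, single-channel NumPy illustration, not the authors' implementation: the class name `LatticeLSTMCell` and the weight names `Wx`/`Uh` are invented for this sketch, convolutions and the two-stream training are omitted, and each grid location simply gets its own scalar recurrent weight per gate (whereas a standard conv-LSTM would share one hidden-to-gate transform across all locations).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class LatticeLSTMCell:
    """Simplified, channel-less sketch of a lattice-style LSTM step on an
    H x W grid. Per the paper's idea, the recurrent (hidden-to-gate)
    weights `Uh` are untied across spatial locations: shape (4, H, W),
    one scalar per gate per location. All names/shapes are illustrative
    assumptions, not the published architecture."""

    def __init__(self, height, width, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.standard_normal((4, height, width)) * 0.1  # input weights
        self.Uh = rng.standard_normal((4, height, width)) * 0.1  # per-location recurrent weights
        self.b = np.zeros((4, height, width))

    def step(self, x, h, c):
        # Four pre-activations: input gate, forget gate, output gate, candidate.
        z = self.Wx * x + self.Uh * h + self.b
        i, f, o = sigmoid(z[0]), sigmoid(z[1]), sigmoid(z[2])
        g = np.tanh(z[3])
        c_new = f * c + i * g          # per-location memory update
        h_new = o * np.tanh(c_new)     # per-location hidden state
        return h_new, c_new

# Roll the cell over a short synthetic "video" of 4x4 frames.
H, W, T = 4, 4, 5
cell = LatticeLSTMCell(H, W)
h = np.zeros((H, W))
c = np.zeros((H, W))
frames = np.random.default_rng(1).standard_normal((T, H, W))
for x in frames:
    h, c = cell.step(x, h, c)
print(h.shape)  # prints (4, 4)
```

Because the gate weights differ per location, two grid cells seeing identical input histories can evolve different memories — the property the paper uses to handle motions that are non-stationary across space.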
Citations
Texture-Based Input Feature Selection for Action Recognition
TL;DR: Zhang et al. propose a human parsing model (HP model) that jointly conducts dense correspondence labelling and semantic part segmentation to improve the robustness of action recognition.
Journal Article
Intelligent Video Analytics for Human Action Recognition: The State of Knowledge
Marek Kulbacki,Jakub Segen,Zenon Chaczko,Jerzy W. Rozenblit,Ryszard Klempous,Konrad Wojciechowski +5 more
TL;DR: A comprehensive overview of intelligent video analytics and human action recognition methods, covering pose-based, tracking-based, spatio-temporal, and deep learning-based approaches, including visual transformers.
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: Proposes a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting model won 1st place in the ILSVRC 2015 classification task.
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A deep convolutional neural network with five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieves state-of-the-art image classification performance.
Journal Article
Long short-term memory
TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: Investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, showing that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16–19 weight layers.
Proceedings Article
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.