
Helena de Almeida Maia

Researcher at State University of Campinas

Publications -  19
Citations -  78

Helena de Almeida Maia is an academic researcher from the State University of Campinas. The author has contributed to research on the topics of computer science and convolutional neural networks, has an h-index of 4, and has co-authored 14 publications receiving 43 citations. Previous affiliations of Helena de Almeida Maia include the Universidade Federal de Juiz de Fora.

Papers
Proceedings ArticleDOI

Multi-stream Convolutional Neural Networks for Action Recognition in Video Sequences Based on Adaptive Visual Rhythms

TL;DR: A multi-stream network is the architecture of choice for incorporating temporal information, since it can benefit from pre-trained deep networks for images and from handcrafted features for initialization, and its training cost is usually lower than that of video-based networks.
Journal ArticleDOI

Survey on Digital Video Stabilization: Concepts, Methods, and Challenges

TL;DR: A thorough review of the video stabilization literature, organized according to a proposed taxonomy; a formal definition of the problem is introduced, along with a brief interpretation in physical terms.
Journal Article

A video tensor self-descriptor based on variable size block matching

TL;DR: A different and simple approach to video description using only block matching vectors, whereas most works in the field are based on the gradient of image intensities; it is suitable when response time is a major application concern.
Book ChapterDOI

Human action recognition using convolutional neural networks with symmetric time extension of visual rhythms

TL;DR: This work proposes using multiple Visual Rhythm crops, symmetrically extended in time and separated by a fixed stride, which provide a 2D representation of the video volume matching the fixed input size of the 2D Convolutional Neural Network employed.
Book ChapterDOI

A Video Tensor Self-descriptor Based on Block Matching

TL;DR: A new motion descriptor that uses only block matching vectors to obtain orientation tensors and generate the final descriptor; it is considered a self-descriptor, since it depends only on the input video.