Author

Edel B. Garcia

Bio: Edel B. Garcia is an academic researcher. The author has contributed to research in the topics Graph (abstract data type) and Gait (human). The author has an h-index of 3 and has co-authored 4 publications receiving 109 citations.

Papers
Book ChapterDOI
28 Oct 2017
TL;DR: This work proposes a novel pose-based gait recognition approach that is more robust to clothing and carrying variations, together with a pose-based temporal-spatial network (PTSN) that extracts temporal-spatial features and effectively improves gait recognition performance.
Abstract: Gait recognition is one of the most attractive biometric techniques because of its potential for human identification at a distance. However, gait recognition is still challenging in real applications due to the many variations that affect appearance and shape. Appearance-based methods usually compute the gait energy image (GEI), which is extracted from the human silhouettes. The GEI is obtained by averaging the silhouettes, so the temporal information is removed. The body joints, in contrast, are invariant to changing clothing and carrying conditions. We propose a novel pose-based gait recognition approach that is more robust to clothing and carrying variations. In addition, a pose-based temporal-spatial network (PTSN) is proposed to extract temporal-spatial features, which effectively improves the performance of gait recognition. Experiments on the challenging CASIA-B dataset show that our method achieves state-of-the-art performance under both carrying and clothing conditions.
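To make the GEI computation that the abstract contrasts against concrete, here is a minimal Python/NumPy sketch that averages aligned binary silhouettes into a single gait energy image; the frame size and the random toy data are assumptions for illustration only.

import numpy as np

def gait_energy_image(silhouettes):
    """Average a sequence of aligned binary silhouettes into a GEI.

    silhouettes: array of shape (T, H, W) with values in {0, 1},
    one pre-segmented, size-normalized silhouette per frame.
    Returns an (H, W) float image; temporal order is discarded,
    which is exactly the information loss the abstract points out.
    """
    silhouettes = np.asarray(silhouettes, dtype=np.float32)
    return silhouettes.mean(axis=0)

# toy usage with random "silhouettes" standing in for a real sequence
frames = (np.random.rand(30, 64, 44) > 0.5).astype(np.float32)
gei = gait_energy_image(frames)
print(gei.shape)  # (64, 44)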

140 citations

Journal ArticleDOI
TL;DR: This work proposes GaitGANv2, a method based on generative adversarial networks (GAN) that contains two discriminators, a fake/real discriminator and an identification discriminator; the first ensures that the generated gait images are realistic, while the second preserves the human identity information.
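The TL;DR names two discriminators but not how their signals combine; the hedged PyTorch-style sketch below shows one plausible way a generator loss could mix an adversarial term from the fake/real discriminator with an identity term from the identification discriminator. The toy network shapes, the cross-entropy identity loss, and the equal weighting are assumptions, not details from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

# tiny stand-in discriminators; the real ones would be CNNs
d_real_fake = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 1))    # real/fake logit
d_identity = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 124))   # identity logits (124 = CASIA-B subject count)

def generator_loss(fake_img, target_id, lambda_id=1.0):
    """Combine the two discriminator signals named in the TL;DR.

    The adversarial term pushes generated images to look real; the
    identity term keeps the original subject recognizable. The equal
    weighting (lambda_id=1.0) is an assumption of this sketch.
    """
    adv = F.binary_cross_entropy_with_logits(
        d_real_fake(fake_img), torch.ones(fake_img.size(0), 1))
    ident = F.cross_entropy(d_identity(fake_img), target_id)
    return adv + lambda_id * ident

fake = torch.rand(4, 1, 64, 64)       # toy generated gait images
ids = torch.randint(0, 124, (4,))     # toy subject labels
print(generator_loss(fake, ids).item())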

66 citations

Book ChapterDOI
26 Nov 2019
TL;DR: The proposed Spatial-Temporal Graph Attention Network (STGAN) extracts discriminative features from video sequences in the spatial and temporal domains with a single network, and is designed to select distinguished regions and enhance their contribution.
Abstract: Gait is an attractive feature for human identification at a distance. It can be regarded as a temporal signal, while the human body shape can be regarded as a signal in the spatial domain. In the proposed method, we extract discriminative features from video sequences in both the spatial and temporal domains with a single network, the Spatial-Temporal Graph Attention Network (STGAN). In the spatial domain, we design one branch to select distinguished regions and enhance their contribution, making the network focus on these regions. We also construct another branch, a Spatial-Temporal Graph (STG), to discover the relationships between frames and the variation of a region in the temporal domain. The proposed method extracts gait features in the two domains, and the two branches of the model can be trained end to end. Experimental results on two popular datasets, CASIA-B and OU-ISIR Treadmill-B, show that the proposed method clearly improves gait recognition.
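As a loose illustration of the two branches described above, the NumPy sketch below reweights per-region features with a softmax attention score (spatial branch) and propagates region features along a simple frame-adjacency graph (temporal branch). The chain-graph adjacency, the feature sizes, and the single propagation step are assumptions of this sketch, not the STGAN architecture itself.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# toy region features: T frames, R body regions, D-dim descriptors
T, R, D = 20, 8, 16
feats = np.random.randn(T, R, D).astype(np.float32)

# branch 1 (spatial): score each region, then reweight it so that
# "distinguished" regions contribute more to the frame descriptor
w_score = np.random.randn(D, 1).astype(np.float32)              # assumed learnable
region_attn = softmax((feats @ w_score).squeeze(-1), axis=1)    # (T, R)
spatial_feat = (region_attn[..., None] * feats).sum(axis=1)     # (T, D)

# branch 2 (temporal graph): connect each frame to its neighbours and
# propagate region features along the graph, one message-passing step
adj = np.eye(T, dtype=np.float32)
adj += np.eye(T, k=1) + np.eye(T, k=-1)      # chain graph over frames
adj /= adj.sum(axis=1, keepdims=True)        # row-normalize
temporal_feat = np.einsum('ts,srd->trd', adj, feats)   # (T, R, D)

print(spatial_feat.shape, temporal_feat.shape)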

9 citations

Book ChapterDOI
28 Oct 2017
TL;DR: A method based on Windowed Dynamic Mode Decomposition is proposed to enhance the texture of body parts on the Gait Energy Image that are not affected by clothing and carrying condition variations, in order to improve gait recognition accuracy under these kinds of variations.
Abstract: In this paper, we introduce a method based on Windowed Dynamic Mode Decomposition to enhance the texture of body parts on the Gait Energy Image that are not affected by clothing and carrying condition variations, in order to improve gait recognition accuracy under these kinds of variations. We obtain the best accuracy (71.37%) reported in the literature for large carrying condition variations on the CASIA-B dataset. Unlike deep learning based approaches, the proposed method is simple and does not need training.
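Dynamic Mode Decomposition itself is a standard algorithm; the Python sketch below shows exact DMD applied to one sliding window of vectorized silhouettes, the core operation the abstract builds on. The window length, the rank, and how the resulting modes would be used to re-texture the GEI are assumptions beyond what the abstract states.

import numpy as np

def dmd_modes(window, rank=5):
    """Exact DMD on one window of vectorized frames.

    window: (n_pixels, m) matrix, one vectorized silhouette per column.
    Returns (modes, eigenvalues); how these are turned into an enhanced
    GEI is not specified by the abstract and is left out here.
    """
    X, Y = window[:, :-1], window[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    r = min(rank, len(s))
    U, s, V = U[:, :r], s[:r], Vh[:r].conj().T
    A_tilde = U.conj().T @ Y @ V @ np.diag(1.0 / s)   # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ V @ np.diag(1.0 / s) @ W              # exact DMD modes
    return modes, eigvals

# toy sliding-window usage over a 40-frame silhouette sequence
frames = np.random.rand(64 * 44, 40)
win = 10
for start in range(0, frames.shape[1] - win + 1, win // 2):
    modes, lam = dmd_modes(frames[:, start:start + win])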

3 citations


Cited by
Journal ArticleDOI
TL;DR: PoseGait exploits the human 3D pose estimated from images by a Convolutional Neural Network as the input feature for gait recognition and designs spatio-temporal features from the 3D pose to improve the recognition rate.
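Since the TL;DR only says that spatio-temporal features are designed from the estimated 3D pose, the sketch below illustrates two generic hand-crafted pose features, joint angles and limb lengths, computed per frame from a (T, J, 3) pose array; the joint indices and the choice of features are hypothetical and not taken from PoseGait itself.

import numpy as np

def limb_length(pose, j_a, j_b):
    """Euclidean distance between two joints for every frame."""
    return np.linalg.norm(pose[:, j_a] - pose[:, j_b], axis=-1)

def joint_angle(pose, j_a, j_b, j_c):
    """Angle at joint j_b formed by segments (j_b->j_a) and (j_b->j_c)."""
    u = pose[:, j_a] - pose[:, j_b]
    v = pose[:, j_c] - pose[:, j_b]
    cos = (u * v).sum(-1) / (np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1) + 1e-8)
    return np.arccos(np.clip(cos, -1.0, 1.0))

# toy 3D pose sequence: T frames, J joints, xyz coordinates
pose = np.random.randn(30, 17, 3)
knee_angle = joint_angle(pose, j_a=11, j_b=13, j_c=15)   # hypothetical hip-knee-ankle indices
thigh_len = limb_length(pose, 11, 13)
print(knee_angle.shape, thigh_len.shape)   # (30,) (30,)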

243 citations

Journal ArticleDOI
17 Jul 2019
TL;DR: GaitSet, as discussed by the authors, is a new network that learns identity information from a set of independent frames; it is immune to permutation of frames and can naturally integrate frames from different videos filmed under different scenarios, such as diverse viewing angles and different clothes/carrying conditions.
Abstract: As a unique biometric feature that can be recognized at a distance, gait has broad applications in crime prevention, forensic identification and social security. To portray a gait, existing gait recognition methods utilize either a gait template, where temporal information is hard to preserve, or a gait sequence, which must keep unnecessary sequential constraints and thus loses the flexibility of gait recognition. In this paper we present a novel perspective, where a gait is regarded as a set consisting of independent frames. We propose a new network named GaitSet to learn identity information from the set. Based on the set perspective, our method is immune to permutation of frames, and can naturally integrate frames from different videos which have been filmed under different scenarios, such as diverse viewing angles, different clothes/carrying conditions. Experiments show that under normal walking conditions, our single-model method achieves an average rank-1 accuracy of 95.0% on the CASIA-B gait dataset and an 87.1% accuracy on the OU-MVLP gait dataset. These results represent new state-of-the-art recognition accuracy. On various complex scenarios, our model exhibits a significant level of robustness. It achieves accuracies of 87.2% and 70.4% on CASIA-B under bag-carrying and coat-wearing walking conditions, respectively. These outperform the existing best methods by a large margin. The method presented can also achieve a satisfactory accuracy with a small number of frames in a test sample, e.g., 82.5% on CASIA-B with only 7 frames. The source code has been released at https://github.com/AbnerHqC/GaitSet.
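The permutation-invariance claim in the abstract can be illustrated with a small PyTorch sketch: each silhouette is encoded independently and the per-frame features are max-pooled over the set dimension, so shuffling the frames leaves the result unchanged. The toy encoder and the choice of max pooling are a simplification, not GaitSet's actual architecture.

import torch
import torch.nn as nn

# per-frame encoder (a toy stand-in; GaitSet uses a deeper CNN)
frame_encoder = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())     # -> 8-dim frame feature

def set_feature(silhouettes):
    """Encode each frame independently, then pool over the set dimension.

    silhouettes: (T, 1, H, W) unordered frames. Max pooling over T makes
    the result invariant to frame permutation, which is the property
    the abstract builds on.
    """
    per_frame = frame_encoder(silhouettes)     # (T, 8)
    return per_frame.max(dim=0).values         # (8,)

frames = torch.rand(30, 1, 64, 44)
feat_a = set_feature(frames)
feat_b = set_feature(frames[torch.randperm(30)])   # same frames, shuffled
print(torch.allclose(feat_a, feat_b))              # True: order does not matter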

236 citations

Proceedings ArticleDOI
14 Jun 2020
TL;DR: A Focal Convolution Layer, a new application of convolution, is presented to enhance fine-grained learning of part-level spatial features, and a Micro-motion Capture Module is proposed as a novel way of temporal modeling for the gait task, focusing on short-range temporal features rather than redundant long-range features of the gait cycle.
Abstract: Gait recognition, applied to identify individual walking patterns at a long distance, is one of the most promising video-based biometric technologies. At present, most gait recognition methods take the whole human body as a unit to establish spatio-temporal representations. However, we observe that different parts of the human body have evidently different visual appearances and movement patterns during walking. In the latest literature, employing partial features for human body description has been verified to be beneficial to individual recognition. Taking the above insights together, we assume that each part of the human body needs its own spatio-temporal expression. We therefore propose a novel part-based model, GaitPart, which boosts performance in two ways: on the one hand, the Focal Convolution Layer, a new application of convolution, is presented to enhance fine-grained learning of part-level spatial features; on the other hand, the Micro-motion Capture Module (MCM) is proposed, with several parallel MCMs in GaitPart corresponding to the pre-defined parts of the human body. It is worth mentioning that the MCM is a novel way of temporal modeling for the gait task, which focuses on short-range temporal features rather than redundant long-range features of the cyclic gait. Experiments on two of the most popular public datasets, CASIA-B and OU-MVLP, show that our method achieves a new state of the art on multiple standard benchmarks. The source code will be available at https://github.com/ChaoFan96/GaitPart.
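A rough sketch of the part-restricted convolution idea behind the Focal Convolution Layer is given below: the feature map is split into horizontal strips and each strip is convolved separately, so receptive fields stay within a body part. The number of parts, the weights shared across parts, and the layer sizes are assumptions of this sketch.

import torch
import torch.nn as nn

class FocalConv2d(nn.Module):
    """Part-restricted convolution in the spirit of the Focal Convolution
    Layer: the feature map is split into horizontal strips and each strip
    is convolved separately, so features stay local to a body part.
    """
    def __init__(self, in_ch, out_ch, kernel_size=3, parts=4):
        super().__init__()
        self.parts = parts
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                                  # x: (N, C, H, W)
        strips = torch.chunk(x, self.parts, dim=2)         # split along height
        return torch.cat([self.conv(s) for s in strips], dim=2)

feat = torch.rand(2, 8, 64, 44)
out = FocalConv2d(8, 16, parts=4)(feat)
print(out.shape)   # torch.Size([2, 16, 64, 44])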

222 citations

Journal ArticleDOI
TL;DR: This work proposes a new multi-channel gait template, called the period energy image (PEI), and multi-task generative adversarial networks (MGANs), which leverage adversarial training to extract more discriminative features from gait sequences.
Abstract: Gait recognition is of great importance in the fields of surveillance and forensics for identifying human beings, since gait is a unique biometric feature that can be perceived efficiently at a distance. However, the accuracy of gait recognition suffers to some extent from both the variation of view angles and deficient gait templates. On the one hand, existing cross-view methods focus on transforming gait templates among different views, which may accumulate transformation error under large variations of view angle. On the other hand, the commonly used gait energy image template loses the temporal information of a gait sequence. To address these problems, this paper proposes multi-task generative adversarial networks (MGANs) for learning view-specific feature representations. In order to preserve more temporal information, we also propose a new multi-channel gait template, called the period energy image (PEI). Based on the assumption of a view angle manifold, the MGANs can leverage adversarial training to extract more discriminative features from gait sequences. Experiments on the OU-ISIR, CASIA-B, and USF benchmark datasets indicate that, compared with several recently published approaches, PEI + MGANs achieves competitive performance and is more interpretable for cross-view gait recognition.
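The abstract motivates PEI as a multi-channel template that keeps more temporal information than the GEI but does not spell out its construction; the sketch below shows one plausible version that splits a gait period into consecutive phases and averages silhouettes within each phase, one channel per phase. The actual PEI construction in the paper may differ.

import numpy as np

def period_energy_image(silhouettes, channels=8):
    """A plausible multi-channel gait template in the spirit of PEI.

    silhouettes: (T, H, W) frames covering one gait period. The period is
    split into `channels` consecutive phases and each phase is averaged,
    so coarse temporal ordering survives (unlike a single-channel GEI).
    The exact construction used by the paper may differ.
    """
    silhouettes = np.asarray(silhouettes, dtype=np.float32)
    phases = np.array_split(silhouettes, channels, axis=0)
    return np.stack([p.mean(axis=0) for p in phases], axis=0)   # (channels, H, W)

frames = (np.random.rand(32, 64, 44) > 0.5).astype(np.float32)
pei = period_energy_image(frames)
print(pei.shape)   # (8, 64, 44)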

181 citations

Journal ArticleDOI
TL;DR: A general framework is proposed for designing VAEs suitable for fitting incomplete heterogeneous data, which includes likelihood models for real-valued, positive real-valued, interval, categorical, ordinal, and count data, and allows accurate estimation of missing data.

177 citations