Open Access · Posted Content
OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields
TL;DR: OpenPose is released, the first open-source realtime system for multi-person 2D pose detection, including body, foot, hand, and facial keypoints, and the first combined body and foot keypoint detector, based on an internal annotated foot dataset.

Abstract:
Realtime multi-person 2D pose estimation is a key component in enabling machines to have an understanding of people in images and videos. In this work, we present a realtime approach to detect the 2D pose of multiple people in an image. The proposed method uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. This bottom-up system achieves high accuracy and realtime performance, regardless of the number of people in the image. In previous work, PAFs and body part location estimation were refined simultaneously across training stages. We demonstrate that a PAF-only refinement rather than both PAF and body part location refinement results in a substantial increase in both runtime performance and accuracy. We also present the first combined body and foot keypoint detector, based on an internal annotated foot dataset that we have publicly released. We show that the combined detector not only reduces the inference time compared to running them sequentially, but also maintains the accuracy of each component individually. This work has culminated in the release of OpenPose, the first open-source realtime system for multi-person 2D pose detection, including body, foot, hand, and facial keypoints.
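The core of the PAF association step described in the abstract is scoring each candidate limb (a pair of detected keypoints) by integrating the predicted vector field along the line segment connecting them. The sketch below illustrates that idea with NumPy; the function name, sampling count, and array layout are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def paf_association_score(paf_x, paf_y, joint_a, joint_b, num_samples=10):
    """Score a candidate limb (joint_a -> joint_b) by sampling the
    Part Affinity Field along the segment between the two keypoints.

    paf_x, paf_y: 2D arrays (H, W) holding the x/y components of one limb's PAF.
    joint_a, joint_b: (x, y) pixel coordinates of candidate keypoints.
    Returns the mean dot product between the sampled field vectors and the
    unit vector from joint_a to joint_b (near 1.0 for a true limb).
    """
    a = np.asarray(joint_a, dtype=float)
    b = np.asarray(joint_b, dtype=float)
    limb = b - a
    norm = np.linalg.norm(limb)
    if norm < 1e-8:
        return 0.0  # degenerate pair: both candidates at the same pixel
    unit = limb / norm  # direction the PAF should point along a real limb
    # Sample the field at evenly spaced points on the segment.
    points = a + np.outer(np.linspace(0.0, 1.0, num_samples), limb)
    xs = np.clip(points[:, 0].round().astype(int), 0, paf_x.shape[1] - 1)
    ys = np.clip(points[:, 1].round().astype(int), 0, paf_x.shape[0] - 1)
    vecs = np.stack([paf_x[ys, xs], paf_y[ys, xs]], axis=1)
    return float(np.mean(vecs @ unit))
```

In the full system these scores feed a bipartite matching step that assigns keypoints to individuals; a field aligned with the candidate segment yields a high score, while a perpendicular or zero field yields a score near zero.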
Citations
Journal Article (DOI)
A study on attention-based LSTM for abnormal behavior recognition with variable pooling
TL;DR: A novel framework for behavior recognition based on spatio-temporal convolution and attention-based LSTM (ST-CNN & ATT-LSTM) is proposed; the results indicate that the framework is superior to existing methods.
Journal Article (DOI)
Speeding up the detection of non-iconic and iconic gestures (SPUDNIG): A toolkit for the automatic detection of hand movements and gestures in video data
TL;DR: SPUDNIG will be highly relevant for researchers studying multimodal communication: its annotations significantly accelerate the analysis of large video corpora, and its output can be imported directly into commonly used annotation tools such as ELAN and ANVIL.
Journal Article (DOI)
Quantitative Gait Analysis Using a Pose-Estimation Algorithm with a Single 2D-Video of Parkinson’s Disease Patients
Jung Hwan Shin, Ri Yu, Jed Noel Ong, Chan Young Lee, Seung Ho Jeon, Hwanpil Park, Han Joon Kim, Jehee Lee, Beomseok Jeon +8 more
TL;DR: A 2D video-based pose-estimation method is proposed to evaluate gait in Parkinson's disease patients, requiring only that the patient upload a single video of their entire walking sequence.
Journal Article (DOI)
Development of ‘ibuki’ an electrically actuated childlike android with mobility and its potential in the future society
Yoshihiro Nakata, Satoshi Yagi, Shiqi Yu, Yifei Wang, Naoki Ise, Yutaka Nakamura, Hiroshi Ishiguro +6 more
TL;DR: An electrically driven childlike android named ibuki, equipped with a wheeled mobility unit that enables it to move in a real environment, is presented; it can replicate the movements of the human center of mass and express human-like upper-body movements even while moving on wheels.
Proceedings Article (DOI)
Motion-Based Educational Games: Using Multi-Modal Data to Predict Player’s Performance
TL;DR: Machine learning techniques and multi-modal data (MMD) derived from players' game-play are used to identify the MMD features that drive rapid, highly accurate predictions of players' academic performance in educational MBTGs, emphasising the significance of using MMD for real-time performance prediction in educational MBTGs.
References
Proceedings Article (DOI)
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman +1 more
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Book Chapter (DOI)
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, C. Lawrence Zitnick +7 more
TL;DR: A new dataset is presented with the goal of advancing the state of the art in object recognition by placing it in the context of the broader question of scene understanding, gathering images of complex everyday scenes containing common objects in their natural context.
Proceedings Article (DOI)
Densely Connected Convolutional Networks
TL;DR: DenseNet connects each layer to every other layer in a feed-forward fashion, which alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.