
Yaser Sheikh

Researcher at Facebook

Publications: 180
Citations: 26313

Yaser Sheikh is an academic researcher from Facebook. The author has contributed to research in the topics of Rendering (computer graphics) and Motion capture. The author has an h-index of 50 and has co-authored 172 publications receiving 19264 citations. Previous affiliations of Yaser Sheikh include Toyota Motor Engineering & Manufacturing North America and Carnegie Mellon University.

Papers
Proceedings ArticleDOI

Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields

TL;DR: Part Affinity Fields (PAFs), as discussed by the authors, provide a nonparametric representation for learning to associate body parts with individuals in the image; the approach achieves state-of-the-art performance on the MPII Multi-Person benchmark.
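The association step described above can be summarized as a line integral: a candidate limb between two detected joints is scored by how well the predicted 2D vector field (the PAF) aligns with the direction of the segment joining them. The sketch below illustrates that score in NumPy; the function name, sampling scheme, and array layout are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def paf_association_score(paf_x, paf_y, joint_a, joint_b, num_samples=10):
    """Illustrative sketch of a PAF association score: the average, over
    samples along the segment joint_a -> joint_b, of the dot product between
    the predicted PAF vector and the segment's unit direction.
    paf_x / paf_y: HxW arrays, the two channels of one limb's PAF.
    joint_a / joint_b: (x, y) candidate joint locations in pixel coordinates.
    """
    joint_a = np.asarray(joint_a, dtype=float)
    joint_b = np.asarray(joint_b, dtype=float)
    segment = joint_b - joint_a
    length = np.linalg.norm(segment)
    if length < 1e-8:
        return 0.0
    unit = segment / length  # unit vector from joint_a toward joint_b

    score = 0.0
    for t in np.linspace(0.0, 1.0, num_samples):
        # Sample a point along the candidate limb and look up the PAF there.
        x, y = (joint_a + t * segment).round().astype(int)
        score += paf_x[y, x] * unit[0] + paf_y[y, x] * unit[1]
    return score / num_samples
```

In the papers, scores of this kind are fed to a greedy bipartite matching over joint candidates, which is what makes the method's runtime nearly independent of the number of people in the image.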
Posted Content

Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields

TL;DR: This work presents an approach to efficiently detect the 2D pose of multiple people in an image using a nonparametric representation, referred to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image.
Journal ArticleDOI

OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields

TL;DR: OpenPose, as described in this paper, uses Part Affinity Fields (PAFs) to learn to associate body parts with individuals in the image, achieving high accuracy and realtime performance.
Proceedings ArticleDOI

Convolutional Pose Machines

TL;DR: In this paper, a convolutional network is incorporated into the pose machine framework to learn image features and image-dependent spatial models for pose estimation; the resulting architecture can implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation.
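The key idea summarized above is stage-wise refinement: each stage receives shared image features together with the belief maps produced by the previous stage and outputs refined belief maps, with a supervision loss applied at every stage. The sketch below shows one such stage in PyTorch; the layer widths, kernel sizes, and class name are illustrative assumptions rather than the published architecture.

```python
import torch
import torch.nn as nn

class RefinementStage(nn.Module):
    """Sketch of one refinement stage in the pose-machine style of prediction:
    concatenate image features with the previous stage's part belief maps and
    predict updated belief maps. Intermediate supervision would compare each
    stage's output against ground-truth heatmaps during training."""

    def __init__(self, feat_channels: int, num_parts: int):
        super().__init__()
        self.refine = nn.Sequential(
            # Large receptive fields let later stages reason about distant parts.
            nn.Conv2d(feat_channels + num_parts, 128, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, num_parts, kernel_size=1),
        )

    def forward(self, image_features: torch.Tensor, prev_beliefs: torch.Tensor) -> torch.Tensor:
        x = torch.cat([image_features, prev_beliefs], dim=1)
        return self.refine(x)
```

Stacking several such stages is what lets the model implicitly capture long-range dependencies between parts without an explicit graphical model.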
Posted Content

OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields

TL;DR: OpenPose is released as the first open-source realtime system for multi-person 2D pose detection, including body, foot, hand, and facial keypoints, together with the first combined body and foot keypoint detector, based on an internal annotated foot dataset.
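Since OpenPose is an open-source system, its typical use is through the released library rather than a reimplementation. The snippet below is a minimal sketch of detecting body keypoints in a single image via the Python bindings, assuming `pyopenpose` has been built from the OpenPose repository and a `models/` folder is available; exact binding names and the image path are assumptions, and details vary between releases.

```python
import cv2
import pyopenpose as op  # Python bindings built from the OpenPose repository

# Point OpenPose at its downloaded models; face/hand detection are optional flags.
params = {"model_folder": "models/"}

opWrapper = op.WrapperPython()
opWrapper.configure(params)
opWrapper.start()

datum = op.Datum()
datum.cvInputData = cv2.imread("people.jpg")  # hypothetical input image
# Newer releases expect a VectorDatum wrapper; older ones accept a plain list.
opWrapper.emplaceAndPop(op.VectorDatum([datum]))

# poseKeypoints: one row of (x, y, confidence) triplets per detected person.
print(datum.poseKeypoints)
cv2.imwrite("rendered.jpg", datum.cvOutputData)  # image with skeletons drawn
```

Setting `"face": True` or `"hand": True` in the parameter dictionary enables the additional facial and hand keypoint detectors mentioned in the abstract, at extra computational cost.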