
James Charles

Researcher at University of Leeds

Publications: 28
Citations: 1426

James Charles is an academic researcher from the University of Leeds. The author has contributed to research in the topics of Pose and Segmentation. The author has an h-index of 13, having co-authored 26 publications receiving 1203 citations. Previous affiliations of James Charles include Bangor University.

Papers
Proceedings ArticleDOI

Flowing ConvNets for Human Pose Estimation in Videos

TL;DR: This work proposes a ConvNet architecture that benefits from temporal context by combining information across multiple frames using optical flow, and which outperforms a number of other architectures, including one that uses optical flow solely at the input layers, one that regresses joint coordinates directly, and one that predicts heatmaps without spatial fusion.
Book ChapterDOI

Deep Convolutional Neural Networks for Efficient Pose Estimation in Gesture Videos

TL;DR: This work is, to the authors' knowledge, the first to use ConvNets for estimating human pose in videos, and it introduces a new network that exploits temporal information from multiple frames, leading to better performance.
Posted Content

Flowing ConvNets for Human Pose Estimation in Videos

TL;DR: In this paper, the authors propose a convolutional neural network (CNN) architecture for human pose estimation in videos that benefits from temporal context by combining information across multiple frames using optical flow.
Book ChapterDOI

Domain-Adaptive Discriminative One-Shot Learning of Gestures

TL;DR: The objective of this paper is to recognize gestures in videos, both localizing each gesture and classifying it into one of multiple classes.
Proceedings ArticleDOI

Personalizing Human Video Pose Estimation

TL;DR: A personalized ConvNet pose estimator that automatically adapts itself to the uniqueness of a person's appearance to improve pose estimation in long videos, outperforming the state of the art (including top ConvNet methods) by a large margin on three standard benchmarks, as well as on a new challenging YouTube video dataset.