scispace - formally typeset

Cewu Lu

Researcher at Shanghai Jiao Tong University

Publications -  230
Citations -  14121

Cewu Lu is an academic researcher at Shanghai Jiao Tong University. He has contributed to research topics including computer science and pose estimation, has an h-index of 39, and has co-authored 184 publications receiving 8,804 citations. His previous affiliations include The Chinese University of Hong Kong and Microsoft.

Papers
Proceedings ArticleDOI

RMPE: Regional Multi-person Pose Estimation

TL;DR: A regional multi-person pose estimation (RMPE) framework is proposed to facilitate pose estimation in the presence of inaccurate human bounding boxes; it achieves state-of-the-art performance on the MPII dataset.
Proceedings ArticleDOI

Abnormal Event Detection at 150 FPS in MATLAB

TL;DR: An efficient sparse combination learning framework, built on the inherent redundancy of video structures, achieves strong detection performance without compromising result quality, reaching high detection rates on benchmark datasets at an average speed of 140-150 frames per second.
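The speed in the detection phase comes from checking whether a test feature can be reconstructed by any of a small set of learned basis combinations. A minimal sketch of that idea (the function names and the least-squares reconstruction here are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def min_reconstruction_error(x, combinations):
    """Smallest squared reconstruction error of feature x over all
    learned sparse combinations (each a basis matrix of shape (d, k))."""
    errors = []
    for S in combinations:
        # Least-squares fit of x in the span of this combination's bases.
        coeff, *_ = np.linalg.lstsq(S, x, rcond=None)
        errors.append(float(np.linalg.norm(x - S @ coeff) ** 2))
    return min(errors)

def is_abnormal(x, combinations, threshold):
    # A video patch is flagged abnormal when no learned combination of
    # training bases reconstructs it within the error threshold.
    return min_reconstruction_error(x, combinations) > threshold
```

Because each check is a small linear solve against a handful of low-dimensional bases, per-frame cost stays low, which is what makes the high frame rate plausible.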
Journal ArticleDOI

A scalable active framework for region annotation in 3D shape collections

TL;DR: This work proposes a novel active learning method capable of enriching massive geometric datasets with accurate semantic region annotations, and demonstrates that incorporating verification of all produced labelings within this unified objective improves both accuracy and efficiency of the active learning procedure.
Proceedings ArticleDOI

Image smoothing via L0 gradient minimization

TL;DR: This work presents a new image editing method, particularly effective for sharpening major edges: it increases the steepness of transitions while eliminating a manageable degree of low-amplitude structures, within an optimization framework based on L0 gradient minimization.
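The objective the summary alludes to can be written out (symbols reconstructed from the paper's standard formulation: $I$ is the input image, $S$ the smoothed output, $\lambda$ the smoothing weight):

```latex
\min_{S} \sum_{p} (S_p - I_p)^2 + \lambda \, C(S),
\qquad
C(S) = \#\{\, p : |\partial_x S_p| + |\partial_y S_p| \neq 0 \,\}
```

The counting term $C(S)$ penalizes the number of pixels with non-zero gradient rather than gradient magnitude, which is why small-amplitude texture is removed while large edges are preserved and even sharpened.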
Book ChapterDOI

Visual Relationship Detection with Language Priors

TL;DR: The authors propose a model that uses language priors to train visual models for objects and predicates individually, later combining them to predict multiple relationships per image and to localize the objects in the predicted relationships as bounding boxes.