Junsong Yuan

Researcher at University at Buffalo

Publications: 471
Citations: 20,391

Junsong Yuan is an academic researcher at the University at Buffalo. He has contributed to research on topics including Computer science and Feature extraction, has an h-index of 59, and has co-authored 401 publications receiving 15,651 citations. His previous affiliations include Zhejiang University and Northwestern University.

Papers
Proceedings Article

Spatio-Temporal Naive-Bayes Nearest-Neighbor (ST-NBNN) for Skeleton-Based Action Recognition

TL;DR: By identifying key skeleton joints and temporal stages for each action class, the proposed ST-NBNN can capture the essential spatio-temporal patterns that play key roles in recognizing actions, which is not always achievable with end-to-end models.
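As a rough illustration of the underlying Naive-Bayes Nearest-Neighbor idea, here is a minimal sketch that classifies a query sequence by summing per-stage nearest-neighbor distances against each class's pooled descriptors. Shapes and names are assumptions for illustration; ST-NBNN additionally learns per-joint and per-stage weights, which this plain NBNN omits.

```python
# Minimal NBNN sketch over per-temporal-stage skeleton descriptors.
# Illustrative assumption, not the authors' released code.
import numpy as np

def nbnn_classify(query_stages, class_descriptors):
    """query_stages: (S, D) array, one descriptor per temporal stage.
    class_descriptors: dict mapping class label -> (N_c, D) array of
    stage descriptors pooled from that class's training sequences."""
    best_label, best_cost = None, np.inf
    for label, refs in class_descriptors.items():
        # Squared distance from each query stage to every reference descriptor.
        d2 = ((query_stages[:, None, :] - refs[None, :, :]) ** 2).sum(-1)
        cost = d2.min(axis=1).sum()  # NBNN: sum of per-stage NN distances
        if cost < best_cost:
            best_label, best_cost = label, cost
    return best_label
```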
Proceedings Article

Temporal Structure Mining for Weakly Supervised Action Detection

TL;DR: The proposed temporal structure mining (TSM) approach treats each segment's phase as a hidden variable, builds a table of segment confidence scores from each phase filter, and determines the hidden variables, i.e., the phases of segments, by discovering a maximal circulant path along the table.
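One plausible reading of the table-based path discovery, sketched under assumptions below: an action instance passes through its phases in order, so a phase assignment can be found as a maximum-score path through the (segments x phases) table with non-decreasing phase indices. The exact circulant transition structure in TSM may differ; this is illustrative, not the paper's implementation.

```python
# Dynamic programming over a (segments x phases) score table: pick one
# phase per segment, phases may only advance. Illustrative assumption.
import numpy as np

def max_monotone_path(scores):
    """scores: (T, P) table, scores[t, p] = confidence of segment t
    under phase filter p. Returns one phase index per segment."""
    T, P = scores.shape
    dp = np.full((T, P), -np.inf)
    back = np.zeros((T, P), dtype=int)
    dp[0] = scores[0]
    for t in range(1, T):
        for p in range(P):
            prev = dp[t - 1, : p + 1]      # phase index is non-decreasing
            back[t, p] = int(np.argmax(prev))
            dp[t, p] = prev[back[t, p]] + scores[t, p]
    # Backtrack the best path from the final segment.
    phases = [int(np.argmax(dp[-1]))]
    for t in range(T - 1, 0, -1):
        phases.append(back[t, phases[-1]])
    return phases[::-1]
```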
Proceedings Article

PointCloud Saliency Maps

TL;DR: A novel way of characterizing critical points and segments to build point-cloud saliency maps is proposed; each saliency score can be efficiently measured by the gradient of the loss w.r.t. the point in spherical coordinates.
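A hedged PyTorch sketch of this gradient-based scoring: each point's saliency is taken from the loss gradient's radial component, i.e., the derivative with respect to the point's distance from the cloud center. `model`, the input shape, and the exact scaling are assumptions for illustration, not the paper's reference implementation.

```python
# Score points by the radial component of the loss gradient.
# `model` is an assumed classifier taking a (1, N, 3) point cloud.
import torch

def point_saliency(model, points, label):
    """points: (N, 3) tensor; label: scalar class-index tensor."""
    pts = points.clone().requires_grad_(True)
    centered = pts - pts.mean(dim=0, keepdim=True).detach()
    logits = model(centered.unsqueeze(0))              # assumed (1, C) output
    loss = torch.nn.functional.cross_entropy(logits, label.view(1))
    (grad,) = torch.autograd.grad(loss, pts)
    r = centered.norm(dim=1, keepdim=True).clamp_min(1e-8)
    dL_dr = (grad * centered / r).sum(dim=1)           # radial gradient dL/dr
    return -dL_dr * r.squeeze(1)                       # higher = more salient
```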
Book Chapter

Learning Progressive Joint Propagation for Human Motion Prediction

TL;DR: A transformer-based architecture with a global attention mechanism is applied, extending predictions from central to peripheral joints according to the skeleton's structural connectivity; a memory-based dictionary is built to preserve the global motion patterns in the training data and guide the predictions.
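To make the central-to-peripheral ordering concrete, here is a minimal sketch that orders joints by breadth-first traversal of the skeleton graph from a root joint, then predicts each joint conditioned on its already-predicted ancestors. The toy skeleton, joint names, and `predict_joint` hook are assumptions, not the paper's architecture.

```python
# Central-to-peripheral joint ordering via BFS over an assumed toy skeleton.
from collections import deque

SKELETON_EDGES = {  # joint -> children (illustrative)
    "pelvis": ["spine", "l_hip", "r_hip"],
    "spine": ["neck"], "neck": ["head", "l_shoulder", "r_shoulder"],
    "l_shoulder": ["l_elbow"], "l_elbow": ["l_wrist"],
    "r_shoulder": ["r_elbow"], "r_elbow": ["r_wrist"],
    "l_hip": ["l_knee"], "l_knee": ["l_ankle"],
    "r_hip": ["r_knee"], "r_knee": ["r_ankle"],
}

def central_to_peripheral_order(root="pelvis"):
    order, queue = [], deque([root])
    while queue:
        joint = queue.popleft()
        order.append(joint)
        queue.extend(SKELETON_EDGES.get(joint, []))
    return order  # pelvis first, wrists and ankles last

def progressive_predict(history, predict_joint):
    """Predict each joint given the motion history and the joints
    already predicted closer to the body center."""
    predicted = {}
    for joint in central_to_peripheral_order():
        predicted[joint] = predict_joint(joint, history, dict(predicted))
    return predicted
```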
Proceedings Article

Spatial selection for attentional visual tracking

TL;DR: This paper proposes a new visual tracking approach that reflects aspects of spatial selective attention and presents a novel attentional visual tracking (AVT) algorithm that is general, robust, and computationally efficient.
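As a rough sketch of spatial selection under assumptions: rank candidate patches inside the target window by how strongly they discriminate foreground from the surrounding background (here via a simple histogram dissimilarity) and keep only the top-ranked patches for tracking. The scoring criterion is illustrative, not the AVT paper's exact formulation.

```python
# Select the most discriminative patches inside the target window.
# Histogram-based scoring is an illustrative stand-in.
import numpy as np

def histogram(patch, bins=16):
    h, _ = np.histogram(patch, bins=bins, range=(0, 256), density=True)
    return h

def select_attentional_patches(fg_patches, bg_patches, k=5):
    """fg_patches: candidate patches inside the target window;
    bg_patches: patches sampled from the surrounding background."""
    bg_hist = np.mean([histogram(p) for p in bg_patches], axis=0)
    scores = [np.abs(histogram(p) - bg_hist).sum() for p in fg_patches]
    return np.argsort(scores)[::-1][:k]  # indices of most discriminative
```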