
Saeed Ghorbani

Researcher at York University

Publications: 12
Citations: 155

Saeed Ghorbani is an academic researcher from York University. The author has contributed to research on topics including computer science and motion capture. The author has an h-index of 4 and has co-authored 10 publications receiving 57 citations. Previous affiliations of Saeed Ghorbani include Sharif University of Technology.

Papers
Journal Article (DOI)

MoVi: A large multi-purpose human motion and video dataset.

TL;DR: This multimodal dataset contains 9 hours of optical motion capture data, 17 hours of video data from 4 different points of view recorded by stationary and hand-held cameras, and 6.6 hours of inertial measurement unit (IMU) data, recorded from 60 female and 30 male actors performing a collection of 21 everyday actions and sports movements.
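For orientation, the following is a minimal sketch of how a single motion-capture sequence from a dataset of this kind might be inspected in Python. The file name, archive keys, and frame rate are hypothetical placeholders, not MoVi's documented file layout.

```python
# Minimal sketch of inspecting one mocap sequence from a dataset such as MoVi.
# The file name, archive keys, and frame rate below are hypothetical placeholders.
import numpy as np

def summarize_sequence(path: str, fps: float = 120.0) -> None:
    """Print basic statistics for a single motion-capture sequence archive."""
    data = np.load(path, allow_pickle=True)   # hypothetical .npz archive
    poses = data["poses"]                     # hypothetical key: (frames, joint_dims)
    n_frames, n_dims = poses.shape
    print(f"frames: {n_frames}, pose dims: {n_dims}")
    print(f"duration: {n_frames / fps:.1f} s at {fps:.0f} fps")

if __name__ == "__main__":
    summarize_sequence("Subject_01_walking.npz")  # hypothetical file name
```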
Posted Content

Gait Recognition using Multi-Scale Partial Representation Transformation with Capsules

TL;DR: A novel deep network is proposed that learns to transfer multi-scale partial gait representations using capsules, yielding more discriminative gait features that are robust to both viewing and appearance changes.
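The following is a rough PyTorch sketch of the general idea described above: frame-level gait features are split into horizontal parts at several scales, and each part is mapped to a capsule vector. The layer sizes, scales, and squashing step are illustrative assumptions, not the architecture reported in the paper.

```python
# Rough sketch of multi-scale partial gait features with a capsule-style
# aggregation step. Sizes and scales are illustrative assumptions.
import torch
import torch.nn as nn

def squash(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Capsule squashing non-linearity: keeps direction, bounds length in [0, 1)."""
    norm2 = (x ** 2).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * x / (norm2.sqrt() + 1e-8)

class MultiScalePartialCapsules(nn.Module):
    def __init__(self, in_ch: int = 64, caps_dim: int = 16, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        # One linear projection per scale, mapping pooled part features to capsule vectors.
        self.proj = nn.ModuleList([nn.Linear(in_ch, caps_dim) for _ in scales])

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (batch, channels, height, width) frame-level gait feature map
        caps = []
        for s, proj in zip(self.scales, self.proj):
            # Split the feature map into `s` horizontal strips and pool each strip.
            parts = feat.chunk(s, dim=2)
            pooled = torch.stack([p.mean(dim=(2, 3)) for p in parts], dim=1)  # (B, s, C)
            caps.append(squash(proj(pooled)))                                 # (B, s, D)
        # Concatenate part capsules from all scales: (B, total_parts, D).
        return torch.cat(caps, dim=1)

if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 16)               # toy feature map
    print(MultiScalePartialCapsules()(x).shape)  # torch.Size([2, 7, 16])
```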
Journal Article (DOI)

Probabilistic character motion synthesis using a hierarchical deep latent variable model

TL;DR: In this paper, a hierarchical recurrent latent variable model was proposed to generate character animations from weak control signals, such that the synthesized motions are realistic while retaining the stochastic nature of human movement.
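Below is a minimal sketch of one decoding step of a conditional recurrent latent variable model in this spirit: a GRU cell predicts the next pose from the previous pose, a weak control signal, and a latent sampled from a state-dependent prior. The single latent layer and all dimensions are simplifying assumptions; the paper's model is hierarchical.

```python
# Minimal sketch of one step of a conditional recurrent latent variable model
# for motion synthesis. Dimensions and the single latent layer are assumptions.
import torch
import torch.nn as nn

class MotionStepDecoder(nn.Module):
    def __init__(self, pose_dim: int = 69, ctrl_dim: int = 3, z_dim: int = 32, hid: int = 256):
        super().__init__()
        self.rnn = nn.GRUCell(pose_dim + ctrl_dim + z_dim, hid)
        self.prior = nn.Linear(hid, 2 * z_dim)   # state-dependent prior mean and log-variance
        self.out = nn.Linear(hid, pose_dim)

    def forward(self, prev_pose, ctrl, h):
        # Sample a latent from the prior (this carries the stochasticity of movement).
        mu, logvar = self.prior(h).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        h = self.rnn(torch.cat([prev_pose, ctrl, z], dim=-1), h)
        return prev_pose + self.out(h), h        # residual pose update

if __name__ == "__main__":
    dec, h = MotionStepDecoder(), torch.zeros(1, 256)
    pose, ctrl = torch.zeros(1, 69), torch.zeros(1, 3)
    for _ in range(4):                            # roll out a few frames
        pose, h = dec(pose, ctrl, h)
    print(pose.shape)                             # torch.Size([1, 69])
```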
Proceedings Article (DOI)

In-Bed Pressure-Based Pose Estimation Using Image Space Representation Learning

TL;DR: This paper presents a novel end-to-end framework capable of accurately locating body parts from ambiguous pressure data. It exploits the idea of equipping an off-the-shelf pose estimator with a deep trainable neural network that pre-processes and prepares the pressure data for subsequent pose estimation.
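As a rough illustration of the pre-processing idea, the sketch below uses a small trainable CNN to turn a low-resolution pressure map into an image-like three-channel tensor that an off-the-shelf 2D pose estimator could consume. The grid size, channel counts, and output resolution are placeholder assumptions, not the paper's exact design.

```python
# Rough sketch of a trainable pre-processing network that maps a low-resolution
# pressure map to an image-like tensor for a downstream pose estimator.
# Grid size, channels, and output resolution are placeholder assumptions.
import torch
import torch.nn as nn

class PressureToImage(nn.Module):
    def __init__(self, out_size: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Sigmoid(),  # image-like output in [0, 1]
        )
        self.out_size = out_size

    def forward(self, pressure: torch.Tensor) -> torch.Tensor:
        # pressure: (batch, 1, rows, cols), e.g. a 64 x 32 bed-sensor grid
        x = nn.functional.interpolate(pressure, size=(self.out_size, self.out_size),
                                      mode="bilinear", align_corners=False)
        return self.net(x)                        # (batch, 3, out_size, out_size)

if __name__ == "__main__":
    grid = torch.rand(1, 1, 64, 32)               # toy pressure frame
    img = PressureToImage()(grid)
    print(img.shape)  # this tensor would then be fed to a pretrained pose estimator
```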
Proceedings Article (DOI)

Gait Recognition using Multi-Scale Partial Representation Transformation with Capsules

TL;DR: In this article, a capsule network is adopted to learn deeper part-whole relationships and assign more weights to the more relevant features while ignoring the spurious dimensions, obtaining final features that are more robust to both viewing and appearance changes.