Jianbo Yang

Researcher at Duke University

Publications: 33
Citations: 2,470

Jianbo Yang is an academic researcher from Duke University. The author has contributed to research topics including Feature (computer vision) and Mixture model, has an h-index of 17, and has co-authored 30 publications receiving 2,057 citations. Previous affiliations of Jianbo Yang include the Institute for Infocomm Research, Singapore, and Durham University.

Papers
Proceedings Article

Deep convolutional neural networks on multichannel time series for human activity recognition

TL;DR: This method adopts a deep convolutional neural network (CNN) to automate feature learning from the raw inputs in a systematic way; the learned features make it outperform other HAR algorithms, as verified in experiments on the Opportunity Activity Recognition Challenge and other benchmark datasets.
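
As a rough illustration of the approach, the sketch below applies 1D convolutions along the time axis of raw multichannel sensor windows, so features are learned rather than hand-crafted. The layer sizes, the 113-channel input, and the 18-class output are illustrative assumptions loosely modeled on the Opportunity dataset, not the paper's exact architecture:

```python
# Minimal sketch of CNN feature learning on multichannel time series
# (illustrative layer sizes, not the paper's exact architecture).
import torch
import torch.nn as nn

class HARCNN(nn.Module):
    def __init__(self, n_channels=113, n_classes=18, window=24):
        super().__init__()
        # Convolutions slide along the time axis; sensor channels are the
        # input feature maps, so no hand-crafted features are needed.
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(128 * (window // 4), n_classes)

    def forward(self, x):          # x: (batch, channels, time)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = HARCNN()
logits = model(torch.randn(8, 113, 24))   # 8 windows of raw sensor data
```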
Journal Article

Coded aperture compressive temporal imaging

TL;DR: This work uses mechanical translation of a coded aperture for code division multiple access compression of video, discusses the compressed video's temporal resolution, and presents experimental results reconstructing >10 frames of temporal data per coded snapshot.
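
A minimal sketch of the coded-snapshot forward model such a camera implements, with illustrative dimensions: T high-speed frames are each modulated by a time-shifted copy of one physical mask (the mechanical translation) and summed into a single 2D measurement.

```python
# Sketch of the coded-snapshot forward model: T video frames are modulated
# by time-shifted copies of one binary mask and summed into one measurement
# (dimensions are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
H, W, T = 64, 64, 12                     # frames per coded snapshot
video = rng.random((H, W, T))            # underlying high-speed frames
mask = (rng.random((H, W + T)) > 0.5).astype(float)   # one physical code

# Mechanical translation: frame t sees the mask shifted by t pixels.
codes = np.stack([mask[:, t:t + W] for t in range(T)], axis=-1)
snapshot = (codes * video).sum(axis=-1)  # single 2D compressive measurement
print(snapshot.shape)                    # (64, 64): T frames -> 1 image
```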
Journal Article

Video compressive sensing using Gaussian mixture models

TL;DR: The efficacy of the proposed Gaussian mixture model (GMM)-based inversion method is demonstrated with videos reconstructed from simulated compressive video measurements and from a real compressive video camera.
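
A sketch of the closed-form inversion a GMM prior enables, under assumed notation (patches x with prior sum_k pi_k N(mu_k, Sigma_k), measurement y = Phi x + noise; variable names are illustrative): the posterior is again a GMM, so the MMSE estimate is a responsibility-weighted sum of per-component Wiener updates.

```python
# Sketch of closed-form GMM inversion for a linear compressive measurement
# y = Phi @ x + noise, with a GMM prior on x (illustrative notation).
import numpy as np
from scipy.stats import multivariate_normal

def gmm_mmse(y, Phi, weights, means, covs, sigma2):
    """Posterior-mean estimate of x given y under a GMM prior."""
    post_means, log_evid = [], []
    for pi_k, mu_k, S_k in zip(weights, means, covs):
        # Evidence of y under component k: N(Phi mu_k, Phi S_k Phi^T + sigma2 I)
        C = Phi @ S_k @ Phi.T + sigma2 * np.eye(len(y))
        log_evid.append(np.log(pi_k) +
                        multivariate_normal.logpdf(y, Phi @ mu_k, C))
        # Component-wise Wiener (posterior mean) update
        G = S_k @ Phi.T @ np.linalg.inv(C)
        post_means.append(mu_k + G @ (y - Phi @ mu_k))
    log_evid = np.array(log_evid)
    w = np.exp(log_evid - log_evid.max())
    w /= w.sum()                           # component responsibilities
    return sum(w_k * m_k for w_k, m_k in zip(w, post_means))
```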
Journal Article

Compressive Sensing by Learning a Gaussian Mixture Model From Measurements

TL;DR: This work derives a maximum marginal likelihood estimator (MMLE) that learns the GMM of the underlying signals given only their linear compressive measurements, and extends it to GMMs with dominantly low-rank covariance matrices to gain computational speedup.
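
In outline, and with assumed notation (measurements y_i = Phi_i x_i + n_i, noise variance sigma^2, GMM parameters pi_k, mu_k, Sigma_k), marginalizing out the unseen signals turns the GMM prior into a GMM on each measurement, so the MMLE maximizes:

```latex
% Marginal likelihood maximized by the MMLE (notation assumed as above).
\hat{\theta} = \arg\max_{\{\pi_k,\mu_k,\Sigma_k\}} \sum_{i=1}^{N} \log
\sum_{k=1}^{K} \pi_k \,
\mathcal{N}\!\left(y_i;\; \Phi_i \mu_k,\; \Phi_i \Sigma_k \Phi_i^{\top} + \sigma^2 I\right)
```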
Proceedings Article

Low-Cost Compressive Sensing for Color Video and Depth

TL;DR: A simple, inexpensive modification is made to a conventional off-the-shelf color video camera, from which the recovered frames can be refocused at different depths; fast recovery is achieved by an anytime algorithm exploiting the group sparsity of wavelet/DCT coefficients.
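
As a sketch of the group-sparsity ingredient (not the paper's full solver), the block soft-thresholding operator below is the proximal step an iterative anytime solver would apply to grouped wavelet/DCT coefficients; the grouping and threshold are illustrative assumptions.

```python
# Sketch of block soft-thresholding: the proximal operator of a sum of
# group L2 norms, applied to grouped transform coefficients.
import numpy as np

def group_soft_threshold(coeffs, groups, lam):
    """Shrink each group of transform coefficients toward zero jointly."""
    out = np.zeros_like(coeffs)
    for idx in groups:                       # idx: indices of one group
        g = coeffs[idx]
        norm = np.linalg.norm(g)
        if norm > lam:                       # keep group, shrink its energy;
            out[idx] = (1 - lam / norm) * g  # weak groups are zeroed entirely
    return out

coeffs = np.random.randn(64)
groups = [np.arange(i, i + 4) for i in range(0, 64, 4)]  # blocks of 4
sparse = group_soft_threshold(coeffs, groups, lam=1.5)
```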