
Shuji Hao

Researcher at Institute of High Performance Computing Singapore

Publications - 18
Citations - 2138

Shuji Hao is an academic researcher from the Institute of High Performance Computing, Singapore. The author has contributed to research in topics including active learning (machine learning). The author has an h-index of 10 and has co-authored 17 publications receiving 1378 citations. Previous affiliations of Shuji Hao include the Agency for Science, Technology and Research and Beijing Normal University.

Papers
Journal ArticleDOI

Deep learning for sensor-based activity recognition: A survey

TL;DR: Recent advances in deep learning for sensor-based activity recognition are surveyed from three aspects: sensor modality, deep model, and application; detailed insights on existing work are presented, and grand challenges for future research are proposed.
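
For context, a minimal sketch of the kind of deep model such surveys cover: a small 1D convolutional network over fixed-length windows of multi-channel sensor readings (e.g., a 3-axis accelerometer). This is an illustrative assumption, not a model taken from the survey; the class name, layer sizes, window length, and six-class output below are all hypothetical.

import torch
import torch.nn as nn

class HARConvNet(nn.Module):
    # Minimal 1D CNN over fixed-length windows of multi-channel sensor
    # readings; sizes are illustrative, not taken from the survey.
    def __init__(self, n_channels=3, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, n_channels, window_len)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

# Example usage on a random batch of sensor windows.
model = HARConvNet()
logits = model(torch.randn(8, 3, 128))  # -> (8, 6) class scores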
Proceedings ArticleDOI

Balanced Distribution Adaptation for Transfer Learning

TL;DR: This paper proposes a balanced distribution adaptation (BDA) method that adaptively leverages the importance of the marginal and conditional distribution discrepancies; several existing methods can be treated as special cases of BDA.
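
The balancing idea can be sketched as a convex combination of a marginal and a class-conditional discrepancy, weighted by a factor mu in [0, 1]. The sketch below uses a linear-kernel MMD and pseudo-labels on the target domain purely for illustration; the function names, the kernel choice, and the pseudo-label argument are assumptions, not the authors' implementation.

import numpy as np

def linear_mmd(Xs, Xt):
    # Squared MMD with a linear kernel: ||mean(Xs) - mean(Xt)||^2.
    diff = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(diff @ diff)

def balanced_discrepancy(Xs, ys, Xt, yt_pseudo, mu=0.5):
    # (1 - mu) * marginal discrepancy + mu * average per-class
    # (conditional) discrepancy; mu in [0, 1] balances the two terms.
    marginal = linear_mmd(Xs, Xt)
    classes = np.unique(ys)
    conditional = 0.0
    for c in classes:
        Xs_c, Xt_c = Xs[ys == c], Xt[yt_pseudo == c]
        if len(Xs_c) and len(Xt_c):
            conditional += linear_mmd(Xs_c, Xt_c)
    conditional /= len(classes)
    return (1 - mu) * marginal + mu * conditional

With mu = 0 only the marginal distributions are matched, and with mu = 1 only the class-conditional distributions are matched, which is how several existing methods arise as special cases.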
Posted Content

Balanced Distribution Adaptation for Transfer Learning

TL;DR: This paper proposes a novel transfer learning approach, named Balanced Distribution Adaptation (BDA), which can adaptively leverage the importance of the marginal and conditional distribution discrepancies; several existing methods can be treated as special cases of BDA.
Proceedings Article

An Optimal Control Approach to Deep Learning and Applications to Discrete-Weight Neural Networks

TL;DR: In this article, a discrete-time method of successive approximations (MSA) is proposed to train neural networks with weights that are constrained to take values in a discrete set.
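
As a rough illustration of the MSA idea (not the paper's algorithm, which modifies the basic scheme), the sketch below trains a toy two-layer linear network with weights constrained to {-1, +1}: propagate states forward, propagate costates backward from the terminal loss, then maximize each layer's Hamiltonian over the discrete weight set, which for a linear layer reduces to a coordinate-wise sign. All sizes and data below are made up.

import numpy as np

rng = np.random.default_rng(0)

# Toy regression data generated by a hidden sign-weight network.
N, d0, d1, d2 = 256, 8, 16, 4
X0 = rng.standard_normal((N, d0))
W_true = [np.sign(rng.standard_normal((d1, d0))),
          np.sign(rng.standard_normal((d2, d1)))]
Y = X0 @ W_true[0].T @ W_true[1].T

# Initial discrete weights in {-1, +1}.
W = [np.sign(rng.standard_normal((d1, d0))),
     np.sign(rng.standard_normal((d2, d1)))]

for it in range(20):
    # 1) Forward pass: propagate states through the linear layers.
    X = [X0]
    for Wt in W:
        X.append(X[-1] @ Wt.T)

    # 2) Backward pass: costates from the terminal loss 0.5 * ||x_T - y||^2.
    P = [None] * (len(W) + 1)
    P[-1] = -(X[-1] - Y)          # p_T = -d(loss)/dx_T
    for t in reversed(range(len(W))):
        P[t] = P[t + 1] @ W[t]    # p_t = W_t^T p_{t+1}, batched over samples

    # 3) Maximize each layer's Hamiltonian sum_n p_{t+1}^T W x_t over
    #    W in {-1, +1}^(out x in); coordinate-wise this is the sign of
    #    sum_n p_{t+1} x_t^T.
    for t in range(len(W)):
        M = P[t + 1].T @ X[t]
        W[t] = np.where(M >= 0, 1.0, -1.0)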
Posted Content

An Optimal Control Approach to Deep Learning and Applications to Discrete-Weight Neural Networks

TL;DR: In this article, a discrete-time method of successive approximations (MSA) is proposed to train neural networks with weights that are constrained to take values in a discrete set.