
Min-Hung Chen

Researcher at Georgia Institute of Technology

Publications -  31
Citations -  783

Min-Hung Chen is an academic researcher from Georgia Institute of Technology. The author has contributed to research on the topics of Domain (software engineering) and Computer science. The author has an h-index of 9, co-authored 24 publications receiving 457 citations. Previous affiliations of Min-Hung Chen include Academia Sinica and National Taiwan University.

Papers
Journal ArticleDOI

TS-LSTM and temporal-inception: Exploiting spatiotemporal dynamics for activity recognition

TL;DR: In this article, a two-stream convolutional neural network (2-stream ConvNet) serves as the baseline, and Temporal Segment LSTM (TS-LSTM) together with an Inception-style Temporal-ConvNet (Temporal-Inception) is applied on top of it to exploit spatiotemporal information.
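For illustration only (not the authors' code release): a minimal PyTorch sketch of the idea above, aggregating per-frame two-stream features with an LSTM head and with an Inception-style 1D temporal ConvNet head. The module names, feature dimension (2048), sequence length, and class count (101) are assumptions.

```python
# Hypothetical sketch: two temporal aggregation heads over frame-level features.
import torch
import torch.nn as nn

class TemporalLSTMHead(nn.Module):
    """Runs an LSTM over a sequence of frame-level features and classifies."""
    def __init__(self, feat_dim=2048, hidden=512, num_classes=101):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, feats):            # feats: (batch, time, feat_dim)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1])       # classify from the last hidden state

class TemporalConvHead(nn.Module):
    """Inception-style 1D temporal convolutions over the same feature matrix."""
    def __init__(self, feat_dim=2048, num_classes=101):
        super().__init__()
        self.branch3 = nn.Conv1d(feat_dim, 128, kernel_size=3, padding=1)
        self.branch5 = nn.Conv1d(feat_dim, 128, kernel_size=5, padding=2)
        self.fc = nn.Linear(256, num_classes)

    def forward(self, feats):            # feats: (batch, time, feat_dim)
        x = feats.transpose(1, 2)        # -> (batch, feat_dim, time)
        x = torch.cat([self.branch3(x), self.branch5(x)], dim=1)
        x = x.mean(dim=2)                # global temporal pooling
        return self.fc(x)

# Example: 16 frames of 2048-d features (e.g. from a pretrained two-stream ConvNet),
# with the two heads late-fused by summing their logits.
feats = torch.randn(4, 16, 2048)
logits = TemporalLSTMHead()(feats) + TemporalConvHead()(feats)
```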
Posted Content

TS-LSTM and Temporal-Inception: Exploiting Spatiotemporal Dynamics for Activity Recognition

TL;DR: It is demonstrated that both RNNs (using LSTMs) and Temporal-ConvNets, applied on spatiotemporal feature matrices, are able to exploit spatiotemporal dynamics and improve the overall performance.
Proceedings ArticleDOI

Temporal Attentive Alignment for Large-Scale Video Domain Adaptation

TL;DR: In this paper, a temporal attentive adversarial adaptation network (TA3N) is proposed to explicitly attend to the temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on four video DA datasets (e.g., UCF-HMDB-full and Kinetics-Gameplay).
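For illustration only (not the released TA3N code): a minimal PyTorch sketch of the core idea, weighting temporal features by a signal derived from an adversarial domain discriminator trained through a gradient-reversal layer, so that features showing larger domain discrepancy receive more attention during alignment. The attention formula, dimensions, and module names here are assumptions.

```python
# Hypothetical sketch: domain-discrepancy-driven temporal attention.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.clone()

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lambd * grad, None

class DomainAttention(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.domain_clf = nn.Linear(feat_dim, 2)   # source vs. target classifier

    def forward(self, feats, lambd=1.0):           # feats: (batch, time, feat_dim)
        rev = GradReverse.apply(feats, lambd)
        dom_logits = self.domain_clf(rev)          # per-timestep domain prediction
        p = torch.softmax(dom_logits, dim=-1)
        entropy = -(p * torch.log(p + 1e-6)).sum(dim=-1)
        # Low entropy = the discriminator separates the domains confidently,
        # i.e. large domain discrepancy, so those timesteps get larger weights.
        attn = 1.0 - entropy
        weighted = feats * attn.unsqueeze(-1)
        # Return the attended video-level feature plus the domain logits
        # used for the adversarial domain loss.
        return weighted.mean(dim=1), dom_logits
```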
Proceedings ArticleDOI

Action Segmentation With Joint Self-Supervised Temporal Domain Adaptation

TL;DR: Self-Supervised Temporal Domain Adaptation (SSTDA) is proposed, which contains two self-supervised auxiliary tasks (binary and sequential domain prediction) to jointly align cross-domain feature spaces embedded with local and global temporal dynamics, achieving better performance than other Domain Adaptation (DA) approaches.
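For illustration only (not the SSTDA release): a minimal PyTorch sketch of the two self-supervised auxiliary tasks, frame-level (binary) and sequence-level (sequential) domain prediction. In the paper these predictions drive adversarial alignment through gradient reversal, which is omitted here for brevity; all names and dimensions are assumptions.

```python
# Hypothetical sketch: the two auxiliary domain-prediction heads at different
# temporal granularities, on top of features from an action segmentation backbone.
import torch
import torch.nn as nn

class SSTDAAuxHeads(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        # Binary domain prediction: classify each frame feature as source/target
        # (local temporal dynamics).
        self.frame_head = nn.Linear(feat_dim, 2)
        # Sequential domain prediction: classify each video-level segment in a
        # shuffled cross-domain sequence (global temporal dynamics).
        self.segment_head = nn.Linear(feat_dim, 2)

    def forward(self, src_feats, tgt_feats):
        # src_feats, tgt_feats: (batch, time, feat_dim)
        frames = torch.cat([src_feats, tgt_feats], dim=0)
        frame_logits = self.frame_head(frames)                 # per-frame domain logits

        src_seg = src_feats.mean(dim=1)                        # one feature per video
        tgt_seg = tgt_feats.mean(dim=1)
        segs = torch.cat([src_seg, tgt_seg], dim=0)
        domains = torch.cat([torch.zeros(src_seg.size(0)),
                             torch.ones(tgt_seg.size(0))]).long()
        perm = torch.randperm(segs.size(0))                    # shuffle the sequence
        seg_logits = self.segment_head(segs[perm])             # predict domain per position
        return frame_logits, seg_logits, domains[perm]
```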
Posted Content

Temporal Attentive Alignment for Large-Scale Video Domain Adaptation

TL;DR: This work proposes Temporal Attentive Adversarial Adaptation Network (TA3N), which explicitly attends to the temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on four video DA datasets.