
Lichen Wang

Researcher at Northeastern University

Publications: 45
Citations: 4695

Lichen Wang is an academic researcher from Northeastern University. The author has contributed to research in the topics of Computer science and Feature (machine learning), has an h-index of 12, and has co-authored 35 publications receiving 2618 citations. Previous affiliations of Lichen Wang include Northeastern University (China) and Zebra Technologies.

Papers
Posted Content

Image Super-Resolution Using Very Deep Residual Channel Attention Networks

TL;DR: This work proposes a residual in residual (RIR) structure to form a very deep network, consisting of several residual groups with long skip connections, and a channel attention mechanism that adaptively rescales channel-wise features by considering interdependencies among channels.
Book Chapter

Image Super-Resolution Using Very Deep Residual Channel Attention Networks

TL;DR: Very deep residual channel attention networks (RCAN) propose a residual in residual (RIR) structure to form a very deep network, consisting of several residual groups with long skip connections; each residual group contains several residual blocks with short skip connections.
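To make the mechanism concrete, the following is a minimal PyTorch sketch of a channel attention unit and one residual channel attention block with a short skip connection, in the spirit of the description above; the layer widths and the reduction ratio are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: global average pooling
    followed by a bottleneck that yields a per-channel rescaling factor."""
    def __init__(self, channels, reduction=16):  # reduction ratio is an assumption
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                        # per-channel scale in (0, 1)
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))         # adaptively rescale channel-wise features

class ResidualChannelAttentionBlock(nn.Module):
    """conv -> ReLU -> conv -> channel attention, plus a short skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            ChannelAttention(channels),
        )

    def forward(self, x):
        return x + self.body(x)                  # short skip connection

# Example: one block applied to a 64-channel feature map.
y = ResidualChannelAttentionBlock(64)(torch.randn(1, 64, 48, 48))
```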
Proceedings Article

Generative Multi-View Human Action Recognition

TL;DR: This work proposes a Generative Multi-View Action Recognition framework that enhances model robustness through adversarial training and naturally handles the incomplete-view case by imputing the missing data.
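As a hedged illustration of the adversarial ingredient, the sketch below imputes the feature of a missing view from an observed one and trains a discriminator to tell imputed features from real ones; all module names and dimensions (ViewGenerator, ViewDiscriminator, VIEW_DIM) are hypothetical, chosen only for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn

VIEW_DIM = 512  # hypothetical per-view feature dimension

class ViewGenerator(nn.Module):
    """Imputes the feature of a missing view from an observed view's feature."""
    def __init__(self, dim=VIEW_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(inplace=True), nn.Linear(dim, dim))

    def forward(self, observed):
        return self.net(observed)

class ViewDiscriminator(nn.Module):
    """Scores whether a view feature is real or imputed."""
    def __init__(self, dim=VIEW_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 128), nn.ReLU(inplace=True), nn.Linear(128, 1))

    def forward(self, feat):
        return self.net(feat)

# One adversarial step (sketch): the generator tries to make imputed view
# features indistinguishable from real ones.
gen, disc = ViewGenerator(), ViewDiscriminator()
bce = nn.BCEWithLogitsLoss()
observed = torch.randn(8, VIEW_DIM)   # features from the available view
real = torch.randn(8, VIEW_DIM)       # real features of the other view
fake = gen(observed)                  # imputed features for the missing view
d_loss = bce(disc(real), torch.ones(8, 1)) + bce(disc(fake.detach()), torch.zeros(8, 1))
g_loss = bce(disc(fake), torch.ones(8, 1))
```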
Posted Content

PointDAN: A Multi-Scale 3D Domain Adaption Network for Point Cloud Representation

TL;DR: A novel 3D Domain Adaptation Network for point cloud data (PointDAN) is proposed, which jointly aligns global and local features at multiple levels; experiments demonstrate the superiority of the model over state-of-the-art general-purpose DA methods.
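For intuition, here is a generic PyTorch sketch of adversarial feature alignment with a gradient reversal layer and a domain classifier, a standard building block in domain adaptation; this is not claimed to be PointDAN's exact alignment mechanism, and the 1024-dimensional global feature is an assumption.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) the gradient in the
    backward pass, so the feature encoder learns domain-invariant features."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

class DomainClassifier(nn.Module):
    """Predicts source vs. target domain from a pooled global feature."""
    def __init__(self, dim=1024):  # feature dimension is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 256), nn.ReLU(inplace=True), nn.Linear(256, 2))

    def forward(self, global_feat, lam=1.0):
        return self.net(grad_reverse(global_feat, lam))

# Example: domain logits for pooled global point-cloud features; gradients
# flowing back into the encoder are reversed.
logits = DomainClassifier()(torch.randn(8, 1024), lam=0.5)
```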
Proceedings Article

Skeleton Aware Multi-modal Sign Language Recognition

TL;DR: In this article, a skeleton-aware multi-modal SLR framework is proposed to take advantage of multi-modal information toward a higher recognition rate, using a Sign Language Graph Convolutional Network (SL-GCN) and a Separable Spatial-Temporal Convolution Network (SSTCN) to exploit skeleton features.
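As a rough sketch of the skeleton branch, the snippet below implements one spatial graph convolution over skeleton joints, X' = ReLU(A X W); the learnable adjacency initialization, the 27-joint count, and the layer widths are illustrative assumptions rather than the actual SL-GCN design.

```python
import torch
import torch.nn as nn

class SkeletonGraphConv(nn.Module):
    """One spatial graph convolution over skeleton joints:
    X' = ReLU(A @ X @ W), with A a learnable joint-adjacency matrix."""
    def __init__(self, in_dim, out_dim, num_joints):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)
        # Initialized to identity (self-connections only); in practice the
        # adjacency would encode the hand/body joint topology.
        self.adj = nn.Parameter(torch.eye(num_joints))

    def forward(self, x):
        # x: (batch, joints, features); mix features per joint, then
        # aggregate over neighboring joints via the adjacency.
        return torch.relu(torch.einsum('jk,bkf->bjf', self.adj, self.weight(x)))

# Example: 27 keypoints with 2D coordinates per frame (counts are illustrative).
layer = SkeletonGraphConv(in_dim=2, out_dim=64, num_joints=27)
out = layer(torch.randn(4, 27, 2))   # -> (4, 27, 64)
```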