
Changjian Shui

Researcher at Laval University

Publications - 33
Citations - 351

Changjian Shui is an academic researcher from Laval University. The author has contributed to research in the topics of computer science and artificial neural networks. The author has an h-index of 6 and has co-authored 20 publications receiving 117 citations.

Papers
Proceedings Article

Deep Active Learning: Unified and Principled Method for Query and Training

TL;DR: Proposes a unified and principled method for both the querying and training processes in deep batch active learning, with theoretical insights derived from modeling the interactive active-learning procedure as distribution matching via the Wasserstein distance.
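To make the distribution-matching intuition concrete, here is a minimal sketch, assuming features come from some learned embedding: a query batch is built greedily so that its empirical distribution stays close, in entropic-regularized Wasserstein cost, to the unlabeled pool. The Sinkhorn approximation and the greedy selection rule are illustrative assumptions, not the paper's exact query/training procedure.

```python
# Illustrative sketch only: greedy query selection by approximate Wasserstein
# matching between the selected batch and the unlabeled pool.
import numpy as np
from scipy.spatial.distance import cdist


def sinkhorn_cost(X, Y, reg=1.0, n_iter=100):
    """Entropic-regularized OT cost between two empirical distributions."""
    a = np.full(len(X), 1.0 / len(X))      # uniform weights on X
    b = np.full(len(Y), 1.0 / len(Y))      # uniform weights on Y
    M = cdist(X, Y)                        # Euclidean ground-cost matrix
    K = np.exp(-M / reg)                   # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):                # Sinkhorn fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = np.diag(u) @ K @ np.diag(v)        # approximate transport plan
    return float(np.sum(P * M))


def select_query_batch(pool_feats, budget):
    """Greedy heuristic: add the point that most reduces the OT cost
    between the selected batch and the full unlabeled pool."""
    selected, remaining = [], list(range(len(pool_feats)))
    for _ in range(budget):
        best_i, best_cost = remaining[0], np.inf
        for i in remaining:
            cost = sinkhorn_cost(pool_feats[selected + [i]], pool_feats)
            if cost < best_cost:
                best_i, best_cost = i, cost
        selected.append(best_i)
        remaining.remove(best_i)
    return selected


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(200, 16))     # stand-in for learned embeddings
    print(select_query_batch(feats, budget=5))
```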
Proceedings Article · DOI

A Principled Approach for Learning Task Similarity in Multitask Learning.

TL;DR: Provides an upper bound on the generalization error of multitask learning that shows the benefit of explicit and implicit task-similarity knowledge, and proposes a new training algorithm that learns the task relation coefficients and the neural network parameters iteratively.
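A rough sketch of the alternating update is given below, assuming a toy multitask regression setup: (a) a gradient step on the shared network and task heads using a coefficient-weighted sum of task losses, then (b) an update of the task relation coefficients, kept on the probability simplex via a softmax parameterization. The simple objective used in step (b) is a placeholder; the paper derives the coefficients from a generalization bound rather than this naive rule.

```python
# Hedged illustration of alternating updates for multitask learning; the data,
# architecture, and coefficient objective are assumptions for this example.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_tasks, dim = 3, 10
shared = nn.Sequential(nn.Linear(dim, 32), nn.ReLU())        # shared representation
heads = nn.ModuleList([nn.Linear(32, 1) for _ in range(n_tasks)])
alpha_logits = torch.zeros(n_tasks, requires_grad=True)      # pre-softmax relation coefficients

opt_net = torch.optim.Adam(list(shared.parameters()) + list(heads.parameters()), lr=1e-3)
opt_alpha = torch.optim.Adam([alpha_logits], lr=1e-2)
mse = nn.MSELoss()

# toy per-task regression data
data = [(torch.randn(64, dim), torch.randn(64, 1)) for _ in range(n_tasks)]

for step in range(200):
    task_losses = torch.stack(
        [mse(heads[t](shared(x)), y) for t, (x, y) in enumerate(data)]
    )
    weights = torch.softmax(alpha_logits, dim=0)             # coefficients on the simplex

    # (a) network step on the coefficient-weighted objective
    opt_net.zero_grad()
    (weights.detach() @ task_losses).backward()
    opt_net.step()

    # (b) coefficient step on detached task losses (placeholder objective)
    opt_alpha.zero_grad()
    (weights @ task_losses.detach()).backward()
    opt_alpha.step()

print(torch.softmax(alpha_logits, dim=0))                    # learned task weights
```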
Posted Content

Domain Generalization with Optimal Transport and Metric Learning.

TL;DR: Tackles the domain generalization problem of learning from multiple source domains and generalizing to a target domain with unknown statistics; adopts optimal transport with the Wasserstein distance, which can constrain class-label similarity, for adversarial training, and further deploys a metric-learning objective that leverages label information to achieve a distinguishable classification boundary.
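The same approach appears in the journal version listed further below. As a hedged illustration of the described loss composition, the sketch below combines a classification loss, a Wasserstein-style adversarial alignment term estimated by a domain critic, and a label-driven triplet (metric-learning) loss. The architectures, two-domain setup, naive triplet mining, and hyperparameters are assumptions made for this example, not the paper's configuration; the critic omits weight clipping or a gradient penalty for brevity.

```python
# Illustrative sketch of: classification + Wasserstein-style domain alignment
# + triplet metric learning over two source domains.
import torch
import torch.nn as nn

torch.manual_seed(0)
feat = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 32))    # shared feature extractor
clf = nn.Linear(32, 5)                                                   # label classifier
critic = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))   # domain critic
triplet = nn.TripletMarginLoss(margin=1.0)
ce = nn.CrossEntropyLoss()

opt = torch.optim.Adam(list(feat.parameters()) + list(clf.parameters()), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)


def mine_triplets(z, y):
    """Very naive label-based triplet mining, for illustration only."""
    anchors, positives, negatives = [], [], []
    for i in range(len(y)):
        same = (y == y[i]).nonzero(as_tuple=True)[0]
        diff = (y != y[i]).nonzero(as_tuple=True)[0]
        if len(same) > 1 and len(diff) > 0:
            anchors.append(z[i])
            positives.append(z[same[same != i][0]])
            negatives.append(z[diff[0]])
    return torch.stack(anchors), torch.stack(positives), torch.stack(negatives)


def training_step(x_a, y_a, x_b, y_b, lam=0.1, mu=0.1):
    z_a, z_b = feat(x_a), feat(x_b)

    # critic step: widen the mean score gap between the two source domains
    # (a Wasserstein-1 surrogate; clipping / gradient penalty omitted)
    opt_c.zero_grad()
    gap = critic(z_a.detach()).mean() - critic(z_b.detach()).mean()
    (-gap).backward()
    opt_c.step()

    # feature/classifier step: classify well, shrink the estimated domain gap,
    # and use a triplet loss so label information shapes the feature space
    opt.zero_grad()
    cls_loss = ce(clf(z_a), y_a) + ce(clf(z_b), y_b)
    align_loss = (critic(z_a).mean() - critic(z_b).mean()).abs()
    a, p, n = mine_triplets(torch.cat([z_a, z_b]), torch.cat([y_a, y_b]))
    metric_loss = triplet(a, p, n)
    (cls_loss + lam * align_loss + mu * metric_loss).backward()
    opt.step()
    return cls_loss.item()


# toy usage with two synthetic source domains
x_a, y_a = torch.randn(64, 20), torch.randint(0, 5, (64,))
x_b, y_b = torch.randn(64, 20) + 1.0, torch.randint(0, 5, (64,))
for _ in range(10):
    training_step(x_a, y_a, x_b, y_b)
```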
Journal Article · DOI

Common Spatial Pattern Reformulated for Regularizations in Brain–Computer Interfaces

TL;DR: Reformulates the CSP as a constrained minimization problem and establishes the equivalence of the reformulated and the original CSPs; results in different learning contexts validate the efficiency and effectiveness of the proposed CSP formulation.
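For context, the sketch below shows the classical CSP solved as a generalized eigenvalue problem (maximize w^T C1 w subject to w^T (C1 + C2) w = 1). The paper's contribution is an equivalent constrained-minimization reformulation that makes regularization easier to incorporate; that reformulation is not reproduced here, and the toy EEG-like data is purely illustrative.

```python
# Classical CSP via a generalized eigenproblem; reference sketch only.
import numpy as np
from scipy.linalg import eigh


def csp_filters(trials_a, trials_b, n_filters=6):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def avg_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)   # mean channel covariance

    c1, c2 = avg_cov(trials_a), avg_cov(trials_b)
    # generalized eigenproblem  C1 w = lambda (C1 + C2) w
    eigvals, eigvecs = eigh(c1, c1 + c2)
    order = np.argsort(eigvals)                                # ascending eigenvalues
    # take filters from both ends of the spectrum (most discriminative)
    picks = np.concatenate([order[: n_filters // 2], order[-(n_filters // 2):]])
    return eigvecs[:, picks]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.normal(size=(30, 8, 256))                          # 30 trials, 8 channels
    b = rng.normal(size=(30, 8, 256))
    print(csp_filters(a, b).shape)                             # (8, 6)
```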
Journal Article · DOI

Domain generalization via optimal transport with metric similarity learning

TL;DR: Tackles the domain generalization problem of learning from multiple source domains and generalizing to a target domain with unknown statistics; adopts optimal transport with the Wasserstein distance, which can constrain class-label similarity, for adversarial training, and further deploys a metric-learning objective that leverages label information to achieve a distinguishable classification boundary.