Chen Jason Zhang
Researcher at Hong Kong University of Science and Technology
Publications - 125
Citations - 240
Chen Jason Zhang is an academic researcher at the Hong Kong University of Science and Technology. The author has contributed to research in the topics of Computer science & Chemistry, has an h-index of 1, and has co-authored 2 publications receiving 6 citations.
Papers
Journal ArticleDOI
Oxygen vacancy engineering of novel ultrathin Bi12O17Br2 nanosheets for boosting photocatalytic N2 reduction.
Kaiyue Gao,Chen Jason Zhang,Yi Zhang,Xiaoyu Zhou,Shuo Gu,Kehua Zhang,Xiufang Wang,Xiaojie Song +7 more
Journal ArticleDOI
Interpretable Pneumonia Detection by Combining Deep Learning and Explainable Models With Multisource Data
Hao Ren,Aslan B. Wong,Wanmin Lian,Weibin Cheng,Ying Zhang,Jianwei He,Qingfeng Liu,Jiasheng Yang,Chen Jason Zhang,Kaishun Wu,Haodi Zhang +10 more
TL;DR: The authors built a large dataset of community-acquired pneumonia consisting of 35,389 cases (distinguished from nosocomial pneumonia) based on actual medical records, and trained a prediction model on the chest X-ray images in their dataset that is capable of precisely detecting pneumonia.
Proceedings Article
ROLLER: Fast and Efficient Tensor Compilation for Deep Learning
Hongyu Zhu,Ruofan Wu,Yijia Diao,Shanbin Ke,Haoyu Li,Chen Jason Zhang,Jilong Xue,Lingxiao Ma,Yuqing Xia,Wei Cui,Fan Yang,Mao Yang,Lidong Zhou,Asaf Cidon,Gennady Pekhimenko +14 more
TL;DR: ROLLER is presented, which takes a different, construction-based approach to generating kernels, using rTile, a new tile abstraction that encapsulates tensor shapes aligned with the key features of the underlying accelerator, thus achieving efficient execution by limiting the shape choices.
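As a rough illustration of the construction-based idea described above (this is not ROLLER's actual implementation; the alignment unit, function names, and greedy selection rule below are hypothetical stand-ins for rTile's alignment logic), a kernel generator can shrink the search space by only considering tile shapes aligned to assumed hardware-friendly sizes:

```python
# Sketch of a construction-based tile search: rather than tuning
# arbitrary tile shapes, only shapes aligned to an assumed
# hardware-friendly unit (e.g. 32x32) are ever considered.

def aligned_tile_shapes(tensor_shape, align=(32, 32), max_tiles=64):
    """Enumerate tile shapes that evenly divide the tensor and are
    multiples of the alignment unit (a stand-in for an rTile-style
    alignment rule)."""
    rows, cols = tensor_shape
    ar, ac = align
    shapes = []
    r = ar
    while r <= rows:
        c = ac
        while c <= cols:
            if rows % r == 0 and cols % c == 0:
                shapes.append((r, c))
            c += ac
        r += ar
    return shapes[:max_tiles]

def pick_tile(tensor_shape, align=(32, 32)):
    """Greedily pick the largest aligned tile -- a crude proxy for
    maximizing compute utilization per memory transaction."""
    candidates = aligned_tile_shapes(tensor_shape, align)
    return max(candidates, key=lambda rc: rc[0] * rc[1])

print(pick_tile((256, 512)))   # largest aligned divisor pair
```

Because the candidate set is tiny compared to an unconstrained search, a generator built this way can emit a kernel in seconds rather than hours of auto-tuning, which is the trade-off the construction-based approach exploits.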
Book ChapterDOI
The Tenth Visual Object Tracking VOT2022 Challenge Results
Matej Kristan,Ales Leonardis,Jiří Matas,Michael Felsberg,Roman Pflugfelder,Joni-Kristian Kamarainen,Hyung Jin Chang,Martin Danelljan,Luka Čehovin Zajc,Alan Lukezic,Ondrej Drbohlav,Johanna Björklund,Yushan Zhang,Zhongqun Zhang,Song Yan,Wenyan Yang,Dingding Cai,Christoph Mayer,Gustavo Fernandez,Kang Ben,Goutam Bhat,Hongyan Chang,Guang Chen,Jiaye Chen,Shengyong Chen,Xilin Chen,Xin Chen,Xiuyi Chen,Yiwei Chen,Yu Hsi Chen,Zhixing Chen,Yang-Xue Cheng,Angelo Ciaramella,Yutao Cui,Benjamin Dzubur,Mohana Murali Dasari,Qili Deng,Debajyoti Dhar,Shangzhe Di,Emanuel Di Nardo,Daniel K. Du,Matteo Dunnhofer,Heng Fan,Zhen-Hua Feng,Zhihong Fu,Shan Gao,Rama Krishna Sai Subrahmanyam Gorthi,Eric Granger,Qi Guo,Himanshu Gupta,Jianfeng He,Kejia He,Yan Huang,Deepak Jangid,Rongrong Ji,Cheng Jiang,Ying Yan Jiang,Felix Järemo Lawin,Ze Kang,Mariam Kiran,Josef Kittler,S Lai,Xiangyuan Lan,Dongwook Lee,Hyunjeong Lee,Seohyung Lee,Hui Mei Li,Ming Qiang Li,Wang Li,Xi Hua Li,Xianxian Li,Xiao Kui Li,Zhe Li,Liting Lin,Haibin Ling,Bo Liu,Chang Liu,Si Ning Liu,Huchuan Lu,Rafael M. O. Cruz,Bingpeng Ma,Chao Ma,Jie Ma,Yinchao Ma,Niki Martinel,Alireza Memarmoghadam,Christian Micheloni,Payman Moallem,Le Thanh Nguyen-Meidine,Siyang Pan,Changbeom Park,Danda Pani Paudel,Matthieu Paul,Houwen Peng,Andreas Robinson,Litu Rout,Shiguang Shan,Kristian Simonato,Tian Tian Song,Xiaoning Song,Chao Sun,Jingna Sun,Zhangyong Tang,Radu Timofte,Chi-Yi Tsai,Luc Van Gool,Om Prakash Verma,Dong Wang,Fei Wang,Liangliang Wang,Liangliang Wang,Lijun Wang,Limin Wang,Qiang Wang,Gangshan Wu,Jianlin Wu,Xiaojun Wu,Feifei Xie,Tianyang Xu,Wei Xu,Yongfeng Xu,Yuan Xu,Wanli Xue,Zizheng Xun,Bin Yan,Da-Ming Yang,Jinyu Yang,Wankou Yang,Xiaoyun Yang,Yi Yang,Yi-chao Yang,Zongxin Yang,Botao Ye,Fisher Yu,Hongyuan Yu,Jiaqian Yu,Q.X. Yu,Weichen Yu,Kang Ze,Jiangsu Zhai,Chen Jason Zhang,Chunhu Zhang,Kaihua Zhang,Tianzhu Zhang,Wenkang Zhang,Zhibin Zhang,Zhipeng Zhang,Jie Zhao,Shaochuan Zhao,Feng Zheng,Haixia Zheng,Mingzhu Zheng,Bineng Zhong,Jiawen Zhu,Xue-Feng Zhu,Yueting Zhuang +155 more
TL;DR: The Visual Object Tracking challenge VOT2022 was composed of seven sub-challenges focusing on different tracking domains: (i) the VOT-STs2022 challenge focused on short-term tracking in RGB by segmentation, (ii) the VOT-STb2022 challenge focused on short-term tracking by bounding boxes (with real-time counterparts), (iii) further challenges were concerned with tracking in RGB+depth and depth-only imagery, and (iv) the long-term tracking challenge focused on coping with target disappearance and reappearance.
Proceedings ArticleDOI
Multi-Stage and Multi-Loss Training for Fullband Non-Personalized and Personalized Speech Enhancement
Lianwu Chen,Cheng-Kang Xu,Xu Zhang,Xinlei Ren,Xiguang Zheng,Chen Jason Zhang,Liang Guo,Bin Yu +7 more
TL;DR: This work extends existing wideband systems to enable full-band (48 kHz) speech enhancement while ensuring automatic speech recognition compatibility and, optionally, personalized speech enhancement, by employing a multi-stage and multi-loss training architecture that incorporates the recently proposed two-step structure.