Chenguang Wang
Researcher at University of California, Berkeley
Publications - 38
Citations - 1191
Chenguang Wang is an academic researcher at the University of California, Berkeley, whose work spans computer science and cluster analysis. He has an h-index of 15 and has co-authored 33 publications receiving 868 citations. His previous affiliations include IBM and Peking University.
Papers
Proceedings ArticleDOI
Co-Occurrent Features in Semantic Segmentation
TL;DR: This paper builds an Aggregated Co-occurrent Feature (ACF) Module, which learns a fine-grained, spatially invariant representation to capture co-occurrent context information across the scene, significantly improving segmentation results over a plain FCN.
Journal Article
GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing
Jian Guo, He He, Tong He, Leonard Lausen, Mu Li, Haibin Lin, Xingjian Shi, Chenguang Wang, Junyuan Xie, Sheng Zha, Aston Zhang, Hang Zhang, Zhi Zhang, Zhongyue Zhang, Shuai Zheng, Yi Zhu +16 more
TL;DR: GluonCV and GluonNLP are deep learning toolkits for computer vision and natural language processing built on Apache MXNet (incubating); they provide state-of-the-art pre-trained models, training scripts, and training logs.
Proceedings Article
Text classification with heterogeneous information network kernels
TL;DR: A novel text-as-network classification framework is presented, which introduces a structured and typed heterogeneous information network (HIN) representation of texts and a meta-path based approach to link texts; it outperforms the state-of-the-art methods and other HIN kernels.
Posted Content
Language Models with Transformers
TL;DR: This paper explores effective Transformer architectures for language modeling, including adding LSTM layers to better capture sequential context while keeping computation efficient, and proposes Coordinate Architecture Search (CAS) to find an effective architecture through iterative refinement of the model.
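The LSTM component mentioned in this summary can be illustrated with a minimal numpy sketch of a single LSTM step using the standard gate equations. This is a generic toy illustration of how an LSTM cell carries sequential context, not the paper's actual CAS architecture or its Transformer integration; all names and dimensions here are made up for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One standard LSTM step.

    x: input vector (D,), h_prev/c_prev: previous hidden/cell state (H,),
    W: (4H, D), U: (4H, H), b: (4H,) — the four gates stacked row-wise.
    """
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[:H])          # input gate
    f = sigmoid(z[H:2 * H])     # forget gate
    o = sigmoid(z[2 * H:3 * H]) # output gate
    g = np.tanh(z[3 * H:])      # candidate cell update
    c = f * c_prev + i * g      # new cell state mixes memory and update
    h = o * np.tanh(c)          # hidden state exposes gated memory
    return h, c

# Run a short toy sequence through the cell.
rng = np.random.default_rng(0)
D, H = 8, 4
W = rng.standard_normal((4 * H, D)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for _ in range(5):
    h, c = lstm_step(rng.standard_normal(D), h, c, W, U, b)
```

Because the hidden state is `o * tanh(c)`, every component of `h` stays in (-1, 1), which keeps the recurrent context bounded as the sequence grows.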
Journal Article
Language Models are Open Knowledge Graphs
TL;DR: This paper shows how to construct knowledge graphs (KGs) from pre-trained language models (e.g., BERT, GPT-2/3) without human supervision, proposing an unsupervised method to cast the knowledge contained in these models into KGs.