Kai-Wei Chang
Researcher at University of California, Los Angeles
Publications - 262
Citations - 23,031
Kai-Wei Chang is an academic researcher at the University of California, Los Angeles, working primarily on computer science and word embeddings. He has an h-index of 42 and has co-authored 183 publications receiving 17,271 citations. His previous affiliations include Boston University and Amazon.com.
Papers
Posted Content
Unified Pre-training for Program Understanding and Generation
TL;DR: This paper presents PLBART, a sequence-to-sequence model capable of performing a broad spectrum of program and language understanding and generation tasks, including code summarization, code generation, and code translation.
Posted Content
Multi-task Learning for Universal Sentence Embeddings: A Thorough Evaluation using Transfer and Auxiliary Tasks
TL;DR: This paper shows that joint learning of multiple tasks results in better generalizable sentence representations by conducting extensive experiments and analysis comparing the multi-task and single-task learned sentence encoders.
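The core idea of multi-task sentence encoders can be illustrated with a toy sketch: a single shared encoder produces one sentence representation, and each task adds only a small output head on top, so the representation is trained (and shared) across tasks. All names, dimensions, and the two example tasks below are hypothetical illustrations, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
VOCAB, EMB, HID = 100, 16, 8

# Shared encoder parameters: one embedding table and one projection,
# reused by every task so the sentence representation is shared.
emb_table = rng.normal(size=(VOCAB, EMB))
W_shared = rng.normal(size=(EMB, HID))

# Task-specific heads: each task contributes only a small output layer.
heads = {
    "nli": rng.normal(size=(HID, 3)),        # e.g. 3-way entailment labels
    "sentiment": rng.normal(size=(HID, 2)),  # e.g. binary polarity
}

def encode(token_ids):
    """Shared sentence encoder: mean-pool embeddings, then project."""
    pooled = emb_table[token_ids].mean(axis=0)
    return np.tanh(pooled @ W_shared)

def predict(token_ids, task):
    """Route the shared representation through one task's head (softmax)."""
    logits = encode(token_ids) @ heads[task]
    e = np.exp(logits - logits.max())
    return e / e.sum()

sentence = [4, 17, 52]
p_nli = predict(sentence, "nli")            # 3 probabilities
p_sentiment = predict(sentence, "sentiment")  # 2 probabilities
```

During joint training, gradients from every task head flow into the shared encoder, which is what can make the resulting sentence representations more generalizable than single-task training.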
Proceedings ArticleDOI
Representation Learning for Resource-Constrained Keyphrase Generation
TL;DR: This paper designs a data-oriented approach for keyphrase generation that first learns salient information using unsupervised corpus-level statistics and then learns a task-specific intermediate representation based on a pre-trained language model.
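One familiar example of an "unsupervised corpus-level statistic" that surfaces salient terms is TF-IDF; the sketch below uses it purely as an illustration of the idea (the paper's actual statistic and pipeline may differ).

```python
import math
from collections import Counter

def salience_scores(docs):
    """Score each word in each document by TF-IDF: term frequency within
    the document, discounted by how many documents in the corpus contain
    the word. Words unique to few documents score higher."""
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency: count each word once per doc
    n = len(docs)
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({w: tf[w] / len(doc) * math.log(n / df[w]) for w in tf})
    return scores
```

A term appearing in every document gets an IDF of log(1) = 0, while a term concentrated in one document scores highest there, which is the basic salience signal such statistics provide.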
Posted Content
Weakly-supervised VisualBERT: Pre-training without Parallel Images and Captions
TL;DR: This work proposes Weakly-supervised VisualBERT with the key idea of conducting "mask-and-predict" pre-training on language-only and image-only corpora, and introduces the object tags detected by an object recognition model as anchor points to bridge two modalities.
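The "mask-and-predict" objective mentioned above corrupts an input sequence by hiding some tokens and training the model to recover them. A minimal sketch of the masking step, for text tokens only, is below; the masking rate and helper name are illustrative assumptions, not the paper's exact procedure.

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """Randomly replace a fraction of tokens with [MASK]. Returns the
    corrupted sequence plus a dict mapping each masked position to the
    original token the model must predict."""
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    corrupted, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            corrupted.append(MASK)
            targets[i] = tok
        else:
            corrupted.append(tok)
    return corrupted, targets
```

In the weakly-supervised setting described by the paper, the same kind of masked prediction is run on language-only and image-only corpora separately, with detected object tags serving as anchors between the two modalities.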
Proceedings ArticleDOI
Controllable Text Generation with Neurally-Decomposed Oracle
TL;DR: This paper proposes NeurAlly-Decomposed Oracle (NADO), a general and efficient framework for controlling auto-regressive generation models, and presents the closed-form optimal solution for incorporating token-level guidance into the base model for controllable generation.
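The general shape of token-level guidance can be sketched as follows: at each decoding step, reweight the base model's next-token distribution by an oracle's per-token estimate that the constraint will be satisfied, then renormalize. This is a hedged illustration of the idea, not NADO's exact closed-form solution; the function name and inputs are assumptions.

```python
def guided_distribution(base_probs, oracle_probs):
    """Combine a base model's next-token probabilities with an oracle's
    per-token constraint-satisfaction estimates by elementwise product,
    then renormalize so the result is again a distribution."""
    weighted = [p * o for p, o in zip(base_probs, oracle_probs)]
    z = sum(weighted)
    return [w / z for w in weighted]

# Toy example: a uniform base distribution over 4 tokens, and an oracle
# that believes token 0 is most likely to satisfy the constraint.
base = [0.25, 0.25, 0.25, 0.25]
oracle = [0.9, 0.1, 0.1, 0.1]
guided = guided_distribution(base, oracle)
```

The guided distribution shifts mass toward tokens the oracle favors while still respecting the base model's preferences, which is the intuition behind decomposing a sequence-level constraint into token-level signals.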