
Kai-Wei Chang

Researcher at University of California, Los Angeles

Publications -  262
Citations -  23031

Kai-Wei Chang is an academic researcher from University of California, Los Angeles. The author has contributed to research in topics: Computer science & Word embedding. The author has an h-index of 42 and has co-authored 183 publications receiving 17271 citations. Previous affiliations of Kai-Wei Chang include Boston University & Amazon.com.

Papers
Proceedings ArticleDOI

SentiBERT: A Transferable Transformer-Based Architecture for Compositional Sentiment Semantics

TL;DR: SentiBERT outperforms baseline approaches in capturing negation and the contrastive relation and in modeling compositional sentiment semantics, and it can be transferred to other sentiment analysis tasks as well as related tasks such as emotion classification.
Proceedings Article

Multi-Relational Latent Semantic Analysis

TL;DR: It is demonstrated that by integrating multiple relations from both homogeneous and heterogeneous information sources, MRLSA achieves state-of-the-art performance on existing benchmark datasets for two relations, antonymy and is-a.
Posted Content

Co-training Embeddings of Knowledge Graphs and Entity Descriptions for Cross-lingual Entity Alignment

TL;DR: This paper introduces an embedding-based approach that leverages a weakly aligned multilingual KG for semi-supervised cross-lingual learning using entity descriptions. Performance on the entity alignment task improves at each iteration of co-training and eventually reaches a stage at which it significantly surpasses previous approaches.
Proceedings ArticleDOI

The Illinois-Columbia System in the CoNLL-2014 Shared Task

TL;DR: This paper describes the Illinois-Columbia system that participated in the CoNLL-2014 shared task and presents its novel aspects; the system ranked second on the original annotations and first on the revised annotations.
Proceedings ArticleDOI

Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering

TL;DR: This work designs language models to learn to generate lectures and explanations as the chain of thought (CoT) to mimic the multi-hop reasoning process when answering ScienceQA questions, and it explores the upper bound of GPT-3, showing that CoT helps language models learn from less data.