
Kai-Wei Chang

Researcher at University of California, Los Angeles

Publications: 262
Citations: 23,031

Kai-Wei Chang is an academic researcher at the University of California, Los Angeles. He has contributed to research topics including computer science and word embeddings. He has an h-index of 42 and has co-authored 183 publications receiving 17,271 citations. His previous affiliations include Boston University and Amazon.com.

Papers
Proceedings Article

Examining Gender Bias in Languages with Grammatical Gender

TL;DR: Experiments on a modified Word Embedding Association Test (WEAT), word similarity, word translation, and word pair translation tasks show that the proposed approaches can effectively reduce gender bias while preserving the utility of the original embeddings.
Proceedings Article

Learning to Discriminate Perturbations for Blocking Adversarial Attacks in Text Classification

TL;DR: A novel framework, learning to discriminate perturbations (DISP), identifies and adjusts malicious perturbations, thereby blocking adversarial attacks on text classification models; experiments show the robustness of DISP across different situations.
Proceedings Article

BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation.

TL;DR: The Bias in Open-Ended Language Generation Dataset (BOLD) is a large-scale dataset of 23,679 English text generation prompts for bias benchmarking across five domains: profession, gender, race, religion, and political ideology.
Posted Content

Towards Controllable Biases in Language Generation

TL;DR: This paper develops a method to induce societal biases in generated text when input prompts contain mentions of specific demographic groups, and analyzes two scenarios: 1) inducing negative biases for one demographic and positive biases for another, and 2) equalizing biases between demographics.
Posted Content

On Difficulties of Cross-Lingual Transfer with Order Differences: A Case Study on Dependency Parsing

TL;DR: This work investigates cross-lingual transfer and posits that an order-agnostic model will perform better when transferring to distant foreign languages; experiments show that RNN-based architectures transfer well to languages close to English, while self-attentive models have better overall cross-lingual transferability and perform especially well on distant languages.