
Kai-Wei Chang

Researcher at University of California, Los Angeles

Publications: 262
Citations: 23031

Kai-Wei Chang is an academic researcher at the University of California, Los Angeles. His research spans topics including computer science and word embeddings. He has an h-index of 42 and has co-authored 183 publications receiving 17271 citations. His previous affiliations include Boston University and Amazon.com.

Papers
Proceedings Article

Iterative Scaling and Coordinate Descent Methods for Maximum Entropy

TL;DR: This paper presents a general and unified framework for iterative scaling (IS) methods, connects IS to coordinate descent (CD) methods through this framework, and develops a CD method for Maxent.
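For context, here is a minimal sketch of the L2-regularized conditional Maxent objective that IS and CD methods of this kind minimize; the notation and the Gaussian-prior regularizer are taken from the standard formulation and are assumptions, not necessarily the paper's exact setup:

```latex
% L2-regularized conditional maximum entropy objective (standard form; notation assumed)
\min_{\mathbf{w}} \; L(\mathbf{w}) =
    \sum_{x} \tilde{P}(x)\, \log Z_{\mathbf{w}}(x)
    \;-\; \sum_{x,y} \tilde{P}(x,y)\, \mathbf{w}^{\top} \mathbf{f}(x,y)
    \;+\; \frac{1}{2\sigma^{2}} \lVert \mathbf{w} \rVert^{2},
\qquad
Z_{\mathbf{w}}(x) = \sum_{y} \exp\!\big(\mathbf{w}^{\top} \mathbf{f}(x,y)\big).
```

A coordinate descent method minimizes this objective by updating one weight at a time while holding the others fixed, whereas iterative scaling methods derive their updates from an upper bound on the change in the objective.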
Proceedings Article

Gender Bias in Multilingual Embeddings and Cross-Lingual Transfer

TL;DR: This article studies gender bias in multilingual embeddings and how it affects transfer learning for NLP applications, proposes several ways to quantify bias in multilingual representations from both intrinsic and extrinsic perspectives, and provides recommendations for using multilingual word representations in downstream tasks.
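As a hedged illustration of the intrinsic side of such bias quantification, the sketch below scores embeddings by projecting gender-neutral words onto an estimated gender direction; the word lists, the `emb` lookup, and this particular score are illustrative assumptions, not the paper's exact metric:

```python
import numpy as np

def gender_direction(emb, pairs=(("she", "he"), ("woman", "man"))):
    """Estimate a gender direction from differences of gendered word pairs."""
    diffs = [emb[f] - emb[m] for f, m in pairs]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)

def intrinsic_bias_score(emb, neutral_words, pairs=(("she", "he"), ("woman", "man"))):
    """Mean absolute cosine similarity between gender-neutral words and the gender direction."""
    d = gender_direction(emb, pairs)
    sims = [abs(np.dot(emb[w], d) / np.linalg.norm(emb[w])) for w in neutral_words]
    return float(np.mean(sims))

# Toy usage: `emb` maps words to vectors, e.g. loaded from aligned multilingual embeddings.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=300) for w in ["she", "he", "woman", "man", "doctor", "nurse", "engineer"]}
print(intrinsic_bias_score(emb, ["doctor", "nurse", "engineer"]))
```

Comparing such a score before and after cross-lingual alignment is one concrete way to observe how bias transfers between languages.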
Proceedings Article

Generating Syntactically Controlled Paraphrases without Using Annotated Parallel Pairs

TL;DR: This article proposes the syntactically controlled paraphrase generator (SynPG), an encoder-decoder model that learns to disentangle the semantics and the syntax of a sentence from a collection of unannotated texts.
Journal Article

Mitigating Gender Bias in Distilled Language Models via Counterfactual Role Reversal

TL;DR: This paper presents a novel approach to mitigating gender disparity in text generation by learning a fair model during knowledge distillation, and proposes two modifications based on counterfactual role reversal: modifying teacher probabilities and augmenting the training set.
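As a hedged sketch of the training-set augmentation side only (the teacher-probability modification is not shown), the snippet below produces gender-reversed counterfactual copies of training sentences; the swap lexicon, tokenization, and helper names are illustrative assumptions rather than the paper's exact procedure:

```python
import re

# Hypothetical bidirectional swap lexicon; the paper's actual word lists are assumptions here.
GENDER_SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his", "him": "her",
                "man": "woman", "woman": "man", "father": "mother", "mother": "father"}

def counterfactual_role_reversal(sentence: str) -> str:
    """Swap gendered words to produce a counterfactual copy of a sentence."""
    def swap(match):
        word = match.group(0)
        lower = word.lower()
        if lower not in GENDER_SWAPS:
            return word
        repl = GENDER_SWAPS[lower]
        return repl.capitalize() if word[0].isupper() else repl
    return re.sub(r"\b\w+\b", swap, sentence)

def augment(corpus):
    """Pair each sentence with its gender-reversed counterfactual for the distillation data."""
    for s in corpus:
        yield s
        yield counterfactual_role_reversal(s)

print(list(augment(["He is a doctor and his mother is a nurse."])))
```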
Posted Content

Learning to Search for Dependencies

TL;DR: A dependency parser can be built using a credit-assignment compiler, which removes the burden of low-level machine learning details from the parser implementation while avoiding downsides such as randomization, extra feature requirements, and custom learning algorithms.