
Kai-Wei Chang

Researcher at University of California, Los Angeles

Publications - 262
Citations - 23,031

Kai-Wei Chang is an academic researcher at the University of California, Los Angeles. He has contributed to research topics including computer science and word embeddings. He has an h-index of 42 and has co-authored 183 publications receiving 17,271 citations. Previous affiliations of Kai-Wei Chang include Boston University and Amazon.com.

Papers
Posted Content

Learning Bilingual Word Embeddings Using Lexical Definitions

TL;DR: This paper proposed BilLex, which leverages publicly available lexical definitions for bilingual word embedding learning without requiring predefined seed lexicons. BilLex comprises a novel word pairing strategy to automatically identify and propagate precise, fine-grained word alignments from lexical definitions.
Journal ArticleDOI

MiniSUPERB: Lightweight Benchmark for Self-supervised Speech Models

TL;DR: MiniSUPERB is a lightweight benchmark that efficiently evaluates self-supervised speech models, producing results comparable to SUPERB while greatly reducing the computational cost; it achieves 0.954 and 0.982 Spearman's rank correlation with SUPERB Paper and SUPERB Challenge, respectively.
Posted Content

Robust Text Classifier on Test-Time Budgets

TL;DR: A generic framework is designed for learning a robust text classification model that achieves high accuracy under different test-time selection budgets, and a data aggregation method is proposed for training the classifier, allowing it to achieve competitive performance on fractured sentences.
Posted Content

Multi-task Learning for Universal Sentence Representations: What Syntactic and Semantic Information is Captured?

TL;DR: Quantitative analysis of the syntactic and semantic information captured by the sentence embeddings shows that multi-task learning captures syntactic information better, while single-task learning summarizes semantic information more coherently.
Journal ArticleDOI

Red Teaming Language Model Detectors with Language Models

TL;DR: In this article, the authors systematically test the reliability of existing machine-generated text detection algorithms by designing two types of attack strategies to fool the detectors: replacing words with synonyms based on the context, and altering the writing style of the generated text.
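The first attack strategy can be illustrated with a toy word-substitution sketch. The synonym table and sentence below are hypothetical; the paper's actual attack selects context-aware replacements with a language model rather than a fixed dictionary:

```python
# Hypothetical synonym table; a real attack would query a language model
# for replacements that fit the surrounding context.
SYNONYMS = {"quick": "swift", "happy": "glad", "big": "large"}


def synonym_attack(text, table=SYNONYMS):
    """Replace each word found in the table with its synonym.

    The perturbed text keeps roughly the same meaning for a human reader
    while shifting the surface statistics a detector relies on.
    """
    out = []
    for word in text.split():
        out.append(table.get(word.lower(), word))
    return " ".join(out)


print(synonym_attack("the quick fox is happy"))  # -> "the swift fox is glad"
```

Even this crude substitution changes token-level features; the paper's finding is that such perturbations are often enough to flip a detector's decision.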