scispace - formally typeset
Dianqi Li

Researcher at University of Washington

Publications -  19
Citations -  2158

Dianqi Li is an academic researcher from the University of Washington. The author has contributed to research in the topics of language modeling and ranking (computer programming). The author has an h-index of 9, and has co-authored 17 publications receiving 1611 citations. Previous affiliations of Dianqi Li include the University of Science and Technology of China and Microsoft.

Papers
Journal ArticleDOI

Partially oxidized atomic cobalt layers for carbon dioxide electroreduction to liquid fuel

TL;DR: This paper evaluates the roles of two different catalytic sites, pure cobalt metal and coexisting domains of cobalt metal and cobalt oxide, showing that surface cobalt atoms of the atomically thin layers have higher intrinsic activity and selectivity towards formate production at lower overpotentials.
Proceedings Article

Adversarial ranking for language generation

TL;DR: This paper proposes a novel generative adversarial network, RankGAN, for generating high-quality language descriptions. It views a set of data samples collectively and evaluates their quality through relative ranking scores, which enables better quality assessment and, in turn, a better generator.
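The relative-ranking idea behind RankGAN can be illustrated with a short sketch: instead of a binary real/fake score, each candidate sentence is scored by its similarity to a reference group, and a softmax turns those similarities into relative ranking scores. This is a minimal, self-contained illustration; the function names, the cosine-similarity measure, and the averaging over the reference group are simplifying assumptions, not the paper's exact formulation (which operates on learned sentence embeddings).

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def relative_rank_scores(candidates, reference_group, gamma=1.0):
    """Score each candidate relative to a reference group (illustrative).

    Each candidate's raw score is its mean similarity to the reference
    sentences; a softmax (temperature gamma) converts raw scores into
    relative ranking scores that sum to 1.
    """
    raw = []
    for c in candidates:
        sims = [cosine(c, r) for r in reference_group]
        raw.append(sum(sims) / len(sims))
    exps = [math.exp(gamma * s) for s in raw]
    z = sum(exps)
    return [e / z for e in exps]
```

In this toy setup, a candidate closer to the reference group receives a higher relative score, which is the signal the ranker would pass back to the generator.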
Proceedings ArticleDOI

Contextualized Perturbation for Textual Adversarial Attack

TL;DR: CLARE is a ContextuaLized AdversaRial Example generation model that produces fluent and grammatical outputs through a mask-then-infill procedure. It can flexibly combine and apply perturbations at any position in the input, and is thus able to attack the victim model more effectively with fewer edits.
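The mask-then-infill procedure described above can be sketched as a simple loop: mask a token, ask a masked language model for contextual replacements, and keep the first replacement that changes the victim model's prediction. The sketch below is a hypothetical skeleton, not CLARE's implementation: `propose_fn` stands in for a masked LM (e.g. a fill-mask model) and `victim_fn` for the classifier under attack, and only the "replace" perturbation is shown (CLARE also supports insert and merge).

```python
def mask_then_infill(tokens, position, propose_fn, victim_fn):
    """One replace-style perturbation at a single position (sketch).

    tokens:     list of input tokens
    position:   index of the token to perturb
    propose_fn: (masked tokens, position) -> candidate replacement tokens
                (stands in for a masked language model)
    victim_fn:  token list -> predicted label (the model under attack)

    Returns a perturbed token list whose prediction differs from the
    original, or None if no candidate flips the prediction.
    """
    original_label = victim_fn(tokens)
    masked = tokens[:position] + ["[MASK]"] + tokens[position + 1:]
    for cand in propose_fn(masked, position):
        trial = tokens[:position] + [cand] + tokens[position + 1:]
        if victim_fn(trial) != original_label:
            return trial
    return None
```

Because the candidates come from a language model conditioned on the full masked context, the substitutions tend to stay fluent and grammatical, which is the property the abstract emphasizes.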
Posted Content

Adversarial Ranking for Language Generation.

TL;DR: RankGAN, as discussed by the authors, proposes to analyze and rank a collection of human-written and machine-written sentences against a reference group, helping the discriminator make better assessments and, in turn, learn a better generator.
Posted Content

Generating Diverse and Accurate Visual Captions by Comparative Adversarial Learning.

TL;DR: A novel conditional generative adversarial network is proposed for generating diverse captions across images; it effectively exploits the inherent characteristics of human language and generates more discriminative captions.