Qi Ju
Researcher at Tencent
Publications - 17
Citations - 962
Qi Ju is an academic researcher at Tencent. The author has contributed to research on topics including computer science and closed captioning, has an h-index of 7, and has co-authored 12 publications receiving 423 citations.
Papers
Journal ArticleDOI
K-BERT: Enabling Language Representation with Knowledge Graph
TL;DR: This work proposes a knowledge-enabled language representation model (K-BERT) built on knowledge graphs (KGs), in which triples are injected into sentences as domain knowledge; it significantly outperforms BERT and shows promising results on twelve NLP tasks.
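The injection idea can be illustrated with a minimal sketch: entities found in a sentence are expanded with (relation, object) pairs from a toy knowledge graph. The KG contents, entity names, and flat injection format below are illustrative assumptions, not K-BERT's exact sentence-tree and soft-position implementation.

```python
# Toy knowledge graph: entity -> list of (relation, object) triples.
# Entirely made-up example data for illustration.
TOY_KG = {
    "Beijing": [("capital_of", "China")],
    "Apple": [("is_a", "company")],
}

def inject_triples(tokens, kg=TOY_KG):
    """Return a token list with KG triples spliced in after matching entities."""
    out = []
    for tok in tokens:
        out.append(tok)
        for relation, obj in kg.get(tok, []):
            # Injected branch of the knowledge-enriched sentence.
            out.extend([relation, obj])
    return out

print(inject_triples(["Tim", "visits", "Beijing"]))
# ['Tim', 'visits', 'Beijing', 'capital_of', 'China']
```

In the actual model, injected tokens keep the original token positions via soft-position embeddings and a visibility matrix so they do not distort the sentence's meaning; this sketch only shows the splicing step.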
Proceedings ArticleDOI
FastBERT: a Self-distilling BERT with Adaptive Inference Time
TL;DR: FastBERT, as discussed by the authors, proposes a self-distillation mechanism at fine-tuning time, enabling greater computational efficiency with minimal loss in performance, and achieves promising results on twelve English and Chinese datasets.
Journal ArticleDOI
Improving Image Captioning with Conditional Generative Adversarial Nets
TL;DR: This article proposed a conditional generative adversarial network (GAN)-based image captioning framework as an extension of the traditional reinforcement learning (RL)-based encoder-decoder architecture, addressing the inconsistent evaluation problem among different objective language metrics. Discriminator networks are designed to automatically and progressively determine whether a generated caption is human-described or machine-generated.
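One way to read this framework is that the discriminator's "human-likeness" score can be blended with an objective language metric (e.g. CIDEr) to form the RL reward, avoiding over-fitting to any single metric. The sketch below shows only that blending idea; the weight `lam` and function names are illustrative assumptions, not the paper's exact formulation.

```python
def mixed_reward(disc_score, metric_score, lam=0.5):
    """Convex combination of a discriminator probability (is this caption
    human-like?) and a language-metric score for the sampled caption.
    Both inputs are assumed to lie in [0, 1]."""
    return lam * disc_score + (1 - lam) * metric_score

# A caption the discriminator finds fairly human-like (0.8) with a
# moderate metric score (0.6) gets a blended reward of 0.7.
print(mixed_reward(0.8, 0.6, lam=0.5))  # 0.7
```

The blended scalar would then serve as the reward in a standard policy-gradient update of the caption generator.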
Proceedings ArticleDOI
UER: An Open-Source Toolkit for Pre-training Models
Zhe Zhao, Hui Chen, Jinbin Zhang, Xin Zhao, Tao Liu, Wei Lu, Xi Chen, Haotang Deng, Qi Ju, Xiaoyong Du +9 more
TL;DR: This work proposes an assemble-on-demand pre-training toolkit, Universal Encoder Representations (UER), which is loosely coupled and encapsulates rich modules. On top of it, the authors built a model zoo containing pre-trained models based on different corpora, encoders, and targets.
Posted Content
FastBERT: a Self-distilling BERT with Adaptive Inference Time
TL;DR: A novel speed-tunable FastBERT with adaptive inference time that can speed up inference by a factor of 1 to 12 relative to BERT, given different speedup thresholds for making a speed-performance tradeoff.
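The adaptive-inference mechanism can be sketched as early exit: a student classifier at each transformer layer produces a distribution, and the sample exits at the first layer whose prediction is confident enough (low entropy) relative to a speed threshold. The entropy criterion matches FastBERT's description; the data and threshold value below are illustrative assumptions.

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def adaptive_inference(layer_outputs, speed_threshold):
    """Return (predicted_class, layers_used).

    layer_outputs: per-layer class distributions from the student
    classifiers, ordered from the first layer upward. The sample exits
    at the first layer whose output entropy falls below the threshold;
    otherwise the final layer's prediction is used.
    """
    for i, probs in enumerate(layer_outputs, start=1):
        if entropy(probs) < speed_threshold:
            return max(range(len(probs)), key=probs.__getitem__), i
    return max(range(len(probs)), key=probs.__getitem__), i

# Layer 1 is uncertain (entropy ~0.69), layer 2 is confident (~0.20),
# so with threshold 0.3 the sample exits after two layers.
print(adaptive_inference([[0.5, 0.5], [0.95, 0.05]], 0.3))  # (0, 2)
```

Raising the threshold lets more samples exit early (faster, slightly less accurate); lowering it forces more samples through deeper layers, which is the speed-performance tradeoff the TL;DR refers to.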