Yuting Ning
Researcher at University of Science and Technology of China
Publications - 8
Citations - 43
Yuting Ning is an academic researcher from the University of Science and Technology of China. The author has contributed to research in the topics of Computer science and User modeling, has an h-index of 1, and has co-authored 1 publication receiving 2 citations.
Papers
Proceedings ArticleDOI
Hierarchical Personalized Federated Learning for User Modeling
TL;DR: This paper proposes a novel client-server architecture framework, Hierarchical Personalized Federated Learning (HPFL), to serve federated learning for user modeling with inconsistent clients.
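The client-server scheme that HPFL builds on can be illustrated with the basic server-side aggregation step from standard federated learning. This is an illustrative sketch of plain weighted averaging (the classic FedAvg step), not the paper's hierarchical personalized variant; the function name and flat parameter representation are assumptions for the example.

```python
def federated_average(client_weights, client_sizes):
    """Server-side aggregation: average each client's model parameters,
    weighted by that client's local dataset size. HPFL layers a
    hierarchical, personalized strategy on top of this basic idea."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients with equal data sizes: the server returns the plain mean.
merged = federated_average([[1.0, 2.0], [3.0, 4.0]], [1, 1])
```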
Journal ArticleDOI
High-efficiency control of pesticide and heavy metal combined pollution in paddy soil using biochar/g-C3N4 photoresponsive soil remediation agent
Hantong Qie,Meng Ren,Chang You,Xuedan Cui,Xiao Tan,Yuting Ning,Meng Liu,Daibing Hou,Aijun Lin,Junpu Cui +9 more
TL;DR: In this paper, an alkali-modified biochar (BCNaOH)/graphitic carbon nitride (g-C3N4) photoresponsive soil remediation agent was synthesized to control combined pesticide and heavy metal pollution in paddy fields.
Journal ArticleDOI
A Novel Approach for Auto-Formulation of Optimization Problems
Yuting Ning,Jia-Yin Liu,Longhu Qin,Tong Xiao,Shan Xue,Zhenya Huang,Qi Li,Enhong Chen,Jinze Wu +8 more
TL;DR: In the NL4Opt NeurIPS 2022 competition, the winning team treated subtask 1 as a named entity recognition (NER) problem, with a solution pipeline combining preprocessing methods, adversarial training, post-processing methods, and ensemble learning.
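The ensemble-learning step of such an NER pipeline can be sketched as token-level majority voting over the tag sequences produced by several models. This is a generic illustration under assumed BIO-style tags, not the winning team's actual ensembling method.

```python
from collections import Counter

def ensemble_vote(predictions):
    """Majority vote per token position over BIO tag sequences
    emitted by multiple NER models. `predictions` is a list of
    equal-length tag lists, one per model."""
    voted = []
    for tags in zip(*predictions):
        label, _ = Counter(tags).most_common(1)[0]
        voted.append(label)
    return voted

# Three models vote on a two-token sentence.
tags = ensemble_vote([["B-VAR", "O"], ["B-VAR", "B-VAR"], ["O", "O"]])
```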
Journal ArticleDOI
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective
Yan Zhuang,Qi Li,Yuting Ning,Wei Huang,Rui Lv,Zhenya Huang,Zheng Zhang,Qingyang Mao,Shijin Wang,Enhong Chen +9 more
TL;DR: This paper proposes an adaptive testing framework for evaluating large language models (LLMs) that dynamically adjusts the characteristics of the test questions, such as difficulty, based on the model's performance.
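The core loop of adaptive testing can be sketched with a simple 1PL (Rasch) item response theory model: repeatedly pick the question whose difficulty is closest to the current ability estimate, observe correctness, and update the estimate. This is a minimal sketch of the general technique, assuming a Rasch model and a gradient-style ability update; it is not the paper's implementation, and `answer_fn` is a hypothetical stand-in for querying the model under test.

```python
import math

def rasch_prob(ability, difficulty):
    """Probability of a correct answer under the 1PL (Rasch) IRT model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def adaptive_test(answer_fn, difficulties, steps=10, lr=0.5):
    """Estimate an examinee's ability adaptively: always ask the
    remaining question whose difficulty is closest to the current
    ability estimate, then nudge the estimate toward the observed
    outcome (a gradient step on the Rasch log-likelihood)."""
    ability = 0.0
    remaining = list(difficulties)
    for _ in range(min(steps, len(remaining))):
        d = min(remaining, key=lambda x: abs(x - ability))
        remaining.remove(d)
        correct = answer_fn(d)
        ability += lr * ((1.0 if correct else 0.0) - rasch_prob(ability, d))
    return ability
```

A model that answers every selected question correctly ends with a higher ability estimate than one that answers none, which is the behavior the update rule is meant to produce.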
Journal ArticleDOI
Towards a Holistic Understanding of Mathematical Questions with Contrastive Pre-training
TL;DR: The authors propose a contrastive pre-training approach for mathematical question representations, namely QuesCo, which attempts to bring questions with more similar purposes closer by adding two-level question augmentations, content-level and structure-level, which generate literally diverse question pairs with similar purposes.