Li Wei
Researcher at Google
Publications - 10
Citations - 738
Li Wei is an academic researcher at Google whose work focuses on computer science and recommender systems. The author has an h-index of 4 and has co-authored 6 publications receiving 378 citations.
Papers
Proceedings ArticleDOI
Fairness in Recommendation Ranking through Pairwise Comparisons
Alex Beutel,Jilin Chen,Tulsee Doshi,Hai Qian,Li Wei,Yi Wu,Lukasz Andrzej Heldt,Zhe Zhao,Lichan Hong,Ed H. Chi,Cristos Goodrow +10 more
TL;DR: In this paper, the authors propose a set of novel metrics for evaluating algorithmic fairness concerns in recommender systems, along with a new regularizer that encourages improving these metrics during model training and thus improves fairness in the resulting rankings.
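The pairwise idea above can be sketched in a few lines: for pairs where one item was engaged with and one was not, measure how often the model ranks the engaged item higher, broken out by a group attribute. This is a minimal illustrative sketch, not the paper's exact metric; the function name, the toy data, and the binary group attribute are all assumptions.

```python
def pairwise_accuracy(scores, engaged, groups, group):
    """Fraction of (engaged, non-engaged) pairs ranked correctly,
    restricted to pairs whose engaged item belongs to `group`.
    Illustrative sketch, not the paper's exact definition."""
    correct = total = 0
    for s_i, e_i, g_i in zip(scores, engaged, groups):
        if not e_i or g_i != group:
            continue
        for s_j, e_j in zip(scores, engaged):
            if e_j:
                continue
            total += 1
            if s_i > s_j:
                correct += 1
    return correct / total if total else float("nan")

scores  = [0.9, 0.2, 0.8, 0.1]   # model scores for four items (toy data)
engaged = [1, 0, 1, 0]           # which items the user engaged with
groups  = ["a", "a", "b", "b"]   # hypothetical item group attribute

acc_a = pairwise_accuracy(scores, engaged, groups, "a")
acc_b = pairwise_accuracy(scores, engaged, groups, "b")
gap = abs(acc_a - acc_b)         # inter-group pairwise gap
```

A regularizer in the spirit of the paper would penalize this gap (or a differentiable surrogate of it) during training.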
Proceedings ArticleDOI
Recommending what video to watch next: a multitask ranking system
Zhe Zhao,Lichan Hong,Li Wei,Jilin Chen,Aniruddh Nath,Shawn Andrews,Aditee Kumthekar,Maheswaran Sathiamoorthy,Xinyang Yi,Ed H. Chi +9 more
TL;DR: This paper introduces a large-scale multi-objective ranking system for recommending what video to watch next on an industrial video-sharing platform, and explores a variety of soft-parameter-sharing techniques, such as Multi-gate Mixture-of-Experts, to efficiently optimize for multiple ranking objectives.
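A minimal forward pass for the Multi-gate Mixture-of-Experts idea: each task has its own softmax gate that mixes a set of shared experts. This is a NumPy sketch under illustrative dimensions, not the production system's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, n_experts, n_tasks = 8, 4, 3, 2   # illustrative sizes

W_experts = rng.normal(size=(n_experts, d_in, d_hidden))  # shared experts
W_gates = rng.normal(size=(n_tasks, d_in, n_experts))     # one gate per task

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mmoe(x):
    # Each expert: ReLU(x @ W_k); stacked shape (n_experts, d_hidden).
    expert_out = np.maximum(0.0, np.einsum("i,kij->kj", x, W_experts))
    outs = []
    for t in range(n_tasks):
        g = softmax(x @ W_gates[t])   # task-specific mixture weights over experts
        outs.append(g @ expert_out)   # task-specific mixed representation
    return outs

x = rng.normal(size=d_in)
task_outputs = mmoe(x)   # one hidden representation per task
```

Each task then applies its own head on its mixed representation, so tasks share experts but weight them differently.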
Proceedings ArticleDOI
Sampling-bias-corrected neural modeling for large corpus item recommendations
Xinyang Yi,Ji Yang,Lichan Hong,Derek Zhiyuan Cheng,Lukasz Andrzej Heldt,Aditee Kumthekar,Zhe Zhao,Li Wei,Ed H. Chi +8 more
TL;DR: A novel algorithm for estimating item frequency from streaming data that works without a fixed item vocabulary, produces unbiased estimates, and adapts to changes in the item distribution.
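One way to sketch streaming frequency estimation of this flavor: per hash bucket, track the last global step an item was seen and a moving average of the gap between sightings; the estimated sampling probability is the reciprocal of that gap. The array size, learning rate, and integer item IDs below are illustrative assumptions, not the paper's exact settings.

```python
H = 1024            # hash-array size (illustrative)
alpha = 0.1         # moving-average learning rate (illustrative)
last_seen = [0.0] * H
avg_gap = [1.0] * H

def observe(item, step):
    # Items are integer IDs here for determinism; a real system would hash.
    h = item % H
    if last_seen[h] > 0:
        avg_gap[h] = (1 - alpha) * avg_gap[h] + alpha * (step - last_seen[h])
    last_seen[h] = step

def est_prob(item):
    # Estimated per-step sampling probability of the item.
    return 1.0 / avg_gap[item % H]

for step in range(1, 101):
    observe(7, step)          # a popular item, seen every step
    if step % 10 == 0:
        observe(13, step)     # a rare item, seen every 10th step
```

The estimate can then correct in-batch sampled-softmax logits, e.g. `logit - log(est_prob(item))`, so frequently sampled items are not unfairly penalized.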
Posted Content
Fairness in Recommendation Ranking through Pairwise Comparisons
Alex Beutel,Jilin Chen,Tulsee Doshi,Hai Qian,Li Wei,Yi Wu,Lukasz Andrzej Heldt,Zhe Zhao,Lichan Hong,Ed H. Chi,Cristos Goodrow +10 more
TL;DR: This paper offers a set of novel metrics for evaluating algorithmic fairness concerns in recommender systems and shows how measuring fairness based on pairwise comparisons from randomized experiments provides a tractable means to reason about fairness in rankings from recommender systems.
Proceedings ArticleDOI
Measuring Model Fairness under Noisy Covariates: A Theoretical Perspective
Flavien Prost,Pranjal Awasthi,Nick Blumm,Aditee Kumthekar,Trevor Potter,Li Wei,Xuezhi Wang,Ed H. Chi,Jilin Chen,Alex Beutel +9 more
TL;DR: In this paper, the authors study the problem of measuring the fairness of a machine learning model under noisy information, focusing on group fairness metrics, and present a theoretical analysis that aims to characterize weaker conditions under which accurate fairness evaluation is possible.
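A tiny numeric illustration of why this matters (not the paper's analysis): when group membership comes from a noisy proxy, misassigned individuals mix the groups, and a measured fairness gap can shrink toward zero even though the true gap is large. The data and the specific metric (positive-prediction rate per group) are assumptions for the demo.

```python
def pos_rate(preds, groups, g):
    """Positive-prediction rate within group g."""
    sel = [p for p, gi in zip(preds, groups) if gi == g]
    return sum(sel) / len(sel)

preds   = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0,
           1, 1, 0, 0, 0, 0, 0, 0, 0, 0]   # toy model predictions
g_true  = [0] * 10 + [1] * 10              # true group membership
# Noisy proxy: four individuals are assigned the wrong group.
g_proxy = [0, 0, 0, 0, 0, 0, 1, 1, 0, 0,
           1, 1, 1, 1, 1, 1, 1, 1, 0, 0]

true_gap  = abs(pos_rate(preds, g_true, 0) - pos_rate(preds, g_true, 1))
noisy_gap = abs(pos_rate(preds, g_proxy, 0) - pos_rate(preds, g_proxy, 1))
```

Here the true gap is 0.6 but the gap measured through the noisy proxy is only 0.2, which is the kind of underestimation the paper's theoretical conditions are meant to characterize.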