
Qiwei Ye

Researcher at Microsoft

Publications -  18
Citations -  5210

Qiwei Ye is an academic researcher from Microsoft. The author has contributed to research on topics including reinforcement learning and tree data structures, has an h-index of 6, and has co-authored 16 publications receiving 2,585 citations. Previous affiliations of Qiwei Ye include Shanghai Jiao Tong University.

Papers
Proceedings Article

LightGBM: a highly efficient gradient boosting decision tree

TL;DR: It is proved that, since data instances with larger gradients play a more important role in the computation of information gain, GOSS (Gradient-based One-Side Sampling) can obtain a quite accurate estimate of the information gain from a much smaller data sample; the resulting system is called LightGBM.
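The sampling step the TL;DR describes is simple enough to sketch. Below is a minimal NumPy illustration of Gradient-based One-Side Sampling, not LightGBM's actual implementation; the function name goss_sample and the default ratios a and b are chosen for illustration.

```python
import numpy as np

def goss_sample(gradients, a=0.2, b=0.1, rng=None):
    """Gradient-based One-Side Sampling (GOSS), a minimal sketch.

    Keeps the top a*100% of instances by absolute gradient, randomly
    samples b*100% of the remainder, and up-weights the sampled
    small-gradient instances by (1 - a) / b so the information-gain
    estimate stays approximately unbiased.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(gradients)
    top_n = int(a * n)
    rand_n = int(b * n)

    order = np.argsort(-np.abs(gradients))   # sort by |gradient|, descending
    top_idx = order[:top_n]                  # always keep large-gradient instances
    rest_idx = order[top_n:]
    sampled_idx = rng.choice(rest_idx, size=rand_n, replace=False)

    idx = np.concatenate([top_idx, sampled_idx])
    weights = np.ones(len(idx))
    weights[top_n:] = (1.0 - a) / b          # compensate for under-sampling
    return idx, weights
```

In the LightGBM library itself, these two ratios correspond to the top_rate and other_rate parameters of the GOSS boosting mode.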
Posted Content

Suphx: Mastering Mahjong with Deep Reinforcement Learning

TL;DR: An AI for Mahjong named Suphx is designed based on deep reinforcement learning with several newly introduced techniques, including global reward prediction, oracle guiding, and run-time policy adaptation; it is the first time a computer program has outperformed most top human players in Mahjong.
Proceedings Article

A Communication-Efficient Parallel Algorithm for Decision Tree

TL;DR: Parallel Voting Decision Tree (PV-Tree) partitions the training data onto a number of machines and performs both local voting and global voting in each iteration: each machine selects its local top-k attributes, the indices of these top attributes are aggregated by a server, and the globally top-2k attributes are determined by majority voting among the local candidates.
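The global voting step described above amounts to a majority vote over attribute indices. A minimal sketch, with the function name and input format assumed for illustration:

```python
from collections import Counter

def pv_tree_global_attributes(local_top_k_lists, k):
    """Global voting step of PV-Tree, a minimal sketch.

    Each machine submits the indices of its local top-k attributes
    (ranked by information gain on its data partition); the server
    returns the 2k attributes that received the most votes.
    """
    votes = Counter()
    for local_top_k in local_top_k_lists:
        votes.update(local_top_k)
    return [attr for attr, _ in votes.most_common(2 * k)]

# Example: three machines each vote for their local top-2 attributes.
print(pv_tree_global_attributes([[3, 7], [3, 2], [5, 3]], k=2))
```

In the full algorithm, histograms are then aggregated only for these 2k attributes to determine the globally best split, which is what keeps the communication cost low and independent of the total number of attributes.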
Posted Content

A Communication-Efficient Parallel Algorithm for Decision Tree

TL;DR: Experiments on real-world datasets show that PV-Tree significantly outperforms existing parallel decision tree algorithms in the trade-off between accuracy and efficiency.
Proceedings Article

G-SGD: Optimizing ReLU Neural Networks in its Positively Scale-Invariant Space

TL;DR: A formal study of the positive scaling operators, which form a transformation group denoted as G; it is proved that the value of a path in the neural network is invariant to positive scaling and that the value vector of all paths is sufficient to represent the network under mild conditions.
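The invariance claim can be checked numerically: scaling a hidden ReLU unit's incoming weights by c > 0 and its outgoing weights by 1/c changes individual weights but leaves both the network function and every path value (the product of weights along an input-to-output path) unchanged. A minimal sketch, with the toy network shapes chosen for illustration:

```python
import numpy as np

def relu_net(x, W1, W2):
    """A one-hidden-layer ReLU network: W2 @ relu(W1 @ x)."""
    return W2 @ np.maximum(W1 @ x, 0.0)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input -> hidden
W2 = rng.normal(size=(2, 4))   # hidden -> output
x = rng.normal(size=3)

c = 2.5                        # positive scaling of hidden unit 0
W1s, W2s = W1.copy(), W2.copy()
W1s[0, :] *= c                 # scale incoming weights of unit 0
W2s[:, 0] /= c                 # inversely scale its outgoing weights

# The network function is unchanged (ReLU is positively homogeneous).
assert np.allclose(relu_net(x, W1, W2), relu_net(x, W1s, W2s))

# A path value is also unchanged, e.g. input 1 -> unit 0 -> output 0:
assert np.isclose(W1[0, 1] * W2[0, 0], W1s[0, 1] * W2s[0, 0])
```

This is why optimizing in the path-value (positively scale-invariant) space, as G-SGD proposes, removes a redundancy that weight-space SGD cannot see.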