Weihua Hu
Researcher at Stanford University
Publications - 35
Citations - 7600
Weihua Hu is an academic researcher at Stanford University whose work focuses on supervised learning and deep learning. He has an h-index of 21 and has co-authored 34 publications receiving 4,636 citations. His previous affiliations include the University of Tokyo.
Papers
Proceedings Article
How Powerful are Graph Neural Networks?
TL;DR: This paper analyzes the expressive power of GNNs to capture different graph structures and proposes a simple architecture for graph representation learning. The results characterize the discriminative power of popular GNN variants and show that they cannot learn to distinguish certain simple graph structures.
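The simple architecture proposed in this paper is the Graph Isomorphism Network (GIN), whose node update sums neighbour features and mixes in the node's own features weighted by (1 + ε) before applying an MLP. A minimal NumPy sketch of one such aggregation step, assuming a dense adjacency matrix; the `mlp` argument here is a stand-in (a plain ReLU in the example), not a trained network:

```python
import numpy as np

def gin_layer(H, A, eps, mlp):
    """One GIN aggregation step.

    H   -- (num_nodes, dim) node feature matrix
    A   -- (num_nodes, num_nodes) adjacency matrix (no self-loops)
    eps -- scalar weighting the node's own features
    mlp -- callable applied to the aggregated features
    """
    # Sum neighbour features (A @ H), add own features scaled by (1 + eps),
    # then transform with the MLP.
    return mlp((1 + eps) * H + A @ H)

# Toy example: a 3-node path graph 0-1-2 with scalar features.
H = np.array([[1.0], [2.0], [3.0]])
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
out = gin_layer(H, A, eps=0.0, mlp=lambda x: np.maximum(x, 0.0))
```

The sum aggregator (rather than mean or max) is what the paper identifies as necessary for maximal discriminative power among message-passing GNNs.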
Posted Content
Open Graph Benchmark: Datasets for Machine Learning on Graphs
Weihua Hu,Matthias Fey,Marinka Zitnik,Yuxiao Dong,Hongyu Ren,Bowen Liu,Michele Catasta,Jure Leskovec +7 more
TL;DR: The OGB datasets are large-scale, encompass multiple important graph ML tasks, and cover domains ranging from social and information networks to biological networks, molecular graphs, source-code ASTs, and knowledge graphs, indicating fruitful opportunities for future research.
Posted Content
Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels
TL;DR: Co-teaching trains two deep neural networks simultaneously and lets them teach each other on every mini-batch: first, each network feeds forward all the data and selects samples with possibly clean labels; second, the two networks communicate which samples in the mini-batch should be used for training; finally, each network back-propagates on the data selected by its peer and updates itself.
Proceedings ArticleDOI
Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels
TL;DR: Empirical results on noisy versions of MNIST, CIFAR-10, and CIFAR-100 demonstrate that Co-teaching trains substantially more robust deep models than state-of-the-art methods.
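The mini-batch exchange at the heart of Co-teaching is a small-loss selection step: each network ranks the batch by its own loss, keeps the fraction it believes has clean labels, and hands those indices to its peer for the update. A minimal NumPy sketch, assuming per-sample losses are already computed; the fixed `keep_ratio` is a simplification of the paper's schedule:

```python
import numpy as np

def coteach_select(loss_a, loss_b, keep_ratio):
    """Small-loss sample exchange for one mini-batch of Co-teaching.

    loss_a, loss_b -- per-sample losses from networks A and B
    keep_ratio     -- fraction of the batch treated as likely clean

    Returns the index sets each network should train on:
    A trains on B's picks, B trains on A's picks.
    """
    k = int(keep_ratio * len(loss_a))
    idx_a = np.argsort(loss_a)[:k]  # samples A considers likely clean
    idx_b = np.argsort(loss_b)[:k]  # samples B considers likely clean
    # Cross-update: each network learns from its peer's selection.
    return idx_b, idx_a

# Toy example: a 4-sample batch where samples 1 and 3 look noisy to A.
loss_a = np.array([0.1, 2.0, 0.2, 3.0])
loss_b = np.array([1.5, 0.1, 0.3, 2.5])
batch_for_a, batch_for_b = coteach_select(loss_a, loss_b, keep_ratio=0.5)
```

Because the two networks start from different initializations, they make different errors; exchanging selections keeps one network's mistaken memorization of noisy labels from reinforcing itself.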
Posted Content
WILDS: A Benchmark of in-the-Wild Distribution Shifts
Pang Wei Koh,Shiori Sagawa,Henrik Marklund,Sang Michael Xie,Marvin Zhang,Akshay Balsubramani,Weihua Hu,Michihiro Yasunaga,Richard Lanas Phillips,Irena Gao,Tony Lee,Etienne David,Ian Stavness,Wei Guo,Berton A. Earnshaw,Imran S. Haque,Sara Beery,Jure Leskovec,Anshul Kundaje,Emma Pierson,Sergey Levine,Chelsea Finn,Percy Liang +22 more
TL;DR: WILDS is a benchmark of in-the-wild distribution shifts spanning diverse data modalities and applications; it aims to encourage the development of general-purpose methods that are anchored to real-world distribution shifts and that work well across different applications and problem settings.