ASGN: An Active Semi-supervised Graph Neural Network for Molecular Property Prediction
Hao Zhongkai, Chengqiang Lu, Zheyuan Hu, Hao Wang, Zhenya Huang, Qi Liu, Enhong Chen, Chee Kong Lee, +7 more
TL;DR: A novel framework called Active Semi-supervised Graph Neural Network (ASGN) is proposed that incorporates both labeled and unlabeled molecules and adopts a teacher-student framework to learn general representations that jointly exploit information from molecular structure and molecular distribution.

Abstract:
Molecular property prediction (e.g., energy) is an essential problem in chemistry and biology. Unfortunately, many supervised learning methods suffer from the scarcity of labeled molecules in the chemical space, since such property labels are generally obtained by Density Functional Theory (DFT) calculations, which are extremely computationally costly. An effective solution is to incorporate unlabeled molecules in a semi-supervised fashion. However, learning semi-supervised representations for large numbers of molecules is challenging, including the joint representation of both molecular essence and structure, and the conflict between representation learning and property learning. Here we propose a novel framework called Active Semi-supervised Graph Neural Network (ASGN) that incorporates both labeled and unlabeled molecules. Specifically, ASGN adopts a teacher-student framework. In the teacher model, we propose a novel semi-supervised learning method to learn general representations that jointly exploit information from molecular structure and molecular distribution. Then, in the student model, we target the property prediction task to resolve the learning-loss conflict. Finally, we propose a novel active learning strategy based on molecular diversity to select informative data throughout framework training. We conduct extensive experiments on several public datasets. Experimental results show the remarkable performance of our ASGN framework.
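The abstract's active learning step selects molecules by diversity but does not spell out the algorithm. One common instantiation of diversity-based selection is greedy k-center over learned embeddings, sketched below in plain Python; the embeddings, distance metric, and seed choice here are illustrative assumptions, not the paper's exact procedure:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def k_center_select(embeddings, k, seed_idx=0):
    """Greedy k-center selection: each new pick maximizes its distance
    to the nearest already-selected point, favoring diverse samples."""
    selected = [seed_idx]
    while len(selected) < k:
        best_idx, best_dist = None, -1.0
        for i in range(len(embeddings)):
            if i in selected:
                continue
            # distance from candidate i to its closest selected point
            d = min(euclidean(embeddings[i], embeddings[j]) for j in selected)
            if d > best_dist:
                best_idx, best_dist = i, d
        selected.append(best_idx)
    return selected

# toy molecular embeddings: two tight clusters plus an outlier
emb = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (0.0, 0.1), (5.0, 0.0)]
picks = k_center_select(emb, 3)  # -> [0, 2, 4]: one point per region
```

The greedy loop is O(n·k) distance evaluations, which is why k-center is a popular batch-mode acquisition strategy when labeling (here, a DFT calculation) dominates the cost.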
Citations
Journal Article
A compact review of molecular property prediction with graph neural networks
Oliver Wieder, Stefan M. Kohlbacher, Mélaine Kuenemann, Arthur Garon, Pierre Ducrot, Thomas Seidel, Thierry Langer, +6 more
TL;DR: This review structures this highly dynamic field of AI research by collecting and classifying 80 GNNs that have been used to predict more than 20 molecular properties using 48 different datasets.
Journal Article
Artificial intelligence in drug discovery: applications and techniques
TL;DR: In this article, a comprehensive review of AI in drug discovery is presented; the field can be reduced to two major tasks, molecular property prediction and molecule generation, and the review also covers common data resources, molecule representations, and benchmark platforms.
Proceedings Article
DualGraph: Improving Semi-supervised Graph Classification via Dual Contrastive Learning
TL;DR: This paper proposes DualGraph, a principled framework that leverages unlabeled graphs more effectively for semi-supervised graph classification and improves model training for each module with a contrastive learning framework that encourages intra-module consistency on unlabeled data.
Journal Article
GHNN: Graph Harmonic Neural Networks for semi-supervised graph-level classification
TL;DR: In this article, a Graph Harmonic Neural Network (GHNN) is proposed that combines the advantages of graph convolutional networks and graph kernels to leverage unlabeled data, thus overcoming label scarcity in semi-supervised scenarios.
Proceedings Article
KGNN: Harnessing Kernel-based Networks for Semi-supervised Graph Classification
TL;DR: This paper proposes the Kernel-based Graph Neural Network (KGNN), which consists of a GNN-based network and a kernel-based network parameterized by a memory network; the two networks are jointly trained by maximizing their agreement on unlabeled graphs via posterior regularization.
References
Journal Article
Self-Consistent Equations Including Exchange and Correlation Effects
Walter Kohn, L. J. Sham, +1 more
TL;DR: In this paper, the Hartree and Hartree-Fock equations are applied to a uniform electron gas, where the exchange and correlation portions of the chemical potential of the gas are used as additional effective potentials.
Journal Article
Dropout: a simple way to prevent neural networks from overfitting
TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
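The technique summarized above randomly zeroes units during training. A minimal sketch of the common "inverted dropout" variant in plain Python (scaling survivors at train time is one standard implementation choice, not necessarily the paper's exact formulation):

```python
import random

def dropout(x, p, training=True, rng=random):
    """Inverted dropout: zero each unit with probability p during
    training and scale survivors by 1/(1-p), so the expected
    activation is unchanged and no scaling is needed at test time."""
    if not training or p == 0.0:
        return list(x)
    keep = 1.0 - p
    return [(v / keep) if rng.random() < keep else 0.0 for v in x]

activations = [1.0] * 8
train_out = dropout(activations, 0.5, rng=random.Random(0))  # mix of 0.0 and 2.0
test_out = dropout(activations, 0.5, training=False)         # identity at test time
```

Because the mask is resampled on every forward pass, each update effectively trains a different thinned sub-network, which is the ensemble-like regularization effect the TL;DR describes.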
Posted Content
Distilling the Knowledge in a Neural Network
TL;DR: This work shows that it can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model and introduces a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse.
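Distillation as summarized above trains the student on the teacher's temperature-softened outputs. A minimal sketch of the soft-target loss, assuming scalar logit lists and the T² scaling from Hinton et al.:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T gives a softer distribution
    that exposes the teacher's relative confidences across classes."""
    m = max(l / T for l in logits)  # subtract max for numerical stability
    exps = [math.exp(l / T - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the teacher's softened distribution and the
    student's, scaled by T^2 so gradients match the hard-label scale."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return -T * T * sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

The loss is minimized when the student's softened distribution matches the teacher's, which is what lets a single compact model absorb an ensemble's "dark knowledge" about near-miss classes.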
Posted Content
Inductive Representation Learning on Large Graphs
TL;DR: GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
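GraphSAGE's inductive step embeds a node by combining its own feature with an aggregate of its neighbors'. A toy single layer with mean aggregation on scalar node features (the scalar weights and graph below are illustrative; the real model uses learned weight matrices, sampled neighborhoods, and multiple layers):

```python
def sage_mean_layer(features, adj, w_self, w_neigh):
    """One GraphSAGE-style layer with mean aggregation: each node
    combines its own (scalar) feature with the mean of its neighbors',
    followed by a ReLU nonlinearity."""
    out = {}
    for v, h in features.items():
        neigh = adj.get(v, [])
        agg = sum(features[u] for u in neigh) / len(neigh) if neigh else 0.0
        out[v] = max(0.0, w_self * h + w_neigh * agg)  # ReLU
    return out

feats = {'a': 1.0, 'b': 2.0, 'c': 3.0}
adj = {'a': ['b', 'c'], 'b': ['a'], 'c': ['a']}
h1 = sage_mean_layer(feats, adj, w_self=1.0, w_neigh=1.0)
# h1['a'] = 1.0 + mean(2.0, 3.0) = 3.5
```

Because the update depends only on local features, the same trained weights can embed nodes of graphs never seen during training, which is what makes the framework inductive.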
Posted Content
Auto-Encoding Variational Bayes
Diederik P. Kingma, Max Welling, +1 more
TL;DR: In this paper, a stochastic variational inference and learning algorithm that scales to large datasets was proposed for directed probabilistic models with intractable posterior distributions.
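The algorithm summarized above is the VAE; its two key devices are the reparameterization trick, which makes sampling differentiable, and a closed-form KL term against the standard normal prior. A minimal sketch, assuming a diagonal Gaussian posterior (as in the paper) and toy list-based vectors:

```python
import math
import random

def reparameterize(mu, log_var, rng=random):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0,1),
    so gradients can flow through mu and log_var to the encoder."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def kl_divergence(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ) per the VAE objective:
    -0.5 * sum(1 + log_var - mu^2 - exp(log_var))."""
    return -0.5 * sum(1 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, log_var))

z = reparameterize([0.0, 0.0], [0.0, 0.0], rng=random.Random(0))
reg = kl_divergence([1.0, -1.0], [0.0, 0.0])  # penalizes drift from the prior
```

The KL term is zero exactly when the posterior matches the prior, which is why it acts as the regularizer balancing reconstruction quality in the evidence lower bound.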