Journal ArticleDOI
Trustworthy Graph Neural Networks: Aspects, Methods and Trends
TLDR
A comprehensive roadmap for building trustworthy GNNs is proposed from the view of the various computing technologies involved, covering six aspects: robustness, explainability, privacy, fairness, accountability, and environmental well-being.
Abstract:
Graph neural networks (GNNs) have emerged as a series of competent graph learning methods for diverse real-world scenarios, ranging from daily applications like recommendation systems and question answering to cutting-edge technologies such as drug discovery in life sciences and n-body simulation in astrophysics. However, task performance is not the only requirement for GNNs. Performance-oriented GNNs have exhibited potential adverse effects, such as vulnerability to adversarial attacks, unexplainable discrimination against disadvantaged groups, and excessive resource consumption in edge computing environments. To avoid these unintentional harms, it is necessary to build competent GNNs characterised by trustworthiness. To this end, we propose a comprehensive roadmap for building trustworthy GNNs from the view of the various computing technologies involved. In this survey, we introduce basic concepts and comprehensively summarise existing efforts for trustworthy GNNs across six aspects: robustness, explainability, privacy, fairness, accountability, and environmental well-being. Additionally, we highlight the intricate cross-aspect relations between these six aspects of trustworthy GNNs. Finally, we present a thorough overview of trending directions for facilitating the research and industrialisation of trustworthy GNNs.
Citations
Proceedings ArticleDOI
Rethinking and Scaling Up Graph Contrastive Learning: An Extremely Efficient Approach with Group Discrimination
Journal ArticleDOI
Multivariate Time Series Forecasting with Dynamic Graph Neural ODEs
TL;DR: MTGODE, a continuous model for forecasting multivariate time series with dynamic graph neural ordinary differential equations, is proposed; it allows deeper graph propagation and fine-grained temporal information aggregation to characterise stable and precise latent spatial-temporal dynamics.
Journal ArticleDOI
Beyond Smoothing: Unsupervised Graph Representation Learning with Edge Heterophily Discriminating
TL;DR: In this paper, GREET, an unsupervised graph representation learning method with edge heterophily discriminating, is proposed; it learns representations by discriminating and leveraging homophilic and heterophilic edges.
Journal ArticleDOI
Trustworthy Recommender Systems
TL;DR: An overview of trustworthy recommender systems (TRSs) is provided, including a discussion of the motivation and basic concepts of TRSs, a presentation of the challenges in building TRSs, and a perspective on future directions in this area.
Proceedings ArticleDOI
Unifying Graph Contrastive Learning with Flexible Contextual Scopes
TL;DR: The architecture of UGCL serves as a general framework that unifies existing GCL methods while optimising a very simple contrastive loss function for graph representation learning.
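The exact UGCL objective is not reproduced in this summary. As a generic illustration of what a simple contrastive loss over two views of the same nodes can look like, an InfoNCE-style sketch (the function name and setup are illustrative assumptions, not taken from the paper) is:

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    # Generic InfoNCE-style contrastive loss between two views of the same
    # nodes: matching rows of z1 and z2 are positives, all other rows negatives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                      # pairwise cosine similarities
    sim -= sim.max(axis=1, keepdims=True)      # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))         # positives sit on the diagonal
```

Minimising such a loss pulls the two embeddings of each node together while pushing apart embeddings of different nodes.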
References
Proceedings ArticleDOI
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
TL;DR: In this article, the authors propose LIME, a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem.
Journal ArticleDOI
No free lunch theorems for optimization
TL;DR: A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving, and a number of "no free lunch" (NFL) theorems are presented which establish that, for any algorithm, any elevated performance over one class of problems is offset by performance over another class.
Journal ArticleDOI
Network Motifs: Simple Building Blocks of Complex Networks
TL;DR: Network motifs, patterns of interconnections occurring in complex networks at numbers significantly higher than those in randomized networks, are defined; such motifs may define universal classes of networks.
Posted Content
Communication-Efficient Learning of Deep Networks from Decentralized Data
TL;DR: This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
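The iterative model averaging described above can be sketched in a few lines. In the following toy example (function names and the noiseless linear-regression setup are illustrative assumptions, not from the paper), each client runs local full-batch gradient descent and the server averages the returned weights, weighted by local dataset size:

```python
import numpy as np

def local_train(w, X, y, lr=0.1, epochs=5):
    # One client's local update: full-batch gradient descent on squared loss
    # for a linear model, starting from the current global weights.
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    # One communication round: every client trains locally, then the server
    # averages the returned weights, weighted by local dataset size.
    total = sum(len(y) for _, y in clients)
    updates = [local_train(w_global, X, y) for X, y in clients]
    return sum((len(y) / total) * w for w, (_, y) in zip(updates, clients))

# Synthetic federation: four clients sharing the same ground-truth model.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
```

After a handful of rounds the averaged model recovers the shared ground truth; real federated settings add non-IID client data and partial participation, which this sketch omits.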
Posted Content
Explaining and Harnessing Adversarial Examples
TL;DR: The authors argue that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, which is supported by new quantitative results while giving the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.