Journal ArticleDOI

Trustworthy Graph Neural Networks: Aspects, Methods and Trends

He Zhang, +5 more
16 May 2022
Vol. abs/2205.07424
TLDR
A comprehensive roadmap for building trustworthy GNNs from the perspective of the computing technologies involved is proposed, covering robustness, explainability, privacy, fairness, accountability, and environmental well-being.


Citations
Journal ArticleDOI

Multivariate Time Series Forecasting with Dynamic Graph Neural ODEs

TL;DR: A continuous model to forecast Multivariate Time series with dynamic Graph neural Ordinary Differential Equations (MTGODE) is proposed, allowing deeper graph propagation and fine-grained temporal information aggregation to characterize stable and precise latent spatial-temporal dynamics.
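The "deeper graph propagation" idea can be illustrated with a toy continuous propagation step: Euler integration of dh/dt = A_norm·h − h, assuming a row-normalized adjacency matrix given as nested lists. MTGODE's actual model couples this with temporal ODEs and learned transformations; all names here are illustrative.

```python
def euler_graph_ode(h, adj_norm, step, num_steps):
    """Euler integration of dh/dt = A_norm * h - h: each node's state
    drifts toward the (normalized) average of its neighbours, a toy
    version of continuous graph propagation."""
    n = len(h)
    for _ in range(num_steps):
        agg = [sum(adj_norm[i][j] * h[j] for j in range(n)) for i in range(n)]
        h = [h[i] + step * (agg[i] - h[i]) for i in range(n)]
    return h

# Two mutually connected nodes exchange state over one small step.
h_new = euler_graph_ode([1.0, 0.0], [[0.0, 1.0], [1.0, 0.0]],
                        step=0.1, num_steps=1)
```

Taking many small steps (rather than a few discrete GNN layers) is what allows arbitrarily deep propagation without adding parameters per layer.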
Journal ArticleDOI

Beyond Smoothing: Unsupervised Graph Representation Learning with Edge Heterophily Discriminating

TL;DR: In this paper, an unsupervised graph representation learning method with edge hEterophily discriminaTing (GREET) is proposed, which learns representations by discriminating and leveraging homophilic and heterophilic edges.
Journal ArticleDOI

Trustworthy Recommender Systems

TL;DR: An overview of TRSs is provided, including a discussion of the motivation and basic concepts of TRSs, a presentation of the challenges in building TRSs, and a perspective on future directions in this area.
Proceedings ArticleDOI

Unifying Graph Contrastive Learning with Flexible Contextual Scopes

TL;DR: The architecture of UGCL can be considered a general framework that unifies existing GCL methods and optimises a very simple contrastive loss function for graph representation learning.
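As a rough illustration of what a "very simple contrastive loss" can look like, here is a generic InfoNCE-style objective over embedding vectors. UGCL's actual loss and its flexible contextual scopes are not reproduced here; all names are illustrative.

```python
import math

def infonce_loss(anchor, positive, negatives, temperature=0.5):
    """InfoNCE-style contrastive loss: pull the anchor embedding toward
    its positive view and push it away from negative samples."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    pos = math.exp(cos(anchor, positive) / temperature)
    negs = sum(math.exp(cos(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + negs))
```

The loss is small when the anchor is aligned with its positive and orthogonal to the negatives, and large in the reverse situation.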
References
Proceedings ArticleDOI

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

TL;DR: In this article, the authors propose LIME, a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem.
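A simplified stand-in for the local-explanation idea: estimate each feature's importance by randomly masking features and comparing the black box's predictions with the feature kept versus removed. LIME proper fits a locally weighted linear surrogate and picks representative examples via submodular optimization; the `predict` callable, the masking-to-zero scheme, and the sampling here are illustrative assumptions.

```python
import random

def lime_attributions(predict, x, num_samples=400, seed=0):
    """Crude per-feature attribution: sample random binary masks over the
    features of x, query the black-box predict() on each masked input, and
    report, for each feature, the mean prediction with it on minus the
    mean prediction with it off."""
    rng = random.Random(seed)
    d = len(x)
    on = [[] for _ in range(d)]
    off = [[] for _ in range(d)]
    for _ in range(num_samples):
        mask = [rng.random() < 0.5 for _ in range(d)]
        z = [xi if m else 0.0 for xi, m in zip(x, mask)]
        y = predict(z)
        for i, m in enumerate(mask):
            (on[i] if m else off[i]).append(y)
    return [sum(on[i]) / len(on[i]) - sum(off[i]) / len(off[i])
            if on[i] and off[i] else 0.0
            for i in range(d)]

# Hypothetical linear "black box" where the second feature matters most.
f = lambda z: 2.0 * z[0] + 5.0 * z[1]
w = lime_attributions(f, [1.0, 1.0])
```

For a linear model these estimates converge to the true coefficients, which is why a local linear surrogate is faithful near the explained point.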
Journal ArticleDOI

No free lunch theorems for optimization

TL;DR: A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving and a number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class.
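The formal statement behind this summary can be written in Wolpert and Macready's notation, where $d^y_m$ is the sequence of $m$ cost values an algorithm observes and the sum ranges over all objective functions $f$:

$$\sum_{f} P\left(d^y_m \mid f, m, a_1\right) \;=\; \sum_{f} P\left(d^y_m \mid f, m, a_2\right)$$

for any pair of algorithms $a_1$ and $a_2$: averaged over all problems, no optimizer outperforms any other.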
Journal ArticleDOI

Network Motifs: Simple Building Blocks of Complex Networks

TL;DR: Network motifs, patterns of interconnections occurring in complex networks at numbers significantly higher than those in randomized networks, are defined, and it is found that they may define universal classes of networks.
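The motif-detection recipe (count a small subgraph, then compare against randomized networks) can be sketched for the triangle motif. Note the paper's null model preserves each node's degree; this sketch only preserves node and edge counts, so it is a cruder baseline.

```python
import random
from itertools import combinations

def count_triangles(n, edges):
    """Count triangle motifs in an undirected graph on nodes 0..n-1."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return sum(1 for a, b, c in combinations(range(n), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

def null_model_triangles(n, num_edges, trials=100, seed=0):
    """Average triangle count over random graphs with the same node and
    edge counts (crude null model; the paper randomizes preserving degrees).
    A motif is 'significant' when the observed count far exceeds this."""
    rng = random.Random(seed)
    pairs = list(combinations(range(n), 2))
    return sum(count_triangles(n, rng.sample(pairs, num_edges))
               for _ in range(trials)) / trials
```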
Posted Content

Communication-Efficient Learning of Deep Networks from Decentralized Data

TL;DR: This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
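The iterative model averaging at the heart of this method (later known as FedAvg) can be sketched as a single aggregation round, assuming each client's locally trained model arrives as a flat weight vector; the function name and inputs are illustrative.

```python
def federated_average(client_weights, client_sizes):
    """One FedAvg-style aggregation round: the server combines client
    models as a weighted average, each client weighted by how many
    local samples it trained on. Raw data never leaves the clients.

    client_weights: list of weight vectors (lists of floats), one per client.
    client_sizes:   number of local training samples per client.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    avg = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            avg[i] += (n / total) * w[i]
    return avg

# A client with 3x the data contributes 3x the weight.  -> [0.75, 0.25]
w_global = federated_average([[1.0, 0.0], [0.0, 1.0]], [3, 1])
```

In the full protocol this round repeats: the server broadcasts `w_global`, clients take local gradient steps from it, and the server averages again.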
Posted Content

Explaining and Harnessing Adversarial Examples

TL;DR: The authors argue that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, which is supported by new quantitative results while giving the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
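The linearity argument motivates the paper's fast gradient sign method: perturb each input coordinate by a small step in the sign direction of the loss gradient, so a tiny per-coordinate change compounds into a large change in a high-dimensional dot product. A minimal sketch, assuming the input gradient has already been computed and is given as a list of floats:

```python
def fgsm_perturb(x, grad, epsilon):
    """Fast gradient sign method: shift every input coordinate by
    epsilon in the sign direction of the loss gradient, the direction
    that locally increases the loss fastest under an L-infinity budget."""
    sign = [1.0 if g > 0 else -1.0 if g < 0 else 0.0 for g in grad]
    return [xi + epsilon * s for xi, s in zip(x, sign)]
```

Because the perturbation depends only on gradient signs, not the particular weights, such examples often transfer across architectures and training sets, as the paper observes.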