Huijun Wu

Researcher at University of New South Wales

Publications: 15
Citations: 434

Huijun Wu is an academic researcher from the University of New South Wales. The author has contributed to research on the topics of Interpretability and Cloud computing. The author has an h-index of 5 and has co-authored 15 publications receiving 265 citations. Previous affiliations of Huijun Wu include the National University of Defense Technology and the Commonwealth Scientific and Industrial Research Organisation.

Papers
Proceedings ArticleDOI

Adversarial Examples for Graph Data: Deep Insights into Attack and Defense.

TL;DR: This paper proposes both attack and defense techniques for adversarial examples on graph data and shows that the discreteness problem can be resolved by introducing integrated gradients, which accurately reflect the effect of perturbing particular features or edges while still benefiting from parallel computation.
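As a rough illustration of the integrated-gradients step described above, the sketch below approximates attributions for node features with a Riemann sum. The model interface model(features, adj), the all-zero baseline, and all variable names are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch of integrated gradients over node features, assuming a
# differentiable PyTorch model `model(features, adj)` that returns per-node
# logits. All names here are illustrative placeholders.
import torch

def integrated_gradients(model, features, adj, target_node, target_class,
                         baseline=None, steps=50):
    """Approximate IG attributions for each input feature via a Riemann sum."""
    if baseline is None:
        baseline = torch.zeros_like(features)  # all-zero baseline ~ feature removal
    total_grads = torch.zeros_like(features)
    for k in range(1, steps + 1):
        # Interpolate between the baseline and the actual (discrete) input.
        alpha = k / steps
        interp = (baseline + alpha * (features - baseline)).requires_grad_(True)
        logits = model(interp, adj)
        score = logits[target_node, target_class]
        grad = torch.autograd.grad(score, interp)[0]
        total_grads += grad
    # Scale the averaged gradients by the input-baseline difference.
    return (features - baseline) * total_grads / steps
```

Large positive or negative attributions flag the features (or, analogously, edges) whose perturbation most changes the target prediction, which is the signal both the attack and the defense described above rely on.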
Posted Content

Adversarial Examples on Graph Data: Deep Insights into Attack and Defense

TL;DR: In this article, the authors proposed both attack and defense techniques for graph convolutional networks (GCNs) and showed that the discreteness problem can be resolved by introducing integrated gradients, which accurately reflect the effect of perturbing particular features or edges.
Journal Article

HPDedup: A Hybrid Prioritized Data Deduplication Mechanism for Primary Storage in the Cloud

TL;DR: This paper presents HPDedup, a Hybrid Prioritized data Deduplication mechanism for storage systems shared by applications running in co-located virtual machines or containers; it fuses an inline phase with a post-processing phase to achieve exact deduplication.
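A minimal sketch of that hybrid idea follows, under the assumption that the inline phase consults only a bounded in-memory fingerprint cache while a later post-processing pass uses the full index to make deduplication exact. Class and method names are illustrative, not HPDedup's actual code.

```python
# Assumed simplification of a hybrid inline/post-processing deduplicator:
# inline writes dedupe against a bounded fingerprint cache; a post-processing
# pass reconciles the remaining duplicates against the full index.
import hashlib
from collections import OrderedDict

def fingerprint(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

class HybridDedup:
    def __init__(self, cache_size=1024):
        self.cache = OrderedDict()   # bounded in-memory fingerprint cache
        self.cache_size = cache_size
        self.full_index = {}         # complete index (a dict stands in for disk)
        self.pending = []            # blocks written without an inline hit

    def write_inline(self, block: bytes) -> str:
        fp = fingerprint(block)
        if fp in self.cache:
            self.cache.move_to_end(fp)
            return "dedup-inline"            # duplicate removed before the write
        self.cache[fp] = True
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)   # evict (a real system prioritizes)
        self.pending.append(fp)
        return "written"                     # may still duplicate data on disk

    def post_process(self) -> int:
        """Offline pass: reclaim the duplicates the inline cache missed."""
        reclaimed = 0
        for fp in self.pending:
            if fp in self.full_index:
                reclaimed += 1               # would redirect refs and free the block
            else:
                self.full_index[fp] = True
        self.pending.clear()
        return reclaimed
```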
Journal ArticleDOI

A Differentiated Caching Mechanism to Enable Primary Storage Deduplication in Clouds

TL;DR: A novel fingerprint caching mechanism is proposed that estimates the temporal locality of duplicates in different data streams and prioritizes cache allocation based on those estimates; results show that the proposed mechanism significantly improves both the deduplication ratio and overhead reduction.
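The prioritization could look roughly like the sketch below, which estimates per-stream duplicate locality from recent fingerprint hit ratios and splits the cache proportionally. The estimator and all names are illustrative assumptions, not the paper's exact mechanism.

```python
# Hedged sketch: allocate fingerprint-cache slots to data streams in
# proportion to their observed duplicate locality (recent hit ratio).
def allocate_cache(stream_stats, total_slots):
    """stream_stats: {stream_id: {"lookups": int, "hits": int}}"""
    locality = {
        sid: (s["hits"] / s["lookups"]) if s["lookups"] else 0.0
        for sid, s in stream_stats.items()
    }
    total = sum(locality.values())
    if total == 0:
        # No locality signal yet: fall back to an even split.
        share = total_slots // max(len(stream_stats), 1)
        return {sid: share for sid in stream_stats}
    return {sid: int(total_slots * loc / total) for sid, loc in locality.items()}

# Example: the stream with strong duplicate locality receives most of the cache.
stats = {"vm1": {"lookups": 1000, "hits": 800},
         "vm2": {"lookups": 1000, "hits": 100}}
print(allocate_cache(stats, 4096))   # {'vm1': 3640, 'vm2': 455}
```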
Proceedings ArticleDOI

Sharing Deep Neural Network Models with Interpretation

TL;DR: This paper proposes a method to disclose a small set of training data that is just sufficient for users to gain insight into a complicated model, and shows that data point pairs in the resulting tree give users a significantly better understanding of the model's decision boundaries, paving the way for trustworthy model sharing.
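As one hedged illustration of disclosing boundary-revealing pairs, the sketch below pairs each class's points with their closest differently-predicted neighbours. This is a nearest-neighbour stand-in chosen for illustration, not the paper's tree construction, and every name in it is hypothetical.

```python
# Illustrative stand-in: select pairs of nearby training points that the model
# predicts into different classes, so the disclosed pairs straddle decision
# boundaries.
import numpy as np

def boundary_pairs(X, preds, per_class=3):
    """Return (i, j) index pairs of nearby points with differing predictions."""
    pairs = []
    for cls in np.unique(preds):
        idx = np.where(preds == cls)[0]
        other = np.where(preds != cls)[0]
        # Distances from each point of this class to every differently-labelled point.
        d = np.linalg.norm(X[idx][:, None, :] - X[other][None, :, :], axis=-1)
        nearest = d.min(axis=1)
        for i in np.argsort(nearest)[:per_class]:
            pairs.append((idx[i], other[d[i].argmin()]))
    return pairs

# Example with random 2-D data and an arbitrary labelling rule.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
preds = (X[:, 0] > 0).astype(int)
print(boundary_pairs(X, preds, per_class=2))
```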