
Huy Phan

Researcher at Rutgers University

Publications -  13
Citations -  41

Huy Phan is an academic researcher from Rutgers University. The author has contributed to research topics including computer science and artificial neural networks. The author has an h-index of 1, having co-authored 5 publications receiving 6 citations.

Papers
Journal ArticleDOI

CAG: A Real-Time Low-Cost Enhanced-Robustness High-Transferability Content-Aware Adversarial Attack Generator

TL;DR: A Content-aware Adversarial Attack Generator (CAG) is proposed to produce real-time, low-cost, robust, and highly transferable adversarial attacks, while significantly reducing the memory cost of generating adversarial examples.
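For context on what an adversarial attack does, here is a minimal FGSM-style sketch on a toy logistic model. This is illustrative only: the model, weights, and perturbation budget are assumptions, and CAG itself uses a trained generator network rather than a gradient-sign step.

```python
import numpy as np

# Hypothetical toy model: logistic regression on a 2-D input.
# This illustrates the adversarial-example idea, NOT the paper's
# CAG generator architecture.
w = np.array([1.5, -2.0])   # assumed fixed model weights
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss_wrt_x(x, y):
    # Cross-entropy gradient w.r.t. the input x:
    # dL/dx = (sigmoid(w.x + b) - y) * w
    return (sigmoid(w @ x + b) - y) * w

x = np.array([0.5, 0.5])    # clean input with true label y = 1
y = 1.0
eps = 0.3                   # perturbation budget

# One fast-gradient-sign step pushes the input uphill on the loss.
x_adv = x + eps * np.sign(grad_loss_wrt_x(x, y))

clean_conf = sigmoid(w @ x + b)
adv_conf = sigmoid(w @ x_adv + b)
print(adv_conf < clean_conf)  # the attack lowers confidence in y = 1
```

A generator-based attack like CAG amortizes this per-input optimization into a single forward pass of a trained network, which is where the real-time and low-memory claims come from.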
Journal ArticleDOI

BATUDE: Budget-Aware Neural Network Compression Based on Tucker Decomposition

TL;DR: BATUDE, a Budget-Aware TUcker DEcomposition-based compression approach, is proposed to efficiently calculate optimal tensor ranks via one-shot training; integrating the rank-selection procedure into the DNN training process under a specified compression budget brings very significant improvements in both compression ratio and classification accuracy for the compressed models.
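To see why Tucker decomposition compresses a network layer, here is a minimal HOSVD-style sketch with NumPy. The tensor, ranks, and helper names are illustrative assumptions; BATUDE's budget-aware, trainable rank selection is not reproduced here.

```python
import numpy as np

# A rank-(r1, r2, r3) Tucker model stores a small core tensor plus one
# factor matrix per mode instead of the full tensor.

def mode_unfold(T, mode):
    # Flatten T into a matrix with the chosen mode as rows.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tucker_hosvd(T, ranks):
    # Factor for each mode: leading left singular vectors of the unfolding.
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(mode_unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    # Core = T contracted with U_k^T along every mode.
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    return core, factors

rng = np.random.default_rng(0)
T = rng.standard_normal((8, 8, 8))          # stand-in for a weight tensor
core, factors = tucker_hosvd(T, (4, 4, 4))

full_params = T.size                                      # 8*8*8 = 512
tucker_params = core.size + sum(U.size for U in factors)  # 64 + 3*32 = 160
print(tucker_params < full_params)
```

The compression ratio depends entirely on the chosen ranks, which is why selecting them under an explicit parameter budget, as BATUDE does, matters.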
Proceedings ArticleDOI

Invisible and Efficient Backdoor Attacks for Compressed Deep Neural Networks

TL;DR: This paper proposes a universal adversarial perturbation (UAP)-based approach that achieves both high attack stealthiness and high attack efficiency simultaneously, and demonstrates its superior performance compared with existing solutions.
Proceedings ArticleDOI

Audio-domain position-independent backdoor attack via unnoticeable triggers

TL;DR: This work explores the severity of audio-domain backdoor attacks and demonstrates their feasibility under practical scenarios of voice user interfaces, where an adversary injects an unnoticeable audio trigger into live speech to launch the attack.
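As a rough illustration of how an audio trigger can stay below the perceptual level of speech, here is a sketch that adds a faint tone to a waveform. The sample rate, frequencies, and amplitudes are assumptions; the paper's unnoticeable, position-independent trigger design is far more sophisticated than this additive tone.

```python
import numpy as np

sr = 16000                        # assumed sample rate (Hz)
t = np.arange(sr) / sr            # 1 second of audio
speech = 0.5 * np.sin(2 * np.pi * 220 * t)      # stand-in for live speech

# Faint high-frequency tone as a toy trigger; it can be added at any
# offset in the stream, hinting at position independence.
trigger = 0.005 * np.sin(2 * np.pi * 6000 * t)
poisoned = speech + trigger

# The trigger's energy sits far below the speech energy, so a listener
# is unlikely to notice it, while a backdoored model could still key
# on its narrow spectral peak.
snr_db = 10 * np.log10(np.mean(speech**2) / np.mean(trigger**2))
print(snr_db > 30)    # trigger is 30+ dB quieter than the speech
```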
Proceedings ArticleDOI

VVSec: Securing Volumetric Video Streaming via Benign Use of Adversarial Perturbation

TL;DR: This work develops a novel volumetric video security mechanism, namely VVSec, which makes benign use of adversarial perturbations to obfuscate security- and privacy-sensitive 3D face models, ensuring that the 3D models cannot be exploited to bypass deep learning-based face authentication.