
Sanghyun Hong

Researcher at University of Maryland, College Park

Publications -  30
Citations -  611

Sanghyun Hong is an academic researcher from the University of Maryland, College Park. His research focuses on computer science and deep learning. He has an h-index of 9 and has co-authored 21 publications receiving 329 citations. Previous affiliations of Sanghyun Hong include the University of North Carolina at Chapel Hill and Oregon State University.

Papers
Proceedings Article

Shallow-Deep Networks: Understanding and Mitigating Network Overthinking

TL;DR: Proposes the Shallow-Deep Network (SDN), a generic modification to off-the-shelf DNNs that introduces internal classifiers. Using confidence-based early exits, SDNs mitigate the wasteful effect of overthinking, reducing average inference cost by more than 50% while preserving accuracy.
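The early-exit idea can be illustrated with a minimal sketch: each internal classifier produces a logit vector, and inference stops at the first classifier whose softmax confidence clears a threshold. The helper `early_exit_inference` and the `stage_logits` format below are hypothetical, not the paper's actual API.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def early_exit_inference(stage_logits, threshold=0.9):
    """Confidence-based early exit over a cascade of internal
    classifiers. `stage_logits` holds one logit vector per internal
    classifier, ordered shallow to deep. Returns (predicted_class,
    exit_index): the first stage whose top softmax probability
    reaches `threshold`, falling back to the final stage."""
    for i, logits in enumerate(stage_logits):
        probs = softmax(logits)
        conf = max(probs)
        if conf >= threshold or i == len(stage_logits) - 1:
            return probs.index(conf), i

# An "easy" input is already confident at the first internal classifier:
easy = [[4.0, 0.1, 0.1], [5.0, 0.1, 0.1]]
# A "hard" input stays uncertain and falls through to the final classifier:
hard = [[1.0, 0.9, 0.8], [3.0, 0.2, 0.1]]
```

Easy inputs exit at stage 0 and skip the deeper layers entirely, which is where the inference-cost savings come from.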
Posted Content

Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks

TL;DR: Studies the impact of an exemplary hardware fault attack, Rowhammer, showing that a Rowhammer-enabled attacker co-located on the same physical machine can inflict significant accuracy drops with single bit-flip corruptions and no knowledge of the model.
Posted Content

On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping

TL;DR: Studies the feasibility of an attack-agnostic defense that relies on artifacts common to all poisoning attacks, and identifies the prerequisites for a generic poisoning defense: it must bound gradient magnitudes and minimize differences in their orientation.
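The "bound gradient magnitudes" prerequisite can be sketched as per-example gradient clipping, the mechanism used by DP-SGD (one of the gradient-shaping instantiations the paper evaluates). The helper below is a hypothetical illustration on plain lists, not the paper's implementation.

```python
import math

def shape_gradients(per_example_grads, clip_norm=1.0):
    """Clip each per-example gradient's L2 norm to `clip_norm`, then
    average. Clipping bounds the influence any single (possibly
    poisoned) example can exert on the model update."""
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in g])
    dim = len(per_example_grads[0])
    return [sum(g[j] for g in clipped) / len(clipped) for j in range(dim)]

# A large (poison-like) gradient is scaled down to unit norm before
# averaging with a benign example's gradient:
update = shape_gradients([[3.0, 4.0], [0.0, 0.0]])
```

Here the first gradient has norm 5, so it is rescaled by 0.2 before the average, capping its contribution to the update.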
Proceedings ArticleDOI

Terminal brain damage: exposing the graceless degradation in deep neural networks under hardware fault attacks

TL;DR: In this paper, the authors study the effects of bitwise corruptions on deep neural networks (DNNs) and show that most models have at least one parameter that, after a specific bit-flip in their bitwise representation, causes an accuracy loss of over 90%.
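Why a single bit-flip can be so damaging is easy to see from the IEEE-754 float32 layout: flipping a high exponent bit of a stored weight changes its magnitude by many orders of magnitude. A minimal sketch of this effect (the `flip_bit` helper is illustrative, not from the paper):

```python
import struct

def flip_bit(value, bit):
    """Flip one bit in the IEEE-754 float32 representation of `value`
    (bit 0 = least-significant mantissa bit, bit 31 = sign bit)."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

w = 0.5
# Bit 30 is the most-significant exponent bit; flipping it turns a
# small weight into one on the order of 1e38, which can dominate
# downstream activations and destroy accuracy:
corrupted = flip_bit(w, 30)
```

A weight of 0.5 becomes roughly 1.7e38 after the flip, illustrating the "graceless degradation" the title refers to.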
Posted Content

Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks

TL;DR: Presents the first in-depth security analysis of DNN fingerprinting attacks that exploit cache side-channels, demonstrating that an attacker who has observed only a single forward propagation can accurately reconstruct the architectures of two complex networks.