Dawn Song

Researcher at University of California, Berkeley

Publications: 504
Citations: 75,245

Dawn Song is an academic researcher at the University of California, Berkeley. Her research spans topics including computer science and deep learning. She has an h-index of 117 and has co-authored 460 publications receiving 61,572 citations. Previous affiliations of Dawn Song include FireEye, Inc. and the University of California.

Papers
Proceedings Article

Robust Physical-World Attacks on Deep Learning Visual Classification

TL;DR: This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions, and shows that adversarial examples generated with RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under a variety of environmental conditions, including different viewpoints.
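
As a rough illustration of the idea behind such attacks, the sketch below optimizes a single perturbation across randomly varied renderings of an input so that the perturbed image is misclassified under many simulated conditions. This is a minimal sketch only, assuming a generic PyTorch classifier and image batch; it is not the paper's RP2 implementation, and the transformation model and hyperparameters are illustrative placeholders.

```python
# Illustrative sketch (not the paper's RP2 implementation): optimize one
# perturbation so it fools a classifier across many simulated physical
# conditions (random brightness and noise). `classifier` and
# `road_sign_batch` are assumed placeholders supplied by the caller.
import torch
import torch.nn.functional as F

def robust_perturbation(classifier, road_sign_batch, target_label,
                        steps=200, lr=0.01, eps=0.1):
    delta = torch.zeros_like(road_sign_batch[:1], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.full((road_sign_batch.size(0),), target_label, dtype=torch.long)
    for _ in range(steps):
        # Simulate varying physical conditions with random brightness and noise.
        brightness = 0.7 + 0.6 * torch.rand(road_sign_batch.size(0), 1, 1, 1)
        transformed = road_sign_batch * brightness + 0.02 * torch.randn_like(road_sign_batch)
        logits = classifier(torch.clamp(transformed + delta, 0, 1))
        loss = F.cross_entropy(logits, target)  # drive every view toward the target class
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-eps, eps)            # keep the perturbation small
    return delta.detach()
```
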
Proceedings Article

Dynamic Taint Analysis for Automatic Detection, Analysis, and Signature Generation of Exploits on Commodity Software

TL;DR: TaintCheck performs dynamic taint analysis via binary rewriting at run time; it can reliably detect most types of exploits and produced no false positives for any of the many different programs that were tested.
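
The sketch below illustrates the general idea of dynamic taint analysis in a few lines of Python: data from untrusted sources is marked tainted, taint propagates through computation, and a sensitive operation on tainted data raises an alert. It is only a conceptual sketch; TaintCheck itself works on unmodified binaries via run-time binary rewriting, which this toy does not attempt.

```python
# Conceptual sketch of dynamic taint analysis (not TaintCheck's
# binary-rewriting implementation): untrusted values carry a taint flag,
# taint propagates through operations, and a sensitive sink on tainted
# data (here, a jump target) triggers an alert.
class Tainted:
    def __init__(self, value, tainted=False):
        self.value, self.tainted = value, tainted

    def __add__(self, other):
        other_value = other.value if isinstance(other, Tainted) else other
        other_taint = other.tainted if isinstance(other, Tainted) else False
        # Taint propagates: the result is tainted if either operand is.
        return Tainted(self.value + other_value, self.tainted or other_taint)

def network_input(data):
    return Tainted(data, tainted=True)   # everything from the network is tainted

def jump_to(addr):
    if isinstance(addr, Tainted) and addr.tainted:
        raise RuntimeError("ALERT: tainted data used as a jump target (possible exploit)")
    return addr

base = Tainted(0x400000)                 # trusted program constant
offset = network_input(0x41414141)       # attacker-controlled value
try:
    jump_to(base + offset)               # detected: control flow depends on tainted data
except RuntimeError as alert:
    print(alert)
```
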
Proceedings Article

The Sybil attack in sensor networks: analysis & defenses

TL;DR: It is demonstrated that the Sybil attack can be exceedingly detrimental to many important functions of a sensor network, such as routing, resource allocation, and misbehavior detection.
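
The toy simulation below, assuming a simple majority-vote misbehavior-detection scheme (not taken from the paper), shows how a single attacker presenting many fabricated identities can outvote the honest sensors.

```python
# Toy illustration of a Sybil attack on majority-vote misbehavior detection:
# one physical attacker fabricates extra identities and outvotes honest sensors.
from collections import Counter

def vote_on_misbehavior(reports):
    """reports: mapping of node identity -> vote ('misbehaving' or 'ok')."""
    tally = Counter(reports.values())
    return tally.most_common(1)[0][0]

honest_reports = {f"sensor{i}": "misbehaving" for i in range(5)}  # 5 honest witnesses
sybil_reports = {f"sybil{i}": "ok" for i in range(8)}             # one attacker, 8 fake identities

print(vote_on_misbehavior(honest_reports))                        # -> 'misbehaving'
print(vote_on_misbehavior({**honest_reports, **sybil_reports}))   # -> 'ok' (attack succeeds)
```
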
Proceedings Article

Android permissions demystified

TL;DR: Stowaway, a tool that detects overprivilege in compiled Android applications, finds that about one-third of applications are overprivileged.
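
A minimal sketch of the overprivilege check appears below. The API-to-permission map here is a made-up toy for illustration, whereas Stowaway derives its map empirically from the Android API; only the comparison logic is sketched.

```python
# Illustrative overprivilege check: compare the permissions an app requests
# against those its API calls actually need. The map below is a hypothetical
# toy, not Stowaway's empirically derived API-to-permission map.
TOY_API_PERMISSION_MAP = {
    "LocationManager.getLastKnownLocation": {"ACCESS_FINE_LOCATION"},
    "SmsManager.sendTextMessage": {"SEND_SMS"},
    "Camera.open": {"CAMERA"},
}

def overprivileged_permissions(requested_permissions, api_calls):
    needed = set()
    for call in api_calls:
        needed |= TOY_API_PERMISSION_MAP.get(call, set())
    return set(requested_permissions) - needed    # requested but never needed

app_manifest = {"ACCESS_FINE_LOCATION", "SEND_SMS", "READ_CONTACTS"}
app_calls = ["LocationManager.getLastKnownLocation"]
print(overprivileged_permissions(app_manifest, app_calls))
# -> {'SEND_SMS', 'READ_CONTACTS'}: permissions held but never used
```
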
Proceedings Article

Delving into Transferable Adversarial Examples and Black-box Attacks

TL;DR: This work is the first to conduct an extensive study of transferability over large models and a large-scale dataset, and the first to study the transferability of targeted adversarial examples with their target labels.
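
The sketch below illustrates transfer-based black-box attacks in general terms: adversarial examples are crafted against a white-box source model and then evaluated on a separate target model. It uses single-model FGSM as the crafting method for brevity, assuming generic PyTorch classifiers; the paper's ensemble-based approach and large-scale study are not reproduced here.

```python
# Illustrative transferability check (not the paper's ensemble method):
# craft adversarial examples with FGSM on a white-box source model and
# measure how often they also fool a different, unseen target model.
import torch
import torch.nn.functional as F

def fgsm(source_model, images, labels, eps=0.03):
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(source_model(images), labels)
    loss.backward()
    # Take one signed-gradient step and keep pixels in the valid range.
    return torch.clamp(images + eps * images.grad.sign(), 0, 1).detach()

def transfer_rate(source_model, target_model, images, labels, eps=0.03):
    adv = fgsm(source_model, images, labels, eps)
    with torch.no_grad():
        preds = target_model(adv).argmax(dim=1)
    # Fraction of adversarial examples that also fool the black-box target model.
    return (preds != labels).float().mean().item()
```
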