
Ji Gao

Researcher at University of Virginia

Publications: 17
Citations: 747

Ji Gao is an academic researcher from the University of Virginia. The author has contributed to research in topics: Deep learning & Robustness (computer science). The author has an h-index of 8 and has co-authored 16 publications receiving 482 citations.

Papers
Proceedings Article

Black-Box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers

TL;DR: DeepWordBug as mentioned in this paper generates small text perturbations in a black-box setting that force a deep-learning classifier to misclassify a text input, using scoring strategies to find the most important words to modify.
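The sketch below is a minimal, illustrative rendering of the black-box recipe the TL;DR describes: score each word by how much removing it changes the classifier's confidence, then apply small character-level edits to the top-scoring words. It is not the authors' released implementation; `model_predict`, the adjacent-swap edit, and the fixed edit budget are assumptions made for illustration.

```python
import random

def score_words(words, target_label, model_predict):
    """Black-box word importance: remove each word in turn and measure how
    much the classifier's confidence in the current label drops."""
    base = model_predict(" ".join(words))[target_label]
    scores = []
    for i in range(len(words)):
        reduced = words[:i] + words[i + 1:]
        scores.append(base - model_predict(" ".join(reduced))[target_label])
    return scores

def perturb_word(word):
    """One simple character-level edit (adjacent swap) that keeps the word
    visually similar; the paper also considers substitution, deletion, and
    insertion edits."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def black_box_text_attack(text, target_label, model_predict, budget=5):
    """Perturb the highest-scoring words until the edit budget is spent."""
    words = text.split()
    scores = score_words(words, target_label, model_predict)
    for i in sorted(range(len(words)), key=lambda i: -scores[i])[:budget]:
        words[i] = perturb_word(words[i])
    return " ".join(words)
```

Here `model_predict` stands in for any query-only classifier interface returning class probabilities; only its outputs are used, which is what makes the procedure black-box.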
Proceedings Article

DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples

TL;DR: DeepCloak as mentioned in this paper identifies and removes unnecessary features in a DNN model to limit the capacity an attacker can use to generate adversarial samples and therefore increase the robustness against such inputs.
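A minimal sketch of the masking idea as summarized above, assuming paired activations from the layer just before the classifier are available for clean and adversarial inputs. The function names and the rank-by-activation-shift heuristic are illustrative, not the authors' exact procedure.

```python
import numpy as np

def select_mask(clean_feats, adv_feats, k):
    """Rank features by how much their activation shifts, on average, between
    clean inputs and their adversarial counterparts, then zero out the top-k.

    clean_feats, adv_feats: arrays of shape (n_samples, n_features) taken
    from the layer just before the classifier head.
    """
    shift = np.abs(clean_feats - adv_feats).mean(axis=0)
    mask = np.ones(clean_feats.shape[1])
    mask[np.argsort(shift)[-k:]] = 0.0  # remove the most attack-sensitive features
    return mask

def apply_mask(features, mask):
    """Element-wise masking layer inserted before the final classifier."""
    return features * mask
```

The intuition follows the TL;DR: features that matter little for clean accuracy but move sharply under attack give the adversary extra capacity, so removing them trades a small amount of accuracy for robustness.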
Posted Content

A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Examples

Beilun Wang, +2 more · 01 Dec 2016
TL;DR: This paper investigates the topological relationship between two (pseudo)metric spaces corresponding to the predictor and the oracle, and develops necessary and sufficient conditions that determine whether a classifier is always robust (strong-robust) against adversarial examples according to f_2 (the oracle).
Proceedings Article

A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Samples

TL;DR: In this paper, a theoretical framework is proposed for analyzing learning-based classifiers, especially deep neural networks (DNNs), in the face of adversarial perturbations (AP).
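For readers unfamiliar with the strong-robustness notion these two framework entries refer to, the schematic condition below gives the flavor of what is being characterized: the learned predictor f_1 should never separate inputs that the oracle-induced (pseudo)metric treats as indistinguishable. The notation (d_2, delta) is illustrative and not necessarily the paper's exact formulation.

```latex
% Schematic strong-robustness condition (illustrative notation):
% f_1 is the learned predictor, d_2 the (pseudo)metric induced by the
% oracle f_2, and \delta a small perturbation budget.
\[
  \forall\, x, x' :\quad d_2(x, x') \le \delta \;\Longrightarrow\; f_1(x) = f_1(x')
\]
```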