
Shiqi Wang

Researcher at Columbia University

Publications: 32
Citations: 1635

Shiqi Wang is an academic researcher from Columbia University. The author has contributed to research on topics including Robustness (computer science) and Computer science. The author has an h-index of 11 and has co-authored 27 publications receiving 1111 citations. Previous affiliations of Shiqi Wang include Shanghai Jiao Tong University.

Papers
Proceedings Article

Efficient Formal Safety Analysis of Neural Networks

TL;DR: This paper presents a new, efficient approach for rigorously checking different safety properties of neural networks that outperforms existing approaches by multiple orders of magnitude. The authors argue that estimating tight output bounds of a network for a given input range can also help improve the explainability of neural networks and guide the training of more robust networks.
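
To illustrate the kind of safety check that such output bounds enable, here is a minimal sketch (not the paper's algorithm): assuming some verifier has already produced sound lower and upper bounds on each output logit over the analyzed input range, a property like "the network always prefers the safe class" reduces to a comparison of those bounds.

```python
# Minimal sketch, not the paper's algorithm: a safety property expressed as a
# check on precomputed output bounds. `lower[i]`/`upper[i]` are assumed to be
# sound bounds on output logit i over the analyzed input range.

def property_holds(lower, upper, safe_class):
    """Check: 'the network always scores `safe_class` highest on this range'.

    The property is proven if the safe class's *lower* bound beats every other
    class's *upper* bound; otherwise the result is inconclusive (bounds too loose).
    """
    return all(lower[safe_class] > upper[j]
               for j in range(len(upper)) if j != safe_class)

# Illustrative values for three output classes, bounds computed elsewhere.
lower = [2.1, -0.5, 0.3]
upper = [3.0,  0.4, 1.8]
print(property_holds(lower, upper, safe_class=0))  # True: class 0 provably wins
```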
Posted Content

Formal Security Analysis of Neural Networks using Symbolic Intervals

TL;DR: This paper designs, implements, and evaluates a new approach for formally checking security properties of DNNs without using SMT solvers; it leverages interval arithmetic to compute rigorous bounds on the DNN outputs and is easily parallelizable.
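
The interval-arithmetic idea can be sketched with naive interval propagation through a single affine + ReLU layer. This is only the baseline version of such bounds; ReluVal's symbolic intervals and input splitting are omitted, and the weights and input range below are made-up illustrative values.

```python
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Sound output bounds of x -> W @ x + b for x in [lo, hi] (elementwise)."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b   # minimize each output independently
    out_hi = W_pos @ hi + W_neg @ lo + b   # maximize each output independently
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so interval bounds pass through elementwise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Illustrative one-layer network and input range.
W = np.array([[1.0, -2.0], [0.5, 0.3]])
b = np.array([0.1, -0.2])
x_lo, x_hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])

lo, hi = relu_bounds(*affine_bounds(W, b, x_lo, x_hi))
print(lo, hi)  # rigorous (if loose) bounds on the layer's outputs
```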
Proceedings ArticleDOI

ContexIoT: Towards Providing Contextual Integrity to Appified IoT Platforms

TL;DR: This paper proposes ContexIoT, a context-based permission system for appified IoT platforms that provides contextual integrity by supporting fine-grained context identification for sensitive actions and runtime prompts with rich context information, helping users perform effective access control.
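
A purely hypothetical sketch of a context-bound permission check appears below; the names (Context, PermissionStore, prompt_user) are invented for illustration, and the real ContexIoT identifies context by instrumenting app code rather than through a simple wrapper like this.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    app: str            # which app is requesting the action
    trigger: str        # event that led to the request (e.g. "motion_detected")
    action: str         # the sensitive action itself (e.g. "unlock_door")
    data_flow: tuple = ()  # runtime data that influenced the decision

def prompt_user(ctx: Context) -> bool:
    """Runtime prompt showing the full context of the request."""
    print(f"{ctx.app} wants to {ctx.action} because {ctx.trigger}. Allow?")
    return input("[y/N] ").strip().lower() == "y"

class PermissionStore:
    def __init__(self):
        self._granted = set()  # contexts the user has already approved

    def check(self, ctx: Context) -> bool:
        """Allow only if this exact context was approved; otherwise ask the user."""
        if ctx in self._granted:
            return True
        if prompt_user(ctx):
            self._granted.add(ctx)  # remember the decision for next time
            return True
        return False
```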
Proceedings Article

Formal Security Analysis of Neural Networks using Symbolic Intervals

TL;DR: ReluVal leverages interval arithmetic to compute rigorous bounds on DNN outputs, a promising new direction towards rigorously analyzing different security properties of DNNs.
Posted Content

MixTrain: Scalable Training of Formally Robust Neural Networks.

TL;DR: Stochastic robust approximation and dynamic mixed training are proposed to drastically improve the efficiency of verifiably robust training without sacrificing verified robustness; MixTrain achieves up to 95.2% verified robust accuracy against norm-bounded attackers.
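
A minimal sketch of the mixed-training idea follows, assuming a placeholder verified_robust_loss for whatever sound bounding method is used; the blending coefficient and how it is scheduled are illustrative assumptions, not the paper's settings or code.

```python
import torch
import torch.nn.functional as F

def mixed_loss(model, x, y, eps, alpha, verified_robust_loss):
    """Blend the ordinary classification loss with a verified-robust loss.

    `verified_robust_loss` is assumed to return a sound upper bound on the
    worst-case loss within an eps-ball around x (e.g. via interval bounds).
    `alpha` trades off natural accuracy against verified robustness and can be
    adjusted by a schedule over the course of training.
    """
    natural = F.cross_entropy(model(x), y)           # standard accuracy objective
    robust = verified_robust_loss(model, x, y, eps)  # sound worst-case bound
    return alpha * natural + (1.0 - alpha) * robust
```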