Open Access · Book Chapter · DOI

Improved Geometric Path Enumeration for Verifying ReLU Neural Networks

TLDR
This paper works to address the runtime problem by improving upon a recently-proposed geometric path enumeration method, and demonstrates significant speed improvement of exact analysis on the well-studied ACAS Xu benchmarks, sometimes hundreds of times faster than the original implementation.
Abstract
Neural networks provide quick approximations to complex functions, and have been increasingly used in perception as well as control tasks. For use in mission-critical and safety-critical applications, however, it is important to be able to analyze what a neural network can and cannot do. For feed-forward neural networks with ReLU activation functions, although exact analysis is NP-complete, recently-proposed verification methods can sometimes succeed.
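The key fact exploited by geometric path enumeration is that once each ReLU is fixed as active or inactive, the network reduces to an affine map, so exact analysis branches over activation patterns (exponentially many in the worst case, which is why exact analysis is NP-complete). A minimal sketch, not the paper's implementation and with purely illustrative weights, of how one such pattern, a "path", arises during a forward pass:

```python
# Hedged sketch: a tiny feed-forward ReLU network.  The weights and
# biases below are arbitrary illustrative values, not from the paper.
import numpy as np

W1 = np.array([[1.0, -1.0], [0.5, 2.0]])   # first-layer weights (assumed)
b1 = np.array([0.0, -1.0])                 # first-layer biases (assumed)
W2 = np.array([[1.0, 1.0]])                # output-layer weights (assumed)
b2 = np.array([0.5])

def forward(x):
    """Forward pass that also records which ReLUs fire."""
    z = W1 @ x + b1
    pattern = tuple(z > 0)        # the activation pattern: one "path"
    a = np.maximum(z, 0.0)        # ReLU
    y = W2 @ a + b2
    return y, pattern

# Within a fixed pattern the network is affine in x, so a verifier can
# reason exactly about each pattern with linear programming; the cost
# of enumerating all reachable patterns is what the paper attacks.
y, p = forward(np.array([1.0, 1.0]))   # -> y = [2.0], pattern (False, True)
```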



Citations
Book

Algorithms for Verifying Deep Neural Networks

TL;DR: A survey of methods capable of formally verifying properties of deep neural networks, covering networks built from affine and nonlinear transformations.
Proceedings Article · DOI

Are Formal Methods Applicable To Machine Learning And Artificial Intelligence?

TL;DR: Presents state-of-the-art formal methods for the verification and validation of machine learning systems in particular, though the verification methods discussed are not restricted to machine learning.
Posted Content

Algorithms for Verifying Deep Neural Networks

TL;DR: This article surveys methods that have emerged recently for soundly verifying whether a particular network satisfies certain input-output properties and provides pedagogical implementations of existing methods and compare them on a set of benchmark problems.
Book Chapter · DOI

nnenum: Verification of ReLU Neural Networks with Optimized Abstraction Refinement

TL;DR: The nnenum tool uses fast abstractions for speed, combined with refinement through ReLU splitting to increase accuracy when properties cannot be proven. Abstraction refinement is a classic approach in formal methods, but applying it directly to the neural network verification problem can actually reduce performance due to a cascade of overapproximation error.
Proceedings Article · DOI

Efficient Neural Network Analysis with Sum-of-Infeasibilities

TL;DR: It is demonstrated that SoI significantly improves the performance of an existing complete search procedure and can improve upon the perturbation bound derived by a recent adversarial attack algorithm.
References
Proceedings Article

Explaining and Harnessing Adversarial Examples

TL;DR: It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
Book Chapter · DOI

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks

TL;DR: Presents a scalable and efficient technique for verifying properties of deep neural networks (or providing counterexamples), based on the simplex method extended to handle the non-convex Rectified Linear Unit (ReLU) activation function.
Proceedings Article · DOI

Multi-Parametric Toolbox 3.0

TL;DR: The Multi-Parametric Toolbox is a collection of algorithms for modeling, control, analysis, and deployment of constrained optimal controllers developed under Matlab that features a powerful geometric library that extends the application of the toolbox beyond optimal control to various problems arising in computational geometry.

The SMT-LIB Standard Version 2.0

TL;DR: This paper introduces Version 2 of the SMT-LIB Standard, a major upgrade of the previous Version 1.2 which, in addition to simplifying and extending the languages of that version, includes a new command language for interfacing with SMT solvers.
Proceedings Article · DOI

AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation

TL;DR: This work presents AI2, the first sound and scalable analyzer for deep neural networks, and introduces abstract transformers that capture the behavior of fully connected and convolutional neural network layers with rectified linear unit activations (ReLU), as well as max pooling layers.
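As a rough illustration of the abstract-interpretation style of analysis AI2 pioneered, the simplest sound abstract domain is intervals: bounds are propagated through each affine layer and then through the ReLU, which is monotone and so maps intervals to intervals exactly. This is a hedged sketch with assumed weights, not the AI2 tool or its zonotope transformers:

```python
# Hedged sketch: interval abstract interpretation through one
# affine + ReLU layer.  Weights below are illustrative (assumed).
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Sound bounds on W @ x + b for x componentwise in [lo, hi]."""
    W_pos = np.maximum(W, 0.0)   # positive entries: lower bound uses lo
    W_neg = np.minimum(W, 0.0)   # negative entries: lower bound uses hi
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_bounds(lo, hi):
    """ReLU is monotone, so intervals propagate exactly through it."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# x1, x2 in [0, 1]; output neuron computes x1 - x2, so pre-ReLU
# bounds are [-1, 1] and post-ReLU bounds are [0, 1].
W = np.array([[1.0, -1.0]])
b = np.array([0.0])
lo, hi = affine_bounds(np.array([0.0, 0.0]), np.array([1.0, 1.0]), W, b)
lo, hi = relu_bounds(lo, hi)
```

Such overapproximations are fast but lose precision across layers, which is exactly the trade-off exact path-enumeration methods avoid at higher cost.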