Xiaowei Huang
Researcher at University of Liverpool
Publications - 141
Citations - 4082
Xiaowei Huang is an academic researcher at the University of Liverpool. His research spans topics including computer science and model checking. He has an h-index of 22 and has co-authored 115 publications receiving 2,866 citations. His previous affiliations include the University of New South Wales and the University of Oxford.
Papers
Book Chapter
Safety Verification of Deep Neural Networks
TL;DR: A novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT) is developed, which defines safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image.
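The safety property described above, invariance of the classification within a small neighbourhood of an input, can be illustrated with a toy sketch. The paper's framework uses SMT solving for exhaustive guarantees; the sampling-based check below, with a hypothetical linear classifier, only demonstrates what property is being verified, not how the verification is done.

```python
import numpy as np

def classify(W, b, x):
    """Toy linear classifier: returns the index of the highest score."""
    return int(np.argmax(W @ x + b))

def is_locally_safe(W, b, x, radius, n_samples=1000, seed=0):
    """Check (by random sampling, not exhaustively) that the classification
    of x is invariant within an L-infinity ball of the given radius.
    An SMT-based verifier would explore the neighbourhood exhaustively;
    sampling here merely illustrates the safety property."""
    rng = np.random.default_rng(seed)
    label = classify(W, b, x)
    for _ in range(n_samples):
        delta = rng.uniform(-radius, radius, size=x.shape)
        if classify(W, b, x + delta) != label:
            return False  # a perturbation changed the decision
    return True

# Hypothetical example: a 2-class, 2-feature classifier.
W = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([0.0, 0.0])
x = np.array([1.0, 0.0])
```

Passing the check only shows no sampled perturbation flips the decision; the SMT approach is what turns this into a guarantee.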
Posted Content
Safety Verification of Deep Neural Networks
TL;DR: In this article, a verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT) is proposed to guarantee the safety of image classification decisions with respect to image manipulations, such as scratches or changes to camera angle or lighting conditions.
Proceedings Article
Concolic testing for deep neural networks
TL;DR: The first concolic testing approach for Deep Neural Networks (DNNs) is presented, which formalises coverage criteria for DNNs that have been studied in the literature and develops a coherent method for performing concolic testing to increase test coverage.
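One of the coverage criteria studied in the literature that the paper formalises is neuron coverage: the fraction of neurons activated by at least one test input. A minimal sketch of measuring it on a small fully-connected ReLU network follows; the network, inputs, and threshold are hypothetical, and the real tool additionally derives new inputs symbolically to raise the coverage number, which this sketch does not do.

```python
import numpy as np

def relu_activations(weights, biases, x):
    """Forward pass through a small fully-connected ReLU network,
    recording each layer's activation vector."""
    acts = []
    h = x
    for W, b in zip(weights, biases):
        h = np.maximum(0.0, W @ h + b)
        acts.append(h)
    return acts

def neuron_coverage(weights, biases, inputs, threshold=0.0):
    """Fraction of neurons activated (above threshold) by at least one
    input in the test suite -- measurement only, no test generation."""
    covered = None
    for x in inputs:
        flat = np.concatenate(relu_activations(weights, biases, x))
        hit = flat > threshold
        covered = hit if covered is None else covered | hit
    return float(covered.mean())

# Hypothetical one-layer network with two neurons.
weights = [np.array([[1.0, -1.0], [-1.0, 1.0]])]
biases = [np.zeros(2)]
inputs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
```

Each input above activates one of the two neurons, so the suite of both inputs achieves full coverage while either input alone covers half.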
Posted Content
Concolic Testing for Deep Neural Networks.
TL;DR: In this article, the authors present the first concolic testing approach for deep neural networks (DNNs), which combines program execution and symbolic analysis to explore the execution paths of a software program.
Book Chapter
Feature-Guided Black-Box Safety Testing of Deep Neural Networks
TL;DR: In this paper, a two-player turn-based stochastic game is formulated to generate adversarial examples, where the first player's objective is to minimize the distance to an adversarial example by manipulating the features, and the second player can be cooperative, adversarial, or random.
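The first player's move in the game above, picking a feature manipulation that moves the input toward misclassification, can be sketched in a stripped-down single-player form. Everything here is a hypothetical illustration: a toy linear classifier, a greedy coordinate search, and no second player or stochastic dynamics from the paper's actual game formulation.

```python
import numpy as np

def classify(W, b, x):
    """Toy linear classifier used only for illustration."""
    return int(np.argmax(W @ x + b))

def greedy_feature_attack(W, b, x, step=0.1, max_rounds=50):
    """Single-player reduction of the search: each turn, perturb the one
    feature (coordinate) that most shrinks the margin between the original
    class score and the best competing score. Returns an adversarial input
    if one is found within max_rounds, else None."""
    label = classify(W, b, x)
    x = x.copy()
    for _ in range(max_rounds):
        best = None
        for i in range(x.size):
            for sign in (-step, step):
                cand = x.copy()
                cand[i] += sign
                scores = W @ cand + b
                margin = scores[label] - np.max(np.delete(scores, label))
                if best is None or margin < best[0]:
                    best = (margin, cand)
        x = best[1]
        if classify(W, b, x) != label:
            return x  # classification changed: adversarial example found
    return None

# Hypothetical starting point close to the decision boundary.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.zeros(2)
x0 = np.array([0.3, 0.0])
```

In the paper, the second player (cooperative, adversarial, or random) perturbs the input between the first player's moves, which this greedy sketch omits.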