TFCheck : A TensorFlow Library for Detecting Training Issues in Neural Network Programs
Houssem Ben Braiek, Foutse Khomh, et al.
pp. 426–433
TLDR
This paper examines training issues in ML programs and proposes a catalog of verification routines, implemented in a TensorFlow-based library named TFCheck, that can automatically detect the identified issues.

Abstract
The increasing inclusion of Machine Learning (ML) models in safety-critical systems, such as autonomous cars, has led to the development of multiple model-based ML testing techniques. One common denominator of these techniques is their assumption that training programs are adequate and bug-free: they focus only on assessing the performance of the constructed model using manually labeled or automatically generated data. However, this assumption does not always hold, as training programs can contain inconsistencies and bugs. In this paper, we examine training issues in ML programs and propose a catalog of verification routines that can detect the identified issues automatically. We implemented these routines in a TensorFlow-based library named TFCheck, with which practitioners can detect the aforementioned issues automatically. To assess the effectiveness of TFCheck, we conducted a case study with real-world, mutant, and synthetic training programs. The results show that TFCheck successfully detects training issues in ML code implementations.
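TFCheck's actual API is not shown on this page. Purely as an illustration of the kind of verification routine the abstract describes (the function name, message texts, and threshold below are assumptions, not TFCheck's), a minimal check might monitor the per-epoch training loss for two common training issues, numerical instability and a stagnating loss:

```python
import math

def check_loss(history, window=5):
    """Flag two common training issues from a list of per-epoch losses:
    non-finite values (possible exploding gradients) and a loss that
    fails to decrease over the last `window` epochs."""
    issues = []
    if any(not math.isfinite(l) for l in history):
        issues.append("non-finite loss (possible exploding gradients)")
    if len(history) >= window and history[-1] >= history[-window]:
        issues.append("loss not decreasing over last %d epochs" % window)
    return issues
```

Such a routine would typically be registered as a callback that runs after each epoch, so issues are reported while training is still in progress rather than after it has wasted hours.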
Citations
Journal Article
Software Verification and Validation of Safe Autonomous Cars: A Systematic Literature Review
TL;DR: This article systematically reviews and classifies recent research literature to investigate the state of the art in software verification and validation (V&V) of autonomous cars.
Posted Content
On Testing Machine Learning Programs
Houssem Ben Braiek, Foutse Khomh, et al.
TL;DR: This comprehensive review of software testing practices for machine learning models helps ML engineers identify the right approach to improving the reliability of their ML-based systems.
Proceedings Article
Exposing numerical bugs in deep learning via gradient back-propagation
TL;DR: GRIST automatically generates small inputs that expose numerical bugs in DL programs by piggybacking on the built-in gradient computation functionalities of DL infrastructures.
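GRIST itself operates on real DL frameworks via their autodiff machinery; purely as a toy illustration of the gradient-guided idea (everything here is a hypothetical sketch, not GRIST's code), one can follow the gradient of an operand of `log` toward its invalid region until a numerical bug is triggered:

```python
import math

def expose_log_bug(x0=1.0, lr=0.3, steps=100):
    """Gradient-guided search, in the spirit of GRIST, for an input that
    makes log(x) numerically invalid. We minimize x itself (d(x)/dx = 1),
    driving x toward the invalid region x <= 0."""
    x = x0
    for _ in range(steps):
        x -= lr * 1.0  # one gradient-descent step on the operand
        try:
            y = math.log(x)
        except ValueError:  # math.log raises for x <= 0
            return x        # bug-triggering input found
        if not math.isfinite(y):
            return x
    return None  # no failure exposed within the step budget
```

In a real framework the same idea applies, but the gradient is obtained from back-propagation rather than written by hand, and the failure manifests as NaN/Inf tensors instead of a Python exception.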
Proceedings Article
UMLAUT: Debugging Deep Learning Programs using Program Structure and Model Behavior
TL;DR: UMLAUT identifies DL debugging heuristics and strategies used by experts, categorizes the types of errors novices run into when writing ML code, and maps them onto opportunities where tools could help.
Journal Article
Input Fields Recognition in Documents Using Deep Learning Techniques
TL;DR: A deep-learning-based approach is presented for detecting input fields (e.g., textboxes and checkboxes) in forms or documents containing text, images, and input fields, targeting the limited types of input fields that generally appear on printed forms.
References
Proceedings Article
Understanding the difficulty of training deep feedforward neural networks
Xavier Glorot, Yoshua Bengio
TL;DR: The objective is to better understand why standard gradient descent from random initialization performs so poorly with deep neural networks, in order to explain recent relative successes and help design better algorithms.
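The initialization scheme proposed in this paper, now commonly called Glorot/Xavier initialization, draws weights from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)), keeping activation and gradient variances roughly constant across layers. A minimal sketch (the function name and seeding are illustrative choices):

```python
import math
import random

def glorot_uniform(fan_in, fan_out, seed=0):
    """Glorot/Xavier uniform initialization: a fan_in x fan_out weight
    matrix with entries drawn from U(-limit, limit), where
    limit = sqrt(6 / (fan_in + fan_out))."""
    rng = random.Random(seed)
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    return [[rng.uniform(-limit, limit) for _ in range(fan_out)]
            for _ in range(fan_in)]
```

Deep-learning frameworks ship this scheme built in (it is TensorFlow/Keras's default dense-layer initializer), so the sketch is only meant to make the formula concrete.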
Journal Article
Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow
Kanit Wongsuphasawat, Daniel Smilkov, James Wexler, Jimbo Wilson, Dandelion Mane, Doug Fritz, Dilip Krishnan, Fernanda B. Viégas, Martin Wattenberg
TL;DR: Overall, users find the TensorFlow Graph Visualizer useful for understanding, debugging, and sharing the structures of their models.
Proceedings Article
An empirical study on TensorFlow program bugs
TL;DR: This work studies deep learning applications built on top of TensorFlow, collecting TensorFlow-related bugs from Stack Overflow Q&A pages and GitHub projects to examine the root causes and symptoms of coding defects in TensorFlow programs.
Proceedings Article
Identifying implementation bugs in machine learning based image classifiers using metamorphic testing
Anurag Dwarakanath, Manish Ahuja, Samarth Sikand, Raghotham M. Rao, R. P. Jagadeesh Chandra Bose, Neville Dubash, Sanjay Podder
TL;DR: The authors articulate the challenges in testing ML-based applications and present a solution approach based on metamorphic testing, which aims to identify implementation bugs in ML-based image classifiers.
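Metamorphic testing sidesteps the missing test oracle: instead of knowing the correct output, one checks that a transformation of the input with a known effect on the output holds in the implementation. As a self-contained illustration (the toy classifier and the scaling relation below are assumptions for this sketch, not the relations used in the paper), uniformly scaling all features must not change a nearest-centroid classifier's prediction:

```python
def nearest_centroid_predict(centroids, x):
    """Predict the class whose centroid is closest to x (squared L2)."""
    dists = {c: sum((xi - ci) ** 2 for xi, ci in zip(x, mu))
             for c, mu in centroids.items()}
    return min(dists, key=dists.get)

def metamorphic_scale_check(centroids, x, k=3.0):
    """Metamorphic relation: scaling every feature and centroid by k > 0
    must not change the prediction; a violation signals an
    implementation bug rather than a modeling error."""
    scaled = {c: [k * v for v in mu] for c, mu in centroids.items()}
    return (nearest_centroid_predict(centroids, x)
            == nearest_centroid_predict(scaled, [k * v for v in x]))
```

The value of the technique is that the relation is checked on the implementation under test, so a violation localizes a coding defect even when no labeled ground truth is available.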
Journal Article
On testing machine learning programs
Houssem Ben Braiek, Foutse Khomh, et al.
TL;DR: A comprehensive review of software testing practices for ML programs can be found in this paper, where the authors identify and explain challenges that should be addressed when testing ML programs and report existing solutions found in the literature.
Related Papers (5)
“Boxing Clever”: Practical Techniques for Gaining Insights into Training Data and Monitoring Distribution Shift
Rob Ashmore, Matthew Hill, et al.