Open Access · Posted Content

Practical Machine Learning Safety: A Survey and Primer

TL;DR
In this article, the authors review and organize practical ML techniques that can improve the safety and dependability of ML algorithms, and therefore of ML-based software, and discuss research gaps as well as promising solutions.
Abstract
The open-world deployment of Machine Learning (ML) algorithms in safety-critical applications such as autonomous vehicles must address a variety of ML limitations, such as interpretability, verifiability, and performance. Research explores different approaches to improving ML dependability, proposing new models and training techniques that reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks. In this paper, we review and organize practical ML techniques that can improve the safety and dependability of ML algorithms, and therefore of ML-based software. Our organization maps state-of-the-art ML techniques to safety strategies that enhance the dependability of the ML algorithm from different aspects, and we discuss research gaps as well as promising solutions.


Citations
Posted Content

Generalized Out-of-Distribution Detection: A Survey

TL;DR: In this paper, the authors present a generic framework called generalized OOD detection, which encompasses five related problems: anomaly detection (AD), novelty detection (ND), open set recognition (OSR), out-of-distribution (OOD) detection, and outlier detection (OD).
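Not part of the survey itself, but the simplest member of this problem family is easy to illustrate: the maximum softmax probability (MSP) baseline scores an input by the classifier's top confidence and flags low-confidence inputs as possibly out-of-distribution. A minimal numpy sketch (the logits here are made-up illustrative values):

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def msp_score(logits):
    # Maximum softmax probability: a common OOD-detection baseline.
    # A low top confidence suggests the input may be out-of-distribution.
    return float(softmax(logits).max())

in_dist_logits = np.array([6.0, 0.5, 0.2])   # confident prediction
ood_logits     = np.array([1.1, 1.0, 0.9])   # near-uniform: suspicious
print(msp_score(in_dist_logits) > msp_score(ood_logits))  # True
```

In practice a threshold on this score separates "accept" from "flag for review"; more elaborate detectors in the survey replace the score function rather than this overall recipe.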
Posted Content

AugMax: Adversarial Composition of Random Augmentations for Robust Training.

TL;DR: AugMax is proposed as a data augmentation framework that adversarially composes random augmentations; because this leads to a significantly augmented input distribution, a disentangled normalization module is introduced to handle the instance-wise feature heterogeneity arising from AugMax.
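The core idea of adversarial composition can be sketched without the paper's gradient-based machinery: sample random convex mixtures of augmented views and keep the hardest one under the current loss. This is a simplified search-based stand-in (the toy augmentations and mean-squared loss are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(x, target):
    # Toy stand-in for the training loss on an augmented example.
    return float(np.mean((x - target) ** 2))

def augmax_compose(x, augs, target, trials=16):
    """Sketch of adversarial composition: draw random mixing weights over
    the augmented views and keep the mixture with the highest loss."""
    best, best_loss = x, -np.inf
    for _ in range(trials):
        w = rng.dirichlet(np.ones(len(augs)))      # random convex weights
        mix = sum(wi * a(x) for wi, a in zip(w, augs))
        l = loss(mix, target)
        if l > best_loss:
            best, best_loss = mix, l
    return best

x = np.linspace(0.0, 1.0, 8)
augs = [lambda v: v + 0.1, lambda v: 0.9 * v, lambda v: np.flip(v)]
hard = augmax_compose(x, augs, target=x)
print(hard.shape)
```

The actual method optimizes the mixing weights by gradient ascent instead of random search, and pairs this with the disentangled normalization module mentioned above; the sketch only conveys the "compose augmentations adversarially" objective.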
References
Proceedings Article · DOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the approach won 1st place on the ILSVRC 2015 classification task.
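The residual idea itself fits in a few lines: instead of learning a mapping directly, each block learns a residual F(x) and adds an identity shortcut, y = relu(F(x) + x). A minimal numpy sketch (the weight shapes and values are illustrative, not from the paper):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    # Learn only the residual F(x) = W2 @ relu(W1 @ x); the identity
    # shortcut is added back before the final nonlinearity.
    return relu(W2 @ relu(W1 @ x) + x)

d = 4
x = np.array([1.0, -2.0, 3.0, 0.5])
zeros = np.zeros((d, d))
# With all-zero weights the block reduces to relu(x): the shortcut makes
# the identity mapping trivially representable, which is what makes
# very deep stacks easier to optimize than plain networks.
print(residual_block(x, zeros, zeros))
```

A deep network is then a stack of such blocks, so gradients can flow through the shortcuts even when the learned residuals are small.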
Proceedings Article · DOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal Article · DOI

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
Journal Article

Visualizing Data using t-SNE

TL;DR: A new technique called t-SNE is presented that visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map; it is a variation of Stochastic Neighbor Embedding that is much easier to optimize and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
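The crowding fix comes from one concrete change: the low-dimensional map uses a heavy-tailed Student-t kernel, 1/(1 + d²), instead of SNE's Gaussian, so moderately dissimilar points can sit farther apart without penalty. A small numpy comparison of the two kernels:

```python
import numpy as np

def gaussian_sim(d2):
    # SNE's low-dimensional similarity for squared distance d2.
    return np.exp(-d2)

def student_t_sim(d2):
    # t-SNE's kernel: a Student-t with one degree of freedom. Its heavy
    # tail assigns non-negligible similarity at larger distances, which
    # relieves the crowding in the center of the map.
    return 1.0 / (1.0 + d2)

d2 = 9.0  # squared distance between two points in the 2-D map
print(student_t_sim(d2) > gaussian_sim(d2))  # True: much heavier tail
```

At d² = 9 the Gaussian similarity has already collapsed to e⁻⁹ ≈ 0.0001 while the Student-t kernel is still 0.1, so the optimizer is free to spread dissimilar clusters apart.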