Open Access Proceedings Article
Deep Verifier Networks: Verification of Deep Discriminative Models with Deep Generative Models.
Tong Che, Xiaofeng Liu, Site Li, Yubin Ge, Ruixiang Zhang, Caiming Xiong, Yoshua Bengio +6 more
Vol. 35, Iss. 8, pp. 7002-7010
TL;DR: Deep verifier networks (DVN) use conditional variational auto-encoders with disentanglement constraints to separate the label information from the latent representation.
Abstract:
AI safety is a major concern in many deep learning applications such as autonomous driving. Given a trained deep learning model, an important natural question is how to reliably verify its predictions. In this paper, we propose a novel framework, deep verifier networks (DVN), to detect unreliable inputs or predictions of deep discriminative models using separately trained deep generative models. Our proposed model is based on conditional variational auto-encoders with disentanglement constraints that separate the label information from the latent representation. We give both intuitive and theoretical justifications for the model. Our verifier network is trained independently of the prediction model, which eliminates the need to retrain the verifier for a new model. We test the verifier network on out-of-distribution detection and adversarial example detection, as well as anomaly detection in structured prediction tasks such as image caption generation, and achieve state-of-the-art results on all of these problems.
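The core idea above, scoring an input under a label-conditional generative model and rejecting the classifier's prediction when the score is too low, can be sketched minimally. This is an illustrative stand-in, not the paper's architecture: a class-conditional Gaussian plays the role of the conditional VAE, and the threshold is an assumed hyperparameter.

```python
import numpy as np

# Hedged sketch: score x under a stand-in for p(x | y_hat) and accept the
# classifier's prediction y_hat only if the log-density is high enough.

rng = np.random.default_rng(0)

# "Training" data: two classes with well-separated means.
X0 = rng.normal(loc=-2.0, scale=1.0, size=(500, 2))
X1 = rng.normal(loc=+2.0, scale=1.0, size=(500, 2))

# Fit one isotropic Gaussian per class (placeholder for the conditional generator).
mu = {0: X0.mean(axis=0), 1: X1.mean(axis=0)}

def log_px_given_y(x, y, sigma=1.0):
    """Unnormalized log-density of x under the class-y Gaussian."""
    d = x - mu[y]
    return -0.5 * float(d @ d) / sigma**2

def verify(x, y_hat, threshold=-8.0):
    """Accept the prediction y_hat only if x is likely under p(x | y_hat)."""
    return log_px_given_y(x, y_hat) > threshold

in_dist = np.array([-2.1, 0.1])   # close to class 0
ood = np.array([10.0, 10.0])      # far from both classes

print(verify(in_dist, 0))  # True: prediction accepted
print(verify(ood, 0))      # False: input flagged as unreliable
```

The verifier is fit separately from any classifier, mirroring the paper's point that the verifier need not be retrained when the prediction model changes.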
Citations
Book Chapter DOI
Generative Self-training for Cross-domain Unsupervised Tagged-to-Cine MRI Synthesis.
Xiaofeng Liu, Fangxu Xing, Maureen Stone, Jiachen Zhuo, Timothy G. Reese, Jerry L. Prince, Georges El Fakhri, Jonghye Woo +7 more
TL;DR: Liu et al. propose a generative self-training (GST) UDA framework with continuous value prediction and a regression objective for cross-domain image synthesis, which filters pseudo-labels with an uncertainty mask and quantifies the predictive confidence of generated images with practical variational Bayes learning.
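The uncertainty-masked filtering mentioned above can be sketched in a few lines. This is a hedged illustration, not the paper's model: repeated noisy forward passes of a hypothetical stochastic regressor stand in for the variational Bayes confidence estimate, and `max_std` is an assumed threshold.

```python
import numpy as np

# Hedged sketch of uncertainty-based pseudo-label filtering: keep a
# target-domain sample for self-training only when the spread of repeated
# stochastic predictions is small.

rng = np.random.default_rng(1)

def mc_predictions(x, n_samples=20, noise=0.1):
    """Hypothetical stochastic regressor: repeated noisy forward passes."""
    return np.array([x.sum() + rng.normal(0.0, noise) for _ in range(n_samples)])

def filtered_pseudo_labels(batch, max_std=0.3):
    """Return pseudo-labels plus a boolean mask hiding uncertain samples."""
    labels, mask = [], []
    for x in batch:
        preds = mc_predictions(x)
        labels.append(preds.mean())        # pseudo-label = mean prediction
        mask.append(preds.std() < max_std) # keep only confident samples
    return np.array(labels), np.array(mask)

batch = [np.array([0.5, 0.5]), np.array([1.0, -1.0])]
labels, mask = filtered_pseudo_labels(batch)
print(labels, mask)
```

Only the samples where the mask is true would contribute to the self-training loss on the target domain.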
Posted Content
Adversarial Unsupervised Domain Adaptation with Conditional and Label Shift: Infer, Align and Iterate
Xiaofeng Liu, Zhenhua Guo, Site Li, Fangxu Xing, Jane You, C.-C. Jay Kuo, Georges El Fakhri, Jonghye Woo +7 more
TL;DR: In this article, the authors propose an adversarial unsupervised domain adaptation (UDA) approach with the inherent conditional and label shifts, in which they aim to align the distributions w.r.t.
Journal Article DOI
Unsupervised Black-Box Model Domain Adaptation for Brain Tumor Segmentation
Xiaofeng Liu, Chaehwa Yoo, Fangxu Xing, C.-C. Jay Kuo, Georges El Fakhri, Jeonho Kang, Jong-Hyung Woo +6 more
TL;DR: A practical framework for UDA is proposed that uses only a black-box segmentation model trained in the source domain, relying neither on source data nor on a white-box source model whose network parameters are accessible; its entropy minimization yields a performance gain over UDA without it.
Posted Content
Learning to Give Checkable Answers with Prover-Verifier Games.
TL;DR: Prover-Verifier Games (PVGs) are a game-theoretic framework that encourages learning agents to solve decision problems in a verifiable manner: a trusted verifier network tries to choose the correct answer, while a more powerful but untrusted prover network attempts to persuade the verifier of a particular answer, regardless of its correctness.
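The prover-verifier dynamic in this TL;DR can be illustrated with a toy non-learned version: an untrusted prover proposes an answer together with a certificate, and a simple trusted verifier accepts only answers it can check directly. The task here (finding a duplicate in a list) is an illustrative stand-in chosen for this sketch, not from the paper.

```python
# Hedged toy sketch of the prover-verifier idea: the verifier never trusts
# the prover's claim, only the certificate it can check for itself.

def prover(xs):
    """Untrusted: claims a duplicate exists and returns its two positions."""
    seen = {}
    for i, v in enumerate(xs):
        if v in seen:
            return (seen[v], i)  # certificate: two indices with equal values
        seen[v] = i
    return None

def verifier(xs, certificate):
    """Trusted: accepts only if the certificate actually checks out."""
    if certificate is None:
        return False
    i, j = certificate
    return (i != j and 0 <= i < len(xs) and 0 <= j < len(xs)
            and xs[i] == xs[j])

xs = [3, 1, 4, 1, 5]
cert = prover(xs)
print(verifier(xs, cert))    # True: the honest certificate is accepted
print(verifier(xs, (0, 2)))  # False: a bogus certificate is rejected
```

In the learned setting described by the paper, both roles are networks and the verifier's checking procedure is itself trained, but the asymmetry (weak trusted checker vs. strong untrusted proposer) is the same.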