Posted Content (Open Access)
Do Neural Optimal Transport Solvers Work? A Continuous Wasserstein-2 Benchmark
Alexander Korotin, Lingxiao Li, Aude Genevay, Justin Solomon, Alexander Nikolaevich Filippov, Evgeny Burnaev
TLDR
In this paper, a benchmark based on the Wasserstein-2 distance is constructed to evaluate the performance of neural network-based optimal transport (OT) solvers for quadratic-cost transport.
Abstract
Despite the recent popularity of neural network-based solvers for optimal transport (OT), there is no standard quantitative way to evaluate their performance. In this paper, we address this issue for quadratic-cost transport -- specifically, computation of the Wasserstein-2 distance, a commonly-used formulation of optimal transport in machine learning. To overcome the challenge of computing ground truth transport maps between continuous measures needed to assess these solvers, we use input-convex neural networks (ICNN) to construct pairs of measures whose ground truth OT maps can be obtained analytically. This strategy yields pairs of continuous benchmark measures in high-dimensional spaces such as spaces of images. We thoroughly evaluate existing optimal transport solvers using these benchmark measures. Even though these solvers perform well in downstream tasks, many do not faithfully recover optimal transport maps. To investigate the cause of this discrepancy, we further test the solvers in a setting of image generation. Our study reveals crucial limitations of existing solvers and shows that increased OT accuracy does not necessarily correlate to better results downstream.
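The construction described in the abstract leans on Brenier's theorem: for quadratic cost, the optimal map is the gradient of a convex potential, so choosing the potential yourself yields a pair of measures with a known ground-truth OT map. A minimal NumPy sketch of that idea, using a convex quadratic potential as a stand-in for a trained ICNN (all names and parameters below are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2

# Brenier's theorem: for quadratic cost, the OT map is the gradient of a
# convex potential psi.  The benchmark idea: pick psi yourself, then
# T = grad(psi) is the ground-truth OT map from P to Q := T#P.
# Here psi(x) = 0.5 x^T M x + b^T x with M symmetric positive definite,
# a toy stand-in for an ICNN; any convex psi works the same way.
A = rng.normal(size=(d, d))
M = A.T @ A + np.eye(d)        # symmetric positive definite => psi convex
b = np.array([1.0, -2.0])

def T(x):
    """Ground-truth OT map: the gradient of the convex potential psi."""
    return x @ M.T + b

# Source measure P = N(0, I); since T is affine, the target Q = T#P is
# known in closed form: Q = N(b, M M^T).
x = rng.normal(size=(100_000, d))
y = T(x)

print(y.mean(axis=0))   # ~ b
print(np.cov(y.T))      # ~ M @ M.T
```

Because the map is affine here, the pushforward of a Gaussian stays Gaussian and the ground truth is easy to verify; the paper's point is that the same recipe applies with general ICNN potentials, for which no closed form exists.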
Citations
Posted Content
On Transportation of Mini-batches: A Hierarchical Approach.
TL;DR: In this article, a batch of mini-batches optimal transport (BoMb-OT) is proposed to find optimal couplings between mini-batches; it can be seen as an approximation of a well-defined distance on the space of probability measures.
Posted Content
Generative Modeling with Optimal Transport Maps.
TL;DR: In this paper, a min-max optimization algorithm in the spirit of the Wasserstein GAN is proposed to efficiently compute OT maps for the quadratic cost (Wasserstein-2 distance).
Posted Content
Physics Informed Convex Artificial Neural Networks (PICANNs) for Optimal Transport based Density Estimation.
TL;DR: In this paper, a deep learning approach is proposed for the continuous OMT problem, which reduces to solving a non-linear PDE of Monge–Ampère type whose solution is a convex function.
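For reference, the Monge–Ampère equation alluded to above can be stated as follows (a standard form of the equation; the notation is ours, not drawn from the cited paper). If the source and target measures have densities $p$ and $q$, the convex Brenier potential $\psi$, whose gradient $\nabla\psi$ is the OT map, satisfies

```latex
\det\!\left( D^2 \psi(x) \right) = \frac{p(x)}{q\!\left( \nabla \psi(x) \right)}
```

so solving the continuous OMT problem amounts to solving this non-linear PDE for a convex $\psi$.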
References
Posted Content
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: In this article, Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions based on adaptive estimates of lower-order moments, is introduced.
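The update rule behind that description fits in a few lines (a self-contained illustration, not the reference implementation; the hyperparameter defaults follow the paper's suggested values):

```python
import numpy as np

# One Adam step keeps exponential moving averages of the gradient (m, the
# first moment) and its elementwise square (v, the second moment), corrects
# their initialization bias, then scales the step by m_hat / (sqrt(v_hat)+eps).
def adam_minimize(grad, x0, lr=0.05, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=2000):
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)      # bias correction
        v_hat = v / (1 - beta2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Toy problem: minimize f(x) = (x - 3)^2, whose gradient is 2 (x - 3).
x_min = adam_minimize(lambda x: 2 * (x - 3), x0=[0.0])
print(x_min)   # ~ [3.]
```

The per-coordinate scaling by the second-moment estimate is what makes the step size roughly invariant to the gradient's scale.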
Proceedings ArticleDOI
Deep Learning Face Attributes in the Wild
TL;DR: A novel deep learning framework for attribute prediction in the wild that cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently.
Book
Topics in Optimal Transportation
TL;DR: In this book, the metric side of optimal transportation is considered from a differential point of view, and the Kantorovich duality of the optimal transportation problem is investigated.
Proceedings Article
GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium
TL;DR: In this paper, a two time-scale update rule (TTUR) was proposed for training GANs with stochastic gradient descent on arbitrary GAN loss functions, using separate learning rates for the discriminator and the generator.
Proceedings Article
Improved Training of Wasserstein GANs
TL;DR: The authors proposed to penalize the norm of the gradient of the critic with respect to its input to improve the training stability of Wasserstein GANs and achieve stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
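The gradient penalty can be illustrated with a toy critic whose input gradient is known in closed form (in practice the gradient comes from automatic differentiation; the critic and all names here are illustrative stand-ins, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear critic D(x) = x @ w: its input gradient is w everywhere, so
# the penalty has a closed form we can check against.
w = np.array([3.0, 4.0])          # ||w|| = 5
def critic_grad(x):
    return np.broadcast_to(w, x.shape)

# Gradient penalty: sample points on straight lines between real and fake
# samples, and penalize the critic's gradient norm there for deviating from 1.
def gradient_penalty(real, fake):
    eps = rng.uniform(size=(len(real), 1))
    interp = eps * real + (1 - eps) * fake
    grads = critic_grad(interp)
    norms = np.linalg.norm(grads, axis=1)
    return np.mean((norms - 1.0) ** 2)

real = rng.normal(size=(64, 2))
fake = rng.normal(size=(64, 2))
print(gradient_penalty(real, fake))   # (5 - 1)^2 = 16.0
```

Since the toy critic's gradient norm is 5 everywhere, the penalty is exactly $(5-1)^2 = 16$, matching the intuition that the term pushes the critic toward being 1-Lipschitz along the sampled lines.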