Open Access Proceedings Article (DOI)

Semantic Image Inpainting with Deep Generative Models

TLDR
A novel method for semantic image inpainting that generates the missing content by conditioning on the available data; it successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming state-of-the-art methods.
Abstract
Semantic image inpainting is a challenging task in which large missing regions must be filled based on the available visual data. Existing methods that extract information from only a single image generally produce unsatisfactory results due to the lack of high-level context. In this paper, we propose a novel method for semantic image inpainting that generates the missing content by conditioning on the available data. Given a trained generative model, we search for the closest encoding of the corrupted image in the latent image manifold using our context and prior losses. This encoding is then passed through the generative model to infer the missing content. In our method, inference is possible regardless of how the missing content is structured, whereas the state-of-the-art learning-based method requires specific information about the holes during the training phase. Experiments on three datasets show that our method successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.
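A minimal sketch of the latent-space search described in the abstract, written in PyTorch. The pretrained generator `G`, discriminator `D`, corrupted image `y`, and binary mask `M` (1 = observed pixel, 0 = missing) are assumed placeholders, and the plain masked L1 context loss plus the log(1 − D(G(z))) prior term are simplified stand-ins rather than the paper's exact weighting.

```python
# Sketch only: search the generator's latent space for an encoding whose
# output matches the observed pixels while staying on the image manifold.
import torch

def semantic_inpaint(G, D, y, M, z_dim=100, steps=1000, lam=0.003, lr=0.1):
    z = torch.randn(1, z_dim, requires_grad=True)       # initial latent code
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = G(z)                                     # candidate completion
        context = ((x_hat - y).abs() * M).sum()          # match observed pixels
        prior = lam * torch.log(1.0 - D(x_hat) + 1e-8)   # discourage unrealistic outputs
        (context + prior.sum()).backward()
        opt.step()
    with torch.no_grad():
        x_hat = G(z)
    return M * y + (1.0 - M) * x_hat                     # keep known pixels, fill the hole
```

The final line simply pastes the generated content into the missing region; the paper reports an additional blending step to merge the completion with the observed pixels, which this sketch omits.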



Citations
Book Chapter (DOI)

Transforming and Projecting Images into Class-Conditional Generative Networks

TL;DR: It is demonstrated that one can solve for image translation, scale, and global color transformation during the projection optimization, addressing the object-center bias and color bias of a Generative Adversarial Network.
Proceedings Article (DOI)

Learned Map Prediction for Enhanced Mobile Robot Exploration

TL;DR: An autonomous ground robot capable of exploring unknown indoor environments and reconstructing their 2D maps is demonstrated; an advantage over end-to-end learned exploration methods is that the robot's behavior remains easily explicable in terms of the predicted map.
Proceedings Article

Unpaired point cloud completion on real scans using adversarial training

TL;DR: This work develops a first approach that works directly on input point clouds, does not require paired training data, and hence can directly be applied to real scans for scan completion.
Proceedings Article (DOI)

WarpGAN: Automatic Caricature Generation

TL;DR: WarpGAN learns to automatically predict a set of control points that warp a photo into a caricature while preserving identity, and allows customization of the generated caricatures by controlling the exaggeration extent and the visual styles.
Posted Content

Image Synthesis with a Single (Robust) Classifier

TL;DR: Adversarial robustness turns out to be precisely what is needed to directly manipulate salient features of the input, demonstrating the utility of robustness in the broader machine-learning context.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
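For reference, the adaptive update summarized here maintains exponential moving averages of the gradient and its square with bias correction (gradient g_t, step size α, decay rates β1 and β2, stabilizer ε):

```latex
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\,g_t, \qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2)\,g_t^2,\\
\hat{m}_t &= \frac{m_t}{1-\beta_1^t}, \qquad
\hat{v}_t = \frac{v_t}{1-\beta_2^t}, \qquad
\theta_t = \theta_{t-1} - \frac{\alpha\,\hat{m}_t}{\sqrt{\hat{v}_t}+\epsilon}.
\end{aligned}
```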
Journal Article (DOI)

Image quality assessment: from error visibility to structural similarity

TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, which can be applied to both subjective ratings and objective methods on a database of images compressed with JPEG and JPEG2000.
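The index compares local luminance, contrast, and structure; in its common single-scale form (local means μ, variances σ², covariance σ_xy, stabilizing constants C1 and C2) it reads:

```latex
\mathrm{SSIM}(x, y) =
\frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}
     {(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}.
```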
Journal Article (DOI)

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
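The two-player game described in this summary is formalized as the minimax objective

```latex
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
+ \mathbb{E}_{z \sim p_z}\!\left[\log\left(1 - D(G(z))\right)\right].
```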
Journal Article

Visualizing Data using t-SNE

TL;DR: A new technique called t-SNE that visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map; it is a variation of Stochastic Neighbor Embedding that is much easier to optimize and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
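Concretely, t-SNE places map points y_i by minimizing the Kullback–Leibler divergence between the high-dimensional pairwise similarities p_ij and Student-t based map similarities q_ij:

```latex
q_{ij} = \frac{\left(1 + \lVert y_i - y_j \rVert^2\right)^{-1}}
              {\sum_{k \neq l} \left(1 + \lVert y_k - y_l \rVert^2\right)^{-1}},
\qquad
C = \mathrm{KL}(P \,\Vert\, Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}}.
```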
Proceedings Article

Auto-Encoding Variational Bayes

TL;DR: A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
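The algorithm maximizes the evidence lower bound (ELBO) via the reparameterization trick; for a datapoint x, encoder q_φ(z|x), decoder p_θ(x|z), and prior p(z):

```latex
\log p_\theta(x) \;\ge\;
\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
- \mathrm{KL}\!\left(q_\phi(z \mid x)\,\Vert\, p(z)\right).
```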