Open Access · Proceedings Article (DOI)

Semantic Image Inpainting with Deep Generative Models

TLDR
A novel method for semantic image inpainting that generates the missing content by conditioning on the available data; it successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming state-of-the-art methods.
Abstract
Semantic image inpainting is a challenging task where large missing regions have to be filled based on the available visual data. Existing methods which extract information from only a single image generally produce unsatisfactory results due to the lack of high-level context. In this paper, we propose a novel method for semantic image inpainting, which generates the missing content by conditioning on the available data. Given a trained generative model, we search for the closest encoding of the corrupted image in the latent image manifold using our context and prior losses. This encoding is then passed through the generative model to infer the missing content. In our method, inference is possible irrespective of how the missing content is structured, while the state-of-the-art learning-based method requires specific information about the holes in the training phase. Experiments on three datasets show that our method successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.
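For concreteness, the latent-space search described in the abstract can be sketched as follows. This is a minimal, illustrative reconstruction, assuming a pre-trained DCGAN-style generator `G` and discriminator `D`, a corrupted image tensor, and a binary mask (1 for known pixels, 0 for missing ones); the function name, latent size, and hyperparameters are assumptions, not the authors' released code.

```python
import torch

def inpaint(G, D, corrupted, mask, steps=1000, lam=0.003, lr=0.01):
    """Search the latent space for an encoding whose generated image matches
    the known pixels (context loss) and looks realistic to the discriminator
    (prior loss), then paste the generated content into the missing region."""
    z = torch.randn(1, 100, requires_grad=True)   # latent code to optimize
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        gen = G(z)
        # Context loss: agree with the uncorrupted (known) pixels only.
        context = (mask * (gen - corrupted)).abs().sum()
        # Prior loss: keep the sample on the learned image manifold.
        prior = torch.log(1.0 - D(gen) + 1e-8).mean()
        (context + lam * prior).backward()
        opt.step()
    with torch.no_grad():
        # Fill the hole with the generated content, keep the known pixels.
        return mask * corrupted + (1.0 - mask) * G(z)
```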


Citations
Proceedings Article (DOI)

Scene Aware Person Image Generation through Global Contextual Conditioning

TL;DR: In this article, a Wasserstein Generative Adversarial Network (WGAN) is conditioned on the human skeletons already present in the scene to predict the potential location and skeletal structure of the new person; the predicted skeleton is then refined through a shallow linear network to achieve higher structural accuracy in the generated image.
Proceedings Article (DOI)

Overview of Image Inpainting Techniques: A Survey

TL;DR: This article gives an overview of the traditional and deep learning methods that have been used for the inpainting task; traditional methods can accurately fill missing regions when the hole size is small, but fail to inpaint large holes and cannot hallucinate novel content.
Journal Article (DOI)

Siamese CNN-based rank learning for quality assessment of inpainted images

TL;DR: Comparative experimental results demonstrate that the proposed deep rank learning-based method outperforms existing NR-IIQA metrics in evaluating both inpainted images and inpainting algorithms.
Journal Article (DOI)

Towards Source-Based Classification of Image Inpainting Techniques: A Survey

TL;DR: As described in this paper, image inpainting is the process of reconstructing an incomplete image from the available information in a visually plausible way. In the proposed framework, existing image inpainting methods are classified ...
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
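As a reference point, the update rule summarized above fits in a few lines; this NumPy sketch uses the paper's default hyperparameters but is only illustrative, and the function name is made up here.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its square
    # (the "adaptive estimates of lower-order moments").
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)      # bias correction, first moment (t = step count, from 1)
    v_hat = v / (1 - b2 ** t)      # bias correction, second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v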
Journal Article (DOI)

Image quality assessment: from error visibility to structural similarity

TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information; the index is validated against subjective ratings and other objective methods on a database of images compressed with JPEG and JPEG2000.
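For reference, the index compares two image windows x and y through their local means, variances, and covariance; in its common single-scale form it reads:

```latex
\mathrm{SSIM}(x, y) =
  \frac{(2\mu_x \mu_y + C_1)\,(2\sigma_{xy} + C_2)}
       {(\mu_x^2 + \mu_y^2 + C_1)\,(\sigma_x^2 + \sigma_y^2 + C_2)}
```

where C1 and C2 are small constants that stabilize the division when the denominators are close to zero.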
Journal Article (DOI)

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
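The adversarial process corresponds to a two-player minimax game between G and D over the value function:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x)\right] +
  \mathbb{E}_{z \sim p_z(z)}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```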
Journal Article

Visualizing Data using t-SNE

TL;DR: A new technique called t-SNE visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map; it is a variation of Stochastic Neighbor Embedding that is much easier to optimize and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
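In practice the technique is available off the shelf; a minimal usage sketch with scikit-learn (an implementation choice for illustration, not part of the original paper) looks like this:

```python
import numpy as np
from sklearn.manifold import TSNE

X = np.random.rand(500, 64)                   # 500 points in a 64-D feature space
emb = TSNE(n_components=2, perplexity=30.0,   # perplexity trades off local vs. global structure
           init="pca", random_state=0).fit_transform(X)
print(emb.shape)                              # (500, 2): one 2-D map location per datapoint
```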
Proceedings Article

Auto-Encoding Variational Bayes

TL;DR: A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
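The learning algorithm maximizes a variational lower bound (ELBO) on the data log-likelihood, made differentiable through the reparameterization trick:

```latex
\log p_\theta(x) \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
  - D_{\mathrm{KL}}\!\bigl(q_\phi(z \mid x)\,\|\,p_\theta(z)\bigr),
\qquad
z = \mu_\phi(x) + \sigma_\phi(x) \odot \epsilon,\;\; \epsilon \sim \mathcal{N}(0, I)
```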