Open Access · Proceedings Article · DOI

Semantic Image Inpainting with Deep Generative Models

TLDR
A novel method for semantic image inpainting that generates the missing content by conditioning on the available data; it successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.
Abstract
Semantic image inpainting is a challenging task in which large missing regions must be filled based on the available visual data. Existing methods that extract information from only a single image generally produce unsatisfactory results due to the lack of high-level context. In this paper, we propose a novel method for semantic image inpainting that generates the missing content by conditioning on the available data. Given a trained generative model, we search for the closest encoding of the corrupted image in the latent image manifold using our context and prior losses. This encoding is then passed through the generative model to infer the missing content. In our method, inference is possible irrespective of how the missing content is structured, whereas the state-of-the-art learning-based method requires specific information about the holes during the training phase. Experiments on three datasets show that our method successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.
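The latent-space search the abstract describes can be sketched numerically. This is a toy illustration, not the paper's implementation: the linear "generator" `G`, the weight `lam`, and the plain gradient-descent loop are simplifying assumptions standing in for a trained GAN generator, the paper's prior loss, and backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 4))  # toy linear "generator": G(z) = W @ z

def G(z):
    return W @ z

def loss(z, y, mask, lam=0.1):
    # context loss: reconstruction error restricted to the known pixels,
    # plus a prior term that keeps z close to the latent distribution
    context = np.sum(mask * (G(z) - y) ** 2)
    prior = lam * np.sum(z ** 2)
    return context + prior

def grad(z, y, mask, lam=0.1):
    # analytic gradient of the toy objective above
    return 2 * W.T @ (mask * (G(z) - y)) + 2 * lam * z

# corrupted observation: an image with half its pixels missing
z_true = rng.standard_normal(4)
y = G(z_true)
mask = np.zeros(16)
mask[:8] = 1.0  # 1 = observed pixel, 0 = hole

# search the latent space for the closest encoding of the corrupted image
z = np.zeros(4)
for _ in range(1000):
    z -= 0.005 * grad(z, y, mask)

# blend: keep the observed pixels, fill the hole from the generator output
inpainted = G(z)
result = mask * y + (1 - mask) * inpainted
```

The final blend step mirrors the paper's use of the generator output only inside the missing region, so known pixels are preserved exactly.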



Citations
Proceedings ArticleDOI

SRVAE: super resolution using variational autoencoders

TL;DR: A first-of-its-kind single-image super-resolution (SISR) method that takes advantage of a self-evaluating variational autoencoder (IntroVAE) and judges the quality of generated high-resolution (HR) images against the target images in an adversarial manner, which allows for high-perceptual-quality image generation.
Posted Content

MISO: Mutual Information Loss with Stochastic Style Representations for Multimodal Image-to-Image Translation.

TL;DR: This work designs MILO (Mutual Information LOss), a new stochastically defined loss function based on information theory that reflects the interpretation of latent variables as random variables in multimodal translation models.
Journal ArticleDOI

Photo-realistic dehazing via contextual generative adversarial networks

TL;DR: Zhang et al. propose a new model based on generative adversarial networks (GANs) for single-image dehazing, which restores the corresponding haze-free image directly from a hazy image via a GAN.
Posted Content

Neural Architecture Search for Deep Image Prior

TL;DR: In this paper, a neural architecture search (NAS) technique is proposed to enhance the performance of unsupervised image denoising, inpainting, and super-resolution under the recently proposed Deep Image Prior (DIP).
Posted Content

Training Deep Learning based Denoisers without Ground Truth Data

TL;DR: This work demonstrates that the proposed Stein's Unbiased Risk Estimator (SURE)-based method, trained only on noisy input data, yields CNN-based denoising networks whose performance is close to that of the original MSE-based deep-learning denoisers trained with ground-truth data.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
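The adaptive moment estimates the blurb refers to can be written out directly. A minimal NumPy sketch of the update rule; the hyperparameter defaults match the paper's recommendations, while the quadratic test function and the name `adam_step` are illustrative choices of ours.

```python
import numpy as np

def adam_step(theta, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # exponential moving averages of the gradient (first moment)
    # and squared gradient (second moment)
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g ** 2
    # bias correction compensates for the zero initialization of m and v
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# minimize f(x) = x^2, whose gradient is 2x
x = np.array([5.0])
m = np.zeros(1)
v = np.zeros(1)
for t in range(1, 2001):  # t starts at 1 so the bias correction is defined
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.1)
```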
Journal ArticleDOI

Image quality assessment: from error visibility to structural similarity

TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, which can be applied to both subjective ratings and objective methods on a database of images compressed with JPEG and JPEG2000.
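The structural similarity index can be sketched in a few lines. Note the hedge: the paper computes SSIM over local sliding windows and averages the resulting map, whereas this sketch uses a single global window; the helper name `ssim_global` is ours.

```python
import numpy as np

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    # single-window SSIM over two images with dynamic range L
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    # luminance/contrast/structure terms combined in the standard form
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1; any degradation of mean, variance, or covariance pulls the score below 1.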
Journal ArticleDOI

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
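The two-player game described in this blurb is the standard minimax objective from the paper, where D is trained to distinguish data from samples and G is trained to fool D:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]
```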
Journal Article

Visualizing Data using t-SNE

TL;DR: A new technique called t-SNE that visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map, a variation of Stochastic Neighbor Embedding that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
Proceedings Article

Auto-Encoding Variational Bayes

TL;DR: A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
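The key device behind the scalable inference mentioned here is the reparameterization trick, together with the closed-form KL term for a diagonal Gaussian encoder. A minimal sketch; the function names are ours, and a real VAE would combine the KL term with a reconstruction likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I): sampling is moved outside
    # the parameterized path so gradients can flow through mu and log_var
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior
    return -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
```

When the approximate posterior already equals the standard normal prior (mu = 0, log_var = 0), the KL term vanishes, which is a quick sanity check on the formula.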