scispace - formally typeset
Author

Xintian Wu

Bio: Xintian Wu is an academic researcher. The author has contributed to research in topics: Normalization (statistics) & Overfitting. The author has an h-index of 1 and has co-authored 2 publications receiving 2 citations.

Papers
Posted Content
19 Aug 2020
TL;DR: This paper presents a comprehensive survey of regularization and normalization techniques in GAN training, systematically describing the different perspectives of GAN training in order to identify the different purposes these techniques serve.
Abstract: Generative Adversarial Networks (GANs) have been widely applied in different scenarios thanks to the development of deep neural networks. The original GAN was proposed based on the non-parametric assumption of the infinite capacity of networks. It is still unknown whether GANs can generate realistic samples without any prior information. Owing to this overconfident assumption, many issues need to be addressed in GAN training, such as non-convergence, mode collapse, vanishing gradients, overfitting, discriminator forgetting, and sensitivity to hyperparameters. As acknowledged, regularization and normalization are common methods of introducing prior information that can be used to stabilize training and improve discrimination. Many regularization and normalization methods have been proposed for GANs; however, as far as we know, no existing survey has focused specifically on the systematic purposes and development of these solutions. In this work, we perform a comprehensive survey of regularization and normalization techniques from different perspectives of GAN training. First, we systematically describe the different perspectives of GAN training and thus obtain the different purposes of regularization and normalization. In accordance with these purposes, we propose a new taxonomy and summarize a large number of existing studies. Furthermore, we fairly compare the performance of the mainstream methods on different datasets and investigate the regularization and normalization techniques that are frequently employed in SOTA GANs. Finally, we highlight possible future studies in this area.

2 citations

Posted Content
TL;DR: This article presents a comprehensive survey of regularization and normalization techniques in GAN training, systematically describing the different perspectives of GAN training to derive the different objectives of regularization and normalization.
Abstract: Generative Adversarial Networks (GANs) have been widely applied in different scenarios thanks to the development of deep neural networks. The original GAN was proposed based on the non-parametric assumption of the infinite capacity of networks. However, it is still unknown whether GANs can generate realistic samples without any prior information. Due to this overconfident assumption, many issues remain unaddressed in GAN training, such as non-convergence, mode collapse, and vanishing gradients. Regularization and normalization are common methods of introducing prior information to stabilize training and improve discrimination. Although a number of regularization and normalization methods have been proposed for GANs, to the best of our knowledge there exists no comprehensive survey that primarily focuses on the objectives and development of these methods, apart from a few incomplete and limited-scope studies. In this work, we conduct a comprehensive survey of regularization and normalization techniques from different perspectives of GAN training. First, we systematically describe different perspectives of GAN training and thus obtain the different objectives of regularization and normalization. Based on these objectives, we propose a new taxonomy. Furthermore, we compare the performance of the mainstream methods on different datasets and investigate the regularization and normalization techniques that are frequently employed in SOTA GANs. Finally, we highlight potential future directions of research in this domain.

Cited by
Posted Content
Ziqiang Li, Pengfei Xia, Xue Rui, Yanghui Hu, Bin Li 
TL;DR: In this article, two preprocessing methods, High-Frequency Confusion (HFC) and High-Frequency Filter (HFF), are proposed to eliminate high-frequency differences in GAN training.
Abstract: Advancements in Generative Adversarial Networks (GANs) make it possible to generate realistic images that are visually indistinguishable from real ones. However, recent studies of the image spectrum have demonstrated that generated and real images differ significantly at high frequencies. Furthermore, high-frequency components invisible to the human eye affect the decisions of CNNs and are related to their robustness. Similarly, whether the discriminator is sensitive to these high-frequency differences, thereby reducing the generator's ability to fit the low-frequency components, is an open problem. In this paper, we demonstrate that the discriminator in GANs is sensitive to high-frequency differences that humans cannot distinguish, and that the high-frequency components of images are not conducive to GAN training. Based on these findings, we propose two preprocessing methods that eliminate high-frequency differences in GAN training: High-Frequency Confusion (HFC) and High-Frequency Filter (HFF). The proposed methods are general and can be easily applied to most existing GAN frameworks at a fraction of the cost. The improved performance of the proposed methods is verified across multiple loss functions, network architectures, and datasets.
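The abstract does not spell out how the high-frequency components are removed; a plausible NumPy sketch of the HFF idea is a circular low-pass mask in the 2-D Fourier domain (the `radius` cut-off and function name here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def high_frequency_filter(img, radius):
    # Shift the spectrum so the DC component sits at the centre,
    # keep only entries within `radius` of the centre (low
    # frequencies), and transform back to the image domain.
    spec = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))

smooth = np.ones((8, 8))                                  # pure DC content
checker = (np.indices((8, 8)).sum(0) % 2) * 2.0 - 1.0     # pure Nyquist content
low_smooth = high_frequency_filter(smooth, 2)             # passes through unchanged
low_checker = high_frequency_filter(checker, 2)           # filtered to ~zero
```

Feeding such low-passed images to both the discriminator and the generator's output would remove the high-frequency mismatch the paper identifies, at the cost of two FFTs per image.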

2 citations

Posted Content
TL;DR: In this article, a new approach inspired by work on adversarial attacks is proposed to stabilize the training process of GANs. The authors find that the images produced by the generator sometimes act like adversarial examples for the discriminator, which may partly explain the unstable training.
Abstract: Generative Adversarial Networks (GANs) are the most popular models for image generation, jointly and gradually optimizing a discriminator and a generator. However, instability in the training process is still one of the open problems for all GAN-based algorithms. In order to stabilize training, some regularization and normalization techniques have been proposed to make the discriminator satisfy the Lipschitz continuity constraint. In this paper, a new approach inspired by work on adversarial attacks is proposed to stabilize the training process of GANs. We find that during training the images produced by the generator sometimes act like adversarial examples for the discriminator, which may partly explain the unstable training. Motivated by this observation, we propose introducing an adversarial training method into the training process of GANs to improve their stability. We prove that this adversarial training method (DAT) can adaptively limit the Lipschitz constant of the discriminator. The improved performance of the proposed method is verified on multiple baseline and SOTA networks, such as DCGAN, WGAN, Spectral Normalization GAN, Self-supervised GAN, and Information Maximum GAN.
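The abstract does not specify the perturbation used; the adversarial-example mechanism it builds on can be sketched with the classic fast gradient sign method (FGSM) on a toy logistic "discriminator". All names and the linear model below are illustrative assumptions, not the paper's DAT implementation:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    # One FGSM step against D(x) = sigmoid(w.x + b): move x a small
    # step along the sign of the input gradient of the binary
    # cross-entropy loss for label y, which increases D's loss.
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # discriminator output in (0, 1)
    grad_x = (p - y) * w                    # dBCE/dx for a linear logit
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(1)
w, b = rng.normal(size=4), 0.1
x, y = rng.normal(size=4), 1.0   # a "real" sample, label 1

def bce(x_):
    p = 1.0 / (1.0 + np.exp(-(x_ @ w + b)))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

x_adv = fgsm_perturb(x, w, b, y, 0.1)  # adversarial loss exceeds clean loss
```

Training the discriminator on such perturbed inputs (rather than only on clean ones) is the general adversarial-training recipe; the paper's contribution is showing that doing so inside GAN training adaptively bounds the discriminator's Lipschitz constant.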

1 citation