Open Access · Posted Content

TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up

TLDR
As discussed by the authors, TransGAN pairs a memory-friendly transformer-based generator that progressively increases feature resolution with a multi-scale discriminator that simultaneously captures semantic contexts and low-level textures.
Abstract
The recent explosive interest in transformers has suggested their potential to become powerful "universal" models for computer vision tasks, such as classification, detection, and segmentation. While those attempts mainly study discriminative models, we explore transformers on some more notoriously difficult vision tasks, e.g., generative adversarial networks (GANs). Our goal is to conduct the first pilot study in building a GAN completely free of convolutions, using only pure transformer-based architectures. Our vanilla GAN architecture, dubbed TransGAN, consists of a memory-friendly transformer-based generator that progressively increases feature resolution, and correspondingly a multi-scale discriminator that simultaneously captures semantic contexts and low-level textures. On top of them, we introduce the new module of grid self-attention to further alleviate the memory bottleneck, in order to scale TransGAN up to high-resolution generation. We also develop a unique training recipe, including a series of techniques that mitigate the training instability of TransGAN, such as data augmentation, modified normalization, and relative position encoding. Our best architecture achieves highly competitive performance compared to current state-of-the-art GANs with convolutional backbones. Specifically, TransGAN sets a new state-of-the-art Inception Score of 10.43 and FID of 18.28 on STL-10, outperforming StyleGAN-V2. On higher-resolution (e.g., 256 x 256) generation tasks, such as CelebA-HQ and LSUN-Church, TransGAN continues to produce diverse visual examples with high fidelity and impressive texture details. In addition, we dive deep into transformer-based generation models to understand how their behavior differs from that of convolutional ones, by visualizing training dynamics. The code is available at this https URL.
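To make the generator's progressive-resolution design concrete, here is a minimal PyTorch sketch of one generator stage: a few transformer blocks over the token grid, followed by pixel-shuffle upsampling. This is an illustration under our own assumptions; the module name, depth, and use of nn.TransformerEncoder are not taken from the released TransGAN code.

```python
import torch.nn as nn

class UpsamplingStage(nn.Module):
    """Sketch of one progressive generator stage: transformer blocks over the
    token grid, then pixel-shuffle upsampling that doubles spatial resolution
    while quartering the channel dimension (dim must be divisible by 4)."""

    def __init__(self, dim, depth=2, num_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, dim_feedforward=4 * dim,
            batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.upsample = nn.PixelShuffle(2)  # (B, C, H, W) -> (B, C/4, 2H, 2W)

    def forward(self, x, height, width):
        # x: (batch, height*width, dim) sequence of image tokens
        x = self.blocks(x)
        b, n, d = x.shape
        x = x.transpose(1, 2).reshape(b, d, height, width)
        x = self.upsample(x)                 # doubles H and W
        return x.flatten(2).transpose(1, 2)  # (batch, 4*height*width, dim/4)
```

Stacking such stages lets early stages run attention over few, high-dimensional tokens and later stages over many, cheaper tokens, which is what keeps the generator memory-friendly.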

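The grid self-attention mentioned in the abstract can likewise be sketched: the token grid is split into non-overlapping local grids, and full self-attention is computed only within each grid, bounding the quadratic memory cost. Again a rough sketch, with the class name, grid size, and use of nn.MultiheadAttention as our assumptions rather than the authors' implementation.

```python
import torch.nn as nn

class GridSelfAttention(nn.Module):
    """Sketch of grid self-attention: full attention runs only inside
    non-overlapping G x G windows, so cost per window is (G*G)**2 rather
    than (H*W)**2 over the whole image. Assumes H and W divisible by G."""

    def __init__(self, dim, num_heads=4, grid_size=8):
        super().__init__()
        self.grid_size = grid_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x, height, width):
        # x: (batch, height*width, dim) tokens laid out on a 2-D grid
        b, n, d = x.shape
        g = self.grid_size
        x = x.view(b, height // g, g, width // g, g, d)
        # gather each G x G window into its own attention batch
        windows = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, g * g, d)
        out, _ = self.attn(windows, windows, windows)
        # scatter windows back to the original (batch, height*width, dim) layout
        out = out.view(b, height // g, width // g, g, g, d)
        return out.permute(0, 1, 3, 2, 4, 5).reshape(b, n, d)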

Citations
Journal ArticleDOI

On the generation of realistic synthetic petrographic datasets using a style-based GAN

TL;DR: PetGAN as discussed by the authors adopts the architecture of StyleGAN2 with adaptive discriminator augmentation to allow robust replication of the statistical and aesthetic characteristics of petrographic data and to improve its internal variance.
Posted Content

AAformer: Auto-Aligned Transformer for Person Re-Identification.

TL;DR: Li et al. as discussed by the authors introduced, for the first time, an alignment scheme into the Transformer architecture, proposing the Auto-Aligned Transformer (AAformer) to automatically locate both human parts and non-human ones at the patch level.
Journal ArticleDOI

Fast simulation of the electromagnetic calorimeter response using Self-Attention Generative Adversarial Networks

TL;DR: The Self-Attention Generative Adversarial Network is proposed as a possible improvement of the network architecture and is demonstrated on the task of generating responses of the LHCb electromagnetic calorimeter.
Posted Content

MViT: Mask Vision Transformer for Facial Expression Recognition in the wild.

TL;DR: Zhang et al. as discussed by the authors proposed the pure transformer-based Mask Vision Transformer (MViT) for facial expression recognition in the wild, which consists of two modules: a transformer-based mask generation network (MGN) that generates a mask to filter out complex backgrounds and occlusions in face images, and a dynamic relabeling module that rectifies incorrect labels in in-the-wild FER datasets.
Posted Content

NViT: Vision Transformer Compression and Parameter Redistribution.

TL;DR: In this article, the authors apply global, structural pruning with latency-aware regularization on all parameters of the Vision Transformer (ViT) model for latency reduction, and find interesting regularities in the final weight structure.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: As discussed by the authors, state-of-the-art ImageNet classification performance was achieved by a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
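As an illustration of the architecture that TL;DR describes, here is a rough PyTorch sketch; the kernel and channel sizes follow the well-known AlexNet configuration (for a 227 x 227 input) and are an assumption on our part, not content from this page.

```python
import torch.nn as nn

# Sketch matching the TL;DR: five convolutional layers, some followed by
# max-pooling, then three fully-connected layers ending in 1000-way logits.
alexnet = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),  # logits for the final 1000-way softmax
)
```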
Proceedings Article

Attention is All you Need

TL;DR: This paper proposed a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely and achieved state-of-the-art performance on English-to-French translation.
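The core operation of that architecture is scaled dot-product attention, Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V. A minimal sketch of this formula (tensor shapes are illustrative):

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V,
    the core operation of the Transformer. q, k, v: (batch, seq, d_k)."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    weights = torch.softmax(scores, dim=-1)
    return weights @ v
```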
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
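For reference, the batch-normalization transform the paper introduces normalizes each feature over the mini-batch and then re-scales it with learned parameters. A minimal training-time sketch (running statistics for inference omitted):

```python
import torch

def batch_norm(x, gamma, beta, eps=1e-5):
    """BN(x) = gamma * (x - mean) / sqrt(var + eps) + beta, with mean and
    variance computed per feature over the mini-batch dimension.
    x: (batch, features); gamma, beta: (features,) learned parameters."""
    mean = x.mean(dim=0)
    var = x.var(dim=0, unbiased=False)
    return gamma * (x - mean) / torch.sqrt(var + eps) + beta
```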
Proceedings ArticleDOI

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

TL;DR: BERT as mentioned in this paper pre-trains deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
Dissertation

Learning Multiple Layers of Features from Tiny Images

TL;DR: In this paper, the authors describe how to train a multi-layer generative model of natural images using a dataset of millions of tiny colour images.