
Zhiqiang Tang

Researcher at Rutgers University

Publications -  29
Citations -  837

Zhiqiang Tang is an academic researcher from Rutgers University. The author has contributed to research on topics including computer science and overfitting. The author has an h-index of 11 and has co-authored 26 publications receiving 591 citations. Previous affiliations of Zhiqiang Tang include Hebei University of Technology and Amazon.com.

Papers
Proceedings ArticleDOI

Jointly Optimize Data Augmentation and Network Training: Adversarial Data Augmentation in Human Pose Estimation

TL;DR: In this article, adversarial data augmentation is proposed to address overfitting when training deep models: the generator explores weaknesses of the discriminator by producing hard augmentations, while the discriminator learns from those hard augmentations to achieve better performance.
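A minimal sketch of this adversarial scheme, assuming a toy linear target network (the "discriminator") and a hypothetical pool of scaling augmentations; the paper itself jointly trains deep networks and augments human-pose images, so every name and function here is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 4))   # toy batch of training features
w = rng.normal(size=4)         # weights of the target "discriminator"

def loss(batch, w):
    # toy quadratic training loss of the target network
    return float(np.mean((batch @ w) ** 2))

# hypothetical augmentation pool the generator can draw from
candidates = {s: (lambda b, s=s: b * s) for s in (0.5, 1.0, 2.0)}

def generator_step(batch, w):
    # the generator explores weaknesses of the target network:
    # pick the augmentation that maximizes the current loss
    return max(candidates, key=lambda s: loss(candidates[s](batch), w))

def discriminator_step(batch, w, lr=0.01):
    # the target network learns from the hard augmentation via one
    # gradient-descent step on the quadratic loss above
    grad = 2.0 / len(batch) * batch.T @ (batch @ w)
    return w - lr * grad

hard = generator_step(x, w)          # hardest augmentation in the pool
augmented = candidates[hard](x)
w_new = discriminator_step(augmented, w)
```

Each joint iteration alternates these two steps, so augmentation difficulty adapts to the current state of the network rather than following a fixed schedule.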
Posted Content

Jointly Optimize Data Augmentation and Network Training: Adversarial Data Augmentation in Human Pose Estimation

TL;DR: The key idea is to design a generator that competes against a discriminator: the generator explores weaknesses of the discriminator, while the discriminator learns from hard augmentations to achieve better performance.
Book ChapterDOI

Quantized Densely Connected U-Nets for Efficient Landmark Localization

TL;DR: This paper proposes quantized densely connected U-Nets for efficient visual landmark localization. It uses order-K dense connectivity to trim off long-distance shortcuts, a memory-efficient implementation to significantly boost training efficiency, and an iterative refinement that may cut the model size in half.
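The order-K idea can be sketched in a few lines: instead of concatenating the outputs of all earlier layers in a dense block, each layer keeps only its K most recent predecessors. This is a schematic counting sketch with hypothetical names, not the paper's implementation:

```python
def dense_predecessors(i, k=None):
    # indices of earlier layers feeding layer i: full dense
    # connectivity (k=None) keeps every shortcut, while order-k
    # connectivity trims any shortcut spanning more than k layers
    if k is None:
        return list(range(i))
    return list(range(max(0, i - k), i))

def total_connections(num_layers, k=None):
    # number of skip connections (and concatenated feature maps)
    # inside one dense block
    return sum(len(dense_predecessors(i, k)) for i in range(num_layers))
```

For an 8-layer block, full dense connectivity needs 28 connections while order-2 connectivity needs only 13, which is the source of the memory and efficiency savings.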
Proceedings Article

Semantic-guided multi-attention localization for zero-shot learning

TL;DR: A semantic-guided multi-attention localization model is proposed that automatically discovers the most discriminative parts of objects for zero-shot learning without any human annotations.
Journal ArticleDOI

Towards Efficient U-Nets: A Coupled and Quantized Approach

TL;DR: The results show that the proposed coupled stacked U-Nets for efficient visual landmark localization achieve state-of-the-art localization accuracy with fewer parameters, faster inference, and a smaller model size.
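The quantization side of this work can be illustrated with generic uniform symmetric weight quantization; this is a sketch of the general technique, and the paper's exact quantization scheme may differ:

```python
import numpy as np

def quantize(w, bits):
    # uniform symmetric quantization: snap each weight to the nearest
    # of 2**bits - 1 evenly spaced levels in [-max|w|, max|w|]
    levels = 2 ** (bits - 1) - 1          # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale

w = np.linspace(-1.0, 1.0, 101)
w8 = quantize(w, 8)   # near-lossless at 8 bits
w2 = quantize(w, 2)   # collapses to ternary weights {-1, 0, 1}
```

Lower bit widths shrink the model size roughly in proportion to the bit count, at the cost of larger rounding error, which is the accuracy/efficiency trade-off the paper studies.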