
Zhixin Wang

Researcher at South China University of Technology

Publications: 6
Citations: 346

Zhixin Wang is an academic researcher from South China University of Technology. The author has contributed to research in topics including computer science and categorization, has an h-index of 2, and has co-authored 3 publications receiving 207 citations.

Papers
Posted Content

Frustum ConvNet: Sliding Frustums to Aggregate Local Point-Wise Features for Amodal 3D Object Detection

TL;DR: Frustum ConvNet (F-ConvNet), as described in this paper, aggregates point-wise features into frustum-level feature vectors and arrays these vectors as a feature map for its subsequent fully convolutional network (FCN), which spatially fuses frustum-level features and supports end-to-end, continuous estimation of oriented boxes in 3D space.
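A minimal sketch of the aggregation idea described above (not the authors' implementation): points inside each sliding frustum section are max-pooled into one feature vector, the per-frustum vectors are stacked into a 1-D feature map, and a small fully convolutional head processes it end to end. All names, shapes, and the pooling choice are illustrative assumptions.

```python
import torch
import torch.nn as nn

def aggregate_frustum_features(point_feats, point_depths, num_frustums, max_depth):
    """Max-pool point-wise features into frustum-level vectors.

    point_feats:  (N, C) per-point features inside one 2-D detection's frustum
    point_depths: (N,)   depth of each point along the frustum axis
    Returns a (C, num_frustums) feature map (channels x frustum slices).
    """
    C = point_feats.shape[1]
    feature_map = torch.zeros(C, num_frustums)
    edges = torch.linspace(0.0, max_depth, num_frustums + 1)
    for i in range(num_frustums):
        mask = (point_depths >= edges[i]) & (point_depths < edges[i + 1])
        if mask.any():
            feature_map[:, i] = point_feats[mask].max(dim=0).values  # max-pool per slice
    return feature_map

# Tiny fully convolutional head over the frustum-level feature map.
fcn_head = nn.Sequential(
    nn.Conv1d(64, 128, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv1d(128, 7, kernel_size=1),  # e.g. per-slice oriented-box parameters
)

feats = torch.randn(500, 64)          # 500 points, 64-dim features
depths = torch.rand(500) * 70.0       # depths along the frustum axis
fmap = aggregate_frustum_features(feats, depths, num_frustums=32, max_depth=70.0)
boxes = fcn_head(fmap.unsqueeze(0))   # (1, 7, 32)
```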
Journal ArticleDOI

Part-Aware Fine-Grained Object Categorization Using Weakly Supervised Part Detection Network

TL;DR: In this article, a weakly supervised part detection network (PartNet) is proposed to detect discriminative local parts for use in fine-grained categorization; it achieves state-of-the-art performance on the CUB-200-2011 and Oxford-IIIT Pet datasets.
Journal ArticleDOI

Part-Aware Fine-grained Object Categorization using Weakly Supervised Part Detection Network

TL;DR: In this article, a weakly supervised part detection network (PartNet) is proposed to detect discriminative local parts for use in fine-grained categorization; it achieves state-of-the-art performance on the CUB-200-2011 dataset when ground-truth part annotations are not available.
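A generic sketch of the weakly supervised part-based categorization idea summarized above (not the paper's PartNet architecture): part proposals are scored for discriminativeness using only image-level labels, the best parts are kept, and their pooled features drive the fine-grained classifier. All module names and shapes are assumptions.

```python
import torch
import torch.nn as nn

class PartBasedClassifier(nn.Module):
    def __init__(self, feat_dim=256, num_classes=200, top_k=3):
        super().__init__()
        self.top_k = top_k
        self.part_scorer = nn.Linear(feat_dim, 1)           # discriminativeness score per proposal
        self.classifier = nn.Linear(feat_dim, num_classes)  # trained with image labels only

    def forward(self, proposal_feats):
        # proposal_feats: (B, P, D) features of P part proposals per image
        scores = self.part_scorer(proposal_feats).squeeze(-1)            # (B, P)
        top = scores.topk(self.top_k, dim=1).indices                     # most discriminative parts
        idx = top.unsqueeze(-1).expand(-1, -1, proposal_feats.size(-1))
        selected = proposal_feats.gather(1, idx)                         # (B, k, D)
        return self.classifier(selected.mean(dim=1))                     # pool parts -> class logits

logits = PartBasedClassifier()(torch.randn(2, 32, 256))  # 2 images, 32 proposals each
```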
Journal ArticleDOI

DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration

TL;DR: Wang et al. propose the Diffusion-based Robust Degradation Remover (DR2), which first transforms the degraded image into a coarse but degradation-invariant prediction and then employs an enhancement module to restore the coarse prediction to a high-quality image.
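A minimal sketch of the two-stage pipeline described above (illustrative only, not the paper's DR2 implementation): a diffusion-style remover maps the degraded face to a coarse, degradation-invariant estimate, and a separate enhancement network restores fine detail. The module classes below are simple stand-ins and their names are assumptions.

```python
import torch
import torch.nn as nn

class CoarseDegradationRemover(nn.Module):
    """Stand-in for the diffusion-based remover: outputs a coarse, smoothed estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )
    def forward(self, degraded):
        return torch.sigmoid(self.net(degraded))   # coarse, degradation-invariant prediction

class EnhancementModule(nn.Module):
    """Stand-in for the enhancement module that restores high-frequency detail."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
        )
    def forward(self, coarse):
        return torch.sigmoid(coarse + self.net(coarse))  # residual refinement

def restore(degraded_face):
    coarse = CoarseDegradationRemover()(degraded_face)   # stage 1: remove degradation
    return EnhancementModule()(coarse)                   # stage 2: restore quality

out = restore(torch.rand(1, 3, 128, 128))
```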
Journal ArticleDOI

Unpaired Face Restoration via Learnable Cross-Quality Shift

TL;DR: This work takes advantage of the editing capabilities of StyleGAN's latent code and proposes a novel learnable cross-quality shift, which introduces generative facial priors into the unpaired framework and enables straightforward addition/subtraction in the latent feature space to achieve quality conversion.
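A minimal sketch of the core idea in that summary (assumption-labelled, not the authors' code): quality conversion is modelled as adding or subtracting a learnable shift vector in a StyleGAN-like W latent space; the pretrained generator `G` and encoder `E` mentioned in the usage comment are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class CrossQualityShift(nn.Module):
    """Learnable direction from low-quality to high-quality latent codes."""
    def __init__(self, latent_dim=512):
        super().__init__()
        self.shift = nn.Parameter(torch.zeros(latent_dim))

    def to_high_quality(self, w_lq):
        return w_lq + self.shift   # add the shift: LQ latent -> HQ latent

    def to_low_quality(self, w_hq):
        return w_hq - self.shift   # subtract it: HQ latent -> LQ latent

# Hypothetical usage with a pretrained StyleGAN generator G and encoder E:
#   w_lq     = E(low_quality_face)            # invert the degraded face into W space
#   w_hq     = shifter.to_high_quality(w_lq)  # shift toward the high-quality region
#   restored = G.synthesis(w_hq)              # decode with the generative facial prior
shifter = CrossQualityShift()
w = torch.randn(1, 512)
w_restored = shifter.to_high_quality(w)
```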