Mingqing Xiao

Researcher at Peking University

Publications: 19
Citations: 277

Mingqing Xiao is an academic researcher at Peking University whose research topics include computer science and medicine. He has an h-index of 5 and has co-authored 7 publications that have received 78 citations. His previous affiliations include Microsoft.

Papers
Book Chapter

Invertible Image Rescaling

TL;DR: An Invertible Rescaling Net (IRN) is developed, with a deliberately designed framework and objectives, to produce visually pleasing low-resolution images while capturing the distribution of the information lost during downscaling with a latent variable that follows a specified distribution.
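To make the invertibility idea concrete, here is a minimal sketch of the affine coupling layer that invertible networks like IRN are built from (illustrative only, not the authors' IRN code; `scale_net` and `shift_net` are placeholder stand-ins for learned sub-networks): one half of the input passes through unchanged, the other half is transformed conditioned on it, and the transform can be undone exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
W_s = rng.normal(size=(4, 4)) * 0.1  # toy "networks": single linear maps
W_t = rng.normal(size=(4, 4)) * 0.1

def scale_net(x):   # stands in for a learned sub-network
    return np.tanh(x @ W_s)

def shift_net(x):
    return x @ W_t

def coupling_forward(x1, x2):
    # x1 passes through unchanged; x2 is scaled and shifted conditioned on x1.
    s = scale_net(x1)
    y2 = x2 * np.exp(s) + shift_net(x1)
    return x1, y2

def coupling_inverse(y1, y2):
    # Exact inverse: recover x2 from y2 using the same sub-networks.
    s = scale_net(y1)
    x2 = (y2 - shift_net(y1)) * np.exp(-s)
    return y1, x2

x1, x2 = rng.normal(size=(1, 4)), rng.normal(size=(1, 4))
y1, y2 = coupling_forward(x1, x2)
x1_rec, x2_rec = coupling_inverse(y1, y2)
print(np.allclose(x2, x2_rec))  # True: reconstruction is exact
```

In IRN, stacks of such layers map a high-resolution image to a low-resolution image plus a latent variable z ~ N(0, I); sampling z and running the layers in reverse recovers a high-resolution image.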
Proceedings Article

Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation

TL;DR: The proposed Differentiation on Spike Representation (DSR) method achieves state-of-the-art SNN performance with low latency on both static and neuromorphic datasets, including CIFAR-10, CIFAR-100, ImageNet, and DVS-CIFAR10.
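The observation that DSR-style training rests on can be illustrated with a toy simulation (hypothetical helper names, not the authors' code): over T timesteps, an integrate-and-fire neuron's firing rate is approximately a clipped linear function of its input current, so gradients can be taken through that differentiable rate representation instead of through the non-differentiable spikes.

```python
import numpy as np

def if_neuron_rate(current, T=100, v_th=1.0):
    """Simulate an integrate-and-fire neuron for T steps; return its firing rate."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += current          # integrate a constant input current
        if v >= v_th:         # fire, then reset by subtracting the threshold
            spikes += 1
            v -= v_th
    return spikes / T

for c in [0.0, 0.3, 0.7, 1.2]:
    rate = if_neuron_rate(c)
    surrogate = np.clip(c, 0.0, 1.0)   # differentiable representation used for backprop
    print(f"current={c:.1f}  rate={rate:.2f}  clip={surrogate:.2f}")
```

The printed rates track clip(current, 0, 1) closely, which is why differentiating the smooth representation is a faithful surrogate for the spiking dynamics.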
Posted Content

TDAPNet: Prototype Network with Recurrent Top-Down Attention for Robust Object Classification under Partial Occlusion

TL;DR: This work integrates inductive priors, including prototypes, partial matching, and top-down modulation, into deep neural networks to achieve robust object classification under novel occlusion conditions, even with limited occlusion in the training data.
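A minimal sketch of the prototype-matching idea TDAPNet builds on (illustrative only: prototypes here are random vectors, and the real model also adds recurrent top-down attention): each class keeps a prototype feature vector, an input is assigned to the class whose prototype it matches best, and occluded parts lower the match score only locally, which is what makes partial matching robust.

```python
import numpy as np

rng = np.random.default_rng(1)
prototypes = rng.normal(size=(3, 8))          # 3 classes, 8-dim prototype features
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

def classify(feature):
    # Assign the class whose prototype has the highest cosine similarity.
    feature = feature / np.linalg.norm(feature)
    scores = prototypes @ feature
    return int(np.argmax(scores)), scores

x = prototypes[2] + 0.1 * rng.normal(size=8)  # a noisy sample of class 2
x[:3] = 0.0                                   # crude stand-in for partial occlusion
label, scores = classify(x)
print(label, np.round(scores, 2))             # the unoccluded dimensions usually still favor class 2
```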
Journal Article

Training Neural Networks by Lifted Proximal Operator Machines

TL;DR: The lifted proximal operator machine (LPOM) is proposed to train fully connected feed-forward neural networks; its objective is block multi-convex in all layer-wise weights and activations.
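A simplified sketch of the lifted training idea follows (an assumption-laden toy, not the LPOM algorithm itself: LPOM couples layers through the activation's proximal operator, whereas this sketch uses a plain quadratic penalty, a ReLU projection, and least-squares weight steps that ignore the nonlinearity): layer activations become explicit variables, and training alternates updates over weights and activations instead of backpropagating end to end.

```python
import numpy as np

rng = np.random.default_rng(0)
X0 = rng.normal(size=(5, 20))                  # inputs: 5 features, 20 samples
Y  = rng.normal(size=(2, 20))                  # regression targets
W1 = rng.normal(size=(8, 5)) * 0.1             # layer weights
W2 = rng.normal(size=(2, 8)) * 0.1
X1 = np.maximum(W1 @ X0, 0)                    # lifted hidden activations, now variables
mu = 1.0                                       # penalty coupling X1 to relu(W1 @ X0)

for _ in range(50):
    # Weight updates: ridge-regularized least squares, each a convex subproblem
    # (the relu is ignored in the W1 step, a further simplification).
    W1 = X1 @ X0.T @ np.linalg.inv(X0 @ X0.T + 1e-3 * np.eye(5))
    W2 = Y  @ X1.T @ np.linalg.inv(X1 @ X1.T + 1e-3 * np.eye(8))
    # Activation update: projected gradient step on a convex quadratic in X1,
    # trading off the output fit against staying close to relu(W1 @ X0).
    grad = W2.T @ (W2 @ X1 - Y) + mu * (X1 - np.maximum(W1 @ X0, 0))
    L = np.linalg.norm(W2, 2) ** 2 + mu        # Lipschitz bound for this gradient
    X1 = np.maximum(X1 - grad / L, 0)          # ReLU projection keeps X1 feasible

print("output fit error:", np.linalg.norm(W2 @ X1 - Y))
```

Because each subproblem is convex in the block being updated, the alternating scheme makes progress without end-to-end gradients, which is the structural property the block multi-convex LPOM formulation exploits.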