SciSpace - Formally Typeset

Di Ma

Researcher at University of Bristol

Publications: 25
Citations: 278

Di Ma is an academic researcher at the University of Bristol. The author has contributed to research topics including data compression and convolutional neural networks, has an h-index of 7, and has co-authored 25 publications receiving 140 citations. Previous affiliations of Di Ma include the Civil Aviation University of China.

Papers
Journal ArticleDOI

BVI-DVC: A Training Database for Deep Video Compression

TL;DR: A new, extensive and representative video database, BVI-DVC, is presented for training CNN-based video compression systems, with specific emphasis on machine-learning tools that enhance conventional coding architectures, including spatial resolution and bit depth up-sampling, post-processing, and in-loop filtering.
Journal ArticleDOI

MFRNet: A New CNN Architecture for Post-Processing and In-loop Filtering

TL;DR: A novel convolutional neural network architecture, MFRNet, is proposed for post-processing (PP) and in-loop filtering (ILF) in the context of video compression, achieving significant and consistent coding gains over both anchor codecs and other existing CNN-based PP/ILF approaches, as measured by Bjøntegaard Delta metrics.
Proceedings ArticleDOI

Gan-Based Effective Bit Depth Adaptation for Perceptual Video Compression

TL;DR: A convolutional neural network (CNN) based effective bit depth (EBD) adaptation method is presented for perceptual video compression, in which the employed CNN models are trained using a generative adversarial network (GAN) with perception-based loss functions.
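As a rough illustration of the effective bit depth idea (a minimal sketch of the general concept, not the paper's actual pipeline), EBD can be reduced by a linear down-shift before encoding and restored after decoding; in the paper, a GAN-trained CNN replaces the naive linear up-shift shown here:

```python
import numpy as np

def reduce_ebd(frame_10bit, target_bits=9):
    # Linearly down-shift a 10-bit frame to the target effective bit depth.
    shift = 10 - target_bits
    return frame_10bit >> shift

def naive_restore(frame_low, target_bits=9):
    # Naive linear up-shift back to 10 bits; the GAN-trained CNN in the
    # paper learns a perceptually better reconstruction than this.
    shift = 10 - target_bits
    return frame_low << shift

# Synthetic 10-bit frame (values in [0, 1023]); real input would be video.
frame = np.random.randint(0, 1024, size=(64, 64), dtype=np.uint16)
low = reduce_ebd(frame)
restored = naive_restore(low)
# For a 1-bit reduction, linear restoration only loses the dropped LSB.
assert np.max(np.abs(frame.astype(int) - restored.astype(int))) <= 1
```

The coding gain comes from encoding `low` (fewer effective bits, cheaper to compress) and relying on the learned model to recover the missing precision.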
Journal ArticleDOI

BVI-DVC: A Training Database for Deep Video Compression

TL;DR: The BVI-DVC dataset as discussed by the authors contains 800 sequences at various spatial resolutions from 270p to 2160p and has been evaluated on ten existing network architectures for four different coding tools.
Proceedings ArticleDOI

Perceptually-inspired super-resolution of compressed videos

TL;DR: A perceptually-inspired super-resolution approach (M-SRGAN) is proposed for spatial up-sampling of compressed video using a modified CNN model, which has been trained using a generative adversarial network (GAN) on compressed content with perceptual loss functions.
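To situate where such a model fits, a conventional spatial up-sampling baseline (which a learned super-resolution network like M-SRGAN aims to outperform) can be sketched with a simple nearest-neighbour interpolation; this is an illustrative assumption, not the paper's method:

```python
import numpy as np

def nearest_upsample(frame, factor=2):
    # Nearest-neighbour spatial up-sampling: each pixel is repeated
    # `factor` times along both axes. A learned super-resolution model
    # replaces this step with a CNN reconstruction.
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

low_res = np.arange(16, dtype=np.float32).reshape(4, 4)
high_res = nearest_upsample(low_res)
assert high_res.shape == (8, 8)
```

In a resolution-adaptation coding pipeline, the video is down-sampled before encoding and up-sampled after decoding; the quality of that final up-sampling step is what the GAN-trained model improves.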