Jianping Lin
Researcher at University of Science and Technology of China
Publications - 11
Citations - 356
Jianping Lin is an academic researcher from the University of Science and Technology of China. The author has contributed to research on topics including reference frames and motion estimation. The author has an h-index of 5 and has co-authored 11 publications receiving 148 citations. Previous affiliations of Jianping Lin include Xi'an Jiaotong University.
Papers
Proceedings ArticleDOI
M-LVC: Multiple Frames Prediction for Learned Video Compression
TL;DR: An end-to-end learned video compression scheme for low-latency scenarios that introduces the use of multiple previous frames as references, and designs an MV refinement network and a residual refinement network that also make use of the multiple reference frames.
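The core idea of the summary above, predicting the current frame from several previously reconstructed frames and coding only the residual, can be sketched in a few lines. This is a toy NumPy illustration, not the authors' network: motion estimation, the refinement networks, and entropy coding are all omitted, and the function names and the uniform blending weights are invented for the sketch.

```python
import numpy as np

def predict_from_references(refs, weights=None):
    """Predict the current frame as a weighted blend of multiple
    previously reconstructed reference frames (motion compensation
    omitted for brevity)."""
    refs = np.stack(refs).astype(np.float64)
    if weights is None:
        weights = np.full(len(refs), 1.0 / len(refs))
    return np.tensordot(weights, refs, axes=1)

def encode_decode(current, refs):
    """Code only the residual between the current frame and its
    multi-reference prediction, then reconstruct as a decoder would."""
    prediction = predict_from_references(refs)
    residual = current - prediction          # signal actually transmitted
    quantized = np.round(residual)           # toy stand-in for quantization
    reconstruction = prediction + quantized  # decoder-side reconstruction
    return reconstruction

# Toy 4x4 "frames": two references and a current frame close to their mean.
ref1 = np.full((4, 4), 10.0)
ref2 = np.full((4, 4), 12.0)
cur = np.full((4, 4), 11.4)
rec = encode_decode(cur, [ref1, ref2])
```

With more references available, the prediction improves and the residual shrinks, which is the intuition behind using multiple reference frames in a learned codec.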
Journal ArticleDOI
Deep Learning-Based Video Coding: A Review and a Case Study
TL;DR: Deep Learning Video Coding (DLVC), as mentioned in this paper, is a deep learning-based video coding framework built on convolutional neural networks (CNNs) and block adaptive resolution coding (BARC).
Journal ArticleDOI
Convolutional Neural Network-Based Block Up-Sampling for HEVC
TL;DR: This paper introduces block-level down- and up-sampling into inter-frame coding with the help of CNNs, implements the proposed scheme in the High Efficiency Video Coding (HEVC) reference software, and performs a comprehensive set of experiments to evaluate the methods.
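The down-/up-sampling step this summary describes can be illustrated with a minimal NumPy sketch. This is not the paper's method: in the actual scheme a trained CNN performs the up-sampling to restore detail, whereas here a nearest-neighbour repeat stands in for it, and the block size and function names are invented for the example.

```python
import numpy as np

def down_sample_block(block):
    """2x down-sampling of a coding block by averaging each
    2x2 neighbourhood (encoder side)."""
    h, w = block.shape
    return block.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up_sample_block(block):
    """2x up-sampling by nearest-neighbour repetition; in the paper's
    scheme a CNN replaces this step to recover high-frequency detail."""
    return block.repeat(2, axis=0).repeat(2, axis=1)

# A toy 4x4 block: code it at half resolution, then restore its size.
blk = np.arange(16, dtype=np.float64).reshape(4, 4)
restored = up_sample_block(down_sample_block(blk))
```

Coding a block at reduced resolution saves bits on hard-to-predict content; the quality of the up-sampler then determines how much of the lost detail can be recovered, which is why a CNN is attractive for that step.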