NTIRE 2019 Challenge on Video Super-Resolution: Methods and Results
Citations
EDVR: Video Restoration with Enhanced Deformable Convolutional Networks
Understanding Deformable Alignment in Video Super-Resolution
Video Super Resolution Based on Deep Learning: A Comprehensive Survey
NTIRE 2021 Challenge on Image Deblurring
References
Image quality assessment: from error visibility to structural similarity
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
CBAM: Convolutional Block Attention Module
Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network
Frequently Asked Questions (10)
Q2. How do the authors generate the LR frame from the HR REDS?
The authors generate each LR frame from its HR REDS frame using the MATLAB function imresize with bicubic interpolation and a downscaling factor of 4.
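The degradation above can be sketched in a few lines. This is a toy stand-in: it uses 4x4 block averaging rather than MATLAB's bicubic kernel, so it only illustrates the x4 spatial relationship between HR and LR frames, not the exact challenge degradation.

```python
import numpy as np

def downscale_x4(hr):
    """Downscale an HxW(xC) frame by factor 4 via 4x4 block averaging.

    Stand-in for the challenge's MATLAB imresize(..., 1/4, 'bicubic');
    the averaging kernel here is a simplification, not bicubic.
    """
    h, w = hr.shape[0] // 4 * 4, hr.shape[1] // 4 * 4
    hr = hr[:h, :w]
    return hr.reshape(h // 4, 4, w // 4, 4, -1).mean(axis=(1, 3))

hr = np.zeros((720, 1280, 3))
lr = downscale_x4(hr)  # LR frame at 1/4 the spatial resolution
```

A 720x1280 HR frame maps to a 180x320 LR frame, matching the x4 setting used in both challenge tracks.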
Q3. How does the TTI team integrate spatial and temporal information from consecutive video frames?
They integrate spatial and temporal contexts from consecutive video frames using a recurrent encoder-decoder module that fuses multi-frame information with the more traditional single-frame super-resolution path for the target frame.
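The fusion idea can be illustrated with a toy numpy sketch: a recurrent step accumulates temporal context over the frame sequence, and the result is combined with a separate single-frame path for the target frame. All dimensions, weights, and the additive fusion are illustrative assumptions, not the TTI team's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                     # toy per-frame feature size
W_enc = rng.standard_normal((D, D)) * 0.1  # encoder weights (toy)
W_rec = rng.standard_normal((D, D)) * 0.1  # recurrent weights (toy)
W_sisr = rng.standard_normal((D, D)) * 0.1 # single-frame SR path (toy)

def fuse(frames, target):
    """Aggregate temporal context recurrently, then fuse it with a
    single-frame path for the target frame (additive fusion, toy)."""
    h = np.zeros(D)
    for f in frames:                       # recurrent temporal aggregation
        h = np.tanh(W_enc @ f + W_rec @ h)
    return h + W_sisr @ target             # fuse with single-frame path
```

The key point mirrored here is that the recurrent state carries information from all previous frames, while the single-frame path guarantees the target frame's own detail is always available.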
Q4. What is the purpose of the motion blurs?
To handle motion blur in Track 2, they additionally apply a deblurring stage afterward, feeding the output of the super-resolution model into the deblurring model.
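The two-stage cascade is simple function composition: the super-resolution output becomes the deblurring input. The toy stages below (nearest-neighbour upsampling and a box filter) are placeholders for the actual deep models; only the pipeline structure reflects the description above.

```python
import numpy as np

def super_resolve(lr):
    """Stage 1 (toy): nearest-neighbour x4 upsampling as a stand-in SR model."""
    return lr.repeat(4, axis=0).repeat(4, axis=1)

def deblur(frame):
    """Stage 2 (toy): 3x3 box filter as a stand-in deblurring model."""
    h, w = frame.shape
    padded = np.pad(frame, 1, mode="edge")
    return sum(padded[i:i + h, j:j + w]
               for i in (0, 1, 2) for j in (0, 1, 2)) / 9.0

lr = np.arange(16.0).reshape(4, 4)
hr = deblur(super_resolve(lr))   # SR output feeds the deblurring stage
```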
Q5. How many train frames did the participants find sufficient?
The REDS training set [16] contains 24,000 frames, and all participants found this amount of data sufficient for training their models.
Q6. What is the way to improve the performance of a 3D video restoration network?
Cascading an additional network can further remove severe motion blur that the preceding model cannot handle, and it alleviates inconsistency among output frames.
Q7. How is the high-resolution reference frame obtained?
The high-resolution reference frame is obtained by adding the predicted image residual to a directly upsampled image [10].
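This residual-learning reconstruction can be written as HR = upsample(LR) + residual. In the sketch below, nearest-neighbour upsampling is an assumed stand-in for the interpolation used in [10], and `residual` would come from the network's prediction.

```python
import numpy as np

def reconstruct_reference(lr, residual, scale=4):
    """HR reference = directly upsampled LR frame + predicted residual.

    Nearest-neighbour upsampling is a toy stand-in for the actual
    interpolation; `residual` is assumed to be the network's output
    at full HR resolution.
    """
    up = lr.repeat(scale, axis=0).repeat(scale, axis=1)
    assert up.shape == residual.shape, "residual must match HR resolution"
    return up + residual
```

Predicting only the residual lets the network focus on high-frequency detail, since the upsampled image already supplies the low-frequency content.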
Q8. What is the purpose of the challenge?
Challenge phases: (1) Development (training) phase: the participants received both LR and HR training video frames.
Q9. What is the ranking team's proposal for EDVR?
The HelloVSR team proposes the EDVR framework [31], which takes 2N + 1 low-resolution frames as input and generates a high-resolution output, as shown in Fig.
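Selecting the 2N + 1 input frames centred on the target index is a sliding window over the sequence. The boundary clamping below is a common padding choice for such windows, assumed here for illustration; EDVR's exact boundary handling may differ.

```python
def input_window(frames, t, n):
    """Return the 2N + 1 consecutive frames centred on target index t,
    clamping indices at the sequence boundaries (assumed padding scheme)."""
    last = len(frames) - 1
    return [frames[min(max(t + d, 0), last)] for d in range(-n, n + 1)]

frames = list(range(10))          # stand-in for 10 LR frames
window = input_window(frames, 5, 2)  # 2*2 + 1 = 5 frames centred on frame 5
```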
Q10. What is the way to improve the performance of a video restoration network?
Under a strictly controlled computational budget, they explore the design of each residual building block in a video restoration network, which consists of a mixture of 2D and 3D convolutional layers.
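The budget trade-off between 2D and 3D layers comes down to parameter (and FLOP) counts. The helper below computes the standard parameter count of a k^dims convolution; the channel width and kernel size are illustrative values, not taken from the paper.

```python
def conv_params(cin, cout, k, dims):
    """Parameter count of a k^dims convolution with bias:
    cout * (cin * k**dims + 1)."""
    return cout * (cin * k ** dims + 1)

# Illustrative setting: 64 channels, kernel size 3.
p2d = conv_params(64, 64, 3, 2)   # 3x3 conv
p3d = conv_params(64, 64, 3, 3)   # 3x3x3 conv
```

With these assumed numbers a 3D layer costs roughly 3x a 2D layer, so under a fixed budget one 3D layer trades against about three 2D layers, which is exactly the kind of mixture the block design explores.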