Proceedings ArticleDOI

High-Frequency Refinement for Sharper Video Super-Resolution

TLDR
An upsampling network architecture 'HFR-Net', built on the principle of 'explicit refinement and fusion of high-frequency details', is proposed, together with a novel 2-phase progressive-retrogressive training technique for implementing this principle and training the network.
Abstract
A video super-resolution technique is expected to generate a 'sharp' upsampled video. The sharpness in the generated video comes from the precise prediction of high-frequency details (e.g. object edges); high-frequency prediction is therefore a vital sub-problem of the super-resolution task. To generate a sharp upsampled video, this paper proposes an upsampling network architecture 'HFR-Net' that works on the principle of 'explicit refinement and fusion of high-frequency details'. To implement this principle and to train HFR-Net, a novel technique named 2-phase progressive-retrogressive training is proposed. Additionally, a method called dual motion warping is introduced to preprocess videos that have varying motion intensities (slow and fast). Results on multiple video datasets demonstrate the improved performance of our approach over the current state-of-the-art.
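The paper's actual HFR-Net architecture is not reproduced on this page; as a loose, illustrative sketch of the 'explicit refinement and fusion of high-frequency details' principle, a frame can be split into a low-frequency band and a high-frequency residual, with the residual refined and then fused back. The names `box_blur` and `refine_and_fuse` below are hypothetical, the box blur is a stand-in low-pass filter, and the identity `refine` callback stands in for the learned refinement network:

```python
import numpy as np

def box_blur(img, k=3):
    # Simple box blur used here as a stand-in low-pass filter.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def refine_and_fuse(frame, refine=lambda hf: hf):
    # Split a frame into low- and high-frequency parts, refine the
    # high-frequency residual, and fuse the two back together.
    low = box_blur(frame)
    high = frame - low          # high-frequency residual (edges, texture)
    return low + refine(high)   # fusion: refined details added back

frame = np.random.rand(32, 32)
# With an identity "refiner", fusion reconstructs the frame exactly.
assert np.allclose(refine_and_fuse(frame), frame)
```

In the paper's setting the refiner would be a trained network and the low-pass split would be learned rather than a fixed blur; this sketch only shows the split/refine/fuse data flow.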


Citations
Journal ArticleDOI

Refining high-frequencies for sharper super-resolution and deblurring

TL;DR: A multi-stage neural network architecture 'HFR-Net' is proposed that works on the principle of 'explicit refinement and fusion of high-frequency details' and gives better results than current state-of-the-art techniques.
Posted Content

On the Performance of Convolutional Neural Networks under High and Low Frequency Information

TL;DR: It is observed that trained CNNs fail to generalize over high- and low-frequency images, and stochastic filtering-based data augmentation during training is proposed to make the CNN robust against high- and low-frequency images.
Book ChapterDOI

On the Performance of Convolutional Neural Networks Under High and Low Frequency Information

TL;DR: In this article, a stochastic filtering-based data augmentation approach was proposed to improve the robustness of CNN models over high- and low-frequency images, and improved generalizability and robustness were observed.
Proceedings ArticleDOI

ICNet: Joint Alignment and Reconstruction via Iterative Collaboration for Video Super-Resolution

TL;DR: This paper proposes a novel many-to-many VSR framework with Iterative Collaboration (ICNet), which employs iterative collaboration between alignment and reconstruction, proving more efficient and effective than existing recurrent and sliding-window frameworks.
References
Journal ArticleDOI

Image quality assessment: from error visibility to structural similarity

TL;DR: In this article, a structural similarity (SSIM) index is proposed for image quality assessment based on the degradation of structural information; it is evaluated against subjective ratings and objective methods on a database of images compressed with JPEG and JPEG2000.
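As a quick illustration of the SSIM idea summarized above, the index compares two images via their means (luminance), variances (contrast) and covariance (structure). The sketch below computes a single-window (global) SSIM rather than the paper's sliding-window mean SSIM; `ssim_global` is a hypothetical name and the standard stabilizing constants are assumed:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    # Global (single-window) SSIM: luminance, contrast and structure
    # are compared via means, variances and covariance.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = np.random.rand(16, 16)
assert np.isclose(ssim_global(img, img), 1.0)  # identical images score 1
```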
Proceedings ArticleDOI

Densely Connected Convolutional Networks

TL;DR: DenseNet as mentioned in this paper proposes to connect each layer to every other layer in a feed-forward fashion, which can alleviate the vanishing gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters.
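The dense connectivity summarized above means each layer receives the channel-wise concatenation of the block input and all earlier layers' outputs. A minimal numpy sketch of that wiring, with a trivial per-layer transform standing in for a real convolution (`dense_block` and `growth` are illustrative names, not the paper's API):

```python
import numpy as np

def dense_block(x, num_layers=4, growth=2):
    # Each "layer" sees the concatenation of the input and all previous
    # layers' outputs, and emits `growth` new channels (channel-first).
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)  # channel-wise concat
        # Toy stand-in for a conv layer: channel mean, broadcast to
        # `growth` output channels, with a nonlinearity.
        out = np.tanh(inp.mean(axis=0, keepdims=True).repeat(growth, 0))
        features.append(out)
    return np.concatenate(features, axis=0)

x = np.random.rand(3, 8, 8)              # 3 input channels
y = dense_block(x)
assert y.shape == (3 + 4 * 2, 8, 8)      # channels grow linearly per layer
```

The linear channel growth is why DenseNet can stay parameter-efficient: layers add a few new feature maps and reuse everything already computed.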
Posted Content

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

TL;DR: This work proposes a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit and derives a robust initialization method that particularly considers the rectifier nonlinearities.
Proceedings ArticleDOI

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

TL;DR: In this paper, a Parametric Rectified Linear Unit (PReLU) was proposed to improve model fitting with nearly zero extra computational cost and little overfitting risk, which achieved a 4.94% top-5 test error on ImageNet 2012 classification dataset.
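The PReLU unit described in the two entries above is the identity for positive inputs and a learnable slope `a` for negative inputs; `a = 0` recovers ReLU and `a = 1` the identity. A one-line sketch (the slope would be a trained per-channel parameter in practice; 0.25 is this paper's reported initial value):

```python
import numpy as np

def prelu(x, a=0.25):
    # PReLU: f(x) = x for x > 0, a * x otherwise.
    return np.where(x > 0, x, a * x)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
assert np.allclose(prelu(x), [-0.5, -0.125, 0.0, 1.0, 3.0])
```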
Proceedings Article

Understanding the difficulty of training deep feedforward neural networks

TL;DR: The objective here is to understand better why standard gradient descent from random initialization is doing so poorly with deep neural networks, to better understand these recent relative successes and help design better algorithms in the future.