Open Access · Journal Article · DOI

Human and Scene Motion Deblurring Using Pseudo-Blur Synthesizer

Jonathan Samuel Lumentut, +1 more
25 Nov 2021 · Vol. 9, pp. 146366-146377
TLDR
This article proposes an on-the-fly blurry data augmenter that can be run during both training and test stages. The reblurring module is further equipped with a hand-crafted prior extracted using a state-of-the-art human body statistical model.
Abstract
Present-day deep-learning-based motion deblurring methods utilize pairs of synthetic blurry and sharp data to regress a particular framework. This task is designed to directly translate a blurry input image into its restored version as output. The aforementioned approach relies heavily on the quality of the synthetic blurry data, which are only available before the training stage. Handling this issue by providing a large amount of data is expensive for common usage. We answer this challenge by providing an on-the-fly blurry data augmenter that can be run during both training and test stages. To fully utilize it, we incorporate an unorthodox deblurring framework that employs a sequence of blur-deblur-reblur-deblur steps. The reblur step is assisted by a reblurring module (synthesizer) that provides the reblurred version (pseudo-blur) of its sharp or deblurred counterpart. The proposed module is also equipped with a hand-crafted prior extracted using the state-of-the-art human body statistical model. This prior is employed to map human and non-human regions during adversarial learning to fully perceive the characteristics of human-articulated and scene motion blurs. By engaging this approach, our deblurring module becomes adaptive and achieves superior outcomes compared to recent state-of-the-art deblurring algorithms.
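The blur-deblur-reblur-deblur sequence described in the abstract can be sketched as follows. All names here are hypothetical stand-ins: a simple horizontal box-filter blur substitutes for the paper's learned pseudo-blur synthesizer, and an identity placeholder substitutes for the deblurring network.

```python
import numpy as np

def reblur(sharp, kernel_size=5):
    """Stand-in for the pseudo-blur synthesizer: a horizontal box-filter
    motion blur applied row by row (the paper's synthesizer is learned)."""
    k = np.ones(kernel_size) / kernel_size
    return np.apply_along_axis(
        lambda row: np.convolve(row, k, mode="same"), -1, sharp)

def deblur(blurry):
    """Stand-in for the deblurring network (identity placeholder)."""
    return blurry.copy()

def blur_deblur_reblur_deblur(blurry):
    sharp1 = deblur(blurry)   # step 1: deblur the real blurry input
    pseudo = reblur(sharp1)   # step 2: synthesize a pseudo-blur on the fly
    sharp2 = deblur(pseudo)   # step 3: deblur again on the augmented sample
    return sharp1, pseudo, sharp2
```

The point of the sequence is that the second deblur pass trains on augmented blurry data generated during training or testing itself, rather than relying only on a fixed pre-generated synthetic set.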



Citations
Proceedings ArticleDOI

3D Body Reconstruction Revisited: Exploring the Test-time 3D Body Mesh Refinement Strategy via Surrogate Adaptation

TL;DR: This work proposes a test-time adaptation strategy that fine-tunes the surrogate 3D body module using reliable virtual data, revisiting prior state-of-the-art works and improving their performance directly in the test phase.
Journal ArticleDOI

Human from Blur: Human Pose Tracking from Blurry Images

TL;DR: Zhang et al. propose a method to estimate 3D human poses from substantially blurred images by backpropagating the pixel-wise reprojection error to recover the human motion representation that best explains a single input image or multiple input images.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: The authors propose a residual learning framework to ease the training of networks substantially deeper than those used previously; the approach won 1st place on the ILSVRC 2015 classification task.
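The core idea summarized above — layers learn a residual F(x) while an identity skip carries x forward — can be sketched minimally (the function names are illustrative, not from the paper):

```python
import numpy as np

def residual_block(x, f):
    # Residual learning: the block outputs F(x) + x, so the layer only
    # needs to learn the residual F, and an identity mapping is recovered
    # exactly when F(x) == 0.
    return f(x) + x
```

Because the block reduces to the identity when f outputs zero, very deep stacks of such blocks are no harder to optimize than shallower ones.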
Journal ArticleDOI

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than from G.
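The two-player game summarized above is commonly written as the standard minimax objective (this is the well-known formulation, not text from the summary):

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

D is trained to assign high probability to real samples x and low probability to generated samples G(z), while G is trained to fool D.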
Proceedings ArticleDOI

Image-to-Image Translation with Conditional Adversarial Networks

TL;DR: Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
Proceedings Article

Model-agnostic meta-learning for fast adaptation of deep networks

TL;DR: An algorithm for meta-learning is proposed that is model-agnostic: it is compatible with any model trained with gradient descent and applicable to a variety of learning problems, including classification, regression, and reinforcement learning.
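The inner-loop adaptation plus outer-loop meta-update described above can be illustrated on a toy 1-D family of tasks with loss_t(w) = (w - t)^2, writing the gradients analytically. The toy loss and all names are illustrative assumptions, not from the paper:

```python
def maml_step(w, task_targets, inner_lr=0.1, outer_lr=0.05):
    """One meta-update on the toy task family loss_t(w) = (w - t)^2."""
    meta_grad = 0.0
    for t in task_targets:
        g_inner = 2.0 * (w - t)            # inner-loop gradient for task t
        w_adapt = w - inner_lr * g_inner   # task-specific adapted parameter
        # Outer gradient of loss_t(w_adapt) w.r.t. w, differentiating
        # through the adaptation step: d(w_adapt)/dw = 1 - 2 * inner_lr.
        meta_grad += 2.0 * (w_adapt - t) * (1.0 - 2.0 * inner_lr)
    return w - outer_lr * meta_grad / len(task_targets)
```

The key MAML ingredient is that the outer gradient is taken through the inner adaptation step, so the meta-parameter moves toward a point from which one gradient step adapts well to every task.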
Proceedings Article

Prototypical Networks for Few-shot Learning

TL;DR: Prototypical Networks learn a metric space in which classification is performed by computing distances to prototype representations of each class, achieving state-of-the-art results on the CU-Birds dataset.
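The classification rule summarized above — average each class's support embeddings into a prototype, then assign queries to the nearest prototype — can be sketched as follows (operating directly on embedding vectors; the embedding network itself is omitted):

```python
import numpy as np

def class_prototypes(support_x, support_y, n_classes):
    # prototype = mean embedding of each class's support examples
    return np.stack([support_x[support_y == c].mean(axis=0)
                     for c in range(n_classes)])

def predict(query_x, prototypes):
    # classify each query by its nearest (squared-Euclidean) prototype
    d = ((query_x[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)
```

Squared Euclidean distance is used here because it is the distance the paper recommends for its Bregman-divergence interpretation of prototype means.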