Author

Jonathan Samuel Lumentut

Other affiliations: Binus University
Bio: Jonathan Samuel Lumentut is an academic researcher from Inha University. The author has contributed to research in the topics of deblurring and motion blur, has an h-index of 3, and has co-authored 8 publications receiving 23 citations. Previous affiliations of Jonathan Samuel Lumentut include Binus University.

Papers
Journal ArticleDOI
TL;DR: Three background subtraction techniques were used to count the passengers crossing an entrance of a BRT station from a pre-recorded motion picture. The results indicate that the three algorithms are able to identify passenger crossings with a reasonably high level of recall but a low level of precision.
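The summary above does not name the three background-subtraction techniques, so the following is only an illustrative sketch of one common variant (temporal differencing against a static background), with a simplified crossing counter on a virtual entrance line; the function names and thresholds are assumptions, not the paper's method.

```python
import numpy as np

def frame_difference_mask(frame, background, threshold=25):
    """Mark a pixel as foreground where it deviates from the background model."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

def count_crossings(masks, line_row, min_pixels=50):
    """Count crossing events over a virtual entrance line.

    Simplified stand-in for per-passenger tracking: consecutive frames in
    which the line is 'active' are merged into a single crossing event.
    """
    active_prev = False
    crossings = 0
    for mask in masks:
        active = mask[line_row, :].sum() >= min_pixels
        if active and not active_prev:
            crossings += 1
        active_prev = active
    return crossings
```

A counter this naive cannot separate two passengers crossing simultaneously, which is consistent with the low precision reported above.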

13 citations

Journal ArticleDOI
TL;DR: This work generates a complex blurry light field dataset and proposes a learning-based deblurring approach that is about 16K times faster than Srinivasan et al.
Abstract: Restoring a sharp light field image from its blurry input has become essential due to the increasing popularity of parallax-based image processing. State-of-the-art blind light field deblurring methods suffer from several issues such as slow processing, reduced spatial size, and a limited motion blur model. In this work, we address these challenging problems by generating a complex blurry light field dataset and proposing a learning-based deblurring approach. In particular, we model the full 6-degree-of-freedom (6-DOF) light field camera motion, which is used to create the blurry dataset using a combination of real light fields captured with a Lytro Illum camera and synthetic light field renderings of 3D scenes. Furthermore, we propose a light field deblurring network that is built with the capability of large receptive fields. We also introduce a simple strategy of angular sampling to train effectively on the large-scale blurry light field. We evaluate our method through both quantitative and qualitative measurements and demonstrate superior performance compared to the state-of-the-art method with a massive speedup in execution time. Our method is about 16K times faster than Srinivasan et al. [22] and can deblur a full-resolution light field in less than 2 seconds.
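The abstract describes synthesizing blurry training data from sharp captures under a camera-motion model. As a minimal sketch of the general idea (not the paper's pipeline), a motion-blurred image can be approximated by averaging the sharp image warped along a discretized camera trajectory; this toy version uses integer translations only, whereas the paper models full 6-DOF motion:

```python
import numpy as np

def synthesize_motion_blur(sharp, shifts):
    """Approximate motion blur as the average of the sharp image translated
    along a discretized (translation-only) camera trajectory."""
    acc = np.zeros_like(sharp, dtype=np.float64)
    for dy, dx in shifts:
        # Each (dy, dx) is one sampled camera position along the trajectory.
        acc += np.roll(np.roll(sharp, dy, axis=0), dx, axis=1)
    return acc / len(shifts)
```

For a light field, the same averaging would be applied per sub-aperture view with depth-dependent warps, which is what makes the blur parallax-consistent.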

12 citations

Journal ArticleDOI
TL;DR: In this paper, a light field recurrent deblurring network trained under a 6-degree-of-freedom camera motion-blur model is proposed to recover a sharp light field from its blurry input, outperforming state-of-the-art methods in deblurring quality and runtime.
Abstract: The popularity of parallax-based image processing is increasing, while early work on recovering a sharp light field from its blurry input (deblurring) remains stagnant. State-of-the-art blind light field deblurring methods suffer from several problems such as slow processing, reduced spatial size, and a simplified motion blur model. In this paper, we solve these challenging problems by proposing a novel light field recurrent deblurring network that is trained under a 6-degree-of-freedom camera motion-blur model. By combining real light fields captured using a Lytro Illum with synthetic light field renderings of 3D scenes from UnrealCV, we provide a large-scale blurry light field dataset to train the network. The proposed method outperforms the state-of-the-art methods in terms of deblurring quality, the capability of handling full-resolution light fields, and a fast runtime.

8 citations

Proceedings ArticleDOI
01 Jun 2020
TL;DR: This work proposed a framework that utilizes a deep neural net to solve LF spatial super-resolution and deblurring under 6-DOF camera motion, and achieves superior results in terms of quantitative and qualitative performance compared to recent state-of-the-art LF deblurring and SR algorithms.
Abstract: Recent works on light field (LF) image enhancement are focused on specific tasks such as motion deblurring and super-resolution. State-of-the-art methods are limited to the specific case of 3-degree-of-freedom (3-DOF) camera motion (for motion deblurring) and a straightforward high-resolution neural network (for super-resolution (SR)). In this work, we propose a framework that utilizes a deep neural net to solve LF spatial super-resolution and deblurring under 6-DOF camera motion. The neural network is designed in an end-to-end fashion and trained in multiple stages to perform robust super-resolution and deblurring. Our neural network achieves superior results in terms of quantitative and qualitative performance compared to recent state-of-the-art LF deblurring and SR algorithms.

3 citations

Journal ArticleDOI
TL;DR: The experiment results show that the proposed blur model can maintain the parallax information (depth-dependent blur) in a light field image and produce a synthetic blurry light field dataset based on the 6-DOF model.
Abstract: Motion deblurring is essential for reconstructing sharp images from a given blurry input caused by camera motion. The complexity of this problem increases in a light field due to its depth-dependent blur constraint. A method of generating synthetic 3-degree-of-freedom (3-DOF) translation blur on a light field image without camera rotation has been introduced. In this study, we generate a camera translation and rotation (6-DOF) motion blur model that preserves the consistency of the light field image. Our experimental results show that the proposed blur model can maintain the parallax information (depth-dependent blur) in a light field image. Furthermore, we produce a synthetic blurry light field dataset based on the 6-DOF model. Finally, to validate the usability of the synthetic dataset, we conduct extensive benchmarking using state-of-the-art motion deblurring algorithms.
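The exact formulation is not given in this summary, but a standard continuous blur-formation model consistent with the 6-DOF description above can be sketched as:

```latex
B(\mathbf{u}) \;=\; \frac{1}{T} \int_{0}^{T} S\!\big(\pi(\mathbf{P}(t),\, \mathbf{u})\big)\, dt,
\qquad
\mathbf{P}(t) = \big(\mathbf{R}(t),\, \mathbf{t}(t)\big) \in SE(3),
```

where $S$ is the sharp view, $B$ the blurred observation over exposure time $T$, and $\pi$ reprojects ray $\mathbf{u}$ under camera pose $\mathbf{P}(t)$ with 3 rotational plus 3 translational degrees of freedom. In a light field, $\pi$ depends on scene depth, which is why the resulting blur is depth-dependent and must stay consistent across sub-aperture views.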

3 citations


Cited by
Journal ArticleDOI
TL;DR: A comprehensive and timely survey of recently published deep-learning-based image deblurring approaches can be found in this article, where the authors discuss common causes of image blur, introduce benchmark datasets and performance metrics, and summarize different problem formulations.
Abstract: Image deblurring is a classic problem in low-level computer vision with the aim to recover a sharp image from a blurred input image. Advances in deep learning have led to significant progress in solving this problem, and a large number of deblurring networks have been proposed. This paper presents a comprehensive and timely survey of recently published deep-learning based image deblurring approaches, aiming to serve the community as a useful literature review. We start by discussing common causes of image blur, introduce benchmark datasets and performance metrics, and summarize different problem formulations. Next, we present a taxonomy of methods using convolutional neural networks (CNN) based on architecture, loss function, and application, offering a detailed review and comparison. In addition, we discuss some domain-specific deblurring applications including face images, text, and stereo image pairs. We conclude by discussing key challenges and future research directions.

65 citations

Journal ArticleDOI
TL;DR: Findings show that adding buses is the best alternative for improving the performance of BRT line 1 in Tehran.

50 citations

Journal ArticleDOI
01 Nov 2022
TL;DR: A high-quality and challenging urban scene dataset, containing 1074 samples composed of real-world and synthetic light field images as well as pixel-wise annotations for 14 semantic classes, is proposed, believed to be the largest and the most diverse light field dataset for semantic segmentation.
Abstract: As one of the fundamental technologies for scene understanding, semantic segmentation has been widely explored in the last few years. Light field cameras encode the geometric information by simultaneously recording the spatial information and angular information of light rays, which provides us with a new way to solve this issue. In this paper, we propose a high-quality and challenging urban scene dataset, containing 1074 samples composed of real-world and synthetic light field images as well as pixel-wise annotations for 14 semantic classes. To the best of our knowledge, it is the largest and the most diverse light field dataset for semantic segmentation. We further design two new semantic segmentation baselines tailored for light field and compare them with state-of-the-art RGB, video and RGB-D-based methods using the proposed dataset. The outperforming results of our baselines demonstrate the advantages of the geometric information in light field for this task. We also provide evaluations of super-resolution and depth estimation methods, showing that the proposed dataset presents new challenges and supports detailed comparisons among different methods. We expect this work inspires new research direction and stimulates scientific progress in related fields. The complete dataset is available at https://github.com/HAWKEYE-Group/UrbanLF.

37 citations

Journal ArticleDOI
TL;DR: There is no perfect method for all challenging cases; each method performs well in certain cases and fails in others. This study enables the user to identify the most suitable method for his or her needs.
Abstract: The objective of this study is to compare several change detection methods for a monostatic camera and identify the best method for different complex environments and backgrounds in indoor and outdoor scenes. To this end, we used the CDnet video dataset as a benchmark, which consists of many challenging problems ranging from basic simple scenes to complex scenes affected by bad weather and dynamic backgrounds. Twelve change detection methods, ranging from simple temporal differencing to more sophisticated methods, were tested, and several performance metrics were used to precisely evaluate the results. Because most of the considered methods have not previously been evaluated on this recent large-scale dataset, this work compares these methods to fill a gap in the literature, and thus this evaluation complements previous comparative evaluations. Our experimental results show that there is no perfect method for all challenging cases; each method performs well in certain cases and fails in others. However, this study enables the user to identify the most suitable method for his or her needs.
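The abstract mentions evaluating twelve methods with several performance metrics. A minimal sketch of the two metrics most commonly reported for change detection masks (pixel-wise precision and recall against a ground-truth mask); the CDnet benchmark reports additional metrics not shown here:

```python
import numpy as np

def precision_recall(pred_mask, gt_mask):
    """Pixel-wise precision and recall for a binary change-detection mask."""
    tp = np.logical_and(pred_mask, gt_mask).sum()   # correctly detected change
    fp = np.logical_and(pred_mask, ~gt_mask).sum()  # false alarms
    fn = np.logical_and(~pred_mask, gt_mask).sum()  # missed change
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Reporting both metrics per method and per scene category is what allows the study's conclusion that no single method wins in every challenging case.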

37 citations
