Showing papers by "Wolfgang Heidrich published in 2020"


Proceedings ArticleDOI
14 Jun 2020
TL;DR: A novel rank-1 parameterization of the proposed DOE avoids a vast number of trainable parameters while preserving the encoding of high frequencies, compared with conventional end-to-end design methods, and improves PSNR by more than 7 dB over state-of-the-art end-to-end designs.
Abstract: High-dynamic range (HDR) imaging is an essential imaging modality for a wide range of applications in uncontrolled environments, including autonomous driving, robotics, and mobile phone cameras. However, existing HDR techniques in commodity devices struggle with dynamic scenes due to multi-shot acquisition and post-processing time, e.g. mobile phone burst photography, making such approaches unsuitable for real-time applications. In this work, we propose a method for snapshot HDR imaging by learning an optical HDR encoding in a single image which maps saturated highlights into neighboring unsaturated areas using a diffractive optical element (DOE). We propose a novel rank-1 parameterization of the DOE which avoids a vast number of trainable parameters and preserves the encoding of high frequencies, in contrast to conventional end-to-end design methods. We further propose a reconstruction network tailored to this rank-1 parameterization for the recovery of clipped information from the encoded measurements. The proposed end-to-end framework is validated through simulation and real-world experiments and improves the PSNR by more than 7 dB over state-of-the-art end-to-end designs.
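
As a rough illustration of the rank-1 idea (a minimal sketch under our own assumption that the height map factors as an outer product of two 1D profiles; the paper's exact parameterization may differ), a DOE with N x N surface samples collapses from N^2 to 2N trainable values:

```python
import numpy as np

N = 512                   # DOE resolution (illustrative)
u = np.random.rand(N)     # trainable 1D profile along x (hypothetical)
v = np.random.rand(N)     # trainable 1D profile along y (hypothetical)

# Rank-1 height map: an N x N surface described by only 2N parameters
# instead of N*N freely trainable heights.
height_map = np.outer(u, v)
assert height_map.shape == (N, N)
```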

54 citations


Journal ArticleDOI
TL;DR: This work investigates a simple, low-cost, and compact optical coding camera design that supports high-resolution image reconstructions from raw measurements with low pixel counts, and uses an end-to-end framework to simultaneously optimize the optical design and a reconstruction network for obtaining super-resolved images from those raw measurements.
Abstract: Single Photon Avalanche Photodiodes (SPADs) have recently received a lot of attention in imaging and vision applications due to their excellent performance in low-light conditions, as well as their ultra-high temporal resolution. Unfortunately, like many evolving sensor technologies, image sensors built around SPAD technology currently suffer from a low pixel count. In this work, we investigate a simple, low-cost, and compact optical coding camera design that supports high-resolution image reconstructions from raw measurements with low pixel counts. We demonstrate this approach for regular intensity imaging, depth imaging, as well as transient imaging. Our method uses an end-to-end framework to simultaneously optimize the optical design and a reconstruction network for obtaining super-resolved images from raw measurements. The optical design space is that of an engineered point spread function (implemented with diffractive optics), which can be considered an optimized anti-aliasing filter to preserve as much high-resolution information as possible despite imaging with a low pixel count, low fill-factor SPAD array. We further investigate a deep network for reconstruction. The effectiveness of this joint design and reconstruction approach is demonstrated for a range of different applications, including high-speed imaging, time-of-flight depth imaging, and transient imaging. While our work specifically focuses on low-resolution SPAD sensors, similar approaches should prove effective for other emerging image sensor technologies with low pixel counts and low fill-factors.
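
A minimal sketch of this kind of forward model (our own simplification, with illustrative names and numbers: the engineered PSF acts as an anti-aliasing filter before the image is integrated over sparse, low-fill-factor pixels):

```python
import numpy as np
from scipy.signal import fftconvolve

def spad_forward_model(scene, psf, factor=8, fill=0.25):
    """High-resolution scene -> low-resolution SPAD measurement (sketch)."""
    # the engineered PSF pre-filters the scene before coarse sampling
    blurred = fftconvolve(scene, psf, mode="same")
    h, w = blurred.shape
    active = int(np.sqrt(fill) * factor)   # side of each pixel's active area
    out = np.zeros((h // factor, w // factor))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # each SPAD pixel integrates only over its small active area
            block = blurred[i*factor:i*factor+active, j*factor:j*factor+active]
            out[i, j] = block.sum()
    return out
```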

40 citations


Proceedings ArticleDOI
24 Apr 2020
TL;DR: A new dense network architecture is developed that embeds Anderson acceleration, known from numerical optimization, directly into the neural network architecture and outperforms other state-of-the-art methods both in simulations and on real hardware.
Abstract: Compressive imaging systems with spatial-temporal encoding can be used to capture and reconstruct fast-moving objects. The imaging quality highly depends on the choice of encoding masks and reconstruction methods. In this paper, we present a new network architecture to jointly design the encoding masks and the reconstruction method for compressive high-frame-rate imaging. Unlike previous works, the proposed method takes full advantage of a denoising prior to provide promising frame reconstructions. The network is also flexible enough to optimize full-resolution masks and efficient at reconstructing frames. To this end, we develop a new dense network architecture that embeds Anderson acceleration, known from numerical optimization, directly into the neural network architecture. Our experiments show the optimized masks and the dense accelerated network respectively achieve 1.5 dB and 1 dB improvements in PSNR without adding training parameters. The proposed method outperforms other state-of-the-art methods both in simulations and on real hardware. In addition, we set up a coded two-bucket camera for compressive high-frame-rate imaging, which is robust to imaging noise and provides promising results when recovering nearly 1,000 frames per second.
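
Anderson acceleration itself is a classical scheme for speeding up fixed-point iterations x ← g(x); a minimal NumPy version (independent of the paper's network embedding; vector-valued iterates assumed) looks like this:

```python
import numpy as np

def anderson_iterate(g, x0, m=5, iters=50):
    """Anderson-accelerated fixed-point iteration for x = g(x) (sketch)."""
    x = np.asarray(x0, dtype=float)
    X, F = [], []                          # histories of g(x) and residuals
    for _ in range(iters):
        gx = g(x)
        f = gx - x                         # fixed-point residual
        X.append(gx)
        F.append(f)
        if len(F) > m:                     # sliding window of size m
            X.pop(0)
            F.pop(0)
        if len(F) == 1:
            x = gx                         # plain Picard step to start
        else:
            dF = np.stack([F[i+1] - F[i] for i in range(len(F) - 1)], axis=1)
            dX = np.stack([X[i+1] - X[i] for i in range(len(X) - 1)], axis=1)
            # least-squares mixing weights over past residual differences
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dX @ gamma            # extrapolate using past iterates
    return x

# e.g. anderson_iterate(lambda x: 0.5 * (x + 2.0 / x), np.array([1.0]))
# converges to sqrt(2) faster than the plain iteration
```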

33 citations


Book ChapterDOI
23 Aug 2020
TL;DR: A new framework retrieves dense 3D measurements of the fluid velocity field using a pair of event-based cameras, incorporating particle tracking and stereo matching into an optimization framework with physically plausible regularizers.
Abstract: Existing Particle Imaging Velocimetry techniques require the use of high-speed cameras to reconstruct time-resolved fluid flows. These cameras provide high-resolution images at high frame rates, which generates bandwidth and memory issues. By capturing only changes in brightness, with very low latency and at a low data rate, event-based cameras have the ability to tackle such issues. In this paper, we present a new framework that retrieves dense 3D measurements of the fluid velocity field using a pair of event-based cameras. First, we track particles inside the two event sequences in order to estimate their 2D velocity in the two sequences of images. A stereo-matching step is then performed to retrieve their 3D positions. These intermediate outputs are incorporated into an optimization framework that also includes physically plausible regularizers, in order to retrieve the 3D velocity field. Extensive experiments on both simulated and real data demonstrate the efficacy of our approach.
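
The abstract does not spell out which regularizers are used; one common physically plausible prior for (nearly) incompressible flow is a penalty on the divergence of the recovered velocity field, sketched here as an assumption:

```python
import numpy as np

def divergence_penalty(u, v, w, dx=1.0):
    """Sum of squared divergence of a 3D velocity field (u, v, w),
    each given as a 3D array on a regular voxel grid of spacing dx."""
    div = (np.gradient(u, dx, axis=0)
           + np.gradient(v, dx, axis=1)
           + np.gradient(w, dx, axis=2))
    return np.sum(div ** 2)
```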

21 citations


Proceedings ArticleDOI
14 Jun 2020
TL;DR: A state-of-the-art 4D tomographic reconstruction framework integrates several regularizers into a multi-scale, matrix-free optimization algorithm, adding two new regularizers for improved results: one based on view interpolation of projected images and one to encourage reprojection consistency.
Abstract: Visible light tomography is a promising and increasingly popular technique for fluid imaging. However, the use of a sparse number of viewpoints in the capturing setups makes the reconstruction of fluid flows very challenging. In this paper, we present a state-of-the-art 4D tomographic reconstruction framework that integrates several regularizers into a multi-scale, matrix-free optimization algorithm. In addition to existing regularizers, we propose two new regularizers for improved results: a regularizer based on view interpolation of projected images and a regularizer to encourage reprojection consistency. We demonstrate our method with extensive experiments on both simulated and real data.
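
A rough sketch of what a reprojection-consistency term could look like (our reading of the abstract; `project` is a hypothetical forward projector that renders view k from the current volume estimate):

```python
import numpy as np

def reprojection_consistency(volume, project, captured_views):
    """Penalize mismatch between re-projections of the reconstructed
    volume and the captured images (hypothetical sketch)."""
    return sum(
        np.sum((project(volume, k) - img) ** 2)
        for k, img in enumerate(captured_views)
    )
```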

20 citations


Posted Content
01 Sep 2020
TL;DR: This work proposes a monocular imaging system for simultaneously capturing hyperspectral-depth (HS-D) scene information with an optimized diffractive optical element (DOE) and demonstrates that the optimized DOE outperforms alternative optical designs.
Abstract: To extend the capabilities of spectral imaging, hyperspectral and depth imaging have been combined to capture the higher-dimensional visual information. However, the form factor of the combined imaging systems increases, limiting the applicability of this new technology. In this work, we propose a monocular imaging system for simultaneously capturing hyperspectral-depth (HS-D) scene information with an optimized diffractive optical element (DOE). In the training phase, this DOE is optimized jointly with a convolutional neural network to estimate HS-D data from a snapshot input. To study natural image statistics of this high-dimensional visual data and to enable such a machine learning-based DOE training procedure, we record two HS-D datasets. One is used for end-to-end optimization in deep optical HS-D imaging, and the other is used for enhancing reconstruction performance with a real DOE prototype. The optimized DOE is fabricated with a grayscale lithography process and inserted into a portable HS-D camera prototype, which is shown to robustly capture HS-D information. In extensive evaluations, we demonstrate that our deep optical imaging system achieves state-of-the-art results for HS-D imaging and that the optimized DOE outperforms alternative optical designs.
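
For context, a standard Fourier-optics relation (textbook material, not quoted from the paper; the notation is ours) connects a DOE height map h(x, y) to its wavelength-dependent phase delay, which is what makes spectral encoding with a single DOE possible:

```latex
\phi_\lambda(x, y) = \frac{2\pi}{\lambda}\,\bigl(n(\lambda) - 1\bigr)\, h(x, y)
```

Here n(λ) is the refractive index of the DOE substrate; because λ enters both the prefactor and n(λ), the resulting PSF varies with wavelength.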

18 citations


Posted Content
TL;DR: In this paper, a diffractive optical element (DOE) is used to reconstruct spectrum and depth from a single captured image, and a differentiable simulator and a neural-network-based reconstruction are jointly optimized via automatic differentiation.
Abstract: Imaging depth and spectrum have been extensively studied in isolation from each other for decades. Recently, hyperspectral-depth (HS-D) imaging has emerged to capture both simultaneously by combining two different imaging systems; one for depth, the other for spectrum. While being accurate, this combinational approach induces increased form factor, cost, capture time, and alignment/registration problems. In this work, departing from the combinational principle, we propose a compact single-shot monocular HS-D imaging method. Our method uses a diffractive optical element (DOE), the point spread function of which changes with respect to both depth and spectrum. This enables us to reconstruct spectrum and depth from a single captured image. To this end, we develop a differentiable simulator and a neural-network-based reconstruction that are jointly optimized via automatic differentiation. To facilitate learning the DOE, we present the first HS-D dataset by building a benchtop HS-D imager that acquires high-quality ground truth. We evaluate our method with synthetic and real experiments by building an experimental prototype and achieve state-of-the-art HS-D imaging results.
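
Under the common thin-element and Fraunhofer approximations (a standard model, not necessarily the paper's exact simulator; notation ours), the depth- and wavelength-dependent PSF of such a DOE can be written as

```latex
\mathrm{PSF}_{\lambda, z} \;\propto\; \Bigl|\, \mathcal{F}\bigl\{ A(x, y)\, e^{\,i\phi_\lambda(x, y)}\, e^{\,i\phi_z^{\mathrm{defocus}}(x, y)} \bigr\} \Bigr|^2
```

where A is the aperture, φ_λ the DOE phase delay at wavelength λ, and φ_z^defocus a quadratic defocus term set by the scene depth z; the dependence on both λ and z is what allows a single image to encode spectrum and depth.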

17 citations


Book ChapterDOI
23 Aug 2020
TL;DR: A supervised learning framework for image reflection separation is proposed, combining a polarization-guided ray-tracing model and loss function design with a polarization sensor that instantaneously captures four linearly polarized photos of the scene in a single image.
Abstract: Reflection removal from photographs is an important task in computational photography, but also for computer vision tasks that involve imaging through windows and similar settings. Traditionally, the problem is approached as a single reflection removal problem under very controlled scenarios. In this paper we aim to generalize reflection removal to real-world scenarios with more complicated light interactions. To this end, we propose a simple yet efficient learning framework for supervised image reflection separation with a polarization-guided ray-tracing model and loss function design. Instead of a conventional image sensor, we use a polarization sensor that instantaneously captures four linearly polarized photos of the scene in the same image. Through the combination of a new polarization-guided image formation model and a novel supervised learning framework for its interpretation, we obtain a general method for tackling image reflection removal problems. We demonstrate our method with extensive experiments on both real and synthetic data, showing the unprecedented quality of the resulting image reconstructions.
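
The four linearly polarized sub-images captured by such a division-of-focal-plane sensor (typically at 0°, 45°, 90°, and 135°) determine the linear Stokes parameters; a standard conversion (general polarimetry, not specific to this paper) is:

```python
import numpy as np

def stokes_from_quad(i0, i45, i90, i135):
    """Linear Stokes parameters from the four polarizer-angle sub-images."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)                    # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-8)  # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)                       # angle of linear polarization
    return s0, s1, s2, dolp, aolp
```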

12 citations


Proceedings ArticleDOI
14 Dec 2020
TL;DR: A new second-stage speckle-correction solution for the Gemini Planet Imager (GPI), replacing the instrument calibration unit (CAL) with the Fast Atmospheric Self-coherent camera Technique (FAST), a new version of the self-coherent camera (SCC) concept.
Abstract: High-contrast imaging instruments have advanced techniques to improve contrast, but they remain limited by uncorrected stellar speckles, often lacking a “second stage” correction to complement the Adaptive Optics (AO) correction. We are implementing a new second-stage speckle-correction solution for the Gemini Planet Imager (GPI), replacing the instrument calibration unit (CAL) with the Fast Atmospheric Self-coherent camera Technique (FAST), a new version of the self-coherent camera (SCC) concept. Our proposed upgrade (CAL2.0) will use a common-path interferometer design to enable speckle correction, through post-processing and/or by a feedback loop to the AO deformable mirror. FAST utilizes a new type of coronagraphic mask that will enable, for the first time, speckle correction down to millisecond timescales. The system's main goal is to improve the contrast by up to 100x in a half dark hole to enable a new regime of science discoveries. Our team has been developing this new technology at the NRC's Extreme Wavefront control for Exoplanet and Adaptive optics Research Topics (NEW EARTH) laboratory over the past several years. The GPI CAL2.0 update is funded (November 2020), and the system's first light is expected in late 2023.

12 citations


Proceedings ArticleDOI
13 Dec 2020
TL;DR: The NEW EARTH Laboratory (NRC Extreme Wavefront control for Exoplanet Adaptive optics Research Topics at Herzberg) has recently been completed at NRC in Victoria, Canada, as the first Canadian test-bed dedicated to high-contrast imaging.
Abstract: The NEW EARTH Laboratory (NRC Extreme Wavefront control for Exoplanet Adaptive optics Research Topics at Herzberg) has recently been completed at NRC in Victoria. NEW EARTH is the first Canadian test-bed dedicated to high-contrast imaging. The bench optical design allows a wide range of applications that could require turbulent phase screens, segmented pupils, or custom coronagraphic masks. Super-polished off-axis parabolas are implemented to minimize optical aberrations, in addition to a 468-actuator ALPAO deformable mirror and a Shack-Hartmann WFS. The laboratory's immediate goal is to validate the Fast Atmospheric Self-coherent camera Technique (FAST). The first results of this technique obtained in the NEW EARTH laboratory with a Tilt-Gaussian-Vortex focal plane mask, a reflective Lyot stop and Coherent Differential Imaging are encouraging. Future work will be aimed at expanding this technique to broader wavebands in the context of extremely large telescopes and at visible bands for space-based observatories.

9 citations


Journal ArticleDOI
TL;DR: An image formation model for deterministic phase retrieval in propagation-based wavefront sensing is presented, unifying analysis for classical wavefront sensors such as Shack-Hartmann (slope tracking) and curvature sensors (based on the Transport-of-Intensity Equation).
Abstract: We present an image formation model for deterministic phase retrieval in propagation-based wavefront sensing, unifying analysis for classical wavefront sensors such as Shack-Hartmann (slope tracking) and curvature sensors (based on the Transport-of-Intensity Equation). We show how this model generalizes commonly seen formulas, including the Transport-of-Intensity Equation, at small propagation distances and beyond. Using this model, we analyze the theoretically achievable lateral wavefront resolution in propagation-based deterministic wavefront sensing. Finally, via a prototype masked wavefront sensor, we show simultaneous bright-field and phase imaging, numerically recovered in real time from a single-shot measurement.
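
For reference, the Transport-of-Intensity Equation the abstract refers to relates the axial derivative of the intensity I to the lateral gradient of the phase φ (standard form; notation ours):

```latex
\frac{\partial I(x, y; z)}{\partial z} = -\frac{1}{k}\, \nabla_{\!\perp} \cdot \bigl( I(x, y; z)\, \nabla_{\!\perp} \phi(x, y) \bigr), \qquad k = \frac{2\pi}{\lambda}
```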

Proceedings ArticleDOI
TL;DR: In this paper, a physics-based holographic network (PBHolo-Net) is proposed for 3D imaging, which is efficient, stable, and can perform more precise hologram reconstruction.

Proceedings ArticleDOI
17 Aug 2020
TL;DR: An end-to-end learned, optically coded super-resolution SPAD camera, and a hybrid optical-electronic convolutional layer based optimization of optics for image classification are described.
Abstract: Application-domain-specific cameras that combine customized optics with modern image recovery algorithms are of rapidly growing interest, with widespread applications like ultrathin cameras for internet-of-things or drones, as well as computational cameras for microscopy and scientific imaging. Existing approaches to designing imaging optics are either heuristic or use some proxy metric on the point spread function rather than considering the image quality after post-processing. Without a true end-to-end flow of joint optimization, it remains elusive to find an optimal computational camera for a given visual task. Although this joint design concept has long been the core idea of computational photography, only now are the computational tools ready to efficiently interpret a true end-to-end imaging process, thanks to advances in machine learning. We describe the use of diffractive optics to enable lenses that are not only physically compact but also offer large and flexible design degrees of freedom. By building a differentiable ray or wave optics simulation model that maps the true source image to the reconstructed one, one can jointly train an optical encoder and an electronic decoder. The encoder can be parameterized by the PSF of the physical optics, and the decoder by a convolutional neural network. By running over a broad set of images and defining domain-specific loss functions, the parameters of the optics and the image processing algorithms are jointly learned. We describe typical photography applications for extended depth-of-field, large field-of-view, and high-dynamic-range imaging. We also describe the generalization of this joint design to machine vision and scientific imaging scenarios. To this end, we describe an end-to-end learned, optically coded super-resolution SPAD camera, and a hybrid optical-electronic convolutional layer based optimization of optics for image classification. Additionally, we explore lensless imaging with optimized phase masks for realizing an ultra-thin camera, high-resolution wavefront sensing, and face detection.
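
A heavily simplified sketch of this end-to-end idea (assumptions: the PSF stands in directly for the optics, convolution is circular via the FFT, and both the `OpticalEncoder` and the toy decoder are ours, not the authors' architecture):

```python
import torch
import torch.nn as nn

class OpticalEncoder(nn.Module):
    """Differentiable stand-in for the optics: a trainable PSF."""
    def __init__(self, size=64):
        super().__init__()
        self.psf_logits = nn.Parameter(torch.zeros(size, size))

    def forward(self, img):
        # softmax keeps the PSF non-negative and energy-conserving
        psf = torch.softmax(self.psf_logits.flatten(), 0).view_as(self.psf_logits)
        f_img = torch.fft.rfft2(img)
        f_psf = torch.fft.rfft2(psf, s=img.shape[-2:])
        return torch.fft.irfft2(f_img * f_psf, s=img.shape[-2:])  # circular conv

encoder = OpticalEncoder()
decoder = nn.Sequential(                 # toy electronic decoder
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

scene = torch.rand(8, 1, 64, 64)         # stand-in training batch
opt.zero_grad()
measurement = encoder(scene)             # differentiable image formation
recon = decoder(measurement)
loss = nn.functional.mse_loss(recon, scene)  # a domain-specific loss in general
loss.backward()                          # gradients flow into optics AND decoder
opt.step()
```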

Journal ArticleDOI
TL;DR: In this paper, a joint illumination-deconvolution scheme is designed to overcome diffraction-photons, enabling the acquisition of intensity and depth images, and a proof-of-concept experiment is conducted to demonstrate the viability of the designed scheme.
Abstract: This paper addresses the problem of imaging in the presence of diffraction-photons. Diffraction-photons arise from the low contrast ratio of DMDs (~1000:1), and severely degrade the quality of images captured by SPAD-based systems. Herein, a joint illumination-deconvolution scheme is designed to overcome diffraction-photons, enabling the acquisition of intensity and depth images. Additionally, a proof-of-concept experiment is conducted to demonstrate the viability of the designed scheme. It is shown that by co-designing the illumination and deconvolution phases of imaging, one can substantially overcome diffraction-photons.
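
The abstract does not detail the deconvolution stage; as a placeholder for it, a textbook Wiener deconvolution (our assumption, not necessarily the paper's choice) would look like:

```python
import numpy as np

def wiener_deconvolve(measurement, kernel, snr=100.0):
    """Wiener deconvolution of a blurred measurement (sketch).
    Assumes the kernel is already centered at the origin
    (apply np.fft.ifftshift to a center-aligned kernel first)."""
    K = np.fft.fft2(kernel, s=measurement.shape)
    W = np.conj(K) / (np.abs(K)**2 + 1.0 / snr)   # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(measurement) * W))
```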

Proceedings ArticleDOI
22 Jun 2020
TL;DR: Joint 4D reconstruction methods are broadly applicable to tomography, fluid imaging, and many other imaging problems, and allow for significantly improved results, especially when the 3D problem is very ill-posed.
Abstract: 4D inverse problems in imaging are historically solved independently per time step. Recent advances in joint 4D reconstruction allow for significantly improved results, especially when the 3D problem is very ill-posed. The methods are broadly applicable to tomography, fluid imaging, and many other imaging problems.